Prosecution Insights
Last updated: April 19, 2026
Application No. 17/893,518

BATCH FUNCTIONS FRAMEWORK

Status: Non-Final Office Action under §103 (OA Round 3)

Filed: Aug 23, 2022
Examiner: HU, SELINA ELISA
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: American Express Travel Related Services Company, Inc.

Forecast: 67% grant probability (favorable); estimated 3-4 OA rounds and 3y 3m to grant; 99% with interview.

Examiner Intelligence

Career allowance rate: 67% (2 granted / 3 resolved), +11.7% vs Tech Center average (above average)
Interview lift: strong, +100.0% on resolved cases with an interview vs. without
Typical timeline: 3y 3m average prosecution; 32 applications currently pending
Career history: 35 total applications across all art units

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 53.5% (+13.5% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)

TC averages are estimates; based on career data from 3 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to applicant’s amendment filed on 12/04/2025. Claims 1-20 are pending and examined.

Response to Arguments

Applicant’s arguments filed 12/04/2025 with respect to 35 U.S.C. 112(b) have been fully considered and are persuasive. Therefore, the rejections of claims 4-5, 11-12, and 18-19 under 35 U.S.C. 112(b) have been withdrawn.

Applicant’s arguments filed 12/04/2025 with respect to 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant argues that Gupta, Bequet, Mulholland, and Akash, alone or in combination, do not render obvious the amended claims and that the claims are therefore allowable. Examiner respectfully disagrees; see the §103 rejections below for a detailed analysis of the amended claims. Although Gupta, Bequet, Mulholland, and Akash alone may not explicitly teach all of the amended claims, the additional reference of Birnbaum in combination with Gupta does disclose the two-step cascade feature described in the amendments and arguments. Examiner interprets the caching system of Birnbaum, which utilizes combinations of heterogeneous cache nodes to store incremental batches of image data, as an in-memory data grid caching data. The input image data from the caching system, used by the multiple nodes to perform operations, therefore correlates to an in-memory data grid used by the first and second batch functions executed across the first and second pods. The examiner further interprets the operations of Birnbaum, which can be divided into task batches across multiple nodes and applied incrementally as data flows between nodes, as distributing data from the first pod executing the first batch function to the second pod executing the second batch function.
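As a rough illustration of the two-step cascade described above, the following Python sketch models an in-memory data grid as a shared store through which the output of a first batch function is distributed to a second batch function on a different pod. The class, function, and key names are hypothetical; they are not drawn from Birnbaum, the cited references, or the claims.

```python
# Hypothetical sketch only: a dictionary stands in for an in-memory data
# grid; two plain functions stand in for batch functions on two pods.

class InMemoryDataGrid:
    """Minimal shared cache standing in for an in-memory data grid."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value      # cache data produced on one pod

    def get(self, key):
        return self._store[key]      # distribute cached data to another pod


def first_batch_function(grid, records):
    """Runs on 'pod 1': transforms input and caches the result."""
    transformed = [r * 2 for r in records]
    grid.put("batch-1-output", transformed)


def second_batch_function(grid):
    """Runs on 'pod 2': consumes the cached output of the first function."""
    cached = grid.get("batch-1-output")
    return sum(cached)


grid = InMemoryDataGrid()
first_batch_function(grid, [1, 2, 3])
print(second_batch_function(grid))    # 12
```

The point of the sketch is only the data path: the second function never receives the data directly from the first; it reads it from the shared grid, which is the cascade structure at issue.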
Lastly, the data flow being mediated by the caching system and the operation outputs being saved in a cache for intersecting tasks in Birnbaum correlate to the in-memory data grid distributing the cached data from the first pod executing the first batch function to the second pod executing the second batch function.

Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Gupta with Birnbaum because multiple strategies can be employed in a caching system to improve efficiency and performance. These optimizations include selecting the size of the increments and their ordering to best match the page size of the cache nodes and task result buffers, fitting allocations to the available memory and disk space, and predictively prefetching data increments based on observed user access patterns. Nodes can also be initially configured to match the formatting of the input image data and the requested task, which is further split into batch functions.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (U.S. Patent Application Publication No. US 20190370058 A1), hereinafter “Gupta,” in view of Bequet et al. (U.S. Patent Application Publication No. US 20220253335 A1), hereinafter “Bequet,” Mulholland et al. (U.S. Patent Application Publication No. US 20230112338 A1), hereinafter “Mulholland,” Akash et al. (U.S. Patent Application Publication No. US 20220244850 A1), hereinafter “Akash,” and Birnbaum et al. (U.S. Patent Application Publication No. US 20220058052 A1), hereinafter “Birnbaum.”

With regards to Claim 1, Gupta teaches: A computer implemented method, comprising: receiving a plurality of batch jobs, wherein each of the plurality of batch jobs defines a first batch function for execution on a cluster computing platform (Paragraphs 12 and 51, “The system may enable on-demand triggering and execution of batch job workflows across data systems that may implement and utilize different technologies. As discussed further herein, a batch job workflow may comprise one or more interrelated jobs for a given business process or technical process… Distributed computing cluster may be, for example, a Hadoop® cluster configured to process and store big data sets with some of nodes comprising a distributed storage system and some of nodes comprising a distributed processing system.” The multiple batch job workflows received by the system correspond to a plurality of batch jobs. The Hadoop distributed computing cluster corresponds to the cluster computing platform.)

Gupta does not explicitly teach: deploying the first batch function to a first pod according to the schedule; executing the first batch function on the first pod executing on a first computing node in the cluster computing platform; as the first batch function completes the execution, releasing the first pod executing on the first computing node in the cluster computing platform.

However, Bequet teaches: deploying the first batch function to a first pod according to the schedule (Paragraph 92, “Each pod may include at least one container environment to which at least one thread of execution is assigned to execute an instance of a routine therein.
Some of the pods may be employed in executing instances of task routines to perform corresponding tasks of job flows. Others of the pods may be employed in executing instances of various routines that control the performance of job flows, including the derivation and effectuation of an order of performance of tasks of a job flow through the execution of instances of task routines. The order in which task routines within such isolated environments are executed to effectuate the derived order of performance of their corresponding tasks may be coordinated through a set of message queues.” Some of the pods being employed to execute tasks of job flows based on the order in which task routines are executed corresponds to deploying the batch function to the pods according to the schedule);

executing the first batch function on the first pod executing on a first computing node in the cluster computing platform (Paragraph 92, “Each pod may include at least one container environment to which at least one thread of execution is assigned to execute an instance of a routine therein. Some of the pods may be employed in executing instances of task routines to perform corresponding tasks of job flows. Others of the pods may be employed in executing instances of various routines that control the performance of job flows, including the derivation and effectuation of an order of performance of tasks of a job flow through the execution of instances of task routines.” Some of the pods which include a container environment being employed to execute tasks of job flows corresponds to executing the batch function on the pod executing on a computing node in the cluster computing platform);

as the first batch function completes the execution, releasing the first pod executing on the first computing node in the cluster computing platform (Paragraphs 30-31 and 95, “within the performance container, in response to at least storage of multiple execution completion messages within the task queue that are indicative of completion of execution of the subset of task routines, performing operations including providing, to the resource allocation routine, an indication of cessation of the need for provision of the derived quantity of task containers… The dynamic allocation of the multiple containers may include dynamic allocation of multiple pods… the uninstantiation of a container in which a routine is currently being executed might be delayed until that routine has reached the end of its execution therein. Alternatively or additionally, the uninstantiation of a container may be coordinated with the cessation of execution of a routine therein.” The uninstantiation of containers or pods associated with the task routine, in coordination with the cessation of execution of a routine and/or in response to the storage of multiple execution completion messages indicating the completion of execution of task routines, corresponds to releasing the pods as the batch function completes execution in the cluster computing platform).
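The deploy/execute/release lifecycle mapped onto Bequet above can be sketched minimally as follows. The `Pod` class and `deploy` helper are hypothetical illustrations under simplified assumptions, not a model of any real orchestrator API.

```python
# Hypothetical sketch only: a pod is deployed for one batch function and
# released (uninstantiated) as soon as that function completes execution.

class Pod:
    def __init__(self, node):
        self.node = node
        self.active = True

    def run(self, batch_function, payload):
        try:
            return batch_function(payload)
        finally:
            self.release()            # release coordinated with completion

    def release(self):
        self.active = False           # frees the node's resources


def deploy(node, batch_function, payload):
    pod = Pod(node)                   # deploy per the schedule
    result = pod.run(batch_function, payload)
    return result, pod


result, pod = deploy("node-1", lambda xs: sum(xs), [1, 2, 3, 4])
print(result, pod.active)             # 10 False
```

The `finally` clause captures the ordering the rejection turns on: release is triggered by completion of the batch function, freeing the node for re-use.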
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Gupta with deploying the first batch function to a first pod according to the schedule; executing the first batch function on the first pod executing on a first computing node in the cluster computing platform; and, as the first batch function completes the execution, releasing the first pod executing on the first computing node in the cluster computing platform, as taught by Bequet, because pods can be dynamically altered based on availability and resources, which allows control over the performance of job flows and can support parallelized execution of large quantities of software across multiple devices. Additionally, the uninstantiation of pods in response to completing batch function execution can maintain resource allocation requirements, such as upper or lower limits for quantities of each type of pod, to ensure one type of limited resource is not excessively consumed (Bequet: paragraphs 92-93 and 99).

Gupta does not explicitly teach: and wherein each of the plurality of batch jobs includes a timing for executing the first batch function; compiling a schedule for executing the first batch function based on the timing.

However, Mulholland teaches: and wherein each of the plurality of batch jobs includes a timing for executing the first batch function (Paragraphs 26 and 28, “Workload processes may include individual tasks or recurring tasks… any batch task or backup task may be subject to the described job scheduling algorithm… at operation 102, providing metadata associated with each workload process relating to execution timing parameters of the tasks within the workload process. In one embodiment, the execution timing parameters may include a minimum frequency of tasks (for example, how often a task needs to be run, such as daily, hourly, etc.) and an expected duration of the workload process. In another embodiment, the execution timing parameters may define a desired execution time of tasks with a tolerance window to allow for flexible allocation.” Each workload process having execution timing parameters, which include frequency of execution and execution time, corresponds to the timing for executing the batch function);

compiling a schedule for executing the first batch function based on the timing (Paragraph 41, “At operation 106, the method 100 schedules multiple workload processes concurrently or in temporal proximity based on the correlations between workload processes and the defined execution timing parameters of the workload processes to achieve an optimized deduplication ratio.” The method 100 scheduling multiple workload processes based on the defined execution timing parameters of each workload process corresponds to the schedule for executing each batch function based on the timing).

Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Gupta with wherein each of the plurality of batch jobs includes a timing for executing the first batch function and compiling a schedule for executing the first batch function based on the timing, as taught by Mulholland, because the execution timing parameters allow for flexibility of task execution in the workload process and optimize deduplication (Mulholland: paragraph 29).

Gupta does not explicitly teach: wherein the first computing node is re-used for executing a second batch function currently being deployed to a second pod executing on a second computing node in the cluster computing platform.

However, Akash teaches: wherein the first computing node is re-used for executing a second batch function currently being deployed to a second pod executing on a second computing node in the cluster computing platform (Fig.
4, paragraphs 67-68 and 71, “In a step 406, a first component of a first node of the cluster may be designated as a master component, and, in a step 408, one or more second components of one or more second nodes of the cluster may be designated as agent components, for example, as described in more detail elsewhere herein. In a step 410, the first component may execute the first group of one or more services as part of a first OS process… Concurrently to the performance of the step 410, or parts thereof, each second component may execute the second group of one or more services as part of a respective second OS process... In a step 420, the first component and the remaining second components may execute the second group of one or more services as part of a first OS process and respective second OS process, respectively, and, concurrently to the performance of the step 420, the determined second component may execute the first group of one or more services as part of its second OS process.” The first node of the cluster executing the first group of one or more services correlates to the computing node. The second component executing a second group of one or more services on the second node correlates to a second batch function currently being deployed to a second pod executing on a second computing node in the cluster computing platform. The first component then executing the second group of one or more services as part of its OS process correlates to the computing node being re-used for executing a second batch function currently deployed to a second pod and node).

Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Gupta with wherein the first computing node is re-used for executing a second batch function currently being deployed to a second pod executing on a second computing node in the cluster computing platform, as taught by Akash, because load balancing or maintaining HA objectives may require changes to the nodes executing groups of one or more services. Changing the roles of node components can use event-driven frameworks to ensure these objectives are met (Akash: paragraphs 65 and 69).

Gupta does not explicitly teach: wherein executing the first batch function and the second batch function further comprises: caching, into an in-memory data grid, data used by the first batch function and the second batch function being executed across the first pod and the second pod; and distributing, by the in-memory data grid, the cached data from the first pod executing the first batch function to the second pod executing the second batch function.

However, Birnbaum teaches: wherein executing the first batch function and the second batch function further comprises: caching, into an in-memory data grid, data used by the first batch function and the second batch function being executed across the first pod and the second pod (Fig. 2-3, paragraphs 23-24 and 27, “Given tasks 214, 216, the caching system 206 is optimized for large amounts of both internal and external image data, consisting of a flexible graph of cache nodes 208, 210, 212 and backing stores 200, 202, 204, . . .
designed to efficiently utilize combinations of heterogeneous resources such as RAM cache node 208, SSD cache node 210 and HDD cache node 212 or networked image servers 218. The caching system configuration consists of a plurality of backing stores 200, 202, 204, . . . and cache nodes 208, 210, 212. Both backing stores and cache nodes are an abstract collection of image data pages. The number of pages in the collection is limited for cache nodes but is unlimited for backing stores. The backing store and/or a function (operation) provide the data source of each image data node. As the scheduler executes tasks, a copy of each incremental batch of image data is stored in the caching system for efficient access subsequently. The caching system fills and trims each cache node as necessary to accommodate the large amounts of data that exceed the capacity of the cache nodes. The batches of data increments are allocated and distributed among the cache nodes in an optimized manner that provides consistent responsive access to the whole image data... Nodes are initially configured to match the formatting of input image data 100 and of requested task 102. Nodes are added when operations require endpoints. Operations are applied incrementally or in parallel as data flows between nodes according to heuristics and the requested task 102, with data mediated by the caching system configuration.” The caching system utilizing combinations of heterogeneous cache nodes to store incremental batches of image data correlates to an in-memory data grid caching data. The input image data from the caching system used by the multiple nodes to perform operations correlates to an in-memory data grid used by the first and second batch functions executed across the first and second pods);

and distributing, by the in-memory data grid, the cached data from the first pod executing the first batch function to the second pod executing the second batch function (Fig. 2-3, paragraphs 24, 27, 31 and 40, “As the scheduler executes tasks, a copy of each incremental batch of image data is stored in the caching system for efficient access subsequently. The caching system fills and trims each cache node as necessary to accommodate the large amounts of data that exceed the capacity of the cache nodes. The batches of data increments are allocated and distributed among the cache nodes in an optimized manner that provides consistent responsive access to the whole image data… FIG. 3 shows an example data flow graph according to the present invention. The data flow graph consists of format-specific image nodes 300, 306, 308, 316, 320, 332, 340 and data nodes 318, 322, 330, 334, 338 and directed acyclic operations (functions) 302, 304, 310, 312, 314, 324, 326, 328, 336 between them. Each operation is between two of the above-mentioned nodes. Example data include measurements, histograms, graphs, processing pipeline sequences, etc. Nodes are initially configured to match the formatting of input image data 100 and of requested task 102. Nodes are added when operations require endpoints. Operations are applied incrementally or in parallel as data flows between nodes according to heuristics and the requested task 102, with data mediated by the caching system configuration. Overwriting of a connected node is a secondary scenario but it can also be configured either to gradually disconnect downstream nodes or to push the new data through the connections to form a live subgraph... A task partition 400 is performed to divide the at least one requested task 102 into parallelizable task batches 402… A task execution 410 is performed to assign runnable batches 406 to available execution units for execution or to free an execution unit when its execution is completed… A complex data flow graph may produce a complex batch graph, however in practice most operational dependencies are resolved well before most downstream batches are scheduled, if not by prior tasks then by earlier batches of the same task, since operation outputs are saved in cache and/or backing store for the duration of intersecting tasks, if their configuration allows it.” The operations which can be divided into task batches across multiple nodes being applied incrementally as data flows between nodes correlates to distributing data from the first pod executing the first batch function to the second pod executing the second batch function. The data flow being mediated by the caching system and the operation outputs being saved in a cache for intersecting tasks correlates to the in-memory data grid distributing the cached data from the first pod executing the first batch function to the second pod executing the second batch function).

Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Gupta with wherein executing the first batch function and the second batch function further comprises: caching, into an in-memory data grid, data used by the first batch function and the second batch function being executed across the first pod and the second pod; and distributing, by the in-memory data grid, the cached data from the first pod executing the first batch function to the second pod executing the second batch function, as taught by Birnbaum, because multiple strategies can be employed in a caching system to improve efficiency and performance.
These optimizations include selecting the size of the increments and their ordering to best match the page size of the cache nodes and task result buffers, fitting allocations to the available memory and disk space, and predictively prefetching data increments based on observed user access patterns. Nodes can also be initially configured to match the formatting of input image data and the requested task, which is further split into batch functions (Birnbaum: paragraphs 25, 27 and 31).

With regards to Claims 8 and 15, the method of Claim 1 performs the same steps as the machine and manufacture of Claims 8 and 15 respectively, and Claims 8 and 15 are therefore rejected using the same rationale set forth above in the rejection of Claim 1.

With regards to Claim 2, Gupta in view of Bequet, Mulholland, Akash and Birnbaum teach the method of Claim 1 as referenced above. Mulholland further teaches: The computer implemented method of claim 1, wherein receiving the plurality of batch jobs further comprises: receiving a data structure defining each of the plurality of batch jobs, wherein the data structure includes the timing (Paragraphs 26 and 28, “Workload processes may include individual tasks or recurring tasks… any batch task or backup task may be subject to the described job scheduling algorithm… at operation 102, providing metadata associated with each workload process relating to execution timing parameters of the tasks within the workload process. In one embodiment, the execution timing parameters may include a minimum frequency of tasks (for example, how often a task needs to be run, such as daily, hourly, etc.) and an expected duration of the workload process. In another embodiment, the execution timing parameters may define a desired execution time of tasks with a tolerance window to allow for flexible allocation.” Each workload process having execution timing parameters, which include frequency of execution and execution time, corresponds to the timing for executing the batch function).

Mulholland does not explicitly teach that the data structure is a verticle data structure. However, the verticle structure is widely utilized in the field of the art, as evidenced by Akash, who teaches: receiving a verticle data structure defining each of the plurality of batch jobs (Paragraph 25, “each service of the first group and the second group may be defined as a unit of instructions (“executable unit”) that is capable of being instantiated and executed independently of other executable units defining other services. For example, in some embodiments, a file server node and/or components of the file server node, including control path components (e.g., master and agent) may be implemented using an event-driven and/or asynchronous application framework such as, for example, Vert.x made available from the Eclipse Foundation, and each independently executable service (which may be considered a micro-service) may be implemented as a Vert.x executable unit called a “Verticle.”” The service of the first and second group each being defined independently with a Vert.x executable unit or Verticle corresponds to a verticle data structure defining each of the plurality of batch jobs).
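For illustration only, the notion of a self-contained data structure that defines a batch job and carries its execution timing can be sketched in Python. The field names and the sort-based schedule compilation are assumptions for the sketch; they are not drawn from Akash, Mulholland, or the Vert.x API.

```python
# Hypothetical sketch only: a job-definition record carrying execution
# timing parameters, plus a trivial schedule compiled from that timing.

from dataclasses import dataclass
import datetime

@dataclass
class BatchJobDefinition:
    job_id: str
    batch_function: str        # name of the function to deploy
    frequency: str             # e.g. "daily", "hourly"
    desired_start: datetime.time
    tolerance_minutes: int     # window allowing flexible allocation

def compile_schedule(jobs):
    """Order job definitions by desired start time to form a schedule."""
    return sorted(jobs, key=lambda j: j.desired_start)

jobs = [
    BatchJobDefinition("j2", "settle", "daily", datetime.time(3, 0), 30),
    BatchJobDefinition("j1", "extract", "daily", datetime.time(1, 0), 15),
]
print([j.job_id for j in compile_schedule(jobs)])   # ['j1', 'j2']
```

The analogy to a Verticle is only that each definition is a self-contained unit that can be handled independently of the others; a real Vert.x deployment unit is executable code, not a passive record.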
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Gupta with receiving a verticle data structure defining each of the plurality of batch jobs, wherein the verticle data structure includes the timing, as taught by Akash and Mulholland, because Vert.x structures allow each service of a group to be instantiated and executed independently of other executable units through an event-driven or asynchronous application framework, which prevents threads from being blocked due to computational delay (Akash: paragraphs 25-26). Additionally, the execution timing parameters allow for flexibility of task execution in the workload process and optimize deduplication (Mulholland: paragraph 29).

With regards to Claims 9 and 16, the method of Claim 2 performs the same steps as the machine and manufacture of Claims 9 and 16 respectively, and Claims 9 and 16 are therefore rejected using the same rationale set forth above in the rejection of Claim 2.

With regards to Claim 3, Gupta in view of Bequet, Mulholland, Akash and Birnbaum teach the method of Claim 1 as referenced above. Gupta further teaches: The computer implemented method of claim 1, further comprising: storing result data generated from execution of the first batch function in an in-memory data grid (Paragraph 20, “user terminal 105 may be in electronic communication with batch job execution platform 110. User terminal 105 may be configured to allow a user to interact with batch job execution platform 110… view execution results of jobs and/or batch job workflows executed by batch job execution platform 110, and/or the like.
User terminal 105 may comprise any suitable hardware, software, and/or database components capable of sending, receiving, and storing data.” The execution results of jobs and/or batch job workflows being stored on the user terminal correlate to storing result data from the first batch function in an in-memory data grid).

With regards to Claims 10 and 17, the method of Claim 3 performs the same steps as the machine and manufacture of Claims 10 and 17 respectively, and Claims 10 and 17 are therefore rejected using the same rationale set forth above in the rejection of Claim 3.

With regards to Claim 4, Gupta in view of Bequet, Mulholland, Akash and Birnbaum teach the method of Claim 3 as referenced above. Gupta further teaches: The computer implemented method of claim 3, further comprising: identifying the second batch function configured to use the result data when executed (Paragraphs 27 and 43, “task dependencies may comprise data indicating associated tasks that are dependent on a given task for execution (e.g., based on a task ID). In that regard, the task dependencies data may be used by execution manager 230 to ensure that tasks are executed in the correct order in a given job, or across a plurality of jobs in one or more batch job workflows… In response to receiving the task execution result, task manager 233 determines a second task to be executed (step 326) based on the task schedule. For example, the second task may comprise a task that is dependent on the previously executed task... task manager 233 may iterate through all of the tasks in the task schedule until each task has been executed. In response to completing execution of all tasks, task manager 233 may generate a workflow execution result. The workflow execution result may comprise the task execution results of each task executed for the corresponding batch job workflow.” The task manager iterates through all tasks in the task schedule for each batch job workflow, with the task dependencies corresponding to using the result data when executed. The process spans a plurality of tasks and jobs in one or more batch job workflows and therefore correlates to identifying the second batch function to use the result data);

retrieving the result data from the in-memory data grid (Paragraphs 12 and 35, “in response to the job comprising a data processing request on a given system, a first task may be retrieving the requested data, a second task may be preprocessing the retrieved data… In response to completing execution of the task, the technology wrapper 245-1, 245-2, 245-n may be configured to return a task execution result to execution manager 230.” The first task returning the task execution result to the execution manager and the second task retrieving the stored results from the first task correlates to retrieving the result data from the in-memory data grid);

and passing the result data to the second pod configured to execute the second batch function (Paragraph 32, “Job manager 237 may transmit the task execution result to task manager 233. In response to a dependent task existing corresponding to the completed task, job manager 237 may receive the dependent task from task manager 233, and may be configured to invoke the corresponding technology wrapper to execute the dependent task.” The job manager transmitting the task execution result and then executing the dependent task, in response to a dependent task corresponding to the completed task, correlates to passing the result data to the pod configured to execute the second batch function).
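The identify/retrieve/pass sequence mapped onto Gupta above can be sketched as follows. The task names, the dictionary standing in for the shared result store, and the dependency table are all hypothetical illustrations, not drawn from Gupta.

```python
# Hypothetical sketch only: store a task's result, identify its
# dependent task from a dependency table, then retrieve and pass the
# result to that dependent task.

results = {}                                          # stand-in result store
dependencies = {"load-balances": "compute-interest"}  # task -> dependent task

def run_task(name, func, *args):
    out = func(*args)
    results[name] = out                     # store result data
    return dependencies.get(name)           # identify the dependent task

def compute_interest(balances):
    return [round(b * 0.05, 2) for b in balances]

next_task = run_task("load-balances", lambda: [100.0, 200.0])
if next_task == "compute-interest":
    passed = results["load-balances"]       # retrieve from the store
    print(compute_interest(passed))         # [5.0, 10.0]
```

The dependency table plays the role of the task-dependencies data: it is what lets the runner identify which second function consumes the stored result.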
With regards to Claims 11 and 18, the method of Claim 4 performs the same steps as the machine and manufacture of Claims 11 and 18 respectively, and Claims 11 and 18 are therefore rejected using the same rationale set forth above in the rejection of Claim 4.

With regards to Claim 5, Gupta in view of Bequet, Mulholland, Akash and Birnbaum teach the method of Claim 1 as referenced above. Gupta further teaches: The computer implemented method of claim 1, wherein the second pod configured to execute the second batch function is configured to operate in parallel with the first pod executing the first batch function before the first batch function completes the execution (Paragraph 41, “In response to a task not being dependent on another task, the task may be scheduled to be executed in parallel with other tasks, such that two or more tasks may be executed at the same time on a given system or across multiple systems.” The task being scheduled to execute in parallel with other tasks across a given system or multiple systems, so the tasks are executed at the same time, correlates to the second batch function operating in parallel with the first batch function before the first batch function completes execution).

With regards to Claims 12 and 19, the method of Claim 5 performs the same steps as the machine and manufacture of Claims 12 and 19 respectively, and Claims 12 and 19 are therefore rejected using the same rationale set forth above in the rejection of Claim 5.

With regards to Claim 6, Gupta in view of Bequet, Mulholland, Akash and Birnbaum teach the method of Claim 1 as referenced above.
Gupta further teaches: The computer implemented method of claim 1, further comprising: storing, in an in-memory data grid, a deployment identification corresponding to the first computing node executing the first batch function (Paragraphs 26-27, “execution database 215 may be configured to store and maintain configuration data and scheduler data… The configuration data may comprise data regarding the configuration of each job and/or task… The configuration data may be grouped and/or ordered by a job ID, a task ID, or similar identifier stored as data in each data entry… The scheduler data may comprise data regarding the execution of each job and/or task. For example, the scheduler data may comprise job dependencies, task dependencies, a technology wrapper assignment, and/or a system assignment.” The configuration and scheduler data which includes technology wrapper and system assignment identifiers being stored in the execution database correlates to storing a deployment identification in the in-memory data grid); Gupta does not explicitly teach: and deleting, from the in-memory data grid, the deployment identification in response to the first batch function completing the execution. However, Bequet teaches: and deleting, from the in-memory data grid, the deployment identification in response to the first batch function completing the execution (Paragraphs 169, 182, 837 “the one or more federated devices may generate an instance log for storage within a federated area that documents the performances of the analysis, including identifiers of data objects used and/or generated, identifiers of task routines executed, and the identifier of the job flow definition that specifies the task routines to be executed to perform the analysis as a job flow… the one or more federated devices that received the request to perform the particular job flow may delete the blocks of the result report upon completion of the performance of the particular job flow... 
the processor may delete such result report(s) and/or instance log(s) from the specified federated area and/or from one or more other federated areas that branch from the specified federated area.” The instance log’s identifiers correlate to the deployment identification in the in-memory data grid. Deleting blocks of the result report and/or instance logs upon completion of the job flow correlates to deleting the deployment identification in response to the batch function completing execution). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Gupta with deleting, from the in-memory data grid, the deployment identification in response to the first batch function completing the execution as taught by Bequet because deleting the information that is not used as input for any other task in the job flow can reduce storage requirements in a federated area (Bequet: paragraph 182).

With regards to Claim 13, the method of Claim 6 performs the same steps as the machine of Claim 13, and Claim 13 is therefore rejected using the same rationale set forth above in the rejection of Claim 6.

With regards to Claim 7, Gupta in view of Bequet, Mulholland, Akash and Birnbaum teach the method of Claim 1 as referenced above. Bequet further teaches: The computer implemented method of claim 1, further comprising: generating a graphical user interface to display metrics corresponding to the execution of the batch function execution (Paragraph 146, “a job flow definition may be augmented with graphical user interface (GUI) instructions that are to be executed during a performance of the job flow that it defines to provide a GUI that provides a user an opportunity to specify one or more aspects of the performance of the job flow at runtime.
By way of example, such a GUI may provide a user with an opportunity to select one or more data objects to be used as inputs to that performance, to select which one of multiple versions of a task routine is to be used to perform a task, and/or select a federated area into which to store a result report to be output by that performance. In so doing, the GUI may include instructions to display lists of objects, characteristics of objects, DAGs of objects, etc. in response to specific inputs received from a user.” The GUI allowing users to select and display certain characteristics, lists, and graphs related to the job flow corresponds to generating a graphical user interface displaying metrics corresponding to batch function execution). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Gupta with generating a graphical user interface to display metrics corresponding to the execution of the first batch function execution as taught by Bequet because the GUI allows a user to interact with the system and can support user input for customizable instructions at runtime in response to the metrics during the performance of the job flow (Bequet: paragraphs 146-147).

With regards to Claims 14 and 20, the method of Claim 7 performs the same steps as the machine and manufacture of Claims 14 and 20 respectively, and Claims 14 and 20 are therefore rejected using the same rationale set forth above in the rejection of Claim 7.

Prior Art Made of Record

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Bahramshahry et al. (U.S. Patent Application Publication No. US 2020/0026579 A1), teaching a method performed by a system having at least a processor and a memory therein, wherein the method comprises: allocating a cache within the memory of the system; identifying, via a workload discovery engine, pending workload tasks to be scheduled for execution from one or more workload queues and updating the cache; identifying, via a compute resource discovery engine, a plurality of computing resources available to execute the workload tasks and updating the cache; identifying, via a virtual capacity discovery engine, a plurality of virtual resources available to the scheduler in support of executing the workload tasks and updating the cache; executing a scheduler via the processor of the system, wherein the scheduler performs at least the following operations: retrieving information from the cache specifying (i) the one or more computing resources available to execute the workload tasks, (ii) the plurality of workload tasks to be scheduled for execution, and (iii) the plurality of virtual resources available; determining, for each of the plurality of workload tasks to be scheduled, any virtual resource requirements to execute the respective workload task; selecting one of the plurality of workload tasks for execution based on both (i) a computing resource being available to execute the selected workload task and (ii) a virtual resource required for execution of the selected workload task being available within the virtual resource pool; and scheduling the selected workload task for execution with the computing resource and allocating the virtual resource exclusively to the computing resource for the duration of execution of the selected workload task.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELINA HU whose telephone number is (571)272-5428. The examiner can normally be reached Monday-Friday 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do, can be reached at (571) 272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SELINA HU/
Examiner, Art Unit 2193

/Chat C Do/
Supervisory Patent Examiner, Art Unit 2193
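To make the claim 6 mapping discussed above concrete (a deployment identification written to an in-memory data grid when a batch function starts and deleted when it completes), here is a hedged Python sketch. The dict-backed grid, the key format, and the function names are assumptions made for illustration; they do not appear in Gupta or Bequet.

```python
# Minimal sketch of the store-then-delete lifecycle: record a deployment
# identification on start, run the work, remove the record on completion.

data_grid = {}

def run_batch_function(deployment_id, node, work):
    """Record the deployment, execute the work, then clean up the record."""
    data_grid[f"deployment:{deployment_id}"] = node  # store deployment id
    try:
        return work()
    finally:
        # delete the deployment id once execution completes
        del data_grid[f"deployment:{deployment_id}"]

result = run_batch_function("batch-1", "node-a", lambda: 2 + 2)
print(result, data_grid)  # 4 {}
```

The try/finally mirrors the rationale the Office Action cites from Bequet: cleanup happens on completion so the record does not accumulate and consume storage.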

Prosecution Timeline

Aug 23, 2022: Application Filed
May 06, 2025: Non-Final Rejection — §103
Aug 12, 2025: Response Filed
Aug 27, 2025: Final Rejection — §103
Dec 04, 2025: Request for Continued Examination
Dec 18, 2025: Response after Non-Final Action
Dec 29, 2025: Non-Final Rejection — §103
Feb 12, 2026: Interview Requested
Mar 31, 2026: Applicant Interview (Telephonic)
Mar 31, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585485: Warm migrations for virtual machines in a cloud computing environment (granted Mar 24, 2026; 2y 5m to grant)
Patent 12563114: Content initialization method, electronic device and storage medium (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 99% (+100.0%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
