Prosecution Insights
Last updated: April 19, 2026
Application No. 17/199,608

WORKFLOW MEMOIZATION TO REUSE RESULTS OF PAST WORKFLOW INSTANCES

Non-Final OA: §103, §112
Filed: Mar 12, 2021
Examiner: LIN, HSING CHUN
Art Unit: 2195
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 5 (Non-Final)
Grant Probability: 59% (Moderate)
OA Rounds: 5-6
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 59% of resolved cases (64 granted / 108 resolved; +4.3% vs TC avg)
Interview Lift: +79.8% on resolved cases with interview (strong, roughly +80%)
Typical Timeline: 3y 4m avg prosecution; 37 applications currently pending
Career History: 145 total applications across all art units
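The headline figures above follow directly from the raw counts; a quick sanity check (the variable names are mine, not the dashboard's):

```python
# Recompute the examiner's headline statistics from the raw counts above.
granted, resolved = 64, 108

allow_rate = granted / resolved          # career allow rate
tc_delta = 0.043                         # reported gap vs Tech Center average
implied_tc_avg = allow_rate - tc_delta   # what the TC 2100 average must be

print(f"career allow rate:  {allow_rate:.1%}")      # 59.3%, shown as 59%
print(f"implied TC average: {implied_tc_avg:.1%}")  # about 55.0%
```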

Statute-Specific Performance

§101: 17.1% (-22.9% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 34.0% (-6.0% vs TC avg)

Figures are compared against Tech Center average estimates; based on career data from 108 resolved cases.

Office Action

§103 §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-25 are pending in this application.

Response to Arguments

Applicant's arguments regarding the rejections of claims 1-25 under 35 U.S.C. 112(b) have been fully considered and are persuasive. Those rejections have been withdrawn. However, new 35 U.S.C. 112(b) rejections are applied to claims 1-25 based on the amendments.

Applicant's arguments regarding the 35 U.S.C. 103 rejections of claims 1-25 have been fully considered but are unpersuasive. Regarding the 35 U.S.C. 103 rejection, the applicant argues the following in the remarks:

Soundararajan fails to teach at least some of the input data being arguments to the node's executable that are values used by the node's executable during execution. Hence Soundararajan's paragraph [0045], or anywhere else, does not teach an "embedding generated by encoding at least the node's executable and input data to the node" and "a matching embedding that matches the generated embedding." Soundararajan's "program context" does not appear to include "input data to the node," let alone "the node's executable."

The examiner has thoroughly considered Applicant's arguments, but respectfully finds them unpersuasive for at least the following reasons:

As to point (a), the examiner respectfully disagrees. Soundararajan recites in [0043] "the tag is a hash 334 of the function's starting program counter, input signature, and output signature. The input signature may be a list of the registers and memory locations, or their stored values, accessed by the function". The function accesses stored values in registers and memory locations of an input signature when the function is performed.

As to point (b), a response to this argument was supplied in the last office action.

As to point (c), Soundararajan's program context was not applied to include both input data and the node's executable.
Soundararajan's program counter teaches the node's executable.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

As per claims 1, 15, 24, and 25 (line numbers refer to claim 1): Lines 6-7 recite "some of the input data being arguments to the node's executable that are values used by the node's executable during execution," but lines 14-15 recite "without running the node in the workflow." Therefore, it is unclear why the node's executable is being executed if the point of memoization, which is what the invention is directed to, is to not run the node.

Claims 2-14 and 16-23 are dependent claims of claims 1 and 15, respectively, and fail to resolve the deficiencies of claims 1 and 15, so they are rejected for the same reasons.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-9, 13, 15, 16, 17, 19, 20, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Soundararajan et al. (US 20220206816 A1, hereinafter Soundararajan), in view of Isard et al. (US 20110067030 A1, hereinafter Isard), in view of Li (US 20180081717 A1), and further in view of Edwards et al. (US 20180218153 A1, hereinafter Edwards). Soundararajan, Isard, Li, and Edwards were cited in a previous office action.

As per claim 1, Soundararajan teaches a computer-implemented method performed by at least one processor (Fig. 11, 1070, 1080 processors), the method comprising: generating an embedding associated with a node in a workflow, the embedding generated by encoding at least the node's executable and input data to the node, the embedding that combines at least the node's executable and the input data to the node, at least some of the input data being arguments to the node's executable that are values used by the node's executable during execution (Figs. 2, 3, 4; [0043] the tag is a hash 334 of the function's starting program counter, input signature, and output signature. The input signature may be a list of the registers and memory locations, or their stored values, accessed by the function. Similarly, the output signature may be a list of registers or memory locations to which results produced by the function are stored.
The starting program counter 322 and the output program counter 324 may be the memory address of the respective call and return uops that define the function block; [0045] the tag is a hash of the starting program counter and the program context values of a call instruction; [0020] A function call is a programming construct used in applications; [0043] The program context 326 is the branch information received from the branch prediction unit for identifying the application program that made the function call; [0046] Typically, the input values to the call instruction are provided by the instructions preceding the call instruction in the instruction stream; [0042] The signature may include the live-in and live-out values (i.e. inputs and outputs) along with other context information associated with the function call. The live-in values may include registers and memory locations accessed by the function. They may also include the actual load values stored in these locations; The instant specification recites in [0048] that embedding refers to a vector of hash-function outputs. Paragraph [0086] of the specification recites that executable of the task includes a path to the executable.); retrieving from a database of embeddings, a matching embedding that matches the generated embedding according to a match criterion, the database of embeddings storing embeddings associated with previously run nodes; retrieving from a storage, output data associated with the matching embedding, the output data for use as the node's output; and using the output data as input to another node in the workflow without running the node in the workflow on the at least one processor, thereby saving resources of the at least one processor by not running the node in the workflow (Figs. 
2, 3, 4, 6; [0045] compares a hash of the incoming call instruction's program counter and program context against tags 420 in the table 410 to obtain a match; [0049] If the data match, at 612, instructions that are dependent on the execution of the instance is provided with the output data from the memorization table; [0020] if a function call's input parameters and output values are learned and captured in a table, future function calls with the same input parameters can be simulated simply by using the stored output values from the table, thereby avoiding redundant executions; [0043] a Memorization Table 210 for storage; [0031] When a sufficient confidence level associated with a function is reached, such as based on the number of occurrences, the entire body of the function, minus a few exceptions, are skipped by the pipeline. For example, when instance 3 108 of Function 1 entering the pipeline 100 is detected at 150, its function block is removed from the pipeline at 160. Instructions that depend on the data produced from the execution of Function 1 are provided with data from the stored live-outs obtained from instance 1 and/or instance 2's execution. Eliminating repeated instructions from the processing pipeline brings performance and power gains by saving time and resources that otherwise would have to be spent for their execution; [0043] The program context 326 is the branch information received from the branch prediction unit for identifying the application program that made the function call; [0046] Typically, the input values to the call instruction are provided by the instructions preceding the call instruction in the instruction stream; [0042] The signature may include the live-in and live-out values (i.e. inputs and outputs) along with other context information associated with the function call. The live-in values may include registers and memory locations accessed by the function. 
They may also include the actual load values stored in these locations).

Soundararajan fails to teach generating an embedding associated with a node in a workflow responsive to determining that the node in the workflow is in a ready state to run on the at least one processor; the embedding being a single bitstring; and a configuration associated with the node including an option that indicates whether the embedding that is generated is used in fuzzy matching.

However, Isard teaches generating an embedding associated with a node in a workflow responsive to determining that the node in the workflow is in a ready state to run on the at least one processor ([0049] encoding the set of the worker tasks that are ready to run; [0016] Each compute node may comprise a single computer with one or more processors and may run one or more applications; [0018] A task represents the execution of a single process or multiple processes on a compute node).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan with the teachings of Isard so that tasks ready to run can be accelerated (see Isard [0048] edges may encode other affinities, such as to nodes having particular resources that assist and/or accelerate a task's computation (e.g., particular data caches, computation units such as GPUs, etc.); [0049] encoding the set of the worker tasks that are ready to run and their preferred locations, as well as the running locations).

Soundararajan and Isard fail to teach the embedding being a single bitstring, and a configuration associated with the node including an option that indicates whether the embedding that is generated is used in fuzzy matching.
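For readers following the claim mapping, the claim 1 flow (encode the node's executable and inputs into an embedding, look it up against previously run nodes, and reuse the stored output instead of re-running the node) can be sketched as follows. This is an illustrative sketch, not the application's implementation; all names are hypothetical.

```python
import hashlib
import json

# Database of embeddings for previously run nodes, mapping each
# embedding to the stored output data (hypothetical in-memory stand-in).
memo_db: dict = {}

def node_embedding(executable_path: str, input_args: dict) -> str:
    """Encode the node's executable (here, its path) and its input
    arguments into a single hash-based embedding."""
    payload = json.dumps({"exe": executable_path, "args": input_args},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_or_reuse(executable_path: str, input_args: dict, run_node):
    """Reuse the stored output on an identical match; otherwise run the
    node and memoize its output for future workflow instances."""
    key = node_embedding(executable_path, input_args)
    if key in memo_db:
        return memo_db[key]        # node is skipped entirely
    output = run_node(input_args)  # node actually executes
    memo_db[key] = output
    return output
```

A second workflow instance reaching the same node with the same executable and arguments hits the database and never runs the node, which is the resource saving the claim recites.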
However, Li teaches the embedding being a single bitstring ([0025] For example, any cryptographic hash function that includes a suitable, well-defined procedure or mathematical function for mapping data of any arbitrary size (e.g., a string of symbols) to any fixed-size bit string can also be employed.).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan and Isard with the teachings of Li to promote efficiency (see Li [0037] For example, at the adapter trimming step 722, each workflow yields the same hash value ("SGJW"), and thus only one of these adapter trimming steps needs to be executed. The unexecuted steps may be marked as complete, i.e., no need for execution (shown in grey in FIG. 7B); [0030] This approach therefore improves the processor's efficiency in reconstructing the workflow and avoids re-executing one or more portions thereof).

Soundararajan, Isard, and Li fail to teach a configuration associated with the node including an option that indicates whether the embedding that is generated is used in fuzzy matching.

However, Edwards teaches a configuration associated with the node including an option that indicates whether the embedding that is generated is used in fuzzy matching ([0079] Thus, an exact correspondence between two fuzzy hashes is not required for them to match, but rather a degree of similarity is calculated between the fuzzy hashes of the process and the process model, or between the fuzzy hashes of respective structural features, and they are deemed to match if the similarity is within a predetermined distance. Thus, a fuzzy hash may act as a measure of content similarity).
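The distance-based matching just quoted from Edwards can be illustrated with bitstring embeddings and a Hamming-distance threshold. This is a sketch with invented data, not Edwards' actual algorithm:

```python
def hamming(a: str, b: str) -> int:
    """Count differing positions between two equal-length bitstrings."""
    return sum(x != y for x, y in zip(a, b))

def fuzzy_lookup(embedding: str, db: dict, max_distance: int):
    """Return the stored output of the closest embedding within the
    configurable distance threshold, or None when nothing is close enough."""
    best = min(db, key=lambda stored: hamming(stored, embedding), default=None)
    if best is not None and hamming(best, embedding) <= max_distance:
        return db[best]
    return None

db = {"10110010": "cached output A", "01000001": "cached output B"}
print(fuzzy_lookup("10110011", db, max_distance=1))  # "cached output A"
print(fuzzy_lookup("11111111", db, max_distance=1))  # None
```

Setting max_distance to 0 recovers identical matching, so one knob covers both match criteria the claims recite.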
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan, Isard, and Li with the teachings of Edwards to easily match executables (see Edwards [0082] Using fuzzy hash matching makes it possible to identify and match not only executable regions corresponding to libraries and executables loaded from a file, but also executable regions corresponding to dynamically generated strings of executable code and other executables which have not been loaded from a file, or which are not in the module list of the process and which could not otherwise easily be matched.).

As per claim 2, Soundararajan, Isard, Li, and Edwards teach the method of claim 1. Soundararajan teaches wherein the output data is retrieved from a storage storing output files and data associated with the embeddings in the database ([0050] tables (e.g., ROI table, memorization table, FE memorization predictor table) are described herein, it will be apparent to one of ordinary skill in the art that any suitable type of data storage structure may be used instead; [0043] Memorization table 310 includes one or more entries 312 for storing memorized function signatures. Each memorized function signature is associated with a function of a particular set of inputs, outputs, and context information. Each table entry 312 includes a plurality of fields to store information about the memorized function. According to an embodiment, the fields store information such as the tag 320, start program counter 322, end program counter 324, program context 326, memorized uop information 328, memory uops signature 330, and occurrence count 332. The tag 320 is used to identify the memorized function. In one embodiment, the tag is a hash 334 of the function's starting program counter, input signature, and output signature).

As per claim 3, Soundararajan, Isard, Li, and Edwards teach the method of claim 1.
Soundararajan teaches wherein the match criterion includes identical matching ([0045] compares a hash of the incoming call instruction's program counter and program context against tags 420 in the table 410 to obtain a match).

As per claim 4, Soundararajan, Isard, Li, and Edwards teach the method of claim 1. Edwards teaches wherein the match criterion includes fuzzy matching based on a configurable similarity threshold ([0079] an exact correspondence between two fuzzy hashes is not required for them to match, but rather a degree of similarity is calculated between the fuzzy hashes of the process and the process model, or between the fuzzy hashes of respective structural features, and they are deemed to match if the similarity is within a predetermined distance; [0082] Using fuzzy hash matching).

As per claim 6, Soundararajan, Isard, Li, and Edwards teach the method of claim 1. Soundararajan teaches wherein the embedding includes encoding of the node's metadata with the node's executable and the node's input ([0043] the tag is a hash 334 of the function's starting program counter, input signature, and output signature; [0045] the tag is a hash of the starting program counter and the program context values of a call instruction).

As per claim 7, Soundararajan, Isard, Li, and Edwards teach the method of claim 6. Soundararajan teaches wherein the node's metadata includes at least one container image name, container image hash and environment variables ([0045] the tag is a hash of the starting program counter and the program context values of a call instruction; [0043] The program context 326 is the branch information received from the branch prediction unit for identifying the application program that made the function call.).

As per claim 8, Soundararajan, Isard, Li, and Edwards teach the method of claim 1.
Soundararajan teaches wherein the output data is retrieved from a storage ([0043] Memorization table 310 includes one or more entries 312 for storing memorized function signatures. Each memorized function signature is associated with a function of a particular set of inputs, outputs, and context information.). Additionally, Li teaches a remote storage ([0043] In some embodiments, the source of read sequence data and/or a previously generated hash table is the remote storage device 814.).

As per claim 9, Soundararajan, Isard, Li, and Edwards teach the method of claim 1. Soundararajan teaches wherein the output data is retrieved from a local storage ([0101] a memory 1034, which may be portions of main memory locally attached to the respective processors; [0050] tables (e.g., ROI table, memorization table, FE memorization predictor table) are described herein, it will be apparent to one of ordinary skill in the art that any suitable type of data storage structure may be used instead; [0043] Memorization table 310 includes one or more entries 312 for storing memorized function signatures. Each memorized function signature is associated with a function of a particular set of inputs, outputs).

As per claim 13, Soundararajan, Isard, Li, and Edwards teach the method of claim 1. Soundararajan teaches wherein the output data is associated with a node from a different workflow different from the workflow ([0020] multiple instances of the same function call may be made from different locations within the same application or from different applications… when the same input parameters are used in different call to the same function (i.e. different instances of the same function call), the outputs produced across the different calls should be the same).

As per claim 15, it is a system claim of claim 1, so it is rejected for similar reasons. Additionally, Soundararajan teaches at least one processor; a storage device coupled with the at least one processor (Fig. 11; [0107] FIG.
11 illustrates that the processors 1070, 1080 may include integrated memory).

As per claim 16, it is a system claim of claim 3, so it is rejected for similar reasons. As per claim 17, it is a system claim of claim 4, so it is rejected for similar reasons. As per claim 19, it is a system claim of claim 6, so it is rejected for similar reasons. As per claim 20, it is a system claim of claim 7, so it is rejected for similar reasons.

As per claim 24, it is a computer program product claim of claim 1, so it is rejected for similar reasons. Additionally, Soundararajan teaches a computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions readable by at least one processor to cause the at least one processor to ([0112] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein; [0107] FIG. 11 illustrates that the processors 1070, 1080 may include integrated memory).

Claims 5 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Soundararajan, Isard, Li, and Edwards, as applied to claims 1 and 15 above, in view of Francois et al. (WO2007010100A2, hereinafter Francois). The mappings of Francois are made with a translation of WO2007010100A2. Francois was cited in a previous office action.

As per claim 5, Soundararajan, Isard, Li, and Edwards teach the method of claim 1. Soundararajan, Isard, Li, and Edwards fail to teach wherein the embedding is generated recursively by traversing nodes of the workflow upstream starting from the node and performing encoding at each node visited in the traversing, wherein the embedding associated with the node incorporates encodings of nodes visited in the traversing.
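The claim 5 limitation (an embedding that folds in encodings gathered by walking the workflow upstream) admits a compact sketch; the DAG and per-node payloads below are invented for illustration, not taken from the application:

```python
import hashlib

# Hypothetical workflow DAG: node -> upstream (parent) nodes, plus a
# per-node encoding payload (e.g. executable identity + inputs).
upstream = {"C": ["A", "B"], "B": ["A"], "A": []}
payload = {"A": "exe-a|in-a", "B": "exe-b|in-b", "C": "exe-c|in-c"}

def recursive_embedding(node: str) -> str:
    """Encode the node together with the encodings of every node visited
    while traversing the workflow upstream from it."""
    parent_codes = sorted(recursive_embedding(p) for p in upstream[node])
    data = payload[node] + "|" + "|".join(parent_codes)
    return hashlib.sha256(data.encode()).hexdigest()
```

Because each embedding incorporates its ancestors' encodings, any change to an upstream node changes the embedding of every downstream node, invalidating their cached outputs.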
However, Francois teaches wherein the embedding is generated recursively by traversing nodes of the workflow upstream starting from the node and performing encoding at each node visited in the traversing, wherein the embedding associated with the node incorporates encodings of nodes visited in the traversing ([0187] The addition of the child concept A to the father concept E simply consists of adding a hash table for the child concept A, this hash table becoming a leaf for the hierarchical generic data structure represented in Figure 5c or, more particularly, for the branch of the latter. The key assigned to the hash table of the child concept A is the numerical value 1 and the access path is then the concatenation of the previous access path of the parent node E and the key with value 1, the access path becoming 1121, as shown in Figure 5c; [0142] The tree data structure is itself constituted by a recursive hash table comprising as many entries as there are successive child nodes under the upper bound making it possible to establish a direct correspondence between an access path defined in the identifier associated with an element and the element considered.).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan, Isard, Li, and Edwards with the teachings of Francois to efficiently determine the relationships between nodes (see Francois [0121] All of the aforementioned properties are then particularly advantageous because they make it possible to efficiently and quickly find the fathers, sons, brothers and other concepts close to another concept and, finally, make it possible to determine whether one concept subsumes another or not.).

As per claim 18, it is a system claim of claim 5, so it is rejected for similar reasons.

Claims 10 and 21 are rejected under 35 U.S.C.
103 as being unpatentable over Soundararajan, Isard, Li, and Edwards, as applied to claims 1 and 15 above, in view of Meijer et al. (US 8108848 B2, hereinafter Meijer). Meijer was cited in the IDS filed on 03/12/2021.

As per claim 10, Soundararajan, Isard, Li, and Edwards teach the method of claim 1. Soundararajan, Isard, Li, and Edwards fail to teach further including authenticating a process that is retrieving the output data for security. However, Meijer teaches further including authenticating a process that is retrieving the output data for security (Col. 7 lines 53-62 Furthermore, custom function component 310 can aid program security. For example, arguments and/or return values can be encrypted or signed for non-repudiation and/or tamper proofing. Further, upon occurrence of one or more events or after a period of time, inter alia, a function can be made non-playable or playable. For instance, the number of times a function is played or executed can be memorized and utilized to allow or prevent execution. In essence, logical signatures of functions can look the same but extra security can be added in this manner.).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan, Isard, Li, and Edwards with the teachings of Meijer to improve security (see Meijer Col. 7 lines 53-54 Furthermore, custom function component 310 can aid program security). As per claim 21, it is a system claim of claim 10, so it is rejected for similar reasons.

Claims 11 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Soundararajan, Isard, Li, and Edwards, as applied to claims 1 and 15 above, in view of Khalvati et al. (US 9076240 B2, hereinafter Khalvati). Khalvati was cited in a previous office action.

As per claim 11, Soundararajan, Isard, Li, and Edwards teach the method of claim 1.
Soundararajan, Isard, Li, and Edwards fail to teach wherein outputs of multiple cached tasks with matching embeddings are retrieved and filtered based on a filter criterion. However, Khalvati teaches wherein outputs of multiple cached tasks with matching embeddings are retrieved and filtered based on a filter criterion (Col. 17 lines 13-16 the measure of similarity or comparability for pixel neighborhoods (windows) plays a significant role in the efficiency of the memorization module; Col. 7 lines 15-20 The measure of similarity and comparability is a tolerance for enabling reuse of a previous result for a similar, but not equal, new image processing task. For example, a low similarity (high comparability) requirement will allow higher reuse while a high similarity (low comparability) requirement will allow lower reuse; Col. 7 lines 61-64 The compression and hashing engine generates a string corresponding to the window and a reuse table location corresponding to the string, and checks whether a matching string is present at that location in the reuse table).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan, Isard, Li, and Edwards with the teachings of Khalvati to achieve a speedup (see Khalvati Col. 7 lines 20-23 To achieve high speedups, the hit rate should be maximized where the memoization overhead cost should be minimized. These are affected by defining an optimal measure of similarity and comparability). As per claim 22, it is a system claim of claim 11, so it is rejected for similar reasons.

Claims 12 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Soundararajan, Isard, Li, and Edwards, as applied to claims 1 and 15 above, in view of Bowers et al. (US 10395181 B2, hereinafter Bowers). Bowers was cited in the IDS filed on 03/12/2021.

As per claim 12, Soundararajan, Isard, Li, and Edwards teach the method of claim 1.
Soundararajan teaches using the output data in lieu of running the node ([0002] if a function's input parameters and output values are learned and captured in a table, repeated executions of the function can be avoided because the output values can simply be obtained from the table). Soundararajan, Isard, Li, and Edwards fail to teach wherein the output data is further post-processed before using the output data.

However, Bowers teaches wherein the output data is further post-processed before using the output data (Col. 3 lines 6-7 post-processing of output data; Col. 15 lines 14-25 For example, an input schema or an output schema can have a corresponding input summary generation schema or a corresponding output summary generation schema. An I/O summary generation schema can indicate how to sample, aggregate, and analyze a data set matching the corresponding I/O schema to produce a summary. This summary can include a set of one or more numbers and/or data strings, a table, or an illustration. For example, the I/O summary can include a bar graph, a line graph, a histogram, a pie chart, a learning curve, other graph or illustration type, or any combination thereof).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan, Isard, Li, and Edwards with the teachings of Bowers to compare expected results and actual results (see Bowers Col. 7 lines 62-65 Post-processing for analysis can include computing statistical measures, computing comparative measures (e.g., between the test results and expected results)).

As per claim 23, Soundararajan, Isard, Li, and Edwards teach the system of claim 15.
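The post-processing Bowers describes can be pictured as a summarization step between cache retrieval and the downstream node. A minimal sketch; the field names and aggregation are invented for illustration:

```python
def post_process(raw_output: dict) -> dict:
    """Aggregate retrieved output data into a summary (count/mean/max)
    before it is handed to the next node, instead of passing it through raw."""
    values = raw_output["values"]
    return {
        "count": len(values),
        "mean": sum(values) / len(values),
        "max": max(values),
    }

cached = {"values": [4, 8, 6]}   # output retrieved for a matching embedding
print(post_process(cached))      # {'count': 3, 'mean': 6.0, 'max': 8}
```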
Soundararajan teaches wherein outputs of multiple cached tasks with matching embeddings are retrieved and outputs associated with the multiple matching embeddings, to generate the output data before using the output data in lieu of running the node ([0002] if a function's input parameters and output values are learned and captured in a table, repeated executions of the function can be avoided because the output values can simply be obtained from the table; [0045] compares a hash of the incoming call instruction's program counter and program context against tags 420 in the table 410 to obtain a match; [0049] If the data match, at 612, instructions that are dependent on the execution of the instance is provided with the output data from the memoization table).

Soundararajan, Isard, Li, and Edwards fail to teach outputs are post-processed to generate the output data before using the output data. However, Bowers teaches outputs are post-processed to generate the output data before using the output data (Col. 3 lines 6-7 post-processing of output data; Col. 15 lines 14-25 For example, an input schema or an output schema can have a corresponding input summary generation schema or a corresponding output summary generation schema. An I/O summary generation schema can indicate how to sample, aggregate, and analyze a data set matching the corresponding I/O schema to produce a summary. This summary can include a set of one or more numbers and/or data strings, a table, or an illustration. For example, the I/O summary can include a bar graph, a line graph, a histogram, a pie chart, a learning curve, other graph or illustration type, or any combination thereof.).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan, Isard, Li, and Edwards with the teachings of Bowers to compare expected results and actual results (see Bowers Col.
7 lines 62-65 Post-processing for analysis can include computing statistical measures, computing comparative measures (e.g., between the test results and expected results)).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Soundararajan, Isard, Li, and Edwards, as applied to claim 1 above, in view of SevenBridges (About Memoization (WorkReuse)). SevenBridges was cited in the IDS filed on 03/12/2021.

As per claim 14, Soundararajan, Isard, Li, and Edwards teach the method of claim 1. Soundararajan teaches output files, which the node would create if the node were to be run on the processor ([0002] if a function's input parameters and output values are learned and captured in a table, repeated executions of the function can be avoided because the output values can simply be obtained from the table). Soundararajan, Isard, Li, and Edwards fail to teach wherein the output data includes intermediary files and output files.

However, SevenBridges teaches wherein the output data includes intermediary files and output files (paragraph 4 Once memoization is triggered and job outputs are reused in a new context, appropriate job workspace directory, with all intermediate files, will also be created (files are going to be copied, so you are not going to be charged twice for the same intermediate files) for the new job). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan, Isard, Li, and Edwards with the teachings of SevenBridges to have access to intermediate files that can be reused for repeated jobs (see SevenBridges Intermediate files: These files are, however, crucial for later executions of the jobs that are consuming/producing them, as the memoization mechanism will reuse them, instead of having to execute those jobs again).

Claim 25 is rejected under 35 U.S.C.
103 as being unpatentable over Soundararajan, in view of Isard, in view of SevenBridges, in view of Marquardt et al. (US 10055578 B1, hereinafter Marquardt), and further in view of Edwards. Marquardt was cited in a previous office action.

As per claim 25, Soundararajan teaches a computer-implemented method performed by at least one processor (Fig. 11, 1070, 1080 processors), the method comprising: generating a memoization embedding associated with a node in a workflow, the memoization embedding combining at least environment variables associated with the node and input data to the node, at least some of the input data being arguments to the node's executable that are values used by the node's executable during execution ([0043] the tag is a hash 334 of the function's starting program counter, input signature, and output signature. The input signature may be a list of the registers and memory locations, or their stored values, accessed by the function. Similarly, the output signature may be a list of registers or memory locations to which results produced by the function are stored. The starting program counter 322 and the output program counter 324 may be the memory address of the respective call and return uops that define the function block; [0020] A function call is a programming construct used in applications; [0043] The program context 326 is the branch information received from the branch prediction unit for identifying the application program that made the function call; [0046] Typically, the input values to the call instruction are provided by the instructions preceding the call instruction in the instruction stream; [0042] The signature may include the live-in and live-out values (i.e. inputs and outputs) along with other context information associated with the function call. The live-in values may include registers and memory locations accessed by the function. They may also include the actual load values stored in these locations. The instant specification recites in [0048] that embedding refers to a vector of hash-function outputs, and paragraph [0086] of the specification recites that the executable of the task includes a path to the executable);

retrieving from a database of embeddings, a matching embedding that matches the generated embedding according to a match criterion, the database of embeddings storing embeddings associated with previously run nodes; retrieving from a storage, output data associated with the matching embedding, the output data for use as the node's output, the output data including at least output that would be produced if the node in the workflow were to be run on the processor; and using the output data as input to another process in the workflow without running the node in the workflow on the at least one processor, thereby saving resources of the at least one processor by not running the node in the workflow (Fig. 6; [0045] compares a hash of the incoming call instruction's program counter and program context against tags 420 in the table 410 to obtain a match; [0049] If the data match, at 612, instructions that are dependent on the execution of the instance is provided with the output data from the memoization table; [0045] the output signature may be a list of registers or memory locations to which results produced by the function are stored; [0002] if a function's input parameters and output values are learned and captured in a table, repeated executions of the function can be avoided because the output values can simply be obtained from the table; [0043] a Memoization Table 210 for storage; [0031] When a sufficient confidence level associated with a function is reached, such as based on the number of occurrences, the entire body of the function, minus a few exceptions, are skipped by the pipeline. For example, when instance 3 108 of Function 1 entering the pipeline 100 is detected at 150, its function block is removed from the pipeline at 160. Instructions that depend on the data produced from the execution of Function 1 are provided with data from the stored live-outs obtained from instance 1 and/or instance 2's execution. Eliminating repeated instructions from the processing pipeline brings performance and power gains by saving time and resources that otherwise would have to be spent for their execution; [0043] The program context 326 is the branch information received from the branch prediction unit for identifying the application program that made the function call; [0046] Typically, the input values to the call instruction are provided by the instructions preceding the call instruction in the instruction stream; [0042] The signature may include the live-in and live-out values (i.e. inputs and outputs) along with other context information associated with the function call. The live-in values may include registers and memory locations accessed by the function. They may also include the actual load values stored in these locations).

Soundararajan fails to teach generating an embedding associated with a node in a workflow responsive to determining that the node in the workflow is in a ready state to run on the at least one processor; the embedding being a single bitstring that combines at least a container image name, container image hash; intermediary data the node in the workflow would create in producing the output if the node in the workflow were to be run on the processor; and a configuration associated with the node including an option that indicates whether the embedding that is generated is used in fuzzy matching.
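For readers less familiar with the mechanism at issue, the memoization flow recited in claim 25 (generate an embedding from the node's executable, environment variables, and input data; look it up in a database of embeddings; reuse the stored output on a match, in lieu of running the node) can be sketched in a few lines. This is a purely illustrative sketch, not the applicant's or any cited reference's implementation; the names `node_embedding`, `memo_db`, and `run_node` are assumptions:

```python
import hashlib
import json

def node_embedding(executable_path, env_vars, input_args):
    """Encode the node's executable path, environment variables, and
    input arguments into a single digest used as the embedding."""
    payload = json.dumps(
        {"exe": executable_path,
         "env": sorted(env_vars.items()),
         "args": input_args},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

# Database of embeddings for previously run nodes: embedding -> cached output.
memo_db = {}

def run_node(executable_path, env_vars, input_args, execute):
    """Return cached output when a matching embedding exists; otherwise
    run the node's executable and record its output for reuse."""
    key = node_embedding(executable_path, env_vars, input_args)
    if key in memo_db:
        return memo_db[key]  # reuse stored output in lieu of running the node
    output = execute(input_args)  # the node actually runs here
    memo_db[key] = output
    return output
```

Here the match criterion is exact digest equality; a fuzzy criterion would instead accept digests within a similarity threshold.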
However, Isard teaches generating an embedding associated with a node in a workflow responsive to determining that the node in the workflow is in a ready state to run on the at least one processor ([0049] encoding the set of the worker tasks that are ready to run; [0016] Each compute node may comprise a single computer with one or more processors and may run one or more applications; [0018] A task represents the execution of a single process or multiple processes on a compute node). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan with the teachings of Isard so that tasks ready to run can be accelerated (see Isard [0048] edges may encode other affinities, such as to nodes having particular resources that assist and/or accelerate a task's computation (e.g., particular data caches, computation units such as GPUs, etc.); [0049] encoding the set of the worker tasks that are ready to run and their preferred locations, as well as the running locations).

Soundararajan and Isard fail to teach the embedding being a single bitstring that combines at least a container image name, container image hash; intermediary data the node in the workflow would create in producing the output if the node in the workflow were to be run on the processor; and a configuration associated with the node including an option that indicates whether the embedding that is generated is used in fuzzy matching.
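Isard's "ready to run" condition is the usual dataflow notion that a workflow node becomes runnable once all of its upstream dependencies have completed. A minimal illustrative sketch (the `ready_nodes` helper and its dependency-map shape are assumptions, not drawn from Isard):

```python
def ready_nodes(deps, completed):
    """Return the workflow nodes whose upstream dependencies are all complete.

    deps: dict mapping each node to the set of nodes it depends on
    completed: set of nodes that have already finished running
    """
    return {
        node
        for node, upstream in deps.items()
        if node not in completed and upstream <= completed
    }
```

For a chain a -> b -> c, only a is ready initially; b becomes ready once a completes, and so on.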
However, SevenBridges teaches intermediary data the node in the workflow would create in producing the output if the node in the workflow were to be run on the processor (paragraph 4 Once memoization is triggered and job outputs are reused in a new context, appropriate job workspace directory, with all intermediate files, will also be created (files are going to be copied, so you are not going to be charged twice for the same intermediate files) for the new job; paragraph 2 If memoization is enabled, tasks will use pre-calculated results, instead of generating new ones. This, however, relies on the existence of intermediate files. Specifically, reuse of previous task results will be possible for the duration of that task's intermediate files retention; paragraph 5 Intermediate files are files that are created during the course of job execution). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan and Isard with the teachings of SevenBridges to have access to intermediate files that can be reused for repeated jobs (see SevenBridges Intermediate files: These files are, however, crucial for later executions of the jobs that are consuming/producing them, as the memoization mechanism will reuse them, instead of having to execute those jobs again).

Soundararajan, Isard, and SevenBridges fail to teach the embedding being a single bitstring that combines at least a container image name, container image hash, and a configuration associated with the node including an option that indicates whether the embedding that is generated is used in fuzzy matching. However, Marquardt teaches the embedding being a single bitstring that combines at least a container image name, container image hash (Col. 6 lines 3-11 The signature 130 may be calculated by the operating system 112 based on or over the inactive software container 114a.
The signature 130 may comprise a checksum value, a hash value, or some other digital value calculated based on the inactive software container 114a. The signature 130 may be calculated over the container artifacts 136. The signature 130 may be calculated over the application identity 132. The signature 130 may be calculated over the container artifacts 136 and the application identity 132; Col. 6 lines 20-21 the signature 130 may be created as a finite length bit string; Col. 6 lines 36-43 The request comprises the inactive software container 114a and information for executing one or more applications in the inactive software container 114a. The information for executing comprises an identification of the one or more applications. The information may further comprise an address of an executable image of the application and/or applications where the operating system 112 can fetch the application image or images.). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan, Isard, and SevenBridges with the teachings of Marquardt to create a unique signature (see Marquardt Col. 3 lines 50-52 The operating system may generate a signature that is uniquely or quasi-uniquely associated with a software container when it is created).

Soundararajan, Isard, SevenBridges, and Marquardt fail to teach a configuration associated with the node including an option that indicates whether the embedding that is generated is used in fuzzy matching.
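To make the bitstring and fuzzy-matching limitations concrete: a single fixed-length bitstring can be derived over a container image name and image hash, and a match criterion can then be either exact or fuzzy, accepting candidates whose bitstrings differ in at most a bounded number of bits. The sketch below is illustrative only; `container_signature` and `fuzzy_match` are assumed names, and a real fuzzy scheme would use a similarity-preserving ("fuzzy") hash rather than SHA-256, since cryptographic hashes do not preserve closeness of inputs:

```python
import hashlib

def container_signature(image_name, image_hash):
    """Combine a container image name and image hash into a single
    fixed-length bitstring (here, 256 bits from SHA-256)."""
    digest = hashlib.sha256(f"{image_name}@{image_hash}".encode()).digest()
    return "".join(f"{byte:08b}" for byte in digest)

def fuzzy_match(bits_a, bits_b, max_distance=0):
    """Exact match when max_distance == 0; fuzzy match otherwise,
    using Hamming distance between equal-length bitstrings."""
    distance = sum(a != b for a, b in zip(bits_a, bits_b))
    return distance <= max_distance
```

A per-node configuration option of the kind the claim recites would simply select `max_distance == 0` (exact) versus a positive threshold (fuzzy) when comparing embeddings.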
However, Edwards teaches a configuration associated with the node including an option that indicates whether the embedding that is generated is used in fuzzy matching ([0079] Thus, an exact correspondence between two fuzzy hashes is not required for them to match, but rather a degree of similarity is calculated between the fuzzy hashes of the process and the process model, or between the fuzzy hashes of respective structural features, and they are deemed to match if the similarity is within a predetermined distance. Thus, a fuzzy hash may act as a measure of content similarity). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined Soundararajan, Isard, SevenBridges, and Marquardt with the teachings of Edwards to easily match executables (see Edwards [0082] Using fuzzy hash matching makes it possible to identify and match not only executable regions corresponding to libraries and executables loaded from a file, but also executable regions corresponding to dynamically generated strings of executable code and other executables which have not been loaded from a file, or which are not in the module list of the process and which could not otherwise easily be matched.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HSING CHUN LIN whose telephone number is (571) 272-8522. The examiner can normally be reached Mon - Fri 9AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee Li, can be reached at (571) 272-4169.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H.L./Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195

Prosecution Timeline

Mar 12, 2021
Application Filed
Sep 22, 2023
Non-Final Rejection — §103, §112
Dec 07, 2023
Interview Requested
Dec 14, 2023
Examiner Interview Summary
Dec 22, 2023
Response Filed
Apr 03, 2024
Final Rejection — §103, §112
May 23, 2024
Interview Requested
May 30, 2024
Examiner Interview Summary
Jun 11, 2024
Response after Non-Final Action
Jul 09, 2024
Response after Non-Final Action
Jul 10, 2024
Applicant Interview (Telephonic)
Jul 11, 2024
Examiner Interview Summary
Jul 15, 2024
Request for Continued Examination
Jul 18, 2024
Response after Non-Final Action
Dec 15, 2024
Non-Final Rejection — §103, §112
Mar 06, 2025
Interview Requested
Mar 12, 2025
Applicant Interview (Telephonic)
Mar 12, 2025
Examiner Interview Summary
Mar 18, 2025
Response Filed
Jul 11, 2025
Final Rejection — §103, §112
Sep 15, 2025
Response after Non-Final Action
Oct 16, 2025
Request for Continued Examination
Oct 20, 2025
Response after Non-Final Action
Jan 10, 2026
Non-Final Rejection — §103, §112
Apr 01, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554523
REDUCING DEPLOYMENT TIME FOR CONTAINER CLONES IN COMPUTING ENVIRONMENTS
2y 5m to grant Granted Feb 17, 2026
Patent 12547458
PLATFORM FRAMEWORK ORCHESTRATION AND DISCOVERY
2y 5m to grant Granted Feb 10, 2026
Patent 12468573
ADAPTIVE RESOURCE PROVISIONING FOR A MULTI-TENANT DISTRIBUTED EVENT DATA STORE
2y 5m to grant Granted Nov 11, 2025
Patent 12461785
GRAPHIC-BLOCKCHAIN-ORIENTATED SHARDING STORAGE APPARATUS AND METHOD THEREOF
2y 5m to grant Granted Nov 04, 2025
Patent 12443425
ISOLATED ACCELERATOR MANAGEMENT INTERMEDIARIES FOR VIRTUALIZATION HOSTS
2y 5m to grant Granted Oct 14, 2025
Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
59%
Grant Probability
99%
With Interview (+79.8%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
