Prosecution Insights
Last updated: April 19, 2026
Application No. 18/214,748

Cross-Component Optimizing Compiler Systems

Status: Non-Final Office Action (§103)
Filed: Jun 27, 2023
Examiner: SOLTANZADEH, AMIR
Art Unit: 2191
Tech Center: 2100 — Computer Architecture & Software
Assignee: Advanced Micro Devices, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 81% (Favorable) • Expected OA Rounds: 3-4 • Expected Time to Grant: 2y 6m • Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 81% — above average (340 granted / 421 resolved; +25.8% vs. Tech Center average)
Interview Lift: strong, +16.9% for resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 35 applications currently pending
Career History: 456 total applications across all art units

Statute-Specific Performance

§101: 17.7% (-22.3% vs. TC average)
§103: 60.4% (+20.4% vs. TC average)
§102: 3.4% (-36.6% vs. TC average)
§112: 10.1% (-29.9% vs. TC average)
Tech Center averages are estimates • Based on career data from 421 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Allowable Subject Matter

Claim 16 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-8, 11, 13-14 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou (US 20230176840 A1) in view of Gao (US 20200293295 A1) further in view of Herr (US 20190324755 A1) and Leopoldseder (US 11392356 B1).

Regarding Claim 1, Zhou (US 20230176840 A1) teaches A compiler system comprising: a processor configured to execute instructions to implement: machine learning models to: receive components of source code to be compiled (Para 0068, receives an input program that defines a graph of operations (410).
As described above, a computation graph has nodes and edges, with each node in the graph representing an operation, which can be implemented in software by a particular operation module; Para 0069, The system can then provide the input program to a compiler optimization network having a graph-embedding network and a policy network) Examiner Comments: The nodes are interpreted as the claimed components; and generate component prediction functions for the components of the source code (Para 0071, The system generates an optimization action for each of one or more nodes encoded in the graph embedding representation (430). The system can use a policy network that is configured to generate an optimization action for each of one or more nodes encoded in the graph embedding representation) Examiner Comments: The optimization action is interpreted as the claimed prediction function; an optimizing algorithm to: receive the component prediction functions from the machine learning models (Para 0045, The compiler optimization network 200 includes a graph embedding network 210 followed by a policy network 220, which can be jointly trained in an end-to-end fashion.
The graph embedding network 210 can learn a graph representation h.sub.G of a computational graph, and the decision policy 220 can learn an optimization strategy p(y|G) over a given graph representation; Para 0046, The goal for the compiler optimization network can be to compute an action distribution for a node in the graph based on the actions of all previous nodes in an auto-regressive manner); compose a composite prediction function based on the component prediction functions (Para 0063, recurrent attention policy network can be used that not only applies a segment-level recurrent attention to the graph representation spatially, but also generates recurrent actions for multiple optimization problems through residual connections and parameter sharing across multiple recurrent attention layers; Para 0060, the segment-level recurrent attention is much faster than a LSTM-based method). Zhou did not specifically teach: select parameters for the components of the source code based on the composite prediction functions; domain-specific language compilers to compile the source code based on the selected parameters; and select parameters for the components of the source code based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application.
However, Gao (US 20200293295 A1) teaches select parameters for the components of the source code based on the composite prediction functions (Para 0038, the source code of the auto-tuning enabled software program, receiving, at the compiler system, a recommended optimization scheme selected from a plurality of optimization schemes by a machine learning module based on optimization parameters stored in a knowledge database, the optimization scheme comprising one or more optimizations to be performed on the source code of the auto-tuning enabled software program); and [domain-specific language compilers] to compile the source code based on the selected parameters (Para 0038, compiling, at a compiler system, the source code in accordance with the one or more optimizations of the optimization scheme to generate the executable file for execution on a processor having a particular architecture). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou’s teaching to Gao’s in order to optimize source code of software programs during compiling such that execution of the resulting executable file on a processor having a particular architecture achieves greater performance or efficiency than existing systems and approaches by generating sub executable file by compiling source code with optimization scheme and outputting optimized executable file based on main and sub executable file (Gao [Summary]). Zhou and Gao did not specifically teach domain-specific language compilers, an optimizing algorithm to: receive the component prediction functions from the machine learning models; compose a composite prediction function based on the component prediction functions select parameters for the components of the source code based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application. 
However, Herr (US 20190324755 A1) teaches domain-specific language compilers (Para 0104, The variant generator 302 compiles one or more variant binary files corresponding to algorithm(s) in a DSL representation from the code translator 303; Abstract, identifying the intermediate code as having a first algorithmic intent that corresponds to a second algorithmic intent of the annotated code, a domain specific language (DSL)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou and Gao’s teaching to Herr’s in order to improve the performance for the execution of the code by using a compiler to generate executable including variant binaries based on domain specific language code by domain specific language generator (Herr [Summary]). Zhou, Gao and Herr did not teach select parameters for the components of the source code based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application. However, Leopoldseder (US11392356B1) teaches select parameters for the components of the source code based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application (Col. 2, lines 44-67: “a feature may be extracted from an initial compilation graph for an optimization parameter. … Values of the optimization parameter may be applied to the initial compilation graph to generate multiple versions of the initial compilation graph. Each version of the initial compilation graph corresponds to a value of the optimization parameter.” Col. 5, lines 43-67: “The machine learning model (144) includes functionality to learn relationships between features, optimization parameter values, and performance metrics” Col. 
9, lines 45-55: “The machine learning model may predict the value of the optimization parameter with the highest assigned confidence level given the feature extracted” Col. 6, lines 20-40: “executing the versions of the initial compilation graph to obtain values of a performance metric, and selecting, as an optimized compilation graph and using the values of the performance metric, a version of the initial compilation graph.” Claim 19: “training the machine learning model by executing a plurality of versions … to obtain a plurality of values of a performance metric … the first training data further comprises the plurality of values of the first optimization parameter and the plurality of values of the performance metric.”) Examiner Comments: Leopoldseder teaches selecting optimization parameters for components (nodes/operations in the compilation graph) by using a machine learning prediction model (the “composite prediction function” formed from graph features across components) together with a performance metric as the objective function, explicitly to achieve application-level objectives such as reduced execution time or improved efficiency (one of a plurality of possible objectives like speed, power, memory). This directly maps because the graph represents the application’s components, the ML model predicts/composes outcomes across them, and selection is driven by the objective (performance metric) to optimize the overall application. 
It would have been obvious to a person of ordinary skill in the art before the effective filing date to modify Zhou, Gao and Herr’s teaching to Leopoldseder’s explicit use of a machine-learned prediction model to select parameters based on both the composed (graph-level) prediction and a performance objective function in order to enable more precise, data-driven selection of parameters that achieve end-to-end application objectives (e.g., overall performance, accuracy, or energy efficiency) rather than purely local decisions, as taught by Leopoldseder, which yields the predictable result of a more effective cross-component optimizing compiler, improving overall application performance as recognized in both references and the present application’s background.

Regarding Claim 2, Zhou, Gao, Herr and Leopoldseder teach The compiler system of claim 1. Zhou and Gao did not teach wherein the component prediction functions are generated after compiling the components of the source code into intermediate representations. However, Herr (US 20190324755 A1) teaches wherein the component prediction functions are generated after compiling the components of the source code into intermediate representations (Para 0044, The example code translator can lift an algorithm intent from the one or more annotated code blocks to a lifted intermediate representation (e.g., lifted intermediate code). The example code translator can lower the one or more code blocks in the lifted intermediate representation into a DSL representation). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou and Gao’s teaching to Herr’s in order to improve the performance for the execution of the code by using a compiler to generate executable including variant binaries based on domain specific language code by domain specific language generator (Herr [Summary]).
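For orientation, the claim 1 pipeline the rejection maps (per-component prediction functions composed into a composite prediction function, with parameters then selected against an objective function) can be sketched as below. This is an illustrative sketch only: the function names, the sum-based composition, and the toy cost models are assumptions for exposition, not taken from the application or the cited references.

```python
# Hypothetical sketch of the claim 1 flow: compose per-component prediction
# functions into a composite prediction, then pick the parameter set whose
# predicted cost best satisfies an objective function.
from typing import Callable, List

PredictionFn = Callable[[float], float]  # parameter value -> predicted cost

def compose(component_fns: List[PredictionFn]) -> Callable[[List[float]], float]:
    """Compose per-component predictions into one application-level prediction.
    Composition here is a simple sum of predicted per-component costs."""
    def composite(params: List[float]) -> float:
        return sum(fn(p) for fn, p in zip(component_fns, params))
    return composite

def select_parameters(composite: Callable[[List[float]], float],
                      candidates: List[List[float]],
                      objective: Callable[[float], float]) -> List[float]:
    """Select the candidate whose predicted cost minimizes the objective."""
    return min(candidates, key=lambda params: objective(composite(params)))

# Toy component models: predicted runtime of each component vs. a tile-size
# parameter (component A is fastest near 8, component B near 16).
fns = [lambda p: (p - 8) ** 2 + 1.0,
       lambda p: (p - 16) ** 2 + 2.0]
composite = compose(fns)
candidates = [[4, 4], [8, 16], [16, 8]]
best = select_parameters(composite, candidates, objective=lambda cost: cost)
print(best)  # [8, 16] minimizes the composed predicted cost
```

An objective other than raw cost (e.g., a weighted trade-off of speed and memory) would slot in as a different `objective` callable, which is how the "plurality of objectives" limitation reads on this shape.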
Regarding Claim 3, Zhou, Gao, Herr and Leopoldseder teach The compiler system of claim 2. Zhou and Gao did not teach wherein the intermediate representations are compiled using the domain-specific language compilers. However, Herr (US 20190324755 A1) teaches wherein the intermediate representations are compiled using the domain-specific language compilers (Para 0044, The example code translator can lift an algorithm intent from the one or more annotated code blocks to a lifted intermediate representation (e.g., lifted intermediate code). The example code translator can lower the one or more code blocks in the lifted intermediate representation into a DSL representation; Abstract, identifying the intermediate code as having a first algorithmic intent that corresponds to a second algorithmic intent of the annotated code, a domain specific language (DSL)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou and Gao’s teaching to Herr’s in order to improve the performance for the execution of the code by using a compiler to generate executable including variant binaries based on domain specific language code by domain specific language generator (Herr [Summary]).

Regarding Claim 7, Zhou, Gao, Herr and Leopoldseder teach The compiler system of claim 1. Zhou and Gao did not teach wherein the machine learning models are trained on training data generated by compiling, by the domain-specific language compilers, the components of the source code.
However, Herr (US 20190324755 A1) teaches wherein the machine learning models are trained on training data generated by compiling, by the domain-specific language compilers, the components of the source code (Para 0112, the variant generator 302 includes the cost model learner 404 to implement and/or otherwise facilitate the execution of one or more ML/AI techniques to generate trained ML/AI models associated with generating applications to be run on a heterogeneous system). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou and Gao’s teaching to Herr’s in order to improve the performance for the execution of the code by using a compiler to generate executable including variant binaries based on domain specific language code by domain specific language generator (Herr [Summary]).

Regarding Claim 8, Zhou, Gao, Herr and Leopoldseder teach The compiler system of claim 7. Zhou and Gao did not teach wherein the training data describe measured error and performance for the components of the source code with respect to different configurations of at least one tunable parameter of an individual component. However, Herr (US 20190324755 A1) teaches wherein the training data describe measured error and performance for the components of the source code with respect to different configurations of at least one tunable parameter of an individual component (Para 0082, the variant generator 302 analyzes the collected data and determines whether the variant used met a performance threshold. In some examples, training is performed until the performance threshold is met. For example, the performance threshold can correspond to an acceptable amount of L2 (least squares regression) error achieved for the selected aspect).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou and Gao’s teaching to Herr’s in order to improve the performance for the execution of the code by using a compiler to generate executable including variant binaries based on domain specific language code by domain specific language generator (Herr [Summary]).

Regarding Claim 11, Zhou (US 20230176840 A1) teaches A method comprising: generating, by a local optimizer of a compiler system, a plurality of candidate configurations for individual components of source code (Para 0068, receives an input program that defines a graph of operations (410). As described above, a computation graph has nodes and edges, with each node in the graph representing an operation, which can be implemented in software by a particular operation module; Para 0069, The system can then provide the input program to a compiler optimization network having a graph-embedding network and a policy network); generating, by the local optimizer, per-component prediction functions for the plurality of candidate configurations using machine learning models (Para 0071, The system generates an optimization action for each of one or more nodes encoded in the graph embedding representation (430). The system can use a policy network that is configured to generate an optimization action for each of one or more nodes encoded in the graph embedding representation); receiving, by the global optimizer, the per-component prediction functions from the machine learning models of the local optimizer (Para 0045, The compiler optimization network 200 includes a graph embedding network 210 followed by a policy network 220, which can be jointly trained in an end-to-end fashion.
The graph embedding network 210 can learn a graph representation h.sub.G of a computational graph, and the decision policy 220 can learn a optimization strategy p(y|G) over a given graph representation; Para 0046, The goal for the compiler optimization network can to compute an action distribution for a node in the graph based on the actions of all previous nodes in an auto-regressive manner); composing, by the global optimizer, a composite prediction function based on the per-component prediction functions (Para 0063, recurrent attention policy network can be used that not only applies a segment-level recurrent attention to the graph representation spatially, but also generates recurrent actions for multiple optimization problems through residual connections and parameter sharing across multiple recurrent attention layers; Para 0060, the segment-level recurrent attention is much faster than a LSTM-based method), and selecting the configurations based on the composite prediction function (Para 0071, The system generates an optimization action for each of one or more nodes encoded in the graph embedding representation (430). The system can use a policy network that is configured to generate an optimization action for each of one or more nodes encoded in the graph embedding representation; Para 0072, The system obtains an output optimization plan having one or more optimization actions for the input program (440). The optimization actions that are generated depends on the optimization problem being solved). 
Zhou did not specifically teach: selecting, by a global optimizer of the compiler system, configurations for the individual components of the source code based on the per-component prediction functions; outputting, [via domain language-specific compilers] of the compiler system, executable code for the individual components of the source code based on the selected configurations; and selecting the configurations based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application. However, Gao (US 20200293295 A1) teaches selecting, by a global optimizer of the compiler system, configurations for the individual components of the source code based on the per-component prediction functions (Para 0038, the source code of the auto-tuning enabled software program, receiving, at the compiler system, a recommended optimization scheme selected from a plurality of optimization schemes by a machine learning module based on optimization parameters stored in a knowledge database, the optimization scheme comprising one or more optimizations to be performed on the source code of the auto-tuning enabled software program); and outputting, [via domain language-specific compilers] of the compiler system, executable code for the individual components of the source code based on the selected configurations (Para 0038, compiling, at a compiler system, the source code in accordance with the one or more optimizations of the optimization scheme to generate the executable file for execution on a processor having a particular architecture).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou’s teaching to Gao’s in order to optimize source code of software programs during compiling such that execution of the resulting executable file on a processor having a particular architecture achieves greater performance or efficiency than existing systems and approaches by generating sub executable file by compiling source code with optimization scheme and outputting optimized executable file based on main and sub executable file (Gao [Summary]). Zhou and Gao did not teach: via domain language-specific compilers; and selecting the configurations based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application. However, Herr (US 20190324755 A1) teaches via domain language-specific compilers (Para 0104, The variant generator 302 compiles one or more variant binary files corresponding to algorithm(s) in a DSL representation from the code translator 303; Abstract, identifying the intermediate code as having a first algorithmic intent that corresponds to a second algorithmic intent of the annotated code, a domain specific language (DSL)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou and Gao’s teaching to Herr’s in order to improve the performance for the execution of the code by using a compiler to generate executable including variant binaries based on domain specific language code by domain specific language generator (Herr [Summary]). Zhou, Gao and Herr did not teach selecting the configurations based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application.
However, Leopoldseder (US11392356B1) teaches selecting the configurations based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application (Col. 2, lines 44-67: “a feature may be extracted from an initial compilation graph for an optimization parameter. … Values of the optimization parameter may be applied to the initial compilation graph to generate multiple versions of the initial compilation graph. Each version of the initial compilation graph corresponds to a value of the optimization parameter.” Col. 5, lines 43-67: “The machine learning model (144) includes functionality to learn relationships between features, optimization parameter values, and performance metrics” Col. 9, lines 45-55: “The machine learning model may predict the value of the optimization parameter with the highest assigned confidence level given the feature extracted” Col. 6, lines 20-40: “executing the versions of the initial compilation graph to obtain values of a performance metric, and selecting, as an optimized compilation graph and using the values of the performance metric, a version of the initial compilation graph.” Claim 19: “training the machine learning model by executing a plurality of versions … to obtain a plurality of values of a performance metric … the first training data further comprises the plurality of values of the first optimization parameter and the plurality of values of the performance metric.”) Examiner Comments: Leopoldseder teaches selecting optimization parameters for components (nodes/operations in the compilation graph) by using a machine learning prediction model (the “composite prediction function” formed from graph features across components) together with a performance metric as the objective function, explicitly to achieve application-level objectives such as reduced execution time or improved efficiency (one of a plurality of possible objectives like speed, power, 
memory). This directly maps because the graph represents the application’s components, the ML model predicts/composes outcomes across them, and selection is driven by the objective (performance metric) to optimize the overall application. It would have been obvious to a person of ordinary skill in the art before the effective filing date to modify Zhou, Gao and Herr’s teaching to Leopoldseder’s explicit use of a machine-learned prediction model to select parameters based on both the composed (graph-level) prediction and a performance objective function in order to enable more precise, data-driven selection of parameters that achieve end-to-end application objectives (e.g., overall performance, accuracy, or energy efficiency) rather than purely local decisions, as taught by Leopoldseder, which yields the predictable result of a more effective cross-component optimizing compiler, improving overall application performance as recognized in both references and the present application’s background.

Regarding Claim 13, Zhou, Gao, Herr and Leopoldseder teach The method of claim 11. Zhou and Gao did not teach wherein the plurality of candidate configurations for individual components of the source code comprises intermediate representations of respective individual components. However, Herr (US 20190324755 A1) teaches wherein the plurality of candidate configurations for individual components of the source code comprises intermediate representations of respective individual components (Para 0044, The example code translator can lift an algorithm intent from the one or more annotated code blocks to a lifted intermediate representation (e.g., lifted intermediate code). The example code translator can lower the one or more code blocks in the lifted intermediate representation into a DSL representation).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou and Gao’s teaching to Herr’s in order to improve the performance for the execution of the code by using a compiler to generate executable including variant binaries based on domain specific language code by domain specific language generator (Herr [Summary]).

Regarding Claim 14, Zhou, Gao, Herr and Leopoldseder teach The method of claim 13. Zhou and Gao did not teach wherein the intermediate representations of respective individual components are compiled by the domain language-specific compilers. However, Herr (US 20190324755 A1) teaches wherein the intermediate representations of respective individual components are compiled by the domain language-specific compilers (Para 0044, The example code translator can lift an algorithm intent from the one or more annotated code blocks to a lifted intermediate representation (e.g., lifted intermediate code). The example code translator can lower the one or more code blocks in the lifted intermediate representation into a DSL representation; Abstract, identifying the intermediate code as having a first algorithmic intent that corresponds to a second algorithmic intent of the annotated code, a domain specific language (DSL)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou and Gao’s teaching to Herr’s in order to improve the performance for the execution of the code by using a compiler to generate executable including variant binaries based on domain specific language code by domain specific language generator (Herr [Summary]).
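The selection loop Leopoldseder is cited for (apply each value of an optimization parameter, execute the resulting versions to obtain a performance metric, select the best version, and keep the measurements as training data for the prediction model) can be sketched as below. All names here are hypothetical, chosen for exposition; the sketch assumes a lower-is-better metric and a caller-supplied build-and-measure step.

```python
# Hedged sketch of an empirical tuning loop in the style the rejection
# attributes to Leopoldseder: try each parameter value, measure a
# performance metric, select the best, and record (value, metric) pairs
# that could later train a machine learning model.
from typing import Callable, List, Tuple

def tune(parameter_values: List[int],
         build_and_measure: Callable[[int], float]) -> Tuple[int, List[Tuple[int, float]]]:
    """Return the best parameter value and the measured training data.
    The metric is treated as lower-is-better (e.g., runtime)."""
    training_data = [(v, build_and_measure(v)) for v in parameter_values]
    best_value, _ = min(training_data, key=lambda pair: pair[1])
    return best_value, training_data

# Toy "compile and execute" step: pretend runtime is minimized at an
# unroll factor of 4 (purely illustrative, no real compilation happens).
measure = lambda unroll: abs(unroll - 4) + 1.0
best, data = tune([1, 2, 4, 8], measure)
print(best)  # 4
```

The `data` list is what claim 8's limitation (training data describing measured performance across configurations of a tunable parameter) would correspond to under this reading.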
Regarding Claim 17, Zhou (US 20230176840 A1) teaches A method comprising: generating, [by a domain language-specific compiler], configurations of a component of source code (Para 0068, receives an input program that defines a graph of operations (410). As described above, a computation graph has nodes and edges, with each node in the graph representing an operation, which can be implemented in software by a particular operation module; Para 0069, The system can then provide the input program to a compiler optimization network having a graph-embedding network and a policy network); estimating, by machine learning models, a prediction function for each configuration (Para 0071, The system generates an optimization action for each of one or more nodes encoded in the graph embedding representation (430). The system can use a policy network that is configured to generate an optimization action for each of one or more nodes encoded in the graph embedding representation); composing, by an optimizing algorithm, a composite prediction function based on the prediction function for each configuration, prediction functions of other components of the source code, and a data flow of the source code (Para 0063, recurrent attention policy network can be used that not only applies a segment-level recurrent attention to the graph representation spatially, but also generates recurrent actions for multiple optimization problems through residual connections and parameter sharing across multiple recurrent attention layers; Para 0060, the segment-level recurrent attention is much faster than a LSTM-based method), selecting, by the optimizing algorithm, one of the configurations of the component based on the composite prediction function (Para 0071, The system generates an optimization action for each of one or more nodes encoded in the graph embedding representation (430).
The system can use a policy network that is configured to generate an optimization action for each of one or more nodes encoded in the graph embedding representation; Para 0072, The system obtains an output optimization plan having one or more optimization actions for the input program (440). The optimization actions that are generated depends on the optimization problem being solved); and outputting, [by the domain language-specific compiler], executable code for the component using the parameter corresponding to the selected configuration (Para 0038, compiling, at a compiler system, the source code in accordance with the one or more optimizations of the optimization scheme to generate the executable file for execution on a processor having a particular architecture). Zhou did not specifically teach: each configuration including a difference in a parameter used for compiling the component; domain language-specific compiler; and selecting, by the optimizing algorithm, one of the configurations of the component based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application. However, Gao (US 20200293295 A1) teaches each configuration including a difference in a parameter used for compiling the component (Para 0007, a value of at least one performance parameter in the second optimization scheme is different from a value of the at least one performance parameter in the first optimization scheme).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou’s teaching to Gao’s in order to optimize source code of software programs during compiling such that execution of the resulting executable file on a processor having a particular architecture achieves greater performance or efficiency than existing systems and approaches by generating sub executable file by compiling source code with optimization scheme and outputting optimized executable file based on main and sub executable file (Gao [Summary]). Zhou and Gao did not specifically teach: domain language-specific compiler; and selecting, by the optimizing algorithm, one of the configurations of the component based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application. However, Herr (US 20190324755 A1) teaches domain language-specific compiler (Para 0104, The variant generator 302 compiles one or more variant binary files corresponding to algorithm(s) in a DSL representation from the code translator 303; Abstract, identifying the intermediate code as having a first algorithmic intent that corresponds to a second algorithmic intent of the annotated code, a domain specific language (DSL)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou and Gao’s teaching to Herr’s in order to improve the performance for the execution of the code by using a compiler to generate executable including variant binaries based on domain specific language code by domain specific language generator (Herr [Summary]).
Zhou, Gao and Herr did not teach selecting, by the optimizing algorithm, one of the configurations of the component based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application.

However, Leopoldseder (US 11392356 B1) teaches selecting, by the optimizing algorithm, one of the configurations of the component based on both the composite prediction function and an objective function to achieve at least one objective of a plurality of objectives for an application (Col. 2, lines 44-67: "a feature may be extracted from an initial compilation graph for an optimization parameter. … Values of the optimization parameter may be applied to the initial compilation graph to generate multiple versions of the initial compilation graph. Each version of the initial compilation graph corresponds to a value of the optimization parameter." Col. 5, lines 43-67: "The machine learning model (144) includes functionality to learn relationships between features, optimization parameter values, and performance metrics" Col. 9, lines 45-55: "The machine learning model may predict the value of the optimization parameter with the highest assigned confidence level given the feature extracted" Col.
6, lines 20-40: "executing the versions of the initial compilation graph to obtain values of a performance metric, and selecting, as an optimized compilation graph and using the values of the performance metric, a version of the initial compilation graph." Claim 19: "training the machine learning model by executing a plurality of versions … to obtain a plurality of values of a performance metric … the first training data further comprises the plurality of values of the first optimization parameter and the plurality of values of the performance metric.")

Examiner Comments: Leopoldseder teaches selecting optimization parameters for components (nodes/operations in the compilation graph) by using a machine learning prediction model (the "composite prediction function" formed from graph features across components) together with a performance metric as the objective function, explicitly to achieve application-level objectives such as reduced execution time or improved efficiency (one of a plurality of possible objectives such as speed, power, or memory). The mapping holds because the graph represents the application's components, the ML model predicts and composes outcomes across them, and selection is driven by the objective (performance metric) to optimize the overall application.
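For orientation only: the mechanism the Examiner maps above (per-component predictions composed into an application-level estimate, with a configuration chosen by an objective function) can be sketched as follows. All component names, prediction values, and the error budget are hypothetical stand-ins, not code from any cited reference.

```python
# Illustrative sketch (hypothetical): choosing per-component configurations
# using a composite prediction function together with an objective function.
from itertools import product

# Per-component candidate configurations (each differs in a compile parameter).
candidates = {
    "comp_a": [{"unroll": 1}, {"unroll": 4}],
    "comp_b": [{"precision": "fp32"}, {"precision": "fp16"}],
}

def predict_component(name, config):
    # Stand-in for a learned per-component prediction of (time, error).
    time = 1.0 / config.get("unroll", 1) if name == "comp_a" else (
        0.5 if config.get("precision") == "fp16" else 1.0)
    error = 0.01 if config.get("precision") == "fp16" else 0.0
    return time, error

def composite_prediction(assignment):
    # Compose per-component predictions into an application-level estimate.
    times, errors = zip(*(predict_component(n, c) for n, c in assignment.items()))
    return sum(times), max(errors)

def objective(total_time, max_error, error_budget=0.05):
    # Objective: minimize end-to-end time subject to an error budget.
    return total_time if max_error <= error_budget else float("inf")

def select(candidates):
    # Enumerate joint configurations; keep the one optimizing the objective.
    names = list(candidates)
    return min(
        (dict(zip(names, combo)) for combo in product(*candidates.values())),
        key=lambda a: objective(*composite_prediction(a)),
    )

best = select(candidates)
```

The point of the sketch is that selection is driven jointly by the composed prediction and the objective, rather than by any single component in isolation.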
It would have been obvious to a person of ordinary skill in the art before the effective filing date to modify Zhou's, Gao's and Herr's teachings with Leopoldseder's explicit use of a machine-learned prediction model to select parameters based on both the composed (graph-level) prediction and a performance objective function, in order to enable more precise, data-driven selection of parameters that achieve end-to-end application objectives (e.g., overall performance, accuracy, or energy efficiency) rather than purely local decisions. As taught by Leopoldseder, this yields the predictable result of a more effective cross-component optimizing compiler, improving overall application performance as recognized in both references and the present application's background.

Regarding Claim 18, Zhou, Gao, Herr and Leopoldseder teach The method of claim 17. Zhou did not teach wherein the source code defines an application, and wherein the at least one objective is an end-to-end performance objective of the application.

However, Gao (US 20200293295 A1) teaches wherein the source code defines an application, and wherein the at least one objective is an end-to-end performance objective of the application (Para 0053, By leveraging machine learning technologies, some embodiments disclosed herein may provide systems and methods for optimizing source code of software programs during compiling such that execution of the resulting executable file on a processor having a particular architecture achieves greater performance or efficiency than existing systems and approaches).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou's teachings with Gao's in order to optimize source code of software programs during compiling such that execution of the resulting executable file on a processor having a particular architecture achieves greater performance or efficiency than existing systems and approaches, by generating a sub-executable file by compiling the source code with an optimization scheme and outputting an optimized executable file based on the main and sub-executable files (Gao [Summary]).

Regarding Claim 19, Zhou, Gao, Herr and Leopoldseder teach The method of claim 18. Zhou did not specifically teach wherein the selecting, by the optimizing algorithm, the one of the configurations of the component based on the composite prediction function comprises executing, by the optimizing algorithm, a search space strategy to identify a configuration of the component, in combination with other configurations of the other components, that maximizes the at least one objective.

However, Gao teaches wherein the selecting, by the optimizing algorithm, the one of the configurations of the component based on the composite prediction function comprises executing, by the optimizing algorithm, a search space strategy to identify a configuration of the component, in combination with other configurations of the other components, that maximizes the at least one objective (Para 0085, Search driver 2080 is configured to retrieve optimization parameters 2083 from knowledge database 2025 and configure a search space and search strategies based on the optimization parameters to find a recommended optimization scheme 2095 from the various optimization schemes determined by decision maker 2070 for compiling the source code of the auto-tuning enabled software program).
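As a rough illustration of what a "search space strategy" over per-component configurations can look like in general (hypothetical names and a toy scoring function; not code from Gao or any cited reference), a greedy coordinate search over the configuration space might be sketched as:

```python
# Illustrative sketch (hypothetical): a greedy coordinate search that picks a
# configuration per component, in combination with the other components'
# configurations, to maximize a stand-in objective score.
def coordinate_search(space, score, rounds=3):
    # space: {component: [candidate configs]}; score: assignment -> float.
    assign = {name: opts[0] for name, opts in space.items()}
    for _ in range(rounds):
        for name, opts in space.items():
            # Re-pick this component's config holding the others fixed.
            assign[name] = max(opts, key=lambda o: score({**assign, name: o}))
    return assign

space = {"loop": ["unroll2", "unroll8"], "mem": ["cache", "stream"]}

def score(assignment):
    # Toy objective rewarding one particular combination of configs.
    return ("unroll8" in assignment.values()) + 2 * (assignment.get("mem") == "stream")

best = coordinate_search(space, score)
```

Exhaustive, random, or learned search strategies would slot into the same shape; only the traversal of the space changes.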
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou's teachings with Gao's in order to optimize source code of software programs during compiling such that execution of the resulting executable file on a processor having a particular architecture achieves greater performance or efficiency than existing systems and approaches, by generating a sub-executable file by compiling the source code with an optimization scheme and outputting an optimized executable file based on the main and sub-executable files (Gao [Summary]).

Claim(s) 4 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou (US 20230176840 A1) in view of Gao (US 20200293295 A1), Herr (US 20190324755 A1) and Leopoldseder (US 11392356 B1), further in view of Paul (US 20180025301 A1).

Regarding Claim 4, Zhou, Gao, Herr and Leopoldseder teach The compiler system of claim 1. Zhou, Gao, Herr and Leopoldseder did not teach wherein the component prediction functions estimate error and performance for the components of the source code with respect to values of the parameters.

However, Paul (US 20180025301 A1) teaches wherein the component prediction functions estimate error and performance for the components of the source code with respect to values of the parameters (Para 0008, wherein the plurality of realizations approximate errors of the selected task and a relative quality among the plurality of realizations of the selected task, defining a realization index based on the considered realization of the selected task of the linear workflow, analyzing a plurality of configurations for the selected task using a configuration analyzing module, wherein each of the plurality of configurations depends on a plurality of implementations and a plurality of physical parts of the heterogeneous network, estimating execution time for each of the analyzed plurality of configurations using a decision module).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou's, Gao's, Herr's and Leopoldseder's teachings with Paul's in order to optimize the quality of tasks in heterogeneous environments by selecting a particular task from multiple tasks in a linear workflow of a heterogeneous network (Paul [Summary]).

Regarding Claim 15, Zhou, Gao, Herr and Leopoldseder teach The method of claim 11. Zhou, Gao, Herr and Leopoldseder did not teach wherein the per-component prediction functions estimate an error for respective candidate configurations of the plurality of candidate configurations.

However, Paul (US 20180025301 A1) teaches wherein the per-component prediction functions estimate an error for respective candidate configurations of the plurality of candidate configurations (Para 0008, wherein the plurality of realizations approximate errors of the selected task and a relative quality among the plurality of realizations of the selected task, defining a realization index based on the considered realization of the selected task of the linear workflow, analyzing a plurality of configurations for the selected task using a configuration analyzing module, wherein each of the plurality of configurations depends on a plurality of implementations and a plurality of physical parts of the heterogeneous network, estimating execution time for each of the analyzed plurality of configurations using a decision module).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou's, Gao's, Herr's and Leopoldseder's teachings with Paul's in order to optimize the quality of tasks in heterogeneous environments by selecting a particular task from multiple tasks in a linear workflow of a heterogeneous network (Paul [Summary]).

Claim(s) 5, 12 and 20 is/are rejected under 35 U.S.C.
103 as being unpatentable over Zhou (US 20230176840 A1) in view of Gao (US 20200293295 A1), Herr (US 20190324755 A1) and Leopoldseder (US 11392356 B1), further in view of Yiftachel (US 8560827 B1).

Regarding Claim 5, Zhou, Gao, Herr and Leopoldseder teach The compiler system of claim 1. Zhou, Gao, Herr and Leopoldseder did not teach wherein the parameters include an approximation algorithm, an approximation level, an algorithmic setting, and a hardware configuration.

However, Yiftachel (US 8560827 B1) teaches wherein the parameters include an approximation algorithm, an approximation level, an algorithmic setting, and a hardware configuration (Col 8: ln 1-8, An optimization problem solver is chosen based on its suitability to solve the concrete problem. The optimization problem solver can be a linear or non-linear tool, it may be domain specific or specially tailored, an approximation algorithm and so forth. The choice also considers hardware resources available (memory, CPU) and the amount of time available in order to solve the optimization problem).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou's, Gao's, Herr's and Leopoldseder's teachings with Yiftachel's in order to automatically determine system configuration parameters by converting an abstract model into a concrete model based on inputs from an objective function adapter, a system behavior predictor and a system state adapter (Yiftachel [Summary]).

Regarding Claim 12, Zhou, Gao, Herr and Leopoldseder teach The method of claim 11. Zhou, Gao, Herr and Leopoldseder did not teach wherein each candidate configuration of the plurality of candidate configurations includes at least one different approximation algorithm, approximation level, or hardware configuration from other candidate configurations of the plurality of candidate configurations.
However, Yiftachel (US 8560827 B1) teaches wherein each candidate configuration of the plurality of candidate configurations includes at least one different approximation algorithm, approximation level, or hardware configuration from other candidate configurations of the plurality of candidate configurations (Col 8: ln 1-8, An optimization problem solver is chosen based on its suitability to solve the concrete problem. The optimization problem solver can be a linear or non-linear tool, it may be domain specific or specially tailored, an approximation algorithm and so forth. The choice also considers hardware resources available (memory, CPU) and the amount of time available in order to solve the optimization problem).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou's, Gao's, Herr's and Leopoldseder's teachings with Yiftachel's in order to automatically determine system configuration parameters by converting an abstract model into a concrete model based on inputs from an objective function adapter, a system behavior predictor and a system state adapter (Yiftachel [Summary]).

Regarding Claim 20, Zhou, Gao, Herr and Leopoldseder teach The method of claim 17. Zhou, Gao, Herr and Leopoldseder did not teach wherein the parameter is at least one of an approximation algorithm, an approximation level, or a hardware configuration.

However, Yiftachel (US 8560827 B1) teaches wherein the parameter is at least one of an approximation algorithm, an approximation level, or a hardware configuration (Col 8: ln 1-8, An optimization problem solver is chosen based on its suitability to solve the concrete problem. The optimization problem solver can be a linear or non-linear tool, it may be domain specific or specially tailored, an approximation algorithm and so forth.
The choice also considers hardware resources available (memory, CPU) and the amount of time available in order to solve the optimization problem).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou's, Gao's, Herr's and Leopoldseder's teachings with Yiftachel's in order to automatically determine system configuration parameters by converting an abstract model into a concrete model based on inputs from an objective function adapter, a system behavior predictor and a system state adapter (Yiftachel [Summary]).

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou (US 20230176840 A1) in view of Gao (US 20200293295 A1), Herr (US 20190324755 A1), Leopoldseder (US 11392356 B1) and Paul (US 20180025301 A1), further in view of Kinsman (US 20120023149 A1).

Regarding Claim 6, Zhou, Gao, Herr, Leopoldseder and Paul teach The compiler system of claim 4. Zhou, Gao, Herr, Leopoldseder and Paul did not teach wherein the component prediction functions estimate the error for the components of the source code by predicting magnitudes of output errors based on magnitudes of input errors.

However, Kinsman (US 20120023149 A1) teaches wherein the component prediction functions estimate the error for the components of the source code by predicting magnitudes of output errors based on magnitudes of input errors (Para 0151, the role of iterative analysis within the overall flow is to close the gap between forward propagation of input error, backward propagation of output error constraints and convergence constraints over the iterations).
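The forward error propagation idea cited here, predicting output error magnitudes from input error magnitudes across a chain of components, can be sketched purely illustratively (hypothetical component names and per-component error gains; not code from Kinsman):

```python
# Illustrative sketch (hypothetical): forward propagation of an input error
# magnitude through a pipeline of components, each with an assumed gain that
# stands in for a per-component error-prediction function.
def propagate(components, input_error):
    # components: list of (name, error_gain) pairs; gain > 1 amplifies error,
    # gain < 1 dampens it. Returns final error and a per-stage trace.
    err = input_error
    trace = []
    for name, gain in components:
        err = err * gain
        trace.append((name, err))
    return err, trace

pipeline = [("decode", 1.0), ("approx_fft", 2.0), ("quantize", 0.5)]
out_err, trace = propagate(pipeline, input_error=0.01)
```

A real system would replace the fixed gains with learned or analytically derived per-component predictions, but the output-error-from-input-error structure is the same.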
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou's, Gao's, Herr's, Leopoldseder's and Paul's teachings with Kinsman's in order to enable automated bit-width allocation for an application by determining the type of computation to be performed, the variables to be used in the computation, and the constraints on the computation (Kinsman [Summary]).

Claim(s) 9-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhou (US 20230176840 A1) in view of Gao (US 20200293295 A1), Herr (US 20190324755 A1) and Leopoldseder (US 11392356 B1), further in view of Chen (US 7529888 B2).

Regarding Claim 9, Zhou, Gao, Herr and Leopoldseder teach The compiler system of claim 1. Zhou, Gao, Herr and Leopoldseder did not teach wherein the optimizing algorithm further selects the parameters for the components of the source code based on an estimated error and performance for the components of the source code with respect to the objective function.

However, Chen (US 7529888 B2) teaches wherein the optimizing algorithm further selects the parameters for the components of the source code based on an estimated error and performance for the components of the source code with respect to an objective function (Claim 8, wherein the optimized application is an application optimized by a compiler to relax cache coherence of delay-tolerant global data by allowing a tolerable error rate to permit limited retrieval of older data from non-updated cache, thereby improving memory performance).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have combined Zhou's, Gao's, Herr's and Leopoldseder's teachings with Chen's so that sufficient data coherency is maintained while allowing for bounded errors, memory access latencies are reduced, and the throughput of domain-specific applications that are tolerant of errors caused by delayed updates of cached values is improved (Chen [Summary]).

Regarding Claim 10, Zhou, Gao, Herr, Leopoldseder and Chen teach The compiler system of Claim 9. Zhou, Gao, and Herr did not specifically teach wherein the objective function defines a performance metric for the application to achieve the at least one objective of the plurality of objectives.

However, Leopoldseder teaches wherein the objective function defines a performance metric for the application to achieve the at least one objective of the plurality of objectives (Col. 6, lines 20-40: "executing the versions of the initial compilation graph to obtain values of a performance metric, and selecting, as an optimized compilation graph and using the values of the performance metric, a version of the initial compilation graph." Claim 19: "training the machine learning model by executing a plurality of versions … to obtain a plurality of values of a performance metric … the first training data further comprises the plurality of values of the first optimization parameter and the plurality of values of the performance metric.")

Examiner Comments: Leopoldseder explicitly uses a performance metric (e.g., execution time or resource usage) as the objective function that drives selection of the best configuration for the overall application, directly satisfying the limitation of defining a performance metric to achieve application-level objectives.
It would have been obvious to a person of ordinary skill in the art before the effective filing date to modify Zhou's, Gao's and Herr's teachings with Leopoldseder's explicit use of a machine-learned prediction model to select parameters based on both the composed (graph-level) prediction and a performance objective function, in order to enable more precise, data-driven selection of parameters that achieve end-to-end application objectives (e.g., overall performance, accuracy, or energy efficiency) rather than purely local decisions. As taught by Leopoldseder, this yields the predictable result of a more effective cross-component optimizing compiler, improving overall application performance as recognized in both references and the present application's background.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the arguments do not apply to the previously cited sections of the references used in the previous Office action. The current Office action now cites additional references to address the newly added claim limitations.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR SOLTANZADEH, whose telephone number is (571) 272-3451. The examiner can normally be reached M-F, 9am - 5pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wei Mui, can be reached at (571) 272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMIR SOLTANZADEH/
Examiner, Art Unit 2191

/WEI Y MUI/
Supervisory Patent Examiner, Art Unit 2191

Prosecution Timeline

Jun 27, 2023
Application Filed
Mar 12, 2025
Non-Final Rejection — §103
Jul 07, 2025
Examiner Interview Summary
Jul 07, 2025
Applicant Interview (Telephonic)
Jul 21, 2025
Response Filed
Oct 01, 2025
Final Rejection — §103
Jan 13, 2026
Examiner Interview Summary
Jan 13, 2026
Applicant Interview (Telephonic)
Feb 06, 2026
Request for Continued Examination
Feb 18, 2026
Response after Non-Final Action
Feb 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602225
IDENTIFYING THE TRANSLATABILITY OF HARD-CODED STRINGS IN SOURCE CODE VIA POS TAGGING
2y 5m to grant Granted Apr 14, 2026
Patent 12591414
CENTRALIZED INTAKE AND CAPACITY ASSESSMENT PLATFORM FOR PROJECT PROCESSES, SUCH AS WITH PRODUCT DEVELOPMENT IN TELECOMMUNICATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12561134
Function Code Extraction
2y 5m to grant Granted Feb 24, 2026
Patent 12561136
METHOD, APPARATUS, AND SYSTEM FOR OUTPUTTING SOFTWARE DEVELOPMENT INSIGHT COMPONENTS IN A MULTI-RESOURCE SOFTWARE DEVELOPMENT ENVIRONMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12561118
SYSTEM AND METHOD FOR AUTOMATED TECHNOLOGY MIGRATION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
98%
With Interview (+16.9%)
2y 6m
Median Time to Grant
High
PTA Risk
Based on 421 resolved cases by this examiner. Grant probability derived from career allow rate.
