Prosecution Insights
Last updated: April 19, 2026
Application No. 18/399,543

AI-BASED TECHNIQUES FOR GUIDING AN INSTRUCTION SCHEDULER

Status: Non-Final OA (§103)
Filed: Dec 28, 2023
Examiner: GOORAY, MARK A
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: Advanced Micro Devices, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 305 granted / 400 resolved; +21.3% vs TC avg)
Interview Lift: +63.3% (strong; allowance lift in resolved cases with interview)
Typical Timeline: 3y 11m avg prosecution; 23 currently pending
Career History: 423 total applications across all art units

Statute-Specific Performance

§101: 20.4% (-19.6% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 400 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7-9, and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (WO 2020/142195 A1) and further in view of Gao et al. (US 2020/0293295 A1).

As per claim 1, Li et al. teaches the invention as claimed including, “A computer-implemented compilation method comprising: scheduling a basic block of a computer program, including: obtaining first and second representations of the basic block;” Li et al. teaches a compiler can generate multiple basic blocks of instructions in an intermediate language, where each basic block can include a collection of intermediate instructions that correspond to a sequence of source code instructions without a branch (0012).
Compiler can include a basic block generating component for generating, from a listing of source code, basic blocks of intermediate language instructions. Source code can include text drafted in a programming language according to a specified syntax, where the compiler can interpret the source code and generate corresponding machine language understood by the processor and/or operating system (0017). The basic block generation component can generate one or more basic blocks from the source code. Each basic block is a sequence of instructions corresponding to source code (0021).

“selecting K instruction scheduling procedures from a set of N instruction scheduling procedures, wherein the selecting of the K instruction scheduling procedures is based on analysis of the first representation of the basic block by one or more models, wherein 1 < K < N, and wherein N is at least 2; generating K candidate schedules of the basic block, wherein generating the K candidate schedules includes applying the K instruction scheduling procedures to the second representation of the basic block, and ordering a plurality of instructions of the second representation of the basic block in accordance with a candidate schedule included in the K candidate schedules of the basic block;”

Li et al. teaches, for each basic block of intermediate language instructions, the compiler can determine a set of heuristics corresponding to each of multiple possible optimizations that can be performed for the given basic block to schedule the corresponding instructions within the basic block (0012). The set of heuristics can correspond to an estimated performance metric for the instruction scheduling based on the given optimization for the basic block. Based on the set of heuristics, the compiler can schedule the instruction sequence within each basic block (0012).
Compiler includes a heuristic determining component for determining heuristics associated with applying optimizations to each of the basic blocks generated from the source code. It can determine the heuristics as one or more performance metrics related to applying one of multiple possible optimizations to the basic blocks to schedule corresponding machine language instructions. Compiler may include an instruction scheduling component for scheduling machine language instructions within the basic block (e.g., scheduling the sequence of instructions within the basic block) based on determining which of the optimizations for a given basic block is most optimal (0019). Compiler can apply different optimizations, in this regard, in scheduling machine language instructions corresponding to the source code to achieve the selected optimization (0022). Also see 0024. Based on the first heuristics and the second heuristics, one of the first plurality of optimizations can be applied to the first basic block to schedule first instructions for the first basic block. Scheduling the instructions can refer to scheduling the sequence of instructions to be executed within each given basic block, where the sequence of instructions is scheduled to achieve the corresponding optimization (0025). Instruction scheduling component can determine which combination of optimizations for each basic block allows for achieving the overall optimization goal and can accordingly select the optimizations for generating or reordering scheduled instructions for the basic block (0028-0029).

“generating a portion of target code of the computer program based on the second representation of the basic block; and outputting the portion of the target code of the computer program.”

Li et al. further teaches, the compiler can generate the intermediate language including the first instructions for the first basic block and the second instructions for the second basic block.
The scheduling component can apply the optimizations to each basic block to generate a sequence of scheduled instructions to achieve the optimizations, and the compiler can combine the basic blocks into intermediate language for executing the corresponding software application (0029).

However, Li et al. does not explicitly appear to teach, “selecting K instruction scheduling procedures from a set of N instruction scheduling procedures, wherein the selecting of the K instruction scheduling procedures is based on analysis of the first representation of the basic block by one or more models, wherein 1 < K < N, and wherein N is at least 2; generating K candidate schedules of the basic block, wherein generating the K candidate schedules includes applying the K instruction scheduling procedures to the second representation of the basic block, and ordering a plurality of instructions of the second representation of the basic block in accordance with a candidate schedule included in the K candidate schedules of the basic block; generating a portion of target code of the computer program based on the second representation of the basic block; and”

Gao et al. teaches, the compiler software system is configured to determine an optimization scheme for the source code of the software program using machine learning techniques based on various types of information obtained from one or more previous compiling iterations of the same or similar code (first representation) (0057-0058). Also see 0054. The automatically generated optimization scheme is subsequently implemented by the compiler when compiling the source code (second representation) of the software program in order to generate a second executable file. The process is repeated until a specific performance target is hit (0059).
The compiler system uses machine learning to analyze the various optimization schemes determined by the decision maker and determine a recommended optimization scheme for optimizing the source code of the software program for execution on a particular processor (0087). Machine learning model obtains various data from the knowledge database and determines a recommended optimization scheme. The recommended optimization scheme may include exact locations of optimization to be applied to source code of the auto-tuning enabled software program (0108). Compiler then receives the recommended optimization scheme to guide any subsequent iterations of compiling the source code of the auto-tuning enabled software program (0109). Machine learning module (model) is operable to determine a recommended optimization scheme for the compiler to apply at the next iteration of compiling the source code of the software program using the mapping function. In some embodiments, a previously used optimization scheme already stored in the knowledge database may be determined to be the recommended optimization scheme (from first representation) (0113). As shown in figure 7, machine learning module receives dynamic data and a compilation report for the source code (first representation) of a software program and outputs a recommended optimization scheme using the function of the machine learning module. The recommended optimization scheme is then used by the compiler when subsequently compiling the next iteration of the source code (second representation) of the software program to generate an optimized executable file (0114). Compiler is configured to compile the source code of the auto-tuning enabled software program in accordance with an optimization scheme to generate an executable file for a processor having a particular architecture, and outputs the executable file for execution on a processor (0106). Also see 0112.
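For orientation, the claim-1 flow at issue (model-guided selection of K of N scheduling procedures, generation of K candidate schedules from the second representation, then ordering the block per one candidate) can be sketched as below. This is an editorial illustration only: every name (PROCEDURES, model_scores, estimated_cost) and every heuristic is a hypothetical stand-in, not anything disclosed in Li, Gao, or the application itself.

```python
from typing import Callable, List, Sequence

Instr = str
Schedule = List[Instr]

# Toy stand-ins for the N instruction scheduling procedures (N = 3).
def sched_source_order(block: Sequence[Instr]) -> Schedule:
    return list(block)

def sched_reverse(block: Sequence[Instr]) -> Schedule:
    return list(reversed(block))

def sched_sorted(block: Sequence[Instr]) -> Schedule:
    return sorted(block)

PROCEDURES: List[Callable[[Sequence[Instr]], Schedule]] = [
    sched_source_order, sched_reverse, sched_sorted,
]

def model_scores(first_repr: Sequence[Instr]) -> List[float]:
    # Stand-in for the claimed "one or more models": one score per
    # procedure, derived from the first (machine-independent) form.
    n = float(len(first_repr))
    return [1.0, n, -n]

def estimated_cost(schedule: Schedule) -> int:
    # Toy cost used to pick among the K candidate schedules.
    return sum(1 for a, b in zip(schedule, schedule[1:]) if a > b)

def schedule_block(first_repr: Sequence[Instr],
                   second_repr: Sequence[Instr], k: int) -> Schedule:
    # Select the K best-scoring procedures (1 < K < N) ...
    scores = model_scores(first_repr)
    top_k = sorted(range(len(PROCEDURES)), key=lambda i: scores[i],
                   reverse=True)[:k]
    # ... apply them to the second representation to generate K
    # candidate schedules, then order the block per one candidate.
    candidates = [PROCEDURES[i](second_repr) for i in top_k]
    return min(candidates, key=estimated_cost)
```

The point of contention is the middle step: the examiner reads Gao's ML-recommended optimization scheme onto the model-guided selection, while Li supplies the per-block heuristic scheduling.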
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Li et al. with Gao et al. because both teach determining optimizations, including scheduling, to perform on source code. Li et al. teaches determining an optimization scheme to perform optimization and schedule instructions within blocks of source code. Gao et al. teaches the use of machine learning in order to determine optimizations to perform on source code. Gao et al. teaches using information from previous/history compilations (first representations) to determine optimizations to perform on source code (second representation). Both Li et al. and Gao et al. are able to determine optimizations and generate executable code from the optimized source code. The simple substitution of the machine learning from Gao et al. would allow Li et al. to make a more knowledgeable selection of optimizations to perform and therefore would have been obvious to try.

As per claim 2, Li et al. and Gao et al. further teach, “The method of claim 1, wherein the target code comprises object code executable by a central processing unit (CPU), an application processing unit (APU), an accelerated processing unit (APU), an inference processing unit (IPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), programmable logic device (PLD), system-on-a-chip (SoC), network interface controller (NIC), data processing unit (DPU), data transform unit (DTU), a hardware accelerator, and/or a mobile processor.” See Li et al. paragraphs 0017, 0032, 0038 and figure 1. See Gao et al. 0061, executable on a processor (0064-0065). Also see 0078-0079 and 0106.

As per claim 3, Li et al. and Gao et al.
further teach, “The method of claim 2, further comprising executing the object code by the central processing unit (CPU), the application processing unit (APU), the accelerated processing unit (APU), the inference processing unit (IPU), the graphics processing unit (GPU), the tensor processing unit (TPU), the field-programmable gate array (FPGA), programmable logic device (PLD), system-on-a-chip (SoC), network interface controller (NIC), data processing unit (DPU), data transform unit (DTU), the hardware accelerator, and/or the mobile processor.” See Li et al. paragraphs 0017, 0032, 0038 and figure 1. See Gao et al. 0061, executable on a processor (0064-0065). Also see 0078-0079 and 0106.

As per claim 4, Li et al. further teaches, “The method of claim 1, wherein the basic block is a first basic block, wherein scheduling the first basic block is performed by a first instruction scheduler of a compiler, and wherein scheduling a second basic block of the computer program includes, by a second instruction scheduler of the compiler: generating N candidate schedules of the second basic block, wherein generating the N candidate schedules includes applying the N instruction scheduling procedures to a representation of the second basic block; selecting a candidate schedule from the N candidate schedules of the second basic block based on an analysis of the generated N candidate schedules of the second basic block; and ordering a plurality of instructions of the representation of the second basic block in accordance with the selected candidate schedule.” Li et al. teaches a compiler can generate multiple basic blocks of instructions in an intermediate language, where each basic block can include a collection of intermediate instructions that correspond to a sequence of source code instructions without a branch (0012). Also see 0021.
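The second instruction scheduler recited in claim 4, just quoted, differs from the claim-1 scheduler in that it applies all N procedures and selects a schedule only afterward, based on analysis of the generated candidates. A minimal editorial sketch of that exhaustive structure; the names and the toy heuristics are hypothetical, not taken from Li or Gao:

```python
from typing import Callable, List, Sequence

Instr = str
Schedule = List[Instr]

# Toy stand-ins for the N instruction scheduling procedures (N = 3).
def sched_source_order(block: Sequence[Instr]) -> Schedule:
    return list(block)

def sched_reverse(block: Sequence[Instr]) -> Schedule:
    return list(reversed(block))

def sched_sorted(block: Sequence[Instr]) -> Schedule:
    return sorted(block)

PROCEDURES: List[Callable[[Sequence[Instr]], Schedule]] = [
    sched_source_order, sched_reverse, sched_sorted,
]

def analyze(schedule: Schedule) -> int:
    # Toy analysis: fewer out-of-order adjacent pairs scores better.
    return sum(1 for a, b in zip(schedule, schedule[1:]) if a > b)

def schedule_block_exhaustive(second_repr: Sequence[Instr]) -> Schedule:
    # Generate N candidate schedules by applying every procedure,
    # select one based on analysis of all N candidates, and order the
    # block's instructions in accordance with the selected candidate.
    candidates = [p(second_repr) for p in PROCEDURES]
    return min(candidates, key=analyze)
```

The examiner maps this onto Li's per-block heuristic evaluation, where heuristics for every possible optimization are computed before one is applied.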
Compiler can include a basic block generating component for generating, from a listing of source code, basic blocks of intermediate language instructions. Source code can include text drafted in a programming language according to a specified syntax, where the compiler can interpret the source code and generate corresponding machine language understood by the processor and/or operating system (0017). For each basic block of intermediate language instructions, the compiler can determine a set of heuristics corresponding to each of multiple possible optimizations that can be performed for the given basic block to schedule the corresponding instructions within the basic block (0012). The set of heuristics can correspond to an estimated performance metric for the instruction scheduling based on the given optimization for the basic block. Based on the set of heuristics, the compiler can schedule the instruction sequence within each basic block (0012). Based on the first heuristics and the second heuristics, one of the first plurality of optimizations can be applied to the first basic block to schedule first instructions for the first basic block, and one of the second plurality of optimizations can be applied to the second basic block to schedule second instructions for the second basic block. Scheduling the instructions can refer to scheduling the sequence of instructions to be executed within each given basic block, where the sequence of instructions is scheduled to achieve the corresponding optimization (0025). The compiler can generate the intermediate language including the first instructions for the first basic block and the second instructions for the second basic block. The scheduling component can apply the optimizations to each basic block to generate a sequence of scheduled instructions to achieve the optimizations, and the compiler can combine the basic blocks into intermediate language for executing the corresponding software application (0029).

As per claim 7, Li et al.
and Gao et al. further teach, “The method of claim 1, wherein the selecting of the K instruction scheduling procedures from the set of N instruction scheduling procedures is performed after obtaining the analysis of the first representation of the basic block by the one or more models and before the generating of the K candidate schedules of the basic block.” Li et al. teaches a compiler can generate multiple basic blocks of instructions in an intermediate language, where each basic block can include a collection of intermediate instructions that correspond to a sequence of source code instructions without a branch (0012). Also see 0021. Compiler can include a basic block generating component for generating, from a listing of source code, basic blocks of intermediate language instructions. Source code can include text drafted in a programming language according to a specified syntax, where the compiler can interpret the source code and generate corresponding machine language understood by the processor and/or operating system (0017). For each basic block of intermediate language instructions, the compiler can determine a set of heuristics corresponding to each of multiple possible optimizations that can be performed for the given basic block to schedule the corresponding instructions within the basic block (0012). The set of heuristics can correspond to an estimated performance metric for the instruction scheduling based on the given optimization for the basic block. Based on the set of heuristics, the compiler can schedule the instruction sequence within each basic block (0012). Based on the first heuristics and the second heuristics, one of the first plurality of optimizations can be applied to the first basic block to schedule first instructions for the first basic block, and one of the second plurality of optimizations can be applied to the second basic block to schedule second instructions for the second basic block.
Scheduling the instructions can refer to scheduling the sequence of instructions to be executed within each given basic block, where the sequence of instructions is scheduled to achieve the corresponding optimization (0025). The compiler can generate the intermediate language including the first instructions for the first basic block and the second instructions for the second basic block. The scheduling component can apply the optimizations to each basic block to generate a sequence of scheduled instructions to achieve the optimizations, and the compiler can combine the basic blocks into intermediate language for executing the corresponding software application (0029). Gao et al. teaches, the compiler software system is configured to determine an optimization scheme for the source code of the software program using machine learning techniques based on various types of information obtained from one or more previous compiling iterations of the same or similar code (first representation) (0057-0058). Also see 0054. The automatically generated optimization scheme is subsequently implemented by the compiler when compiling the source code (second representation) of the software program in order to generate a second executable file. The process is repeated until a specific performance target is hit (0059). The compiler system uses machine learning to analyze the various optimization schemes determined by the decision maker and determine a recommended optimization scheme for optimizing the source code of the software program for execution on a particular processor (0087). Machine learning model obtains various data from the knowledge database and determines a recommended optimization scheme. The recommended optimization scheme may include exact locations of optimization to be applied to source code of the auto-tuning enabled software program (0108).
Compiler then receives the recommended optimization scheme to guide any subsequent iterations of compiling the source code of the auto-tuning enabled software program (0109). Machine learning module (model) is operable to determine a recommended optimization scheme for the compiler to apply at the next iteration of compiling the source code of the software program using the mapping function. In some embodiments, a previously used optimization scheme already stored in the knowledge database may be determined to be the recommended optimization scheme (from first representation) (0113). As shown in figure 7, machine learning module receives dynamic data and a compilation report for the source code (first representation) of a software program and outputs a recommended optimization scheme using the function of the machine learning module. The recommended optimization scheme is then used by the compiler when subsequently compiling the next iteration of the source code (second representation) of the software program to generate an optimized executable file (0114). Compiler is configured to compile the source code of the auto-tuning enabled software program in accordance with an optimization scheme to generate an executable file for a processor having a particular architecture, and outputs the executable file for execution on a processor (0106). Also see 0112.

As per claim 8, Li et al. and Gao et al. further teach, “The method of claim 1, wherein the first representation of the basic block is encoded in a machine-independent intermediate representation (IR) of a compiler, and wherein the second representation of the computer program is encoded in a machine-dependent IR of the compiler, in a target language associated with the compiler, or in an instruction set of a target processor.” Li et al.
teaches a compiler can generate multiple basic blocks of instructions in an intermediate language, where each basic block can include a collection of intermediate instructions that correspond to a sequence of source code instructions without a branch (0012). For each basic block of intermediate language instructions, the compiler can determine a set of heuristics corresponding to each of multiple possible optimizations that can be performed for the given basic block to schedule the corresponding instructions within the basic block (0012). Gao et al. teaches the compiler software system is configured to determine an optimization scheme for the source code of the software program using machine learning techniques based on various types of information obtained from one or more previous compiling iterations of the same or similar code (0057). The compiler includes a high-level optimizer (which may make optimizations that are processor independent and/or processor architecture independent) and a low-level optimizer (which may make optimizations that are dependent on characteristics of the processor and/or the processor architecture of the processor) (0072). Also see 0055.

As per claim 9, Gao et al. further teaches, “The method of claim 1, wherein scheduling the basic block further includes obtaining the analysis of the first representation of the basic block, including: generating one or more features based on the first representation of the basic block; providing the one or more features as inputs to the one or more models; and obtaining an output of the one or more models, the output indicating the K instruction scheduling procedures.” Once the function of the machine learning module is generated, the machine learning module is operable to determine a recommended optimization scheme for the compiler to apply at the next iteration of compiling the source code of the auto-tuning enabled software program using the mapping function.
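Claim 9, quoted above, recites a feature pipeline: features are generated from the first representation, provided to the models, and the models' output indicates the K procedures. An editorial sketch of that pipeline follows; the feature names and the placeholder model are invented for illustration and do not come from Gao:

```python
from typing import Dict, List, Sequence

def extract_features(first_repr: Sequence[str]) -> Dict[str, float]:
    # Invented features of the block's first (machine-independent) form.
    return {
        "num_instrs": float(len(first_repr)),
        "num_loads": float(sum(i.startswith("load") for i in first_repr)),
    }

def model_predict(features: Dict[str, float], k: int) -> List[str]:
    # Placeholder model: its output indicates the K scheduling
    # procedures (here just names), ranked from the input features.
    ranked = ["list_sched", "critical_path", "source_order"]
    if features["num_loads"] > features["num_instrs"] / 2:
        ranked.reverse()  # pretend load-heavy blocks prefer other heuristics
    return ranked[:k]

features = extract_features(["load r1", "add r2", "load r3"])
chosen = model_predict(features, k=2)
```

The examiner's mapping treats Gao's dynamic data and compilation report as the claimed features, and the recommended optimization scheme as the output indicating the K procedures.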
In some embodiments, a previously used optimization scheme already stored in the knowledge database may be determined to be the recommended optimization scheme (0113). As shown in figure 7, machine learning module receives dynamic data and a compilation report for the source code (features based on the first representation of the basic block) of a software program and outputs a recommended optimization scheme using the function of the machine learning module (output indicating the K instruction scheduling procedures). The recommended optimization scheme is then used by the compiler when subsequently compiling the next iteration of the source code of the software program to generate an optimized executable file (0114).

As per claim 12, Li et al. and Gao et al. further teach, “The method of claim 1, wherein the target code comprises a plurality of instructions executable by a first type of processor, wherein the one or more models have been trained to identify a K-best subset of the set of N scheduling procedures for scheduling a basic block for execution by the first type of processor, and wherein the method further comprises retraining the one or more models to identify a K-best subset of the set of N scheduling procedures for scheduling a basic block for execution by a second type of processor.” Li et al. teaches a compiler can generate multiple basic blocks of instructions in an intermediate language, where each basic block can include a collection of intermediate instructions that correspond to a sequence of source code instructions without a branch (0012). For each basic block of intermediate language instructions, the compiler can determine a set of heuristics corresponding to each of multiple possible optimizations that can be performed for the given basic block to schedule the corresponding instructions within the basic block (0012). Gao et al.
teaches determining an optimization scheme for source code of the software program using machine learning techniques based on various types of information obtained from one or more previous compiling iterations of the same or similar code (0057). Machine learning module obtains various data from the knowledge database and determines a recommended optimization scheme (0108). Figure 6 depicts an example workflow for performing machine learning to modify or generate a function for the machine learning module for generating a recommended optimization scheme. Machine learning model may analyze training data to generate a function for generating an optimization scheme. Training data may include historical optimization data, which can include dynamic data from previously executed or simulated executable files, data from compiling reports of previously compiled software programs, which may include static analysis program characteristics data and optimization data, and previously determined optimization schemes. The training data may relate to one or more previously compiled instances of the software code of the auto-tuning enabled software program or other software programs, as well as one or more previously executed or simulated instances of execution of the executable file on a processor. Once the machine learning model has been trained, the machine learning model can determine a recommended optimization scheme for the source code (0110). If a function for the machine learning module has already been generated previously, the machine learning module may use the optimization schemes, dynamic data, and compiling reports as training data to further refine the function (0111). The dynamic data is data generated by simulating the execution of an executable file generated by the compiler on a particular processor (0058). Optimization of source code of a software program during compiling by a compiler is conducted with a specific processor in mind (0052).
The compiler system uses machine learning to analyze the various optimization schemes determined by a decision maker and determines a recommended optimization scheme for optimizing the source code of the software program for execution on a particular processor (0087). Methods for optimizing source code of software programs during compiling such that execution of the resulting executable file on a processor having a particular architecture achieves greater performance or efficiency (0053). The examiner states it would have been obvious for a new machine learning module to be trained/retrained for an entirely different source code, such as the code block claimed above. Li et al. teaches that each block is optimized/scheduled with different selected optimizations. Therefore, it would have been obvious for Gao et al. to create/retrain its machine learning module in order to determine optimizations for the second, different code block.

As per claim 13, Gao et al. further teaches, “The method of claim 12, wherein retraining the one or more models includes fine-tuning the one or more models based on analysis of a plurality of candidate schedules of a plurality of basic blocks each encoded in a representation dependent on the second type of processor.” Li et al. teaches a compiler can generate multiple basic blocks of instructions in an intermediate language, where each basic block can include a collection of intermediate instructions that correspond to a sequence of source code instructions without a branch (0012). For each basic block of intermediate language instructions, the compiler can determine a set of heuristics corresponding to each of multiple possible optimizations that can be performed for the given basic block to schedule the corresponding instructions within the basic block (0012). Gao et al.
teaches determining an optimization scheme for source code of the software program using machine learning techniques based on various types of information obtained from one or more previous compiling iterations of the same or similar code (0057). Machine learning module obtains various data from the knowledge database and determines a recommended optimization scheme (0108). Figure 6 depicts an example workflow for performing machine learning to modify or generate a function for the machine learning module for generating a recommended optimization scheme. Machine learning model may analyze training data to generate a function for generating an optimization scheme. Training data may include historical optimization data, which can include dynamic data from previously executed or simulated executable files, data from compiling reports of previously compiled software programs, which may include static analysis program characteristics data and optimization data, and previously determined optimization schemes. The training data may relate to one or more previously compiled instances of the software code of the auto-tuning enabled software program or other software programs, as well as one or more previously executed or simulated instances of execution of the executable file on a processor. Once the machine learning model has been trained, the machine learning model can determine a recommended optimization scheme for the source code (0110). If a function for the machine learning module has already been generated previously, the machine learning module may use the optimization schemes, dynamic data, and compiling reports as training data to further refine the function (0111). The dynamic data is data generated by simulating the execution of an executable file generated by the compiler on a particular processor (0058). Optimization of source code of a software program during compiling by a compiler is conducted with a specific processor in mind (0052).
The compiler system uses machine learning to analyze the various optimization schemes determined by a decision maker and determines a recommended optimization scheme for optimizing the source code of the software program for execution on a particular processor (0087). Methods for optimizing source code of software programs during compiling such that execution of the resulting executable file on a processor having a particular architecture achieves greater performance or efficiency (0053).

As per claim 14, Gao et al. further teaches, “The method of claim 1, further comprising: generating a first representation of the computer program based on source code of the computer program, the first representation of the computer program including first representations of a plurality of basic blocks including the first basic block; and generating a second representation of the computer program based on the first representation of the computer program, the second representation of the computer program including second representations of the plurality of basic blocks, the generating the second representation of the computer program including scheduling the plurality of basic blocks.” Gao et al. teaches, the compiler software system is configured to determine an optimization scheme for the source code of the software program using machine learning techniques based on various types of information obtained from one or more previous compiling iterations of the same or similar code (first representation) (0057-0058). Also see 0054. The automatically generated optimization scheme is subsequently implemented by the compiler when compiling the source code (second representation) of the software program in order to generate a second executable file. The process is repeated until a specific performance target is hit (0059).
The compiler system uses machine learning to analyze the various optimization schemes determined by the decision maker and determines a recommended optimization scheme for optimizing the source code of the software program for execution on a particular processor (0087). The machine learning model obtains various data from the knowledge database and determines a recommended optimization scheme. The recommended optimization scheme may include exact locations of optimizations to be applied to the source code of the auto-tuning enabled software program (0108). The compiler then receives the recommended optimization scheme to guide any subsequent iterations of compiling the source code of the auto-tuning enabled software program (0109). The machine learning module (model) is operable to determine a recommended optimization scheme for the compiler to apply at the next iteration of compiling the source code of the software program using the mapping function. In some embodiments, a previously used optimization scheme already stored in the knowledge database may be determined to be the recommended optimization scheme (from first representation) (0113). As shown in figure 7, the machine learning module receives dynamic data and a compilation report for the source code (first representation) of a software program and outputs a recommended optimization scheme using the function of the machine learning module. The recommended optimization scheme is then used by the compiler when subsequently compiling the next iteration of the source code (second representation) of the software program to generate an optimized executable file (0114). The compiler is configured to compile the source code of the auto-tuning enabled software program in accordance with an optimization scheme to generate an executable file for a processor having a particular architecture, and outputs the executable file for execution on a processor (0106). Also see 0112.

As per claim 18, Gao et al.
further teaches, "The system of claim 15, wherein the at least one processor includes at least one first processor and at least one second processor, wherein the generating the K candidate schedules of the basic block is performed by the at least one first processor, and wherein scheduling the basic block further includes performing, by the at least one second processor, the analysis of the first representation of the basic block." See Gao et al. paragraphs 0060-0061 and 0064-0065 and figure 2.

As per claims 15-17 and 19-20, they contain similar limitations to claims 1-3, 9, and 14 and are therefore rejected for the same reasons.

Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (WO 2020/142195 A1) and Gao et al. (US 2020/0293295 A1) as applied to claim 1 above, and further in view of Drego et al. (US 2021/0208889 A1).

As per claim 5, Li et al. and Gao et al. further teach, "The method of claim 4, wherein the representation of the second basic block is a second representation, the method further comprising: assigning the first basic block to the first instruction scheduler based on one or more attributes of the first representation of the first basic block; and assigning the second basic block to the second instruction scheduler based on one or more attributes of the first representation of the second basic block not exceeding a threshold." Li et al. teaches a compiler can generate multiple basic blocks of instructions in an intermediate language, where each basic block can include a collection of intermediate instructions that correspond to a sequence of source code instructions without a branch (0012). Also see 0021. The compiler can include a basic block generating component for generating, from a listing of source code, basic blocks of intermediate language instructions.
Source code can include text drafted in a programming language according to a specified syntax, where the compiler can interpret the source code and generate corresponding machine language understood by the processor and/or operating system (0017). For each basic block of intermediate language instructions, the compiler can determine a set of heuristics corresponding to each of multiple possible optimizations that can be performed for the given basic block to schedule the corresponding instructions within the basic block (0012). The set of heuristics can correspond to estimated performance metrics for the instruction scheduling based on the given optimization for the basic block. Based on the set of heuristics, the compiler can schedule the instruction sequence within each basic block (0012). Based on the first heuristics and the second heuristics, one of the first plurality of optimizations can be applied to the first basic block to schedule first instructions for the first basic block, and one of the second plurality of optimizations can be applied to the second basic block to schedule second instructions for the second basic block. Scheduling the instructions can refer to scheduling the sequence of instructions to be executed within each given basic block, where the sequence of instructions is scheduled to achieve the corresponding optimization (0025). The compiler can generate the intermediate language including the first instructions for the first basic block and the second instructions for the second basic block. The scheduling component can apply the optimizations to each basic block to generate a sequence of scheduled instructions to achieve the optimizations, and the compiler can combine the basic blocks into intermediate language for executing the corresponding software application (0029). Gao et al. teaches that optimizations may refer to transformations performed on source code.
Transformations include, but are not limited to, strength reduction, in-lining small functions, code hoisting, dead store elimination… loop unrolling, and instruction scheduling. Performance parameters are associated with transformations. Performance parameters may include, for example, tiling size, unroll factors, invariant code motions, inline threshold, or the like (0054). However, Li et al. and Gao et al. do not explicitly appear to teach, "assigning the second basic block to the second instruction scheduler based on one or more attributes of the first representation of the second basic block not exceeding a threshold." Drego et al. teaches identifying a code size of the candidate inner loop, identifying whether the code size of the candidate inner loop satisfies or does not exceed an instruction size threshold, wherein the instruction size threshold relates to a maximum possible code size of a potential candidate for loop optimization, and automatically setting the most inner loop body as the candidate inner loop for the loop optimization when the code size of the candidate inner loop satisfies or does not exceed the instruction threshold (0012). Also see 0083. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Li et al. and Gao et al. with Drego et al. because all three teach optimization of code. Li et al. teaches performing different optimizations for scheduling instructions within different blocks and that the optimizations performed are not limited. Gao et al. further teaches selecting optimizations that can include scheduling. Gao et al. also teaches many different optimizations, such as loop unrolling. Drego et al. teaches a loop optimization in which a candidate inner loop is checked to see if it satisfies or does not exceed an instruction threshold.
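The Drego check as characterized here (paragraph 0012: set the most inner loop body as the candidate only when its code size does not exceed the instruction size threshold) reduces to a single comparison. The sketch below is illustrative only; the threshold value and function names are invented, not drawn from the reference.

```python
# Sketch of the Drego-style candidate test: accept the inner loop for loop
# optimization only if its code size does not exceed the threshold.
# The threshold value and names below are hypothetical.

INSTRUCTION_SIZE_THRESHOLD = 64  # max possible code size of a candidate

def candidate_inner_loop(inner_loop_instructions):
    code_size = len(inner_loop_instructions)
    if code_size <= INSTRUCTION_SIZE_THRESHOLD:   # satisfies the threshold
        return inner_loop_instructions            # set as the candidate
    return None                                   # too large: not a candidate

print(candidate_inner_loop(["load", "add", "store", "branch"]) is not None)  # True
print(candidate_inner_loop(["nop"] * 100) is None)                           # True
```

This mirrors the claimed "not exceeding a threshold" attribute test that the rejection maps onto Drego.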
The examiner states that there are many different types of optimizations that can be performed to improve performance and the scheduling of instructions. Selecting among different optimizations is nothing more than a design choice and would have been obvious to try.

As per claim 6, Li et al., Gao et al., and Drego et al. further teach, "The method of claim 5, wherein the one or more attributes of the first representation of the basic block include a number of instructions in the first representation of the basic block exceeding a threshold." Li et al. teaches a compiler can generate multiple basic blocks of instructions in an intermediate language, where each basic block can include a collection of intermediate instructions that correspond to a sequence of source code instructions without a branch (0012). Also see 0021. The compiler can include a basic block generating component for generating, from a listing of source code, basic blocks of intermediate language instructions. Source code can include text drafted in a programming language according to a specified syntax, where the compiler can interpret the source code and generate corresponding machine language understood by the processor and/or operating system (0017). For each basic block of intermediate language instructions, the compiler can determine a set of heuristics corresponding to each of multiple possible optimizations that can be performed for the given basic block to schedule the corresponding instructions within the basic block (0012). The set of heuristics can correspond to estimated performance metrics for the instruction scheduling based on the given optimization for the basic block. Based on the set of heuristics, the compiler can schedule the instruction sequence within each basic block (0012).
Based on the first heuristics and the second heuristics, one of the first plurality of optimizations can be applied to the first basic block to schedule first instructions for the first basic block, and one of the second plurality of optimizations can be applied to the second basic block to schedule second instructions for the second basic block. Scheduling the instructions can refer to scheduling the sequence of instructions to be executed within each given basic block, where the sequence of instructions is scheduled to achieve the corresponding optimization (0025). The compiler can generate the intermediate language including the first instructions for the first basic block and the second instructions for the second basic block. The scheduling component can apply the optimizations to each basic block to generate a sequence of scheduled instructions to achieve the optimizations, and the compiler can combine the basic blocks into intermediate language for executing the corresponding software application (0029). Gao et al. teaches that optimizations may refer to transformations performed on source code. Transformations include, but are not limited to, strength reduction, in-lining small functions, code hoisting, dead store elimination… loop unrolling, and instruction scheduling. Performance parameters are associated with transformations. Performance parameters may include, for example, tiling size, unroll factors, invariant code motions, inline threshold, or the like (0054). Drego et al.
teaches identifying a code size of the candidate inner loop, identifying whether the code size of the candidate inner loop satisfies or does not exceed an instruction size threshold, wherein the instruction size threshold relates to a maximum possible code size of a potential candidate for loop optimization, and automatically setting the most inner loop body as the candidate inner loop for the loop optimization when the code size of the candidate inner loop satisfies or does not exceed the instruction threshold (0012). Also see 0083.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (WO 2020/142195 A1) and Gao et al. (US 2020/0293295 A1) as applied to claim 9 above, and further in view of Strenski et al. (US 2023/0326813 A1).

As per claim 10, Gao et al. teaches the training and use of a machine learning module (0110-0113). This machine learning model can be a convolutional neural network (0115). However, Gao et al. does not explicitly appear to teach, "The method of claim 9, wherein generating the one or more features based on the first representation of the basic block includes generating a sequence of tokens based on the first representation of the basic block, encoding a plurality of tokens in the sequence of tokens, and combining the encoded plurality of tokens." Strenski et al. teaches receiving source code, tokenizing the code, creating an embedding, creating a sequence vector, and then training a neural machine translator (figure 9). Also see figure 12, which teaches tokenizing a source code file and generating a sequence vector to use the trained machine learning model. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gao et al. with Strenski et al. because both teach the training and use of a machine learning model. Generating tokens, encodings, and vectors for machine learning is well known to one of ordinary skill in the art. Applying the steps of Strenski et al.
will allow Gao et al. to perform the training and use of its model. This is nothing more than applying a known technique to a known device to produce a predictable result.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (WO 2020/142195 A1) and Gao et al. (US 2020/0293295 A1) as applied to claim 9 above, and further in view of Lewis et al. (US 11797892 B1).

As per claim 11, Gao et al. further teaches, "The method of claim 9, wherein the one or more models include a neural network, and wherein the neural network includes at least one pooling layer disposed between an input layer and an output layer." Gao et al. teaches the training and use of a machine learning module (0110-0113). This machine learning model can be a convolutional neural network (0115). However, Gao et al. does not explicitly appear to teach, "wherein the neural network includes at least one pooling layer disposed between an input layer and an output layer." Lewis et al. teaches a convolutional neural network comprising an input layer 604b and an output layer 606b with a pooling layer 602b in between (column 16, lines 45-67 and figure 6). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Gao et al. with Lewis et al. because both teach the use of a convolutional neural network. Lewis teaches a well-known structure of a convolutional neural network, and it therefore would have been obvious to try.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK A GOORAY, whose telephone number is (571) 270-7805. The examiner can normally be reached Monday - Friday, 10:00am - 6:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lewis Bullock, can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARK A GOORAY/
Examiner, Art Unit 2199

/LEWIS A BULLOCK JR/
Supervisory Patent Examiner, Art Unit 2199

Prosecution Timeline

Dec 28, 2023
Application Filed
Feb 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596627
AGENTLESS SYSTEM AND METHOD FOR DISCOVERING AND INSPECTING APPLICATIONS AND SERVICES IN COMPUTE ENVIRONMENTS
2y 5m to grant Granted Apr 07, 2026
Patent 12572444
COMPATIBILITY CHECK FOR CONTINUOUS GLUCOSE MONITORING APPLICATION
2y 5m to grant Granted Mar 10, 2026
Patent 12566587
REAL-TIME COMPUTING RESOURCE DEPLOYMENT AND INTEGRATION
2y 5m to grant Granted Mar 03, 2026
Patent 12535995
COMPUTER CODE GENERATION FROM TASK DESCRIPTIONS USING NEURAL NETWORKS
2y 5m to grant Granted Jan 27, 2026
Patent 12536091
PROGRAM ANALYSIS APPARATUS, PROGRAM ANALYSIS METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+63.3%)
3y 11m
Median Time to Grant
Low
PTA Risk
Based on 400 resolved cases by this examiner. Grant probability derived from career allow rate.
