Prosecution Insights
Last updated: April 18, 2026
Application No. 17/977,214

INTERFERENCE CHANNEL CONTENTION MODELLING USING MACHINE LEARNING

Final Rejection: §101, §102, §103, §112
Filed: Oct 31, 2022
Examiner: PHAKOUSONH, DARAVANH
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Collins Aerospace Ireland Limited
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Grants 50% of resolved cases.
Career Allow Rate: 50% (1 granted / 2 resolved; -5.0% vs TC avg)
Interview Lift: +100.0% (resolved cases with interview)
Avg Prosecution: 4y 0m (typical timeline)
Currently Pending: 33
Total Applications: 35 (across all art units)

Statute-Specific Performance

§101: 31.2% (-8.8% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 2 resolved cases

Office Action

§101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment/Arguments

1. The amendment overcomes the objection to claim 14.

2. The amendment to claims 10-12 overcomes the rejection under 35 U.S.C. 101 as being directed to non-statutory subject matter. Claims 10-12 now recite statutory subject matter (e.g., a non-transitory computer-readable medium and a system including processors and memory). Accordingly, the rejection of claims 10-12 under 35 U.S.C. 101 as non-statutory subject matter is withdrawn.

3. Applicant’s argument filed on January 21, 2026 regarding the rejection under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more has been fully considered but is not persuasive. Applicant asserts that the claims do not recite a judicial exception under Step 2A, Prong One, because the claims do not explicitly recite a mathematical formula, equation, or algorithm, and instead recite computer-specific operations involving performance monitoring counter (PMC) data and multi-processor execution. However, Applicant’s arguments do not address the rejection as set forth. As explained in the Office Action, the rejection does not characterize all of the claimed limitations as mathematical concepts. Rather, the rejection identified the claims as reciting mental processes, including observation, evaluation, comparison, and judgement applied to numerical data. For example, comparing values, validating error, identifying relationships, and generating predicted results involve observation, judgement, and decision-making that can be performed in the human mind, or with the aid of basic computational tools such as a calculator or pen and paper. See MPEP 2106.04(a)(2)(III).
Thus, Applicant’s argument directed solely to the absence of an explicit mathematical formula does not overcome the rejection.

Applicant further argues that the claims cannot practically be performed in the human mind because they involve execution of microbenchmarks on a multi-processor system and collection of PMC data. This argument is not persuasive because it focuses on how the data is obtained rather than on the character of the claimed analysis. The recited execution of microbenchmarks and collection of PMC data merely obtain numerical values for use in the analysis. The claimed invention lies in what is done with those values – namely, determining execution-time differences, comparing values, identifying relationships, and generating predictions – which are evaluative steps that can be performed mentally. The MPEP makes clear that the mere recitation of a computer environment or computer-based data does not preclude a claim from reciting a mental process where the claimed steps involve observation, evaluation, judgement, or decision-making.

Applicant’s reliance on performance monitoring counters, multi-processor execution, and hardware-derived data is also not persuasive. These elements merely represent generic computer components used to obtain data and apply the abstract idea and do not change the character of the claimed invention. The presence of such components does not remove the claimed observation, evaluation, comparison, and prediction from the mental-process grouping, as the abstract idea lies in analyzing the collected numerical data.

Applicant’s reliance on the August 4, 2025 USPTO memorandum is also unpersuasive. The rejection does not assert that the execution of microbenchmarks itself is performed mentally. Rather, the rejection identifies the abstract idea in the subsequent analysis of the resulting numerical data.
Accordingly, even if certain data-gathering steps are performed using a computer, the claimed determining, comparing, and predicting steps remain mental processes.

Applicant’s analogy to Example 47 is not persuasive. Example 47, Claim 1 was found eligible at Step 2A, Prong One because it recited only structural hardware components – specifically, an ASIC comprising neurons, registers, microprocessors, and synaptic circuits – and therefore did not recite any judicial exception. In contrast, the present claims do not recite any comparable hardware structure or physical circuitry. Instead, the claims recite operations such as collecting numerical Performance Monitoring Counter (PMC) data, determining execution-time effects, comparing values, and generating predicted results. As explained in the Office Action, these operations fall within the mental-process grouping of abstract ideas because they include observation, evaluation, and judgement applied to numerical data. Further, Example 47 explains that merely performing such analysis using a computer or machine learning model does not avoid a judicial exception, as demonstrated by Claim 2 of Example 47, which was found to recite mental processes and mathematical concepts despite being implemented using a neural network. Similarly, the recited PMCs, multi-processor system, and machine-learning-based Task Contention Model merely provide a technological environment and data source for performing the claimed analysis and do not change the character of the claims. Accordingly, the claims are not analogous to Example 47, Claim 1 and remain directed to a judicial exception under Step 2A, Prong One.

Applicant’s arguments that the claims integrate any alleged abstract idea into a practical application under Step 2A, Prong Two have been fully considered but are not persuasive.
Applicant asserts that the claims are directed to a technical solution to a technical problem in multi-processor computing, namely predicting execution-time delays caused by contention among tasks, and that the claimed steps transform hardware performance monitoring data into execution-time effects used to train a machine learning model. However, Applicant’s arguments are not commensurate with the scope of the claims. The claims do not recite an improvement to the functioning of a computer or to another technology. Rather, the claims recite collecting data (e.g., performance monitoring counter (PMC) data), analyzing that data (e.g., determining execution-time effects, comparing values, identifying relationships), and generating predicted results (e.g., predicting contention-induced delays). Such limitations merely use a computer as a tool to perform the abstract idea and do not improve the operation of the computer itself. The alleged “improvement” identified by Applicant – more accurate prediction of delays – relates to the result of the data analysis, not to any improvement in computer functionality.

Applicant’s characterization of the claims as reciting a “hardware-aware data generation and selection process” is not supported by the claim language. The recited multi-processor system, performance monitoring counters (PMCs), and execution of microbenchmarks merely provide a source of data and constitute generic computer components. The claims do not recite any specific improvement to how these components operate, nor do they recite any particular technical mechanism that improves resource contention, processor scheduling, or system performance. Instead, the claims analyze data generated from such systems.

Applicant’s reliance on MPEP 2106.04(d) is not persuasive because the claims do not recite a practical application of the abstract idea.
The claims do not effect a transformation of an article to a different state or thing, do not implement the abstract idea with any particular machine in a meaningful way beyond generally linking it to a computer environment, and do not apply the abstract idea in a manner that imposes a meaningful limit on the judicial exception. Rather, the additional elements amount to insignificant extra-solution activity, such as data gathering and data analysis.

Applicant’s reliance on Example 47, Claim 3 is also not persuasive. In that example, eligibility was found because the claim recited a specific technological improvement, namely real-time detection and remediation of network threats that altered the operation of the computer system by blocking malicious traffic. In contrast, the present claims do not recite any comparable technological improvement or remedial action. Instead, the claims merely generate predicted execution-time delays based on analyzed data. Generating improved predictions does not constitute an improvement to computer functionality, but rather reflects an improvement in the abstract idea itself. Accordingly, the claims do not integrate the abstract idea into a practical application under Step 2A, Prong Two. The rejection under 35 U.S.C. 101 is therefore maintained.

4. Applicant’s arguments regarding the rejection under 35 U.S.C. 102(a)(1) filed on January 21, 2026 have been fully considered but are not persuasive. Applicant asserts that Buchaca fails to disclose “inferring, from a ML-based TMC, a predicted effect on the execution time of each actual task,” and argues that Buchaca only predicts resource-usage metrics rather than execution-time effects. However, Applicant’s arguments are not commensurate with the scope of the claims under the broadest reasonable interpretation. The claim recites “a predicted effect on the execution time” and does not require any particular form or explicit labeling of execution time as a standalone output value.
Under the broadest reasonable interpretation, predicting the execution behavior of tasks based on monitored system metrics, including resource usage and contention, reasonably encompasses predicting effects on execution time.

Applicant’s argument that Table II is directed to resource-usage prediction accuracy is not persuasive because the rejection does not rely solely on Table II. Rather, Buchaca is relied upon as a whole, including its disclosure of predicting co-executed job behavior and resulting execution-time differences due to contention. Applicant’s focus on individual metrics improperly isolates portions of the reference rather than considering the teachings in their entirety.

Applicant’s arguments regarding Figures 3-5 are also not persuasive. While Applicant asserts that the figures merely show resource-usage traces, Buchaca explicitly explains that co-scheduled applications “slowdown…which implies a big difference in execution time with respect to the applications being run in isolation,” and further discloses that applications “take around 40 time steps to execute in isolation, but require more than 80 under co-schedule.” These disclosures describe changes in execution duration resulting from contention. Buchaca further explains that the sequence-to-sequence model generates predicted traces of co-located applications over time. These predicted traces reflect how execution progresses under contention, including extended execution duration and slowdown. Under the broadest reasonable interpretation, predicting how execution progresses over time, including increases in the number of time steps required to complete execution, constitutes a predicted effect on execution time. Accordingly, Applicant’s arguments that Figures 3-5 do not disclose a predicted execution-time effect are not persuasive.

Additionally, Buchaca teaches predicting the behavior of co-scheduled applications using monitored metrics collected from isolated executions as input to a machine learning model.
The predicted traces of co-located applications reflect how execution progresses over time under contention. Under the broadest reasonable interpretation, such predicted execution behavior – including slowdown and an increased number of execution steps – constitutes a predicted effect on execution time. Accordingly, when the claim language is given the broadest reasonable interpretation, Buchaca teaches inferring a predicted effect on the execution time of each execution task based on monitored metrics, and the rejection of claim 13 is maintained.

Applicant’s arguments regarding claim 15 are not persuasive for at least the reasons discussed above with respect to claim 13, from which claim 15 depends. Accordingly, the rejection of claim 15 is also maintained.

5. Applicant’s arguments regarding the rejection under 35 U.S.C. 103 filed on January 21, 2026 have been fully considered but are not persuasive.

Applicant’s arguments directed towards the rejection of claims 1-4, 7, 8, 10-12, and 14 as being unpatentable over Buchaca in view of Palomo are not persuasive. Applicant asserts that Buchaca fails to disclose “training a machine learning model using, as input, at least one PMC measure in isolation of each μBenchmark and, at the output, the corresponding ∆T_Bj,” and argues that Buchaca only predicts resource-usage metrics rather than execution-time effects. However, Applicant’s arguments are not commensurate with the scope of the claims under the broadest reasonable interpretation. The claim recites “the corresponding ∆T” as output and does not require any particular mathematical formulation, indexing (e.g., ∆T_Bj), labeling, or explicit representation of execution time as a standalone variable. Under the broadest reasonable interpretation, ∆T encompasses any predicted change, delay, or variation in execution time associated with task execution.
Accordingly, predicted execution behavior of co-scheduled tasks over time – including changes in execution progression, slowdown, or increased completion duration – reasonably encompasses predicting an effect on execution time. Buchaca teaches training machine learning models using monitored metrics, including Performance Monitoring Counter (PMC)-type data collected from isolated executions, to predict the behavior of co-scheduled applications over time. The predicted traces of co-located applications represent how execution progresses under contention conditions. Under the broadest reasonable interpretation, such predicted execution behavior – including extended execution duration and slowdown – constitutes an output corresponding to an execution-time effect.

Further, Palomo explicitly teaches that contention between tasks executing on shared resources results in interference and delay, which directly affects execution time. Palomo describes that overlapping accesses to shared resources generate contention that increases execution latency and causes delays in task completion. Thus, Palomo establishes that contention-induced effects correspond to execution-time differences (∆T-type effects).

Accordingly, when considering the combined teachings of Buchaca and Palomo, a person of ordinary skill in the art would have understood that the predicted behavior of co-scheduled tasks generated by Buchaca’s machine learning model corresponds to contention-induced execution-time effects as taught by Palomo. The combination therefore teaches or at least suggests training a model using PMC-based inputs to produce outputs corresponding to execution-time effects. Applicant’s argument that Palomo, alone or in combination with Buchaca, does not disclose the claimed limitation is not persuasive.
As explained above, Buchaca teaches a machine learning model trained on monitored execution data from isolated executions, while Palomo teaches that contention on shared resources results in interference and delay affecting execution time. When the references are considered together under the broadest reasonable interpretation of the claim language, their combined teachings teach or at least suggest training a model using PMC-based inputs to produce outputs corresponding to execution-time effects. Accordingly, the rejection of claim 1 is maintained. Claims 2-4, 7, 8, 10-12, and 14 fall with claim 1.

Applicant’s arguments directed to the rejection of claim 5 as being unpatentable over Buchaca in view of Palomo further in view of Inam have been fully considered but are not persuasive. Applicant asserts that claim 5 is not rendered obvious because it depends from claim 1 and incorporates the limitation “training a machine learning model using, as input, the at least one PMC measure in isolation of each μBenchmark and, at the output, the corresponding ∆T_Bj,” which Applicant alleges is not taught by the applied references. However, Applicant’s arguments are not persuasive for at least the reasons discussed above with respect to claim 1. As explained above, when the claim language is given its broadest reasonable interpretation, the combined teachings of Buchaca and Palomo teach or at least suggest training a machine learning model using monitored execution data to produce outputs corresponding to execution-time effects. Applicant’s arguments do not address the additional teachings of Inam relied upon in the rejection of claim 5, nor does Applicant present arguments directed to the specific additional limitation recited in claim 5. Accordingly, claim 5 is unpatentable for the same reasons as claim 1, and the rejection of claim 5 is maintained.
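The disputed training step maps PMC measures collected in isolation (inputs) to the corresponding contention-induced execution-time deltas ∆T_Bj (outputs). As a minimal sketch of that mapping, assuming entirely hypothetical PMC feature values and ∆T labels, and substituting a plain least-squares fit for whichever machine learning model the application actually claims:

```python
import numpy as np

# Hypothetical training data: one row of PMC measures per pairing
# scenario (e.g., cache misses, bus accesses, memory reads in isolation).
pmc_isolated = np.array([
    [120.0, 30.0, 55.0],   # pairing scenario 1
    [400.0, 90.0, 210.0],  # pairing scenario 2
    [60.0, 10.0, 25.0],    # pairing scenario 3
    [310.0, 75.0, 160.0],  # pairing scenario 4
])
# Measured execution-time deltas (contended minus isolated), in ms.
delta_t = np.array([4.1, 15.2, 1.9, 11.8])

# Fit a linear model delta_t ~ pmc @ w + b as a stand-in for the
# machine-learning-based Task Contention Model (TCM).
X = np.hstack([pmc_isolated, np.ones((len(pmc_isolated), 1))])
w, *_ = np.linalg.lstsq(X, delta_t, rcond=None)

def predict_delta_t(pmc_row):
    """Infer the predicted effect on execution time from a PMC profile."""
    return float(np.append(pmc_row, 1.0) @ w)
```

The point of the sketch is only the shape of the claimed step (PMC measures in, ∆T out); the real model, features, and units come from the specification, not from this example.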
Applicant’s arguments directed to the rejection of claim 6 as being unpatentable over Buchaca in view of Palomo further in view of Iorga have been fully considered but are not persuasive. Applicant asserts that claim 6 is not rendered obvious because it depends from claim 1 and incorporates the limitation “training a machine learning model using, as input, the at least one PMC measure in isolation of each μBenchmark and, at the output, the corresponding ∆T_Bj,” which Applicant alleges is not taught by the applied references. However, Applicant’s arguments are not persuasive for at least the reasons discussed above with respect to claim 1. As explained above, when the claim language is given its broadest reasonable interpretation, the combined teachings of Buchaca and Palomo teach or at least suggest training a machine learning model using monitored execution data to produce outputs corresponding to execution-time effects. Applicant’s arguments do not address the additional teachings of Iorga relied upon in the rejection of claim 6, nor does Applicant present arguments directed to the specific additional limitation recited in claim 6. Accordingly, claim 6 is unpatentable for the same reasons as claim 1, and the rejection of claim 6 is maintained.

Applicant’s arguments directed to the rejection of claim 9 as being unpatentable over Buchaca in view of Palomo further in view of Hoffmann have been fully considered but are not persuasive. Applicant asserts that claim 9 is not rendered obvious because it depends from claim 1 and incorporates the limitation “training a machine learning model using, as input, the at least one PMC measure in isolation of each μBenchmark and, at the output, the corresponding ∆T_Bj,” which Applicant alleges is not taught by the applied references. However, Applicant’s arguments are not persuasive for at least the reasons discussed above with respect to claim 1.
As explained above, when the claim language is given its broadest reasonable interpretation, the combined teachings of Buchaca and Palomo teach or at least suggest training a machine learning model using monitored execution data to produce outputs corresponding to execution-time effects. Applicant’s arguments do not address the additional teachings of Hoffmann relied upon in the rejection of claim 9, nor does Applicant present arguments directed to the specific additional limitation recited in claim 9. Accordingly, claim 9 is unpatentable for the same reasons as claim 1, and the rejection of claim 9 is maintained.

Applicant’s arguments directed to the rejections under 35 U.S.C. 103 have been fully considered but are not persuasive. For the reasons discussed above, the combined teachings of Buchaca in view of Palomo, Buchaca in view of Palomo further in view of Inam, Buchaca in view of Palomo further in view of Iorga, and Buchaca in view of Palomo further in view of Hoffmann, when considered under the broadest reasonable interpretation of the claims, teach or at least suggest the claimed limitations. Accordingly, the rejections of claims 1-12 and 14 under 35 U.S.C. 103 are maintained.

Priority

Acknowledgement is made of Applicant’s claim for foreign priority to European Patent Application No. 21206561 filed on November 4, 2021.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 19 is rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term “substantially” in claim 19 is a relative term which renders the claim indefinite. The term “substantially zero” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The limitation “the ∆T_Bj is substantially zero” is unclear because the term “substantially” is a term of degree without clear boundaries, as the specification fails to provide guidance as to what range of values constitutes “substantially zero.” Accordingly, the scope of the limitation is unclear.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

101 Subject Matter Eligibility Analysis

Step 1: Claims 1-19 are within the four statutory categories (a process, machine, manufacture, or composition of matter). Claims 1-19 are directed to a method consisting of a series of steps, meaning that they are directed to the statutory category of process.

Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon).
If the claim recites a judicial exception, the analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, the analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology.

Regarding claim 1, the following claim elements are abstract ideas:

measuring at least one resultant Performance Monitoring Counter, PMC, over time to extract ideal characteristic footprints of each μBenchmark when operating in isolation (This is an abstract idea of a “mental process.” It recites observation of values over time and mathematical computations that can be done in the human mind or with pen and paper (e.g., tallying/averaging readings, comparing differences, sketching the pattern) to recognize the “footprint.” See MPEP 2106.04(a)(2)(III).);

measuring the effect on the execution time of each μBenchmark, ∆T_Bj, resulting from contention over interference channels within the multi-processor system (This is an abstract idea of a “mental process.” It recites observation of execution times and mathematical computations that can be done in the human mind or with pen and paper – e.g., write down a baseline and a contended time, subtract to obtain the change in time, and attribute that change to contention.);

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

Task Contention Model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).);
a multi-processor system (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).);

executing a plurality of microbenchmarks, μBenchmarks Bj, on the multiprocessor system in isolation (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation; it is test execution/data collection preparatory to the abstract evaluation – i.e., insignificant extra-solution activity. See MPEP 2106.05(f) and MPEP 2106.05(g).);

executing possible pairing scenarios of the plurality of μBenchmarks in parallel on the multi-processor system (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation; it is test execution/data collection preparatory to the abstract evaluation – i.e., insignificant extra-solution activity. See MPEP 2106.05(f) and MPEP 2106.05(g).);

training a machine learning model using, as an input, the at least one PMC measure in isolation of each μBenchmark and, at the output, the corresponding ∆T_Bj during the parallel execution of each pairing scenario as training inputs (The step of “training” a model is merely an instruction to apply the abstract idea and does not provide a meaningful limitation. See MPEP 2106.05(f).)

Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, claim 2 recites the following abstract ideas:

validating the training error of the ML-based TCM (This is an abstract idea of a “mental process.” It recites observation of predicted vs. known values and mathematical computations that can be done in the human mind (i.e., error/difference).);

measuring at least one resultant PMC over time for each actual execution task (This is an abstract idea of a “mental process.” It recites observation of counter values over time and mathematical computations that can be done in the human mind or with pen and paper.
See MPEP 2106.04(a)(2)(III).);

inferring, by the ML-based TCM, the predicted effect on the execution time of each actual execution task given the at least one resultant PMC of each task as input (This is an abstract idea of a “mental process.” It recites observation of input values and inferring – i.e., evaluating and computing – a predicted change over time for those values, which can be performed in the human mind or with pen and paper. For example, a person could read the PMC readings for a task, apply a simple rule/lookup or do basic arithmetic (difference/ratio/weighted sum) to estimate the effect on execution time, and record the result. These are mathematical computations with pen and paper coupled with judgement about the predicted outcome.);

measuring the actual execution time (This is an abstract idea of a “mental process.” It recites observation of start and end times and mathematical computations that can be done in the human mind or with pen and paper – e.g., write down the start and stop time and subtract to get the change in time.);

comparing the predicted effect on the execution time with the actual execution time, thereby calculating an error between the predicted and actual execution time (This is an abstract idea of a “mental process.” It recites observation of two values and mathematical computations that can be done in the human mind or with pen and paper – e.g., place the two times side-by-side and subtract (or take the absolute/squared difference) to obtain the error. See MPEP 2106.04(a)(2)(III). Additionally, calculating an error is a mathematical concept. See MPEP 2106.04(a)(2)(I).)
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

executing a plurality of actual execution tasks on the multi-processor system in isolation (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation; it is test execution/data collection preparatory to the abstract evaluation – i.e., insignificant extra-solution activity. See MPEP 2106.05(f) and MPEP 2106.05(g).);

executing the actual execution tasks in parallel (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation; it is test execution/data collection preparatory to the abstract evaluation – i.e., insignificant extra-solution activity. See MPEP 2106.05(f) and MPEP 2106.05(g).)

Regarding claim 3, the rejection of claim 1 is incorporated herein. Further, claim 3 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

a first iterative loop over all different pairing scenarios (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation. It amounts to storing and retrieving information in memory during routine loop control and is well-understood, routine, and conventional. See MPEP 2106.05(f) and MPEP 2106.05(d)(II)(iv).);

a second iterative loop over each of the PMC measures of each μBenchmark in isolation, as well as the corresponding ∆T_Bj during the parallel execution of each pairing scenario (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation.
It amounts to storing and retrieving information in memory during routine loop control over inputs (PMC measures) and labels and is well-understood, routine, and conventional.)

Regarding claim 4, the rejection of claim 1 is incorporated herein. Further, claim 4 recites the following abstract ideas: wherein the at least one PMC is selected based on an identification of the PMCs that are associated with interference channels on the multi-processor system (This is an abstract idea of a “mental process.” It involves observation and judgement to identify which PMCs indicate contention on shared resources (e.g., cache, memory, interconnect) and then “select” them – steps that can be performed in the human mind.).

Regarding claim 5, the rejection of claim 1 is incorporated herein. Further, claim 5 recites the following abstract ideas: wherein the measuring of at least one PMC comprises measuring the at least one PMC at a variable monitoring frequency (This is an abstract idea of a “mental process.” It recites observation of counter values and adjusting the measurement interval over time – e.g., record every 10 ms, then every 5 ms, then every 1 ms – and performing mathematical computations with pen and paper on the resulting readings. These are steps that can be carried out in the human mind or with pen and paper.).

Regarding claim 6, the rejection of claim 1 is incorporated herein. Further, claim 6 recites the following abstract ideas: each μBenchmark is a synthetic benchmark that is selected so as to stress certain interference channels of the multi-processor system in an isolated way (This is an abstract idea of a “mental process.” It recites selection that can be performed in the human mind or with pen and paper using reasoning, observation, and judgement – e.g., review known microbenchmarks and choose those expected to stress shared-resource interference channels (cache, memory, interconnect) in isolation.).
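For orientation on the measurement steps at issue in claims 1, 5, and 6 (isolated execution, paired execution, and the resulting ∆T), a toy sketch follows. The `micro_benchmark` workload is a hypothetical stand-in for a real μBenchmark, and a real implementation would sample hardware PMCs rather than wall-clock time:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def micro_benchmark(n=300_000):
    """Stand-in uBenchmark: a CPU-bound loop stressing a shared resource."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(fn):
    """Wall-clock the given workload once."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

# Execute in isolation to obtain the baseline execution time.
t_isolated = timed(micro_benchmark)

# Execute a pairing scenario: two instances in parallel, contending for
# shared interference channels (here, simply the interpreter/CPU).
with ThreadPoolExecutor(max_workers=2) as pool:
    start = time.perf_counter()
    futures = [pool.submit(micro_benchmark) for _ in range(2)]
    for f in futures:
        f.result()
    t_contended = time.perf_counter() - start

# Delta T: the effect on execution time attributable to contention.
delta_t = t_contended - t_isolated
```

The subtraction at the end is exactly the "write down a baseline and a contended time, subtract" operation the rejection characterizes as a mental process; only the data gathering involves the machine.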
Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the multiprocessor system is a multi-core processor of an avionics system (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.). Regarding claim 8, the rejection of claim 1 is incorporated herein. Further, claim 8 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the multiprocessor system is a homogenous platform (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.). Regarding claim 9, the rejection of claim 1 is incorporated herein. Further, claim 9 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the multiprocessor system is a heterogenous platform or not symmetric (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.). Regarding claim 10, the rejection of claim 1 is incorporated herein. Further, claim 10 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: Machine Learning based Task Contention Model (This is a high-level recitation of generic computer components for performing the abstract idea. 
See MPEP 2106.05.). non-transitory computer-readable medium (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).) Regarding claim 11, the rejection of claim 1 is incorporated herein. Further, claim 11 recites the following abstract ideas: predict time delays resulting from contention between tasks running in parallel on a multi-processor system (This is an abstract idea of a “mental process.” It recites observation of task behavior and predicting a change in time (delay) using reasoning and mathematical computations with pen and paper – e.g., observe concurrent tasks, note expected contention, and compute an expected delay.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: A computer system (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.). one processor and a memory (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).) Regarding claim 12, the rejection of claim 1 is incorporated herein. Further, claim 12 recites the following abstract ideas: predict time delays resulting from contention between tasks running in parallel on a multi-processor system (This is an abstract idea of a “mental process.” It recites observation of task behavior and predicting a change in time (delay) using reasoning and mathematical computations with pen and paper – e.g., observe concurrent tasks, note expected contention, and compute an expected delay.). 
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: non-transitory computer-readable medium (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05.). Regarding claim 13, the claim recites the following abstract ideas: measuring at least one resultant Performance Monitoring Counter, PMC, over time to extract ideal characteristic footprints of each μBenchmark when operating in isolation (This is an abstract idea of a “mental process.” It recites observation of values over time and mathematical computations that can be done in the human mind or with pen and paper (e.g., tallying/averaging readings, comparing differences, sketching the pattern) to recognize the “footprint.” See MPEP 2106.04(a)(2)(III).); inferring, by the ML based TCM, the predicted effect on the execution time of each actual execution task given the at least one resultant PMC of each task as input (This is an abstract idea of a “mental process.” It recites observation of input values and inferring – i.e., evaluating and computing – a predicted change over time for those values, which can be performed in the human mind or with pen and paper. For example, a person could read the PMC readings for a task, apply a simple rule/lookup or do basic arithmetic (difference/ratio/weighted sum) to observe the effect on execution time, and record the result. 
These are mathematical computations with pen and paper coupled with judgement about the predicted outcome.); The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: executing a plurality of actual execution tasks on the multi-processor system in isolation (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation; it is test execution/data collection preparatory to the abstract evaluation – i.e., insignificant extra-solution activity. See MPEP 2106.05(f) and MPEP 2106.05(g).) Regarding claim 14, the rejection of claim 13 is incorporated herein. Further, claim 14 recites the following abstract ideas: wherein the predicted effect on the execution time of each actual execution task is aggregated for contending tasks so as to predict a worst case execution time, WCET (This is an abstract idea of a “mental process.” It involves observation of predicted times and mathematical computations that can be done in the human mind or with pen and paper – e.g., list predicted effects for contending tasks, aggregate them (sum/compare), and identify the worst case to produce a WCET. 
See MPEP 2106.04(a)(2)(III).), measuring at least one resultant Performance Monitoring Counter, PMC, over time to extract ideal characteristic footprints of each μBenchmark when operating in isolation (This is an abstract idea of a “mental process.” It recites observation of values over time and mathematical computations that can be done in the human mind or with pen and paper (e.g., tallying/averaging readings, comparing differences, sketching the pattern) to recognize the “footprint.” See MPEP 2106.04(a)(2)(III).); measuring the effect on the execution time of each μBenchmark, ΔT_Bj, resulting from contention over interference channels within the multi-processor system (This is an abstract idea of a “mental process.” It recites observation of execution times and mathematical computations that can be done in the human mind or with pen and paper – e.g., write down a baseline and a contended time, subtract to obtain the change in time, and attribute that change to contention.); The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: Task Contention Model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).) a multi-processor system (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).), executing a plurality of microbenchmarks, μBenchmarks Bj, on the multiprocessor system in isolation (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation; it is test execution/data collection preparatory to the abstract evaluation – i.e., insignificant extra-solution activity. See MPEP 2106.05(f) and MPEP 2106.05(g).) 
executing possible pairing scenarios of the plurality of μBenchmarks in parallel on the multi-processor system (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation; it is test execution/data collection preparatory to the abstract evaluation – i.e., insignificant extra-solution activity. See MPEP 2106.05(f) and MPEP 2106.05(g).) training a machine learning model using, as an input, the at least one PMC measure in isolation of each μBenchmark and, at the output, the corresponding ΔT_Bj during the parallel execution of each pairing scenario as training inputs (The step of “training” a model is merely an instruction to apply the abstract idea and does not provide a meaningful limitation. See MPEP 2106.05(f).) wherein the trained ML based TCM is a trained ML based TCM produced by the method of any of claims 1 to 9 (This is merely an instruction to apply the abstract idea and does not provide a meaningful limitation.). Regarding claim 15, the rejection of claim 13 is incorporated herein. Further, claim 15 recites the following abstract idea: scheduling a plurality of tasks for execution by a multi-processor system, wherein the scheduling uses time delays predicted (This is an abstract idea of a “mental process.” It involves observation of predicted delay values and reasoning/judgement to arrange task order and assignment – work that can be performed in the human mind or with pen and paper (e.g., list tasks and their predicted delays, compare totals/maxima, and choose a schedule). The reference to a multi-processor system merely states where the schedule will be executed and does not alter the mental nature of the step. See MPEP 2106.04(a)(2)(III).). Regarding claim 16, the rejection of claim 1 is incorporated herein. 
Further, claim 16 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein at least two μBenchmarks of the plurality of μBenchmarks are executed simultaneously (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).). Regarding claim 17, the rejection of claim 1 is incorporated herein. Further, claim 17 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein at least two μBenchmarks of the plurality of μBenchmarks are executed starting at the same time point (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).). Regarding claim 18, the rejection of claim 1 is incorporated herein. Further, claim 18 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein at least two of the plurality of μBenchmarks are executed in parallel in a pairing scenario while accessing at least one shared interference channel of the multi-processor system (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. 
See MPEP 2106.05(f) and 2106.05(g).), and wherein the execution of the at least two μBenchmarks in parallel does not result in an increase in execution time ΔT_Bj for the at least two μBenchmarks relative to execution in isolation (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).). Regarding claim 19, the rejection of claim 1 is incorporated herein. Further, claim 19 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein training the machine learning model further comprises including pairing scenarios in which at least two μBenchmarks of the plurality of μBenchmarks are executed in parallel while accessing at least one shared interference channel of the multi-processor system, and for which the ΔT_Bj is substantially zero (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).). Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 13 and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Buchaca et al. (NPL: “Sequence-to-sequence models for workload interference prediction on batch processing datacenters” (Published: 2020)). Regarding claim 13, Buchaca discloses: A computer-implemented method of predicting time delays resulting from contention between tasks running in parallel on a multi-processor system using a trained Machine Learning based Task Contention Model (ML based TCM), the method comprising (Buchaca, Abstract “In this work, we propose a methodology for modeling co-scheduling of jobs on data centers, based on their behavior towards resources and execution time and using sequence-to-sequence models based on recurrent neural networks. The goal is to forecast co-executed jobs footprint on resources throughout their execution time, from the profile shown by the individual jobs, in order to enhance resource manager and scheduler placement decisions.” – datacenters are environments where many multi-processor systems are used.): executing a plurality of actual execution tasks on the multi-processor system in isolation and measuring at least one resultant Performance Monitoring Counter (PMC) over time to extract ideal characteristic footprints of each actual execution task when operating in isolation (Buchaca, page 159, section 4.1 “In order to capture the trace of each execution we have profiled them by using the basic Linux performance analysis tools… we have gathered a total of 141 features with time granularity of one second. 
For our study we have selected 9 key features, shown in Table 1 (workload metrics recorded at each time step), that are especially relevant for interference prediction and resource estimation. The dataset used in the experiments contains traces generated by a variety of micro-benchmarks (workloads)… The traces are executed using a server with two Intel Xeon E5-2630 processors and 128 GB of RAM, up to 400 isolated and co-located executions.” Page 159, section 5, “We use the Mean Absolute Percentage Error (MAPE) to assess the quality of the predictions at every time step of the execution trace.” page 156, col. 1, third paragraph “The method presented herein predicts the footprint for resource demands of co-located applications, given that the traces of these applications run in isolation” Col. 2, first paragraph “A novel use of Recurrent Neural Networks that estimates the monitored metrics of two co-scheduled applications a ∧ b from the information of a and b gathered running the applications in isolation.” page 164, “From a dataset consisting of applications and SPARK profiling features, captured from hardware counters (CPI, number of interruptions, number of cache misses, etc.)” – the method uses monitoring metrics, a form of “Performance Monitoring Counters” (hardware counters), which are collected over time to establish a characteristic footprint (profiling features)); inferring, from a ML based TCM, a predicted effect on the execution time of each actual execution task given the at least one resultant PMC of each task as input (Buchaca, page 160, section 5.1 “In this experiment we evaluate the accuracy of the predictions made by the different models on co-scheduled jobs. Table 2 contains the MAPE errors of the predictions on the test set… To assess the behavior of the presented models in the test set visually, we plotted the resource usage of three triplets. Figs. 
3–5 show three different pairs of co-located applications with different properties...The shaded region shown in the third column displays the period at which both applications run at the same time. This is the period used to compute the error metrics…One of the main difficulties… is the slowdown of both applications while competing for resources, which usually implies a big difference in execution time with respect to the applications being run in isolation… the input jobs take around 40 time steps to execute in isolation, but require more than 80 under the presented co-schedule.”). Regarding claim 15, Buchaca discloses: A computer-implemented method of scheduling a plurality of tasks for execution by a multi-processor system, wherein the scheduling uses time delays predicted by the method of claim 13 (Buchaca, page 156, col. 1, “Our model employs two Gated Recurrent Units (GRU) [9] as building blocks; one GRU processes the trace signal of the incoming applications and passes the processed information to the other GRU, which outputs the expected resources of the collocated applications over time. The model predicts the whole resource demand trace throughout execution, thereby providing schedulers with a sufficiently accurate estimation for placing applications together and thus minimizing interference.” Page 162, col. 1, “found in Fig. 7. In the figure we can observe how the dotted lines that predict the end of the applications predict very accurately the end of the execution on the first column but do not work as well on the third column. This behavior of loosing quality for longer sequences is consistent with other works such as [14]. Notice the capability of knowing when application finish can be used as a rule for deciding when applications should not be co-scheduled.”). 
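The two quantities running through the Buchaca citations above – the execution-time increase under co-scheduling (e.g., 40 time steps in isolation versus more than 80 co-scheduled) and the MAPE used to score predictions – admit a minimal numerical sketch. The function names and sample values below are assumptions for illustration, not taken from the reference:

```python
# Minimal sketch (assumed data) of the slowdown and error metric quoted
# from Buchaca: the execution-time increase caused by contention, and the
# Mean Absolute Percentage Error between predicted and actual traces.
def delta_t(isolated_steps, coscheduled_steps):
    # execution-time increase attributable to co-scheduling contention
    return coscheduled_steps - isolated_steps

def mape(actual, predicted):
    # mean absolute percentage error over a per-time-step trace
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

print(delta_t(40, 80))                           # 40 (steps of slowdown)
print(round(mape([10, 20, 40], [11, 18, 44]), 2))  # 10.0 (percent)
```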
Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-4, 7, 8, 10-12, 14, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Buchaca et al. (NPL: “Sequence-to-sequence models for workload interference prediction on batch processing datacenters” (Published: 2020)) in view of Palomo et al. (NPL: “Accurate ILP-based contention modeling on statically scheduled multicore systems” (Published: 2019)). Regarding claim 1, Buchaca teaches the following limitations: A computer-implemented method of producing a trained Machine Learning based Task Contention Model, ML based TCM, (Buchaca, Abstract, “challenging scenario where jobs can compete for resources, leading to severe slowdowns or failed executions. 
Efficient job placement on environments where resources are shared requires awareness on how jobs interfere during execution… In this work, we propose a methodology for modeling co-scheduling of jobs on data centers, based on their behavior towards resources and execution time and using sequence-to-sequence models based on recurrent neural networks.”) to predict time delays resulting from contention between tasks running in parallel on a multi-processor system (Buchaca, Abstract “The goal is to forecast co-executed jobs footprint on resources throughout their execution time, from the profile shown by the individual jobs, in order to enhance resource manager and scheduler placement decisions.” Page 158, section 3.1 “capable of making predictions in that environment, where applications are not required to start at the same time, our training data will contain co-scheduled applications starting with different time delays.” – datacenters are environments where many multi-processor systems are used.), the method comprising: executing a plurality of microbenchmarks, μBenchmarks Bj, on the multiprocessor system in isolation (Buchaca, page 156, col. 1, last paragraph “For our experiments, we created a dataset with execution traces from the previously mentioned benchmark suites. The dataset consists of triplets (a, b, a ∧ b) where a and b contain the traces of the isolated executions from two application”) and measuring at least one resultant Performance Monitoring Counter, PMC, over time to extract ideal characteristic footprints of each μBenchmark when operating in isolation (page 156, col. 1, third paragraph “The method presented herein predicts the footprint for resource demands of co-located applications, given that the traces of these applications run in isolation” Col. 
2, first paragraph “A novel use of Recurrent Neural Networks that estimates the monitored metrics of two co-scheduled applications a ∧ b from the information of a and b gathered running the applications in isolation.” page 164, “From a dataset consisting of applications and SPARK profiling features, captured from hardware counters (CPI, number of interruptions, number of cache misses, etc.)” – the method uses monitoring metrics, a form of “Performance Monitoring Counters” (hardware counters), which are collected over time to establish a characteristic footprint (profiling features).); training a machine learning model using, as an input, the at least one PMC measure in isolation of each μBenchmark and, at the output, the corresponding ΔT_Bj during the parallel execution of each pairing scenario as training inputs (Buchaca, page 165, section 7 “This paper introduces the use of recurrent neural networks for interference and resource prediction tasks of co-scheduled applications…Our method predicts resource usage of the monitored metrics over time and is adaptable enough to be trained with workloads of arbitrary input and output lengths. Moreover, since training is done for a regression task instead of a classification task, we introduce the percentage completion features which significantly improve completion time prediction of co-scheduled applications.” Page 159, section 4.1 “The dataset used in the experiments contains traces generated by a variety of micro-benchmarks (workloads).” Section 4.2 “The dataset is composed of workloads triplets. Each triplet contains a combination of three execution traces…” Page 160, col. 1, first paragraph, “The baseline model predicts the resource usage at time t as the sum of the resources of the isolated applications at time t. This means that the output at time t for a given input [a; b] is ỹ_t = a_t + b_t. 
Notice that [a; b] is a matrix of features that already contains padded zeros to encode any temporal phase difference of sequences (should they exist).” Page 160, section 5.1 “Figs. 4 and 5. In Fig. 4 the input jobs take around 40 time steps to execute in isolation, but require more than 80 under the presented co-schedule” Page 162, “A reasonable explanation for the behavior of EOS is that our model is trained by minimizing the mean squared error between predictions and true traces.” Buchaca does not explicitly teach, but Buchaca in view of Palomo does teach, the following limitation: executing possible pairing scenarios of the plurality of μBenchmarks in parallel on the multi-processor system and measuring the effect on the execution time of each μBenchmark, ΔT_Bj, resulting from contention over interference channels within the multi-processor system (Buchaca, page 159, section 4.1, “For our study we have selected 9 key features, shown in Table 1, that are especially relevant for interference prediction and resource estimation. The dataset used in the experiments contain traces generated by a variety of micro-benchmarks” Section 4.2 “The dataset is composed of workloads triplets. Each triplet contains a combination of three execution traces. The first two traces correspond to isolated executions of the two applications. The third trace contains the execution of the co-located application from the first two traces…In real world scenarios…In order to increase the co-location cases in our collected dataset, we prepared different scenarios where co-located applications a and b start with different delays. For the benchmarking executions, one of the concurrent applications is delayed to start after its co-located peer application…” – teaching pairing/co-run scenarios. 
Palomo, page 19, Section V, “Modeling both, task overlapping and access pairing, as an ILP problem allows to compute a tight WCD (Worst-Case Contention Delay) by implicitly accounting for all possible (feasible) task overlapping and access pairing.” – all possible scenarios. Page 19, last paragraph, “Accesses to the shared L2 cache…performed over a shared interconnect (e.g., memory bus), which is therefore the source of contention in the system.” – interference channels.); Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Buchaca and Palomo before them, to incorporate Palomo’s systematic accounting of all feasible task overlaps and shared-resource pairings into the learning-based co-scheduling predictor of Buchaca. One would have been motivated to make such a combination to evaluate possible pairing scenarios, quantify per-task delay from interference channels during parallel execution, and use those delay predictions to guide mapping and scheduling – thereby producing a more complete and reliable trained contention model for predicting time delays from task contention in safety-sensitive applications. Regarding claim 2, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. 
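The “possible pairing scenarios” discussed in the claim 1 mapping – all benchmark pairs, each run at the phase offsets Buchaca describes (0, 0.25, 0.5, 0.75) – can be enumerated exhaustively. A sketch for illustration only (the function name and benchmark labels are hypothetical):

```python
# Hypothetical enumeration of pairing scenarios: every unordered pair of
# uBenchmarks, combined with each start-time phase offset from Buchaca.
from itertools import combinations

def pairing_scenarios(benchmarks, offsets=(0.0, 0.25, 0.5, 0.75)):
    return [(a, b, d) for a, b in combinations(benchmarks, 2)
                      for d in offsets]

scenarios = pairing_scenarios(["B1", "B2", "B3"])
print(len(scenarios))   # 3 pairs x 4 offsets = 12
```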
Buchaca further teaches: validating the training error of the ML based TCM (Buchaca, page 156, “We validate the proposed methodology by computing the error of the predicted resources of co-joined application traces with respect to the real resources.”) by executing a plurality of actual execution tasks on the multi-processor system in isolation and measuring at least one resultant PMC over time for each actual execution task (Buchaca, page 159, section 4.1 “In order to capture the trace of each execution we have profiled them by using the basic Linux performance analysis tools… we have gathered a total of 141 features with time granularity of one second. For our study we have selected 9 key features, shown in Table 1 (workload metrics recorded at each time step), that are especially relevant for interference prediction and resource estimation. The dataset used in the experiments contains traces generated by a variety of micro-benchmarks (workloads)… The traces are executed using a server with two Intel Xeon E5-2630 processors and 128 GB of RAM, up to 400 isolated and co-located executions.” Page 159, section 5, “We use the Mean Absolute Percentage Error (MAPE) to assess the quality of the predictions at every time step of the execution trace.”); inferring, by the ML based TCM, the predicted effect on the execution time of each actual execution task given the at least one resultant PMC of each task as input (Buchaca, page 160, section 5.1 “In this experiment we evaluate the accuracy of the predictions made by the different models on co-scheduled jobs. Table 2 contains the MAPE errors of the predictions on the test set… To assess the behavior of the presented models in the test set visually, we plotted the resource usage of three triplets. Figs. 3–5 show three different pairs of co-located applications with different properties...The shaded region shown in the third column displays the period at which both applications run at the same time. 
This is the period used to compute the error metrics…One of the main difficulties… is the slowdown of both applications while competing for resources, which usually implies a big difference in execution time with respect to the applications being run in isolation… the input jobs take around 40 time steps to execute in isolation, but require more than 80 under the presented co-schedule.”); executing the actual execution tasks in parallel and measuring the actual execution time (Buchaca, page 158, section 3.3 “We have experimented with two mechanisms for predicting the completion time of the co-scheduled jobs. The standard End of Sequence (EOS) feature approach [14] and our own Percentage Completion (PC) feature approach… The first strategy, based on an End of Sequence feature, is the standard approach used to decide when an RNN should stop producing more vectors… If our decoder can predict EOS with reasonable precision then we can build stopping criteria based on those values to predict the completion time of the co-scheduled applications… The second strategy consists of a novel approach based on two additional features (one per job) which we call Percentage Completion (PC) features. PC features keep track of the percentage completion of the workloads, providing ResourceNet with extra information relevant to the job estimation runtime. We denote by PCF_a and PCF_b the percentage completion features for input sequences a and b, respectively. Both features contain at time t how much of the workload has been completed until t, expressed as a percentage… the rate of increment at every time step will depend on the overall number of time steps of the sequence. 
”); comparing the predicted effect on the execution time with the actual execution time, thereby calculating an error between the predicted and actual execution time (Buchaca, page 160, “We have experimented with different criteria to predict the runtime of co-scheduled applications. Our methods are based on PC and EOS features which are detailed in Section 3.3. Using these features, we can test different criteria to predict the completion time of a co-scheduled pair of jobs… This increase on error and variance can be supported by the behavior of the predicted PC_a and PC_b features found in Fig. 7. In the figure we can observe how the dotted lines that predict the end of the applications predict very accurately the end of the execution on the first column but do not work as well on the third column… Our results suggest that criteria build to predict the length of a co-scheduled trace using EOS should not be based on the actual value…”). Regarding claim 3, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Buchaca further teaches: wherein the training of the machine learning model comprises: a first iterative loop over all different pairing scenarios (Buchaca, page 159, “we prepared different scenarios where co-located applications a and b start with different delays. For the benchmarking executions, one of the concurrent applications is delayed to start after its co-located peer application. The phase differences used in the dataset generation are 0 (synchronized), 0.25, 0.5 and 0.75.” – defines multiple co-location setups. 
Training that ranges over these different co-run setups is the first iterative loop over different pairing scenarios.); a second iterative loop over each of the PMC measures of each μBenchmark in isolation, as well as the corresponding ΔT_Bj during the parallel execution of each pairing scenario (Buchaca, page 158, section 3.2, “The encoding process reads the traces from the isolated runs and generates the matrix E… The decoding process takes the produced E as input and generates the output sequence one vector at a time… the predicted resources of the collocated applications at time t” -establishes a per-time step inference on the parallel trace. Page 159, section 4.1 “In order to capture the trace of each execution… From these tools we have gathered a total of 141 features with time granularity of one second. For our study we have selected 9 key features, shown in Table 1 (workload metrics recorded at each time step), that are especially relevant for interference prediction and resource estimation.” – per-time-step inputs.) Regarding claim 4, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Buchaca in view of Palomo further teaches: wherein the at least one PMC is selected based on an identification of the PMCs that are associated with interference channels on the multi-processor system (Buchaca, page 159, section 4.1, “From these tools we have gathered a total of 141 features with time granularity of one second. For our study we have selected 9 key features, shown in Table 1, that are especially relevant for interference prediction and resource estimation.” – hardware performance counters are listed in table 1. Palomo, Page 19, last paragraph, “Accesses to the shared L2 cache…performed over a shared interconnect (e.g., memory bus), which is therefore the source of contention in the system.” – interference channels.).
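For illustration only, the two iterative loops mapped above (an outer loop over pairing scenarios with phase offsets, and an inner loop pairing each μBenchmark's isolated PMC trace with its measured slowdown ΔT) can be sketched as follows. All function and variable names here are hypothetical and appear in neither the claims nor the cited references:

```python
# Hedged sketch of the nested training-data loops described in the
# rejection of claim 3. "benchmarks" maps a benchmark name to its
# isolated PMC trace; "run_pair" is a hypothetical stand-in for
# actually co-running two benchmarks and measuring per-task slowdown.

def build_training_set(benchmarks, phase_offsets, run_pair):
    """benchmarks: dict name -> isolated PMC trace (list of feature rows).
    run_pair(a, b, offset) -> dict name -> measured slowdown dT.
    Returns (inputs, targets) for supervised training."""
    inputs, targets = [], []
    names = sorted(benchmarks)
    for i, a in enumerate(names):            # first loop:
        for b in names[i:]:                  #   every pairing scenario
            for offset in phase_offsets:     #   (incl. phase offsets)
                delta_t = run_pair(a, b, offset)
                for bench in (a, b):         # second loop: each uBenchmark's
                    inputs.append((benchmarks[bench], offset))  # isolated PMCs
                    targets.append(delta_t[bench])              # -> its dT
    return inputs, targets
```

With two benchmarks and two phase offsets this yields three pairings (including self-pairing) times two offsets, each contributing one sample per co-run task.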
Regarding claim 7, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Palomo further teaches: wherein the multiprocessor system is a multi-core processor of an avionics system (Palomo, Introduction, “Multicore systems are being widespreadly assessed as the reference solution to meet those emerging requirements, even in the most conservative critical embedded real-time domains, such as avionics, automotive and space.”). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Buchaca and Palomo before them, to apply the trained contention-modelling approach to a multi-core avionics processor, since avionics platforms widely use multicore chips with shared resources and require accurate worst-case timing; implementing the same model there would predict contention-induced delays for schedule verification without changing the underlying technique. Regarding claim 8, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Palomo further teaches: wherein the multiprocessor system is a homogenous platform (Palomo, page 19, “Since we assume homogeneous cores, access targets and types are the same across the system. The information on the target of an off-chip access can be relevant in the presence of multiple interconnects or interconnects that support some degree of parallelism.”). 
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Buchaca and Palomo before them, to use a homogeneous multicore platform in place of the disclosed platform as a routine substitution with predictable results – applying the same contention-modeling method and measurements while reducing per-core variability and yielding cleaner interference characterization. Regarding claim 10, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Buchaca further teaches: A non-transitory computer-readable medium storing a Machine Learning based Task Contention Model produced by the method of claim 1 (Buchaca, Abstract “Current techniques, most of which already involve machine learning and job modeling, are based on workload behavior summarization over time, rather than focusing on effective job requirements at each instant of the execution. In this work, we propose a methodology for modeling co-scheduling of jobs on data centers, based on their behavior towards resources and execution time and using sequence-to-sequence models based on recurrent neural networks. The goal is to forecast co-executed jobs footprint on resources throughout their execution time, from the profile shown by the individual jobs, in order to enhance resource manager and scheduler placement decisions.” [section 4.1, table 1] “The traces are executed using a server with two Intel Xeon E5-2630 processors and 128 GB of RAM, up to 400 isolated and co-located executions”). Regarding claim 11, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1.
Buchaca further teaches: A computer system including at least one processor and a memory, the memory storing instructions for producing a trained Machine Learning based Task Contention Model, ML based TCM, to predict time delays resulting from contention between tasks running in parallel on a multi-processor system (Buchaca, Abstract “In this work, we propose a methodology for modeling co-scheduling of jobs on data centers, based on their behavior towards resources and execution time and using sequence-to-sequence models based on recurrent neural networks. The goal is to forecast co-executed jobs footprint on resources throughout their execution time, from the profile shown by the individual jobs, in order to enhance resource manager and scheduler placement decisions.” [Section 4.1, table 1] “The traces are executed using a server with two Intel Xeon E5-2630 processors and 128 GB of RAM, up to 400 isolated and co-located executions”), Regarding claim 12, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Buchaca further teaches: A non-transitory computer-readable medium storing instructions which, when executed on a computer system, cause the computer system to produce a trained Machine Learning based Task Contention Model, ML based TCM, to predict time delays resulting from contention between tasks running in parallel on a multi-processor system, by performing the method of claim 1 (Buchaca, Abstract, “In this work, we propose a methodology for modeling co-scheduling of jobs on data centers, based on their behavior towards resources and execution time and using sequence-to-sequence models based on recurrent neural networks. The goal is to forecast co-executed jobs footprint on resources throughout their execution time, from the profile shown by the individual jobs, in order to enhance resource manager and scheduler placement decisions. 
The methods presented herein are validated by using High Performance Computing benchmarks based on different frameworks (such as Hadoop and Spark) and applications (CPU bound, IO bound, machine learning, SQL queries...). Experiments show that the model can correctly identify the resource usage trends from previously seen and even unseen co-scheduled jobs.” [section 4.1, table 1] “The traces are executed using a server with two Intel Xeon E5-2630 processors and 128 GB of RAM, up to 400 isolated and co-located executions”). Regarding claim 14, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Buchaca in view of Palomo further teaches: wherein the predicted effect execution time of each actual execution task is aggregated for contending tasks so as to predict a worst case execution time, WCET (Buchaca, page 165, Conclusion, “Our experiments show that the model is able to predict the resource usage of co-located applications sharing resources over time, even when application performance degrades drastically due to a high demand of similar resources at the same time. Moreover, as a baseline, we compare our model with standard machine learning algorithms. The experiments show that our model makes more accurate predictions and is able to deal with the sequential nature of the data, thus making it suitable for the presented scenario where input/output pairs have different lengths.” Palomo, page 17. “It is worth noting that a necessary conservative assumption in the analysis of the WCD at task level is to assume that the task under analysis always suffers the worst-case possible contention. As a consequence, pairing in the presence of typed accesses should conservatively pair those accesses that incur higher-latency first (see Eq. 6, 10). 
However, we do not need to model this constraint as the worst-case pairing (of a request of type t) to each task request is already induced by maximizing the overall makespan, as part of the objective function.” – describes how worst-case scenario is modeled by assuming the task always suffers the worst-case possible contention, which is achieved by maximizing the overall makespan as part of the objective function.) executing a plurality of microbenchmarks, μBenchmarks Bj, on the multi-processor system in isolation (Buchaca, page 156, col. 1, last paragraph “For our experiments, we created a dataset with execution traces from the previously mentioned benchmark suites. The dataset consists of triplets (a, b, a ∧ b) where a and b contain the traces of the isolated executions from two application”) and measuring at least one resultant PMC over time to extract ideal characteristic footprints of each μBenchmark when operating in isolation (page 156, col. 1, third paragraph “The method presented herein predicts the footprint for resource demands of co-located applications, given that the traces of these applications run in isolation” Col. 2, first paragraph “A novel use of Recurrent Neural Networks that estimates the monitored metrics of two co-scheduled applications a∧ b from the information of a and b gathered running the applications in isolation.” page 164, “From a dataset consisting of applications and SPARK profiling features, captured from hardware counters (CPI, number of interruptions, number of cache misses, etc.” – the method uses monitoring metrics, a form of “Performance Monitoring Counters” (hardware counters), which are collected over time to establish a characteristic footprint (profiling features).) 
; executing possible pairing scenarios of the plurality of μBenchmarks in parallel on the multi-processor system and measuring the effect on the execution time of each μBenchmark, ΔT_Bj, resulting from contention over interference channels within the multi-processor system (Buchaca, page 159, section 4.1, “For our study we have selected 9 key features, shown in Table 1, that are especially relevant for interference prediction and resource estimation. The dataset used in the experiments contain traces generated by a variety of micro-benchmarks” Section 4.2 “The dataset is composed of workloads triplets. Each triplet contains a combination of three execution traces. The first two traces correspond to isolated executions of the two applications. The third trace contains the execution of the co-located application from the first two traces…In real world scenarios…In order to increase the co-location cases in our collected dataset, we prepared different scenarios where co-located applications a and b start with different delays. For the benchmarking executions, one of the concurrent applications is delayed to start after its co-located peer application…“ -teaching pairing/co-run scenarios. Palomo page 19, Section V., “Modeling both, task overlapping and access pairing, as an ILP problem allows to compute a tight WCD (Worst-Case Contention Delay) by implicitly accounting for all possible (feasible) task overlapping and access pairing.” -all possible scenarios.
Page 19, last paragraph, “Accesses to the shared L2 cache…performed over a shared interconnect (e.g., memory bus), which is therefore the source of contention in the system.” – interference channels.); training a machine learning model using, as an input, the at least one PMC measure in isolation of each μBenchmark and, at the output, the corresponding ΔT_Bj during the parallel execution of each pairing scenario as training inputs (Buchaca, page 165, section 7 “This paper introduces the use of recurrent neural networks for interference and resource prediction tasks of co-scheduled applications…Our method predicts resource usage of the monitored metrics over time and is adaptable enough to be trained with workloads of arbitrary input and output lengths. Moreover, since training is done for a regression task instead of a classification task, we introduce the percentage completion features which significantly improve completion time prediction of co-scheduled applications.” Page 159, section 4.1 “The dataset used in the experiments contains traces generated by a variety of micro-benchmarks (workloads).” Section 4.2 “The dataset is composed of workloads triplets. Each triplet contains a combination of three execution traces…” Page 160, col. 1, first paragraph, “The baseline model predicts the resource usage at time t as the sum of the resources of the isolated applications at time t. This means that the output at time t for a given input [a; b] is ỹ_t = a_t + b_t. Notice that [a; b] is a matrix of features that already contains padded zeros to encode any temporal phase difference of sequences (should they exist).” Page 160, section 5.1 “Figs. 4 and 5. In Fig.
4 the input jobs take around 40 time steps to execute in isolation, but require more than 80 under the presented co-schedule” Page 162, “A reasonable explanation for the behavior of EOS is that our model is trained by minimizing the mean squared error between predictions and true traces.” Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Buchaca and Palomo before them, to aggregate the model’s predicted per-task co-run delay from Buchaca with each task’s baseline time as in Palomo’s execution-budgeting to obtain a WCET for contending tasks, motivated by the need to provide conservative time budgets for scheduling and to avoid co-placements that risk overruns. Regarding claim 16, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Buchaca in view of Palomo further teaches: wherein at least two μBenchmarks of the plurality of μBenchmarks are executed simultaneously (Buchaca, [section 4.1] “The dataset used in the experiments contains traces generated by a variety of micro-benchmarks (workloads)… The traces are executed…up to 400 isolated and co-located executions.” – the co-located executions of micro-benchmarks require that at least two benchmarks execute during overlapping time intervals. Overlapping execution means that the benchmarks are executed simultaneously. Therefore, the cited passages teach the limitation that at least two μBenchmarks are executed simultaneously.). Regarding claim 17, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1.
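The claim 14 WCET-aggregation rationale discussed above (each task's isolated baseline plus a conservatively assumed worst-case contention delay) can be illustrated with a minimal sketch; the function and its inputs are assumptions for illustration, not the applicant's or the references' actual computation:

```python
# Hedged sketch of WCET aggregation: add to a task's isolated execution
# time the largest contention delay predicted for it across all co-run
# pairings, conservatively assuming the task always suffers the
# worst-case possible contention (cf. the Palomo passage quoted above).

def aggregate_wcet(isolated_time, predicted_delays):
    """isolated_time: measured execution time in isolation.
    predicted_delays: model-predicted dT for each contending pairing.
    Returns a conservative worst-case execution time estimate."""
    worst_delay = max(predicted_delays) if predicted_delays else 0.0
    return isolated_time + worst_delay
```

For example, a task taking 40 time steps in isolation with predicted co-run delays of up to 42.5 steps would receive a WCET budget of 82.5 steps, consistent with the roughly-doubled co-scheduled runtimes quoted from Buchaca.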
Buchaca in view of Palomo further teaches: wherein at least two μBenchmarks of the plurality of μBenchmarks are executed starting at the same time point (Buchaca, [section 4.1] Buchaca teaches that the dataset includes “traces generated by a variety of micro-benchmarks (workloads),” and further teaches that [section 3] “the model is fed with traces of programs that start execution at the same time.” – under the broadest reasonable interpretation, Buchaca uses traces generated from micro-benchmarks to represent program execution scenarios. The cited passage teaches programs that start execution at the same time, which meets the limitation that at least two μBenchmarks are executed starting at the same time point.). Regarding claim 18, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Buchaca in view of Palomo further teaches: wherein at least two of the plurality of μBenchmarks are executed in parallel in a pairing scenario while accessing at least one shared interference channel of the multi-processor system (Buchaca, [section 4.1] “The dataset used in the experiments contains traces generated by a variety of micro-benchmarks (workloads)… The traces are executed using a server with two Intel Xeon E5-2630 processors and 128 GB of RAM, up to 400 isolated and co-located executions” [section 5.2] “We have experimented with different criteria to predict the runtime of co-scheduled applications… Using these features, we can test different criteria to predict the completion time of a co-scheduled pair of jobs.” Palomo, [page 16, left column] “Pairing models the fact that requests from different cores can collide in the access to a shared hardware resource… Note that pairing can only happen when the interfered task and the interfering task, in different cores, overlap in time (i.e., run in parallel).” [page 19, left column] “Accesses to the shared L2 cache – and eventually to memory – 
are performed over a shared interconnect (e.g., memory bus), which is therefore the source of contention in the system.”- Buchaca’s co-scheduled pair of jobs and co-located executions describe executing multiple micro-benchmarks together on the same system to evaluate completion time, corresponding to at least two μBenchmarks being executed together during a pairing scenario during overlapping execution periods, and Palomo’s requests from different cores accessing a shared hardware resource together with task overlapping in time (i.e., running in parallel) describe execution on a multi-processor system where tasks execute in parallel, while Palomo’s accesses to shared L2 cache over a shared interconnect describe shared hardware resources through which contention occurs.). wherein the execution of the at least two μBenchmarks in parallel does not result in an increase in execution time ΔT_Bj for the at least two μBenchmarks relative to execution in isolation (Buchaca, [page 165] “The first, aimed at calculating the cost of a Spark job run in isolation, is approximated by modeling a function from the execution parameters (the number of data partitions, the number of stages, the number of jobs per stage, etc.) to predict the total cost of the job. The second, designed to predict the cost when two jobs run in concurrency by modeling interference, is tackled by adapting the previous formulas.” – under the broadest reasonable interpretation, determining execution cost for a job run in isolation and determining execution cost when two jobs run concurrently with interference describes evaluating whether execution of the at least two μBenchmarks in parallel results in an increase in execution time relative to execution in isolation.). Regarding claim 19, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1.
Buchaca in view of Palomo further teaches: The computer-implemented method of claim 1, wherein training the machine learning model further comprises including pairing scenarios in which at least two of μBenchmarks of the plurality of μBenchmarks are executed in parallel while accessing at least one shared interference channel of the multi-processor system, and for which the ΔT_Bj is substantially zero (Buchaca, [section 4.1] “The dataset used in the experiments contains traces generated by a variety of micro-benchmarks (workloads)… The traces are executed using a server with two Intel Xeon E5-2630 processors and 128 GB of RAM, up to 400 isolated and co-located executions.” [page 165] “The first, aimed at calculating the cost of a Spark job run in isolation… The second, designed to predict the cost when two jobs run in concurrency by modeling interference” Palomo [page 17, left column] “Pairing models the fact that requests from different cores can collide in the access to a shared hardware resource… that pairing can only happen when the interfered task and the interfering task, in different cores, overlap in time (i.e., run in parallel).” [page 19, left column] “Accesses to the shared L2 cache – and eventually to memory – are performed over a shared interconnect (e.g., memory bus)” [page 20] “Interference suffered by τ_i because of contention triggered by paired accesses of any type in τ_j” – describes that executing tasks in parallel results in an increase in execution time due to contention from paired accesses to shared resources, where the total delay is based on the number and type of interfering access.
Buchaca teaches μBenchmarks and pairing scenarios through micro-benchmarks executed in isolated and co-located executions and further teaches determining execution costs for jobs in isolation and when run in concurrency, while Palomo teaches that paired tasks overlap in time (i.e., run in parallel) and access shared hardware resources corresponding to a shared interference channel, and further teaches computing interference delay based on paired access, which describes change in execution time, and under the broadest reasonable interpretation, when the number of interfering accesses is zero or minimal the resulting interference delay is zero or negligible, corresponding to ΔT_Bj being substantially zero.). Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Buchaca et al., (NPL: “Sequence-to-sequence models for workload interference prediction on batch processing datacenters” (Published: 2020)) in view of Palomo et al., (NPL: “Accurate ILP-based contention modeling on statically scheduled multicore systems” (Published: 2019)) further in view of Inam et al., (NPL: “Bandwidth Measurement using Performance Counters for Predictable Multicore Software” (Published: 2012)). Regarding claim 5, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Buchaca in view of Palomo does not teach, but Buchaca in view of Palomo further in view of Inam does teach: wherein the measuring of at least one PMC comprises measuring the at least one PMC at a variable monitoring frequency (Inam, page 3, section C., “The charmon uses the performance monitoring facility located inside the P4080 processor… Implemented within the platform, the charmon is a continuously running performance monitoring tool.
It gathers information about HW-usage for the complete system by periodically sampling performance monitor counters (PMCs) and storing the results in a local database… The charmon can simultaneously measure PMC events for all cores and group them on a per core basis for viewing together with calculated KPIs.”). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Buchaca, Palomo, and Inam before them, to incorporate systematic measurement of hardware performance monitoring counters during both isolated and co-run executions into the training and validation workflow of the contention model. One would have been motivated to do so to obtain time-resolved, low-overhead signals that directly reflect shared-resource usage and contention, enabling the model to learn how such pressure translates into delay inflation for each pairing scenario and to verify those predictions against measured ground truth. This would provide a more reliable and complete contention predictor that supports informed mapping and scheduling decisions. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Buchaca et al., (NPL: “Sequence-to-sequence models for workload interference prediction on batch processing datacenters” (Published: 2020)) in view of Palomo et al., (NPL: “Accurate ILP-based contention modeling on statically scheduled multicore systems” (Published: 2019)) further in view of Iorga et al., (NPL: “Slow and Steady: Measuring and Tuning Multicore Interference” (Published: 2020)). Regarding claim 6, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1.
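The periodic PMC sampling described in the Inam citation can be illustrated with a minimal sketch; "read_counters" is a purely hypothetical stand-in for a platform PMC facility (a real implementation would go through perf or a vendor API), and the period argument captures the idea of a variable monitoring frequency:

```python
# Hedged sketch of periodic PMC sampling in the spirit of Inam's
# "charmon" tool: sample counters at a configurable period, so the
# monitoring frequency can be varied between measurement runs.

import time

def sample_pmcs(read_counters, period_s, num_samples):
    """Call read_counters() num_samples times, sleeping period_s seconds
    between samples; changing period_s varies the monitoring frequency."""
    samples = []
    for _ in range(num_samples):
        samples.append(read_counters())
        time.sleep(period_s)
    return samples
```

Sampling the same workload at different period_s values would yield coarser or finer-grained counter traces for the training inputs discussed earlier in this action.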
Buchaca in view of Palomo does not teach, but Buchaca in view of Palomo further in view of Iorga does teach: each μBenchmark is a synthetic benchmark that is selected so as to stress certain interference channels of the multi-processor system in an isolated way (Iorga, page 2, col. 2, “We attempted to reproduce the highest slowdowns reported in recent interference work by Bechtel and Yun [3]: namely, that their BwWrite enemy program can cause a slowdown of more than 300× on a synthetic memory-intensive piece of software (on a Raspberry Pi 3 B chip).” – the paper teaches synthetic benchmarks by defining small, parameterized enemy/victim programs that issue controlled cache/memory access patterns to deliberately stress specific shared resources and measure the resulting slowdowns). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Buchaca, Palomo, and Iorga before them, to incorporate synthetic microbenchmarks – selected to stress specific shared resources – into Buchaca’s learning-based contention framework. One would have been motivated to do so to generate controlled, repeatable isolation traces and co-run slowdowns that cleanly expose interference on targeted channels, yielding reliable ΔT labels and improving the model’s contention prediction across pairing scenarios. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Buchaca et al., (NPL: “Sequence-to-sequence models for workload interference prediction on batch processing datacenters” (Published: 2020)) in view of Palomo et al., (NPL: “Accurate ILP-based contention modeling on statically scheduled multicore systems” (Published: 2019)) further in view of Hoffmann et al., (NPL: “Online Machine Learning for Energy-Aware Multicore Real-Time Embedded Systems” (Published: February 2021)).
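The enemy-program style of synthetic stressor described in the Iorga citation could be sketched, in simplified form, as a strided-write loop over a buffer sized to overflow the shared cache. Buffer size, stride, and all names below are assumptions for illustration; they are not Iorga's (or Bechtel and Yun's) actual code, and a real stressor would be tuned to the target chip:

```python
# Illustrative synthetic micro-benchmark: strided writes over a buffer
# intended to be larger than the last-level cache, generating sustained
# miss/bus traffic to stress the shared cache/memory-bus interference
# channel in isolation. Parameters are assumptions, not tuned values.

def memory_stressor(buf_len, stride, iterations):
    """Repeatedly write through a buffer at a fixed stride (e.g., one
    cache line) to produce a controlled, repeatable access pattern."""
    buf = bytearray(buf_len)
    for it in range(iterations):
        for i in range(0, buf_len, stride):
            buf[i] = it & 0xFF
    return buf
```

Co-running such a stressor against a victim benchmark and measuring the victim's slowdown is how enemy/victim pairs expose a specific interference channel.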
Regarding claim 9, Buchaca in view of Palomo teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Buchaca in view of Palomo does not teach, but Buchaca in view of Palomo further in view of Hoffmann does teach: wherein the multiprocessor system is a heterogenous platform or not symmetric (Hoffmann, Introduction, “MODERN embedded multicore processors combine a large variety of architectural features to cope with growing application demands, including heterogeneous cores, SIMD units, and application-specific accelerators interconnected by some network-on-chip.”). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Buchaca, Palomo, and Hoffmann before them, to apply the same contention-model training framework on a heterogeneous or asymmetric multicore platform by profiling each task on its own core type and forming cross-type co-run pairings. One would have been motivated to do so to extend delay prediction to mixed-core SoCs that share caches and interconnects, so that the model captures contention among dissimilar cores and supports scheduling on those platforms.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daravanh Phakousonh whose telephone number is (571)272-6324. The examiner can normally be reached Mon - Thurs 7 AM - 5 PM, Every other Friday 7 AM - 4PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen can be reached at 571-272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Daravanh Phakousonh/Examiner, Art Unit 2121 /Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Oct 31, 2022 — Application Filed
Sep 11, 2025 — Non-Final Rejection — §101, §102, §103
Jan 21, 2026 — Response Filed
Apr 02, 2026 — Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572821
ACCURACY PRIOR AND DIVERSITY PRIOR BASED FUTURE PREDICTION
Granted Mar 10, 2026 (2y 5m to grant)


