Prosecution Insights
Last updated: April 19, 2026
Application No. 17/979,905

MACHINE LEARNING BASED CONTENTION DELAY PREDICTION IN MULTICORE ARCHITECTURES

Final Rejection §103
Filed
Nov 03, 2022
Examiner
SPRATT, BEAU D
Art Unit
2143
Tech Center
2100 — Computer Architecture & Software
Assignee
Collins Aerospace Ireland Limited
OA Round
2 (Final)
79%
Grant Probability
Favorable
3-4
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate
342 granted / 432 resolved
+24.2% vs TC avg
Strong +27% interview lift
+26.6%
Interview Lift
allowance difference, with vs. without an interview, across resolved cases
Typical timeline
3y 1m
Avg Prosecution
37 currently pending
Career history
469
Total Applications
across all art units

Statute-Specific Performance

§101
12.2%
-27.8% vs TC avg
§103
63.7%
+23.7% vs TC avg
§102
11.9%
-28.1% vs TC avg
§112
5.4%
-34.6% vs TC avg
Deltas are measured against a Tech Center average estimate • Based on career data from 432 resolved cases
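The per-statute deltas above are simple differences between the examiner's statute-specific allowance rate and a Tech Center baseline. A minimal sketch of that arithmetic follows; the 40.0% baseline is inferred from the displayed deltas, not stated anywhere on this page.

```python
# Statute-specific allowance rates for this examiner (from the panel above),
# and the Tech Center baseline implied by the displayed deltas.
examiner_rate = {"101": 12.2, "103": 63.7, "102": 11.9, "112": 5.4}
tc_average = 40.0  # inferred: every displayed delta is consistent with 40.0%

# Delta vs. Tech Center average, in percentage points.
deltas = {s: round(r - tc_average, 1) for s, r in examiner_rate.items()}
print(deltas)  # {'101': -27.8, '103': 23.7, '102': -28.1, '112': -34.6}
```

Note that all four deltas shown on the page round exactly to a single 40.0% baseline, which is why the chart can draw one Tech Center average line across statutes.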

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement submitted on 12/22/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

The Amendment filed 01/16/2026 has been entered. Claims 1-15 remain pending in the application.

Allowable Subject Matter

Claims 7-8 and 10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Vetter (US 6850920 B2) in view of Lee et al. (US 20150205637 A1, hereinafter Lee) and McGee et al. (US 6643613 B2, hereinafter McGee).

As to independent claim 1, Vetter teaches a method of generating training data for training a Machine Learning based Task Contention Model (ML based TCM) to predict time delays resulting from contention between tasks running in parallel on a multi-processor system, the method comprising [generates training data or records for a classifier or model, Col. 3-4 ln. 54-2: "generating a plurality of training records to model the behavior of a first application"]: executing a plurality of microbenchmarks, microbenchmarks B, on the multi-processor system in isolation and measuring a number of resultant Performance Monitoring Counters, PMCs, over time to extract ideal characteristic footprints of each microbenchmark when operating in isolation [different microbenchmarks as training for understanding performance (footprint indicated by duration metrics), Col. 8 ln. 30-67: "Microbenchmarks are typically smaller software benchmarks used to understand a compartmentalized attribute of computer system performance"; "send duration, send wait duration, receive duration, receive wait duration, and message duration"].

Vetter does not specifically teach executing in isolation. However, Lee teaches executing in isolation [executes a sample on one node (isolation), extracts data (training data for a learning model), makes predictions, and distributes work according to the results, Fig. 5, ¶19, ¶69-72: "the machine learning engine 230 may sample only a part of the workload for a given kernel. predict a kernel feature vector and a execution time of a kernel, and predict the overall execution time on the basis of the kernel feature vector and the execution time."] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the performance analysis of Vetter by incorporating the executing in isolation disclosed by Lee, because both techniques address the same field of workload management, and incorporating Lee into Vetter provides an easier environment for developing compute clusters for efficient throughput [Lee ABST].

Vetter and Lee do not specifically teach performing a feature correlation analysis on the PMC metrics resulting from the plurality of microbenchmarks to determine the degree of correlation between each resultant PMC and the executed plurality of microbenchmarks, and selecting a subset of PMCs based upon their degree of correlation with the plurality of microbenchmarks to form a reduced PMC array. However, McGee teaches performing such a feature correlation analysis [correlates pairs of metrics, including a rank value (degree), Fig. 17 (1702), Col. 2 ln. 52-67: "metrics are correlated and grouped in a dynamic or adaptive manner. Some embodiments of the present invention, for example, use Spearman rank-order correlation to correlate metrics with non-linear relationships and outliers in the data"] and selecting such a subset [selects metrics based on correlation, Col. 25 ln. 41-48: "First selector 2116 selects as associated metrics those metrics that are correlated with the key metric, and are correlated with a predetermined percentage of the other metrics that are correlated with the key metric. Second selector 2118 selects as associated metrics those metrics that, while not correlated with the key metric, are correlated with a predetermined percentage of metrics that are correlated with the key metric."] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the modeling disclosed by Vetter and Lee by incorporating the feature correlation analysis and PMC subset selection disclosed by McGee, because all of these techniques address the same field of workload management, and incorporating McGee into Vetter and Lee better tracks the relations between metrics for improved selection and avoidance of problem causes [McGee Col. 2 ln. 34-49].

As to dependent claim 12, the rejection of claim 1 is incorporated. Vetter, Lee and McGee further teach a computer system for producing training data for training a Machine Learning based Task Contention Model (ML based TCM) to predict time delays resulting from contention between tasks running in parallel on a multi-processor system, wherein the computer system is configured to perform the method of claim 1.
[Lee: a system with machine learning prediction of execution time, with parallel execution across nodes and workload distribution, ¶13-15: "predicting a data throughput of the at least one compute device comprised in each node by the at least one node; and c) distributing a workload accompanied by the execution of the parallel application to the at least one compute device comprised in each node according to the predicted data throughput of the compute device"]

As to dependent claim 13, the rejection of claim 1 is incorporated. Vetter, Lee and McGee further teach computer software comprising instructions which, when executed on a computer system, cause the computer system to produce training data for training a Machine Learning based Task Contention Model, ML based TCM, to predict time delays resulting from contention between tasks running in parallel on a multi-processor system, by performing the method of claim 1. [Lee: computer program ¶74, with machine learning prediction of execution time, parallel execution across nodes, and workload distribution, ¶13-15, quoted above]

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Vetter in view of Lee and McGee, as applied in the rejection of claim 1 above, and further in view of Diamant et al. (US 12400106 B1, hereinafter Diamant).

As to dependent claim 2, the rejection of claim 1 is incorporated. Vetter, Lee and McGee do not specifically teach wherein the plurality of microbenchmarks are selected based on the Arithmetic Intensity, AI, of each microbenchmark so as to stress certain interference channels of the multi-processor system in an isolated way. However, Diamant teaches this limitation [loads operations based on an arithmetic-intensity (AIF) threshold, i.e., uses AIF as a selection metric, Col. 26 ln. 12-24: "Load operations with AIFs greater than a threshold value (e.g., determined based on the ridge point of a roofline model described above) may be compute-bound"]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling techniques of Vetter, Lee and McGee by incorporating the AI-based microbenchmark selection disclosed by Diamant, because all of these techniques address the same field of workload management, and incorporating Diamant into Vetter, Lee and McGee more efficiently utilizes the available computing power, local memory, and memory bandwidth of the neural network processor, thereby improving overall performance [Diamant Col. 2 ln. 1-10].

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Vetter in view of Lee and McGee, as applied in the rejection of claim 1 above, and further in view of Bertran et al. (US 9618999 B1, hereinafter Bertran).

As to dependent claim 3, the rejection of claim 1 is incorporated. Vetter, Lee and McGee do not specifically teach wherein the plurality of microbenchmarks are selected from a pre-populated code block repository, and are selected so as to generate the desired interference and contention scenarios for training data that may be used in training an accurate ML-based TCM. However, Bertran teaches this limitation [a stressmark database (code block repository) from which tests are chosen, Col. 4 ln. 34-51: "instruction sequences 126 should be used to establish various stressmarks in the stressmark database"]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling techniques of Vetter, Lee and McGee by incorporating the repository-based microbenchmark selection disclosed by Bertran, because all of these techniques address the same field of workload management, and incorporating Bertran into Vetter, Lee and McGee improves power efficiency and reduces the resource costs of systems [Bertran Col. 1 ln. 25-55].

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Vetter in view of Lee and McGee, as applied in the rejection of claim 1 above, and further in view of Haller et al. (US 20060212875 A1, hereinafter Haller).

As to dependent claim 4, the rejection of claim 1 is incorporated. Vetter, Lee and McGee do not specifically teach wherein the plurality of microbenchmarks are selected such that each individually has a shorter execution time than a maximum makespan for a given task to be scheduled on the multi-processor system. However, Haller teaches this limitation [less time for execution than the makespan (shorter), ¶5, ¶14: "finish in less time than the current makespan minus the execution time of that task on the makespan machine"]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling techniques of Vetter, Lee and McGee by incorporating the execution-time-based microbenchmark selection disclosed by Haller, because all of these techniques address the same field of workload management, and incorporating Haller into Vetter, Lee and McGee minimizes late tasks for reasonable task mapping and scheduling [Haller ¶3-4].

Claims 5, 6, 11 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al.
(US 20130191612 A1, hereinafter Li) in view of Lee and Vetter.

As to independent claim 5, Li teaches a computer-implemented method of producing a trained Machine Learning based Task Contention Model (ML based TCM) to predict time delays resulting from contention between tasks running in parallel on a multi-processor system using training data generated by the method of any preceding claim, the method comprising [predicts slowdowns (delays) between co-running tasks, ¶32]: executing a predetermined set of possible pairing scenarios of the plurality of microbenchmarks in parallel on the multi-processor system and measuring the effect on the execution time of each microbenchmark, dT, resulting from contention over interference channels within the multi-processor system [pairing (co-running) with slowdown prediction (change in time), ¶32: "predict the slowdown of co-running jobs due to contention of the shared resources"; measures the effect on waiting time and latency (execution time), ¶26-27: "predicts and handles interference when two or more jobs time-share GPUs in HPC clusters" and "reduces a job's waiting time in the queue by 39% and improves job latencies by around 20%"].

Li does not specifically teach executing in isolation, or outputting the corresponding dT during the parallel execution of each pairing scenario as training inputs. However, Lee teaches executing in isolation [executes a sample on one node (isolation), extracts data (training data for a learning model), makes predictions, and distributes work according to the results, Fig. 5, ¶19, ¶69-72: "the machine learning engine 230 may sample only a part of the workload for a given kernel. predict a kernel feature vector and a execution time of a kernel, and predict the overall execution time on the basis of the kernel feature vector and the execution time."] and outputting the corresponding dT during the parallel execution of each pairing scenario as training inputs [predicts (outputs) the times of kernels and the overall time of the parallel application, ¶69-72, quoted above]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the process scheduling of Li by incorporating the executing in isolation and the outputting of the corresponding dT during the parallel execution of each pairing scenario as training inputs disclosed by Lee, because both techniques address the same field of workload management, and incorporating Lee into Li provides an easier environment for developing compute clusters for efficient throughput [Lee ABST].

Li and Lee do not specifically teach the term microbenchmarks, or training a machine learning model using, as an input, a reduced PMC array for each microbenchmark in isolation. However, Vetter teaches microbenchmarks [different microbenchmarks as training for a decision tree (ML model), Col. 8 ln. 30-53: "A plurality of benchmarks (or microbenchmarks) may be used to train the decision tree"; tests for training, Col. 3-4 ln. 54-2] and training a machine learning model using, as an input, a reduced PMC array for each microbenchmark in isolation [a trained tree (model) using performance behaviors, Col. 8 ln. 24-47: "trained by providing it with examples of efficient and inefficient MPI behavior from a first application"; different microbenchmarks as training, Col. 8 ln. 30-67: "A plurality of benchmarks (or microbenchmarks) may be used to train the decision tree"]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling disclosed by Li and Lee by incorporating the microbenchmarks and the training of a machine learning model using, as an input, a reduced PMC array for each microbenchmark in isolation disclosed by Vetter, because all of these techniques address the same field of workload management, and incorporating Vetter into Li and Lee provides more understanding and simplicity in predictions in an efficient way [Vetter Col. 7 ln. 20-32].

As to dependent claim 6, the rejection of claim 5 is incorporated. Li, Lee and Vetter further teach wherein the machine learning model is a decision tree-based predictor, and wherein the machine learning model is an XGBoost model. [Vetter: trees (ABST) and boosting, Col. 9 ln. 15-28: "several techniques, such as boosting, may improve this error rate"]

As to dependent claim 11, the rejection of claim 5 is incorporated. Li, Lee and Vetter further teach a computer system comprising: at least one processor; and a memory storing a Machine Learning based Task Contention Model configured to predict time delays resulting from contention between tasks running in parallel on a multi-processor system, wherein the Machine Learning based Task Contention Model is produced by the method of claim 5. [Li: GPU and compute nodes (processor) ¶11, ¶49; memory ¶43; model predicts slowdowns ¶32]

As to dependent claim 14, the rejection of claim 5 is incorporated.
Li, Lee and Vetter further teach a computer system for producing a trained Machine Learning based Task Contention Model (ML based TCM) to predict time delays resulting from contention between tasks running in parallel on a multi-processor system, wherein the computer system is configured to perform the method of claim 5. [Lee: manycore cluster system ¶7, with machine learning prediction of execution time, parallel execution across nodes, and workload distribution, ¶13-15: "predicting a data throughput of the at least one compute device comprised in each node by the at least one node; and c) distributing a workload accompanied by the execution of the parallel application to the at least one compute device comprised in each node according to the predicted data throughput of the compute device"]

As to dependent claim 15, the rejection of claim 5 is incorporated. Li, Lee and Vetter further teach computer software comprising instructions which, when executed on a computer system, cause the computer system to produce a trained Machine Learning based Task Contention Model (ML based TCM) to predict time delays resulting from contention between tasks running in parallel on a multi-processor system, by performing the method of claim 5. [Lee: manycore cluster system ¶7, with machine learning prediction of execution time, parallel execution across nodes, and workload distribution, ¶13-15, quoted above]

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Lee and Vetter, as applied in the rejection of claim 5 above, and further in view of Yousaf et al. (US 20190102213 A1, hereinafter Yousaf).

As to dependent claim 9, the rejection of claim 5 is incorporated. Li, Lee and Vetter do not specifically teach wherein the measuring of the selected PMCs comprises measuring the selected PMCs at a variable monitoring frequency. However, Yousaf teaches this limitation [variable-frequency monitoring that reduces monitoring load, ¶45: "The variable monitoring frequencies and surveillance epoch lengths provided by the monitoring process illustrated in FIG. 3 can be determined according to intelligent calculations, e.g. utilizing machine learning techniques, and can reduce the monitoring load placed on the resources of the cloud infrastructure as well as the processing load placed on the CMS"]. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling techniques of Li, Lee and Vetter by incorporating the variable monitoring frequency disclosed by Yousaf, because all of these techniques address the same field of workload management, and incorporating Yousaf into Li, Lee and Vetter provides a more optimal allocation and distribution of resources [Yousaf ¶10-11].

Response to Arguments

Applicant's arguments filed 01/16/2026 with respect to the §112 and §101 rejections are persuasive; those rejections have been withdrawn.

In the remarks filed 01/16/2026, applicant argues that Jagemar fails to teach "performing a feature correlation analysis on the PMCs resulting from the plurality of microbenchmarks to determine the degree of correlation between each resultant PMCs and the executed plurality of microbenchmarks." See Jagemar ¶78 and ¶88.

Applicant's arguments with respect to claim 1 have been considered but are moot in view of the new ground of rejection under 35 U.S.C. 103 over Vetter in view of Lee and McGee set forth above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Weston et al. (US 7318051 B2) teaches training with the remaining features after a correlation analysis (see Col. 3 ln. 34-56 and Col. 7 ln. 33-42).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEAU SPRATT, whose telephone number is (571) 272-9919. The examiner can normally be reached M-F 8:30-5 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-212-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEAU D SPRATT/
Primary Examiner, Art Unit 2143
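For practitioners parsing the McGee combination: the feature-correlation step the examiner maps to McGee (Spearman rank-order correlation over performance metrics, then keeping a de-correlated subset) can be sketched as below. This is an illustrative reconstruction; the counter names, sample values, and threshold are hypothetical and are not taken from any cited reference.

```python
# Illustrative sketch of the claimed feature-correlation step: score PMC
# (performance monitoring counter) columns by Spearman rank-order correlation
# and keep one representative from each highly correlated group.

def ranks(xs):
    """Rank values from 1..n, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            out[order[k]] = avg_rank
        i = j + 1
    return out

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0

def reduce_pmc_array(samples, names, threshold=0.9):
    """Drop any PMC whose |rho| with an already-kept PMC meets the threshold."""
    kept = []
    for j, name in enumerate(names):
        col = [row[j] for row in samples]
        redundant = any(
            abs(spearman(col, [row[names.index(k)] for row in samples])) >= threshold
            for k in kept
        )
        if not redundant:
            kept.append(name)
    return kept

# One row per isolated microbenchmark run; L2_MISS tracks L1_MISS monotonically,
# so it is redundant, while BUS_ACCESS carries independent information.
samples = [
    [10, 21, 5],
    [20, 39, 7],
    [30, 61, 6],
    [40, 82, 9],
]
names = ["L1_MISS", "L2_MISS", "BUS_ACCESS"]
print(reduce_pmc_array(samples, names))  # ['L1_MISS', 'BUS_ACCESS']
```

The reduced list is what the claims call the "reduced PMC array"; a greedy keep-first policy is used here purely for brevity, whereas McGee's selectors group around a key metric.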

Prosecution Timeline

Nov 03, 2022
Application Filed
Sep 11, 2025
Non-Final Rejection — §103
Jan 16, 2026
Response Filed
Feb 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715
Cementing Lab Data Validation based On Machine Learning
2y 5m to grant Granted Apr 07, 2026
Patent 12596955
REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12596956
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12561464
CATALYST 4 CONNECTIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12561606
TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.6%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
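The headline projections appear to be simple arithmetic on the examiner statistics shown above. A sketch follows; only the input numbers come from this page, and the additive-lift-and-cap treatment is an assumption about how the dashboard combines them.

```python
# Inputs shown elsewhere on this page.
granted, resolved = 342, 432   # examiner's career grant record
interview_lift_pts = 26.6      # reported lift, treated as percentage points

base = 100 * granted / resolved                        # career allow rate
with_interview = min(base + interview_lift_pts, 99.0)  # assumed 99% display cap

print(round(base))            # 79  (Grant Probability)
print(round(with_interview))  # 99  (With Interview)
```

The uncapped sum exceeds 100%, so some cap or diminishing-returns blend must be applied; 99% is simply what the page displays.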
