Prosecution Insights
Last updated: April 19, 2026
Application No. 18/370,506

METHOD FOR CONFIGURING A REAL-TIME COMPUTER SYSTEM

Status: Non-Final OA (§102), Round 1
Filed: Sep 20, 2023
Examiner: KIM, SISLEY NAHYUN
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: TTTech Auto AG
Grant Probability: 89% (Favorable); 99% with interview
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career allow rate: 89% (above average; 590 granted / 665 resolved; +33.7% vs TC avg)
Interview lift: +16.9% among resolved cases with interview
Typical timeline: 2y 9m average prosecution; 42 applications currently pending
Career history: 707 total applications across all art units

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 26.1% (-13.9% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)

TC averages are estimates. Based on career data from 665 resolved cases.

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Butler et al. (US 2022/0197773, hereinafter Butler).

Regarding claim 1, Butler discloses A method (fig. 1-42) for configuring a real-time computer system (RTS, RTSa) comprising resources for executing tasks (AT1-AT4, IT1-IT5, ITM_M1, ITM_OUT), wherein at least one of the tasks is a real time task (paragraph [0060]: the video content from being processed in real time, which means any time-sensitive processing (e.g., real-time critical event detection); paragraph [0185]: The infrastructure data may also contain telemetry or usage data for the resources, which identifies the current usage and availability of the resource capacities; paragraph [0279]: The workload data contains information about a workload for a service and/or application to be deployed, placed, or executed across the computing infrastructure; paragraph [0280]: the workload data may also contain workload performance data for the workload, such as runtime performance metrics for the workload across various heterogenous resources with varying resource capacities and configurations);

- wherein the resources in the real-time computer systems (RTS, RTSa) comprise at least a first and a second processor (P1, P2) and a communication subsystem (COM1, COM1a) interconnecting at least said first and second processor (P1, P2) and at least a first memory (M1) accessible by said first processor and at least a second memory (M2) accessible by said second processor (paragraph [0184]: The physical resources may include compute resources (e.g., general-purpose processors such as CPUs and processing cores, special-purpose processors such as GPUs and AI accelerators), memory resources; paragraph [0260]: each of which is a different processor model with varying performance capabilities and resources (e.g., varying number of processors, cores, and/or threads, processing frequency, memory capacity, bus speed, hardware acceleration technologies, and so forth)),

wherein the method comprises the steps:

- providing an estimate for an individual resource utilization (101_SOTA, 101, 103, 104) of the tasks (paragraph [0151]: The workload profiles can be used to predict the behavior of current or future workloads; paragraphs [0156]-[0158]: The computation is performed based on the following types of information: [0157] (1) Resource capacity: The resource capacity quantifies the assigned versus available capacity of platform features of a resource … (2) Processing capacity: The processing capacity quantifies the usage of the resources and service in the landscape. As an example, based on utilization and saturation metrics, it may be determined that the 24-socket CPU is only used 5% of the time),

- providing for each resource a resource model (MOD_SOTA, MOD) (paragraph [0156]: a resource modeler 720 determines current and future (based on predictions) available capacities 725 for the resources and the service instances available; paragraph [0247]: with respect to exploitation of processing architecture features (e.g., Intel Architecture (IA) features), features optimized to handle a specific workload are easily identified with a continuous resource prediction model, and they are used to understand decisions around workload placement/deferral for optimality and delivering expected performance),

- determining a configuration allocating each of the tasks to at least one of the resources according to a prediction at least based on said estimate for an individual resource utilization (101_SOTA, 101, 103, 104) of the tasks and said resource model (paragraph [0160]: based on the infrastructure capacity information from the resource modeler 720, along with the usage patterns and service level objectives (SLOs) from the collector subsystem 710, a load translator 730 determines and quantifies potential mappings of services to resources in order to compare, contrast, and tradeoff various placement options; paragraphs [0171], [0174]: The planning task is responsible for providing possible actions that change the capabilities available in the environment. It incorporates the following functions … (3) Estimating business/purchasing decisions by conducting ‘what-if’ scenarios based available inventory configurations and resource configuration updates that can inform an update to the future inventor; paragraph [0188]: The service-to-resource placement options identify possible placements of the respective services or workloads across the respective resources of the computing infrastructure over the particular time window; paragraph [0176]: An orchestrator or resource manager, whose scheduler can, based the on the knowledge of available capacities, make a decision on where to optimally place workloads. The RRPM essentially provides a suggestion on how to optimally place workloads based on utility assessments);

further comprising the steps of:

- measuring the real resource utilization of the tasks during execution (paragraphs [0167]-[0168]: Collating inputs from the resource modeler and the inventory catalog to continuously provide an updated capacity assessment for all infrastructural resources; paragraph [0185]: The infrastructure data may also contain telemetry or usage data for the resources, which identifies the current usage and availability of the resource capacities; paragraph [0279]: The workload data contains information about a workload for a service and/or application to be deployed, placed, or executed across the computing infrastructure; paragraph [0280]: the workload data may also contain workload performance data for the workload, such as runtime performance metrics for the workload across various heterogenous resources with varying resource capacities and configurations), and

- refining of the prediction according to a result of the measuring (paragraphs [0167], [0168]: The reasoning task is responsible for balancing out all the objectives (across stakeholders) for a given time window, considering the resources and service present within the same. It incorporates the following functions: (1) Collating inputs from the resource modeler and the inventory catalog to continuously provide an updated capacity assessment for all infrastructural resources; paragraphs [0171]-[0176]: The planning task is responsible for providing possible actions that change the capabilities available in the environment); and

- refining the configuration (paragraphs [0171], [0174]: The planning task is responsible for providing possible actions that change the capabilities available in the environment. It incorporates the following functions … (3) Estimating business/purchasing decisions by conducting ‘what-if’ scenarios based available inventory configurations and resource configuration updates that can inform an update to the future inventor; paragraph [0175]: Based on the reasoning and planning tasks, the RRPM 750 outputs allocation options 755a,b that are available both now and in the future) according to the refined prediction (paragraphs [0167], [0168]: The reasoning task is responsible for balancing out all the objectives (across stakeholders) for a given time window, considering the resources and service present within the same. It incorporates the following functions: (1) Collating inputs from the resource modeler and the inventory catalog to continuously provide an updated capacity assessment for all infrastructural resources; paragraphs [0171]-[0176]: The planning task is responsible for providing possible actions that change the capabilities available in the environment).

Regarding claim 2, Butler discloses wherein the tasks comprise at least one infrastructure task (IT1-IT5, ITM_M1, ITM_OUT) (paragraph [0146]: the collector subsystem 710 collects a variety of infrastructure-related and service-related information) and at least one application task (AT1-AT4) (paragraph [0067]: this architecture greatly improves the level of service that these service providers can offer to customers with video streaming applications).

Regarding claim 3, Butler discloses wherein at least a first task to be executed on said first processor is configured to send at least one message (MSG1a, MSG3a, MSG1b, MSG3b) to at least a second task to be executed on said second processor (paragraph [0101]: the system memory 414a containing the video segment on the overloaded node 410 may be reused to replicate the video segment over the local network to the peer node 420 (e.g., using network transmission DMA acceleration); paragraph [0103]: various approaches can be used to replicate the video segment from system memory 414 over the local network, such as remote direct memory access (RDMA) and/or RDMA over Converged Ethernet (RoCE)).

Regarding claim 4, Butler discloses wherein a set of infrastructure tasks (IT1-IT5, ITM_M1, ITM_OUT, ITM_COM) is determined (paragraph [0146]: the collector subsystem 710 collects a variety of infrastructure-related and service-related information; paragraphs [0147]-[0149]: The infrastructure-related information may include the following types of information: (1) Landscape of infrastructure: A landscaper subsystem may be used to collect details on the physical and logical resources and service instances available on the infrastructure, including geographical, topological, and contextual details of the individual entities. (2) Physical capacity: A telemetry subsystem may be used to capture information on available capacity from physical resources, such as compute resources (e.g., number of physical cores available and used), memory resources (e.g., available and used random access memory (RAM)), network resources (e.g., bandwidth available and consumed for each network interface controller (NIC) and single root input/output virtualization (SR-IOV) channel), and storage resources (e.g., available and used disk space)) to execute the application tasks (paragraph [0067]: this architecture greatly improves the level of service that these service providers can offer to customers with video streaming applications; paragraph [0218]: the illustrated process flow shows how a system can, given a service request, decompose it into a set of task(s) and/or sub-task(s) and match those to resources capabilities known to it; paragraph [0220]: This matching/mapping may be based on multiple properties 1410a-d … Based on the resource determined to be the best match, a possible actuation or task assignment plan will then be created).

Regarding claim 5, Butler discloses wherein a set of infrastructure messages to be communicated between infrastructure tasks and/or application tasks for executing the application tasks is determined (paragraph [0101]: the system memory 414a containing the video segment on the overloaded node 410 may be reused to replicate the video segment over the local network to the peer node 420 (e.g., using network transmission DMA acceleration); paragraph [0103]: various approaches can be used to replicate the video segment from system memory 414 over the local network, such as remote direct memory access (RDMA) and/or RDMA over Converged Ethernet (RoCE); paragraph [0124]: after the peer node receives the replicated or offloaded video segment from the overloaded edge node, the peer node performs the visual computing task on the video segment, and the peer node then sends the compute result from the visual computing task (e.g., an indication of identified objects, people, and/or events) back to the overloaded edge node).

Regarding claim 6, Butler discloses wherein a configuration (102) of the system is calculated using the prediction, wherein the prediction is based at least on the resource model (paragraph [0156]: informed by the information from the collector subsystem 710, a resource modeler 720 determines current and future (based on predictions) available capacities 725 for the resources and the service instances available), the individual resource utilization (101_SOTA, 101, 103, 104) (paragraphs [0156]-[0158]: The computation is performed based on the following types of information: [0157] (1) Resource capacity: The resource capacity quantifies the assigned versus available capacity of platform features of a resource … (2) Processing capacity: The processing capacity quantifies the usage of the resources and service in the landscape. As an example, based on utilization and saturation metrics, it may be determined that the 24-socket CPU is only used 5% of the time; paragraph [0185]: The infrastructure data may also contain telemetry or usage data for the resources, which identifies the current usage and availability of the resource capacities) of the at least one real-time task (paragraph [0151]: The workload profiles can be used to predict the behavior of current or future workloads; paragraph [0279]: The workload data contains information about a workload for a service and/or application to be deployed, placed, or executed across the computing infrastructure; paragraph [0280]: the workload data may also contain workload performance data for the workload, such as runtime performance metrics for the workload across various heterogenous resources with varying resource capacities and configurations) and the set of determined infrastructure tasks (paragraph [0146]: the collector subsystem 710 collects a variety of infrastructure-related and service-related information) and infrastructure messages (paragraph [0101]: the system memory 414a containing the video segment on the overloaded node 410 may be reused to replicate the video segment over the local network to the peer node 420 (e.g., using network transmission DMA acceleration); paragraph [0103]: various approaches can be used to replicate the video segment from system memory 414 over the local network, such as remote direct memory access (RDMA) and/or RDMA over Converged Ethernet (RoCE)).

Regarding claim 7, Butler discloses wherein all tasks and messages are allocated to resources (paragraph [0160]: based on the infrastructure capacity information from the resource modeler 720, along with the usage patterns and service level objectives (SLOs) from the collector subsystem 710, a load translator 730 determines and quantifies potential mappings of services to resources in order to compare, contrast, and tradeoff various placement options; paragraph [0188]: The service-to-resource placement options identify possible placements of the respective services or workloads across the respective resources of the computing infrastructure over the particular time window; paragraph [0175]: Based on the reasoning and planning tasks, the RRPM 750 outputs allocation options 755a,b that are available both now and in the future; paragraph [0176]: An orchestrator or resource manager, whose scheduler can, based the on the knowledge of available capacities, make a decision on where to optimally place workloads. The RRPM essentially provides a suggestion on how to optimally place workloads based on utility assessments) according to the configuration (paragraphs [0171], [0174]: The planning task is responsible for providing possible actions that change the capabilities available in the environment. It incorporates the following functions … (3) Estimating business/purchasing decisions by conducting ‘what-if’ scenarios based available inventory configurations and resource configuration updates that can inform an update to the future inventor).

Regarding claim 8, Butler discloses wherein the real resource utilization of tasks and/or a sequence of tasks and messages on the resources are measured during execution (paragraph [0185]: The infrastructure data may also contain telemetry or usage data for the resources, which identifies the current usage and availability of the resource capacities; paragraph [0279]: The workload data contains information about a workload for a service and/or application to be deployed, placed, or executed across the computing infrastructure; paragraph [0280]: the workload data may also contain workload performance data for the workload, such as runtime performance metrics for the workload across various heterogenous resources with varying resource capacities and configurations).

Regarding claim 9, Butler discloses wherein the prediction (paragraph [0175]: Based on the reasoning and planning tasks, the RRPM 750 outputs allocation options 755a,b that are available both now and in the future; paragraph [0176]: The RRPM essentially provides a suggestion on how to optimally place workloads based on utility assessments) is realized as one, two, or a multitude of infrastructure tasks (paragraph [0146]: the collector subsystem 710 collects a variety of infrastructure-related and service-related information; paragraphs [0147]-[0149]: The infrastructure-related information may include the following types of information: (1) Landscape of infrastructure: A landscaper subsystem may be used to collect details on the physical and logical resources and service instances available on the infrastructure, including geographical, topological, and contextual details of the individual entities. (2) Physical capacity: A telemetry subsystem may be used to capture information on available capacity from physical resources, such as compute resources (e.g., number of physical cores available and used), memory resources (e.g., available and used random access memory (RAM)), network resources (e.g., bandwidth available and consumed for each network interface controller (NIC) and single root input/output virtualization (SR-IOV) channel), and storage resources (e.g., available and used disk space)) on one, two, or a multitude of processors (paragraph [0184]: The physical resources may include compute resources (e.g., general-purpose processors such as CPUs and processing cores, special-purpose processors such as GPUs and AI accelerators), memory resources) in the real-time computer system (paragraph [0060]: the video content from being processed in real time, which means any time-sensitive processing (e.g., real-time critical event detection); paragraph [0185]: The infrastructure data may also contain telemetry or usage data for the resources, which identifies the current usage and availability of the resource capacities; paragraph [0279]: The workload data contains information about a workload for a service and/or application to be deployed, placed, or executed across the computing infrastructure; paragraph [0280]: the workload data may also contain workload performance data for the workload, such as runtime performance metrics for the workload across various heterogenous resources with varying resource capacities and configurations).

Regarding claim 10, Butler discloses wherein the prediction (paragraph [0156]: informed by the information from the collector subsystem 710, a resource modeler 720 determines current and future (based on predictions) available capacities 725 for the resources and the service instances available) is realized as an application program in execution on a development computer (paragraph [0128]: Current approaches to capacity planning are static and offline; paragraph [0506]: The simulation environment is reproduced in a real-world deployment) connected to the real-time system (paragraph [0060]: the video content from being processed in real time, which means any time-sensitive processing (e.g., real-time critical event detection); paragraph [0280]: the workload data may also contain workload performance data for the workload, such as runtime performance metrics for the workload across various heterogenous resources with varying resource capacities and configurations).

Regarding claim 11, Butler discloses wherein the prediction (paragraph [0156]: informed by the information from the collector subsystem 710, a resource modeler 720 determines current and future (based on predictions) available capacities 725 for the resources and the service instances available) is realized as a service in a remote data center (paragraph [0175]: Based on the reasoning and planning tasks, the RRPM 750 outputs allocation options 755a,b that are available both now and in the future. These outputs 755a,b can then be used by: [0176] (1) An orchestrator or resource manager, whose scheduler can, based the on the knowledge of available capacities, make a decision on where to optimally place workload; paragraph [0465]: remote data center; paragraph [0536]: The program code may execute entirely on the system 4200, partly on the system 4200, as a stand-alone software package, partly on the system 4200 and partly on a remote computer or entirely on the remote computer or server) connected to the real-time system (paragraph [0060]: the video content from being processed in real time, which means any time-sensitive processing (e.g., real-time critical event detection); paragraph [0280]: the workload data may also contain workload performance data for the workload, such as runtime performance metrics for the workload across various heterogenous resources with varying resource capacities and configurations).

Regarding claim 12, Butler discloses wherein the measuring (MEASURE) (paragraph [0146]: the collector subsystem 710 collects a variety of infrastructure-related and service-related information) is realized by one, two, or a multitude of infrastructure tasks (paragraphs [0147]-[0149]: The infrastructure-related information may include the following types of information: (1) Landscape of infrastructure: A landscaper subsystem may be used to collect details on the physical and logical resources and service instances available on the infrastructure, including geographical, topological, and contextual details of the individual entities. (2) Physical capacity: A telemetry subsystem may be used to capture information on available capacity from physical resources, such as compute resources (e.g., number of physical cores available and used), memory resources (e.g., available and used random access memory (RAM)), network resources (e.g., bandwidth available and consumed for each network interface controller (NIC) and single root input/output virtualization (SR-IOV) channel), and storage resources (e.g., available and used disk space)) on one, two, or a multitude of processors (paragraph [0184]: The physical resources may include compute resources (e.g., general-purpose processors such as CPUs and processing cores, special-purpose processors such as GPUs and AI accelerators), memory resources) in the real-time computer system (paragraph [0060]: the video content from being processed in real time, which means any time-sensitive processing (e.g., real-time critical event detection); paragraph [0185]: The infrastructure data may also contain telemetry or usage data for the resources, which identifies the current usage and availability of the resource capacities; paragraph [0279]: The workload data contains information about a workload for a service and/or application to be deployed, placed, or executed across the computing infrastructure; paragraph [0280]: the workload data may also contain workload performance data for the workload, such as runtime performance metrics for the workload across various heterogenous resources with varying resource capacities and configurations).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SISLEY N. KIM, whose telephone number is (571)270-7832. The examiner can normally be reached M-F 11:30AM-7:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Y. Blair can be reached on (571)270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SISLEY N KIM/Primary Examiner, Art Unit 2196 01/17/2026
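The method the examiner maps above is, at its core, an estimate/allocate/measure/refine loop: predict each task's resource utilization, allocate tasks to processors against a capacity model, measure actual utilization at runtime, then refine the prediction and recompute the allocation. The sketch below is purely illustrative; the allocator (greedy first-fit) and the refinement rule (exponential smoothing) are assumptions for exposition, and none of the function names, task names, or numbers come from the application or from Butler.

```python
# Hypothetical sketch of the configure/measure/refine loop recited in
# claim 1. Greedy first-fit allocation and exponential smoothing are
# illustrative stand-ins, not the applicant's or Butler's method.

def allocate(tasks, resources):
    """Place each task (largest estimate first) on the least-loaded
    resource, rejecting placements that exceed modeled capacity."""
    config, load = {}, {r: 0.0 for r in resources}
    for task, est in sorted(tasks.items(), key=lambda kv: -kv[1]):
        target = min(load, key=lambda r: load[r])
        if load[target] + est > resources[target]:
            raise ValueError(f"no capacity for {task}")
        config[task] = target
        load[target] += est
    return config

def refine(estimates, measured, alpha=0.5):
    """Blend prior estimates with measured utilization; smoothing
    stands in for the claim's unspecified refinement step."""
    return {t: (1 - alpha) * estimates[t] + alpha * measured[t]
            for t in estimates}

# Resource model: normalized capacity per processor (illustrative).
resources = {"P1": 1.0, "P2": 1.0}
# Initial per-task utilization estimates (illustrative).
estimates = {"AT1": 0.4, "AT2": 0.3, "IT1": 0.2, "IT2": 0.1}

config = allocate(estimates, resources)           # initial configuration
measured = {"AT1": 0.6, "AT2": 0.25, "IT1": 0.2, "IT2": 0.1}
estimates = refine(estimates, measured)           # refine the prediction
config = allocate(estimates, resources)           # refine the configuration
```

In a real-time setting the allocator would also check schedulability (e.g., deadlines and WCET bounds), not just aggregate utilization; the sketch only shows the shape of the feedback loop the rejection maps onto Butler's capacity-planning pipeline.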

Prosecution Timeline

Sep 20, 2023
Application Filed
Jan 21, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by the same examiner in similar technology

Patent 12602254: JOB NEGOTIATION FOR WORKFLOW AUTOMATION TASKS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602260: COMPUTER-BASED PROVISIONING OF CLOUD RESOURCES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591474: BATCH SCHEDULING FUNCTION CALLS OF A TRANSACTIONAL APPLICATION PROGRAMMING INTERFACE (API) PROTOCOL (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585507: LOAD TESTING AND PERFORMANCE BENCHMARKING FOR LARGE LANGUAGE MODELS USING A CLOUD COMPUTING PLATFORM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578994: SYSTEMS AND METHODS FOR TRANSITIONING COMPUTING DEVICES BETWEEN OPERATING STATES (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA rounds: 1-2
Grant probability: 89% (99% with interview, +16.9% lift)
Median time to grant: 2y 9m
PTA risk: Low
Based on 665 resolved cases by this examiner. Grant probability derived from career allow rate.
