Prosecution Insights
Last updated: April 19, 2026
Application No. 18/229,947

SYSTEMS AND METHODS OF OPTIMIZING COMPUTE TASKS

Status: Non-Final OA, §103
Filed: Aug 03, 2023
Examiner: WU, BENJAMIN C
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Mercedes-Benz Group AG
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87%, above average (456 granted / 522 resolved; +32.4% vs TC avg)
Interview Lift: +16.4% on resolved cases with interview (a strong lift)
Avg Prosecution: 3y 0m; 29 applications currently pending
Career History: 551 total applications across all art units

Statute-Specific Performance

§101: 19.8% (-20.2% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 0.8% (-39.2% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Based on career data from 522 resolved cases; deltas are relative to the Tech Center average estimate.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1–9 and 17–20 are pending for examination in the response (election) filed on 02/02/2026. Claims 10–16 have been WITHDRAWN.

Drawings

3. The drawings were received on 08/03/2023 (in the filings). These drawings are acceptable.

Specification

4. The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: “Optimize Workload Scheduling Using Weighted Compute Graph.”

Examiner’s Remarks

5. Examiner refers to and explicitly cites particular pages, sections, figures, paragraphs or columns and lines in the references as applied to Applicant’s claims to the extent practicable to streamline prosecution. Although the cited portions of the references are representative of the best teachings in the art and are applied to meet the specific limitations of the claims, other uncited but related teachings of the references may be equally applicable as well. It is respectfully requested that, in preparing responses to the rejections, the Applicant fully consider not only the cited portions of the references, but also the references in their entirety, as potentially teaching, suggesting or rendering obvious all or one or more aspects of the claimed invention.

Abbreviations

6. Where appropriate, the following abbreviations will be used when referencing Applicant’s submissions and specific teachings of the reference(s): i. figure / figures: Fig. / Figs.; ii. column / columns: Col. / Cols.; iii. page / pages: p. / pp.

References Cited

7. (A) Yang, US 2019/0220321 A1. (B) Govindaraju et al., US 2020/0186445 A1 (“Govindaraju”). (C) Malkin et al., US 2013/0283211 A1 (“Malkin”). (D) Bellubbi et al., US 2023/0102089 A1 (“Bellubbi”). (E) Cheruvu et al., US 2021/0117578 A1 (“Cheruvu”).
(F) Roozbeh et al., US 2022/0100667 A1 (“Roozbeh”).

Notice re prior art available under both pre-AIA and AIA

8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

A.

9. Claims 1, 5, 9, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Yang in view of (B) Govindaraju. See “References Cited” section, above, for full citations of references.

10. Regarding claim 1, (A) Yang teaches/suggests the invention substantially as claimed, including: “A computing system for optimizing compute tasks, the computing system comprising: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the computing system to” (Fig.
12 and ¶ 148: computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements)); “determine a set of weighted parameters for a given hardware topology” (¶ 45: generates task dependency and device connectivity graphs 222, 219 from the expanded workload and network topology graphs 221, 218; ¶ 46: the task dependency graph 222 is generated by incorporating WEIGHTS onto the vertices and edges of the expanded workload graph 221 to represent resource requirements and network requirements, respectively. In some embodiments, for example, the weights on the vertices may represent processing resource requirements for the respective tasks, while the weights on the edges may represent network bandwidth requirements for the dependencies among the tasks); “based on the set of weighted parameters, determine an optimal distribution of (i) runnables of a compute graph on the given hardware topology, …” (¶ 48: The E2E hardware recommender 208 then uses the task dependency and device connectivity graphs 222, 219 to derive the optimal hardware for deploying the workload. 
In particular, since the task dependency and device connectivity graphs 222, 219 are both directed acyclic graphs (DAGs), DAG path selection and scheduling techniques can be leveraged to derive the optimal hardware choices from those graphs 222, 219; ¶ 53: The resulting hardware recommendations 226 are then used to automatically provision the appropriate resources 227 for deploying the workload on the network topology per the user’s 202 requirements); “configure a scheduling program on the given hardware topology … in accordance with the optimal distribution” (¶ 52: based on the schedule 224 generated by the scheduling algorithm 223, the E2E hardware recommender 208 then extracts 225 a list of recommended hardware 226 from the network topology graph in the schedule 224; ¶ 53: resource provisioning may involve the selection, deployment, configuration, and/or runtime management of the requisite resources for the workload, including hardware resources; ¶ 24: The computing infrastructure 110 may include a collection of computing resources that provides an end-to-end (E2E) environment for executing workloads). Yang does not teach “determine an optimal distribution of … (ii) data positioning in memory components of the given hardware topology for executing the runnables” and “configure a scheduling program on the given hardware topology to execute the compute graph” (although Yang highly suggests implementing these features in paragraphs 53 and 24).
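For readers tracing the §103 mapping, the weighted-graph construction the examiner cites from Yang ¶ 46 (vertex weights modeling per-task compute requirements, edge weights modeling bandwidth between dependent tasks, fed into a placement step) can be sketched roughly as follows. This is an illustrative sketch only, not Yang's disclosed implementation; the task names, device capacities, and the greedy placement heuristic are all hypothetical.

```python
from collections import defaultdict

# Hypothetical task dependency graph in the style of Yang ¶ 46:
# vertex weights = compute required per task, edge weights =
# bandwidth required between dependent tasks.
tasks = {"ingest": 2.0, "detect": 8.0, "plan": 4.0}          # vertex weights
deps = {("ingest", "detect"): 1.5, ("detect", "plan"): 0.5}  # edge weights

devices = {"cpu0": 6.0, "gpu0": 16.0}  # available compute per device

def greedy_place(tasks, devices):
    """Assign each task (largest first) to the device with the most
    remaining capacity. A toy stand-in for the DAG scheduling step;
    it uses vertex weights only, ignoring the edge weights above."""
    load = defaultdict(float)
    placement = {}
    for task, need in sorted(tasks.items(), key=lambda kv: -kv[1]):
        best = max(devices, key=lambda d: devices[d] - load[d])
        if devices[best] - load[best] < need:
            raise ValueError(f"no capacity for {task}")
        placement[task] = best
        load[best] += need
    return placement

print(greedy_place(tasks, devices))
# → {'detect': 'gpu0', 'plan': 'gpu0', 'ingest': 'cpu0'}
```

A production system would solve this as a constrained optimization (Yang ¶ 54 mentions an ILP formulation) rather than greedily.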
(B) Govindaraju, in the context of Yang’s invention, however teaches or suggests implementing: “determine an optimal distribution of … (ii) data positioning in memory components of the given hardware topology for executing the runnables” (¶ 95: The mapping involves satisfying all of the specifications, constraints, and dependencies contained in the two-dimensional matrix and, in certain implementations, optimizing the mapping to achieve lowest possible cost; ¶ 96: allows the automated application subsystem to carry out the cost analysis needed to determine whether or not to collocate CPU/memory computational resources with data-storage resources in a single cloud-computing facility … These dependencies provide, to the automated application subsystem, a basis for determining an expected data-transfer rate between the application instances and the data-storage device so that the automated application subsystem can determine an optimal location for the data-storage device. It may be optimal to collocate the data-storage device with the application instances in one of the locations); “configure a scheduling program on the given hardware topology to execute the compute graph” (¶ 58: core services provided by the VI management server include host configuration, virtual-machine configuration, virtual-machine provisioning, generation of virtual data-center alarms and events, ongoing event logging and statistics collection, a task scheduler, and a resource-management module; ¶ 105: blueprint-driven automated-application-installation subsystems of workflow-based cloud-management facilities can now automate many of the virtual-machine acquisition, management, and scheduling operations needed to support execution of the distributed application). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (B) Govindaraju with those of (A) Yang to deploy (place), schedule and automatically execute workloads based on locations, requirements, and data storage access. The motivation or advantage to do so is to optimize deployment of workloads based on data access (resource) cost objectives and performance constraints and to automate the execution of deployed workloads on the recommended hardware/network topology (see Yang, ¶ 54: ILP model can be used to determine an optimal schedule that satisfies a specified objective (e.g., minimizing bandwidth utilization, minimizing hardware costs, maximizing performance), while also adhering to various constraints (e.g., workload resource requirements, device and network resource capacities, mapping constraints)).

11. Regarding claim 5, Yang teaches or suggests: “wherein the set of weighted parameters for the given hardware topology comprises a plurality of: latency, bandwidth, memory, power usage, computing power, unit of compute, hardware age, hardware wearing, thermal cooling, or compute values of individual components of the given hardware topology” (¶ 46: the task dependency graph 222 is generated by incorporating WEIGHTS onto the vertices and edges of the expanded workload graph 221 to represent resource requirements and network requirements, respectively. In some embodiments, for example, the weights on the vertices may represent processing resource requirements for the respective tasks, while the weights on the edges may represent network bandwidth requirements for the dependencies among the tasks).

12.
Regarding claim 9, Yang teaches or suggests: “wherein the computing system is included in the given hardware topology” (¶ 23: service provider 106 may include any entity that provides automated hardware recommendations and/or resource provisioning for workloads that are deployed on the computing infrastructure 110; ¶ 24: The computing infrastructure 110 may include a collection of computing resources that provides an end-to-end (E2E) environment for executing workloads; Fig. 2 and ¶ 33: service provider 206 may include any entity that owns and/or operates some or all of the E2E computing infrastructure (e.g., a cloud service provider (CSP) such as Amazon, Google, or Microsoft). Moreover, the service provider 206 may also provide automated hardware recommendations and/or resource provisioning for workloads).

13. Regarding claim 17, it is the corresponding computer program product claim reciting similar limitations of commensurate scope as the system of claim 1. Therefore, it is rejected on the same basis as claim 1 above.

B.

14. Claims 2 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Yang in view of (B) Govindaraju, as applied to claims 1 and 17 above, and further in view of (C) Malkin.

15. Regarding claim 2, Yang and Govindaraju do not teach “wherein determining the optimal distribution comprises executing a traveling salesman algorithm using the set of weighted parameters and a set of requirements of the runnables.” (C) Malkin, in the context of Yang and Govindaraju’s teachings, however teaches or suggests implementing: “determining the optimal distribution comprises executing a traveling salesman algorithm using the set of weighted parameters and a set of requirements of the runnables” (¶ 29: The “Traveling salesman” algorithm refers to a process applied to a given weighted graph.
That algorithm outputs a path to visit all the vertices in the graph with the least amount of cost; ¶ 36: the database server device running the travelling salesman algorithm is configured to operate an itinerary generation program that implements methods to access each of the specified to-do lists, packing lists, and constraints, and calculate an itinerary list of to-do's and required resources. In one embodiment, at 57, based on one or more stored to-do lists 100, there is automatically attempted to calculate (or update) the itinerary which accomplishes the target task, as well as many of the other to-do's from the list as is possible). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (C) Malkin with those of Yang and Govindaraju to incorporate a traveling salesman algorithm in generating an optimal schedule for the workload. The motivation or advantage to do so is to provide a scheduling graph (execution path) with the least amount of cost (e.g., hardware, resource utilization, latencies, etc.).

16. Regarding claim 18, it is the corresponding computer program product claim reciting similar limitations of commensurate scope as the system of claim 2. Therefore, it is rejected on the same basis as claim 2 above.

C.

17. Claims 3–4 and 19–20 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Yang in view of (B) Govindaraju and (C) Malkin, as applied to claims 2 and 18 above, and further in view of (D) Bellubbi.

18.
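Malkin's ¶ 29 characterization, a process over a weighted graph that outputs a least-cost path visiting every vertex, is the path variant of the traveling salesman problem. A minimal brute-force sketch, with hypothetical vertices and edge weights, illustrates the idea:

```python
from itertools import permutations

# Hypothetical weighted graph; edge weights chosen only for illustration.
cost = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 4}

def edge(u, v):
    """Undirected edge lookup."""
    return cost.get((u, v)) or cost.get((v, u))

def cheapest_path(vertices):
    """Exhaustively try every visiting order; keep the least-cost path.
    Viable only for tiny graphs (O(n!) orders)."""
    best_order, best_cost = None, float("inf")
    for order in permutations(vertices):
        c = sum(edge(u, v) for u, v in zip(order, order[1:]))
        if c < best_cost:
            best_order, best_cost = order, c
    return best_order, best_cost

path, total = cheapest_path(["a", "b", "c"])
print(path, total)  # → ('a', 'b', 'c') 3
```

Practical schedulers would replace the exhaustive search with heuristics or an ILP formulation of the kind Yang ¶ 54 mentions; the claim language only requires that the least-cost traversal be computed over the weighted parameters.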
Regarding claim 3, Yang, Govindaraju, and Malkin do not teach “reevaluate the optimal distribution of the runnables on the given hardware topology by performing at least one of (i) determining an updated set of weighted parameters for the given hardware topology, or (ii) determining an updated set of requirements of the runnables; and based on reevaluating the optimal distribution, determine an updated optimal distribution of the runnables on the given hardware topology.” (D) Bellubbi, in the context of Yang, Govindaraju and Malkin’s teachings, however teaches or suggests implementing: “reevaluate the optimal distribution of the runnables on the given hardware topology by performing at least one of (i) determining an updated set of weighted parameters for the given hardware topology, or (ii) determining an updated set of requirements of the runnables” (¶ 54: gathering and storing information related to runs of the execution schedule. In these or other embodiments, the task management may include performing an analysis of the gathered information. The analysis may include determining one or more performance metrics or characteristics associated with implementation of the execution schedule: ¶ 157: one or more schedule quality assurance (QA) tools may be used to help ensure that the schedules satisfy corresponding requirements. In some embodiments, the QA tools may include one or more tools that are configured to assess and/or modify the schedules 204. For example, in some embodiments, the QA tools may include one or more of a schedule verification engine configured to verify that one or more of the schedules 204 comply with corresponding constraints; ¶ 314: the actual runnable execution times may be compared against corresponding calculated worst case execution times (WCET) to determine an accuracy level of one or more of the WCETs. 
Additionally or alternatively, the execution time distributions may be used to determine whether certain runnables are disproportionately using resources or time. In these or other embodiments, the execution time distributions may indicate whether certain compute engines are being overutilized or underutilized; ¶ 317: analyze the schedule 604. For example, an analysis of runnable execution times may indicate that certain runnable execution times are significantly shorter than the allocated time in the schedule 604 for such runnables. Additionally or alternatively, the analysis of runnable execution times may indicate that other runnable execution times are longer than the allocated time in many instances (e.g., in a threshold percentage of the time) … analysis may indicate that the schedule 604 may overutilize or underutilize certain compute engines); “based on reevaluating the optimal distribution, determine an updated optimal distribution of the runnables on the given hardware topology” (¶ 54: the task management may include performing one or more operations that may adjust the execution schedule based on the analysis; ¶ 318: the runtime information may indicate which runnables may be contending on resources and interfering with each other. In these or other embodiments, the modification operations may include mutually excluding the scheduling of such runnables; ¶ 203: one or more branch characteristics may include runnable placement. For example, in some embodiments, certain runnables (e.g., critical path runnables) may have a higher scheduling prioritization than other runnables). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (D) Bellubbi with those of Yang, Govindaraju, and Malkin to modify/update execution schedules based on monitored resource utilization, performance and/or execution constraints. 
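The Bellubbi monitor-and-adjust behavior cited above (¶¶ 54, 314, 317: compare actual runnable execution times against allocated/WCET budgets, then flag the schedule for modification) can be sketched as follows. The runnable names, timings, and thresholds are hypothetical, chosen only to illustrate the technique:

```python
# Hypothetical per-runnable budgets (WCET-based allocations) and
# observed execution times, in milliseconds.
allocated = {"perception": 10.0, "fusion": 5.0, "control": 2.0}
measured = {"perception": 12.5, "fusion": 1.0, "control": 1.9}

def flag_for_rescheduling(allocated, measured, over=1.1, under=0.5):
    """Flag runnables whose observed time deviates from the allocation:
    overruns suggest the budget is too small; large underruns suggest
    the compute engine is being underutilized."""
    flags = {}
    for name, budget in allocated.items():
        ratio = measured[name] / budget
        if ratio > over:
            flags[name] = "overrunning"
        elif ratio < under:
            flags[name] = "overallocated"
    return flags

print(flag_for_rescheduling(allocated, measured))
# → {'perception': 'overrunning', 'fusion': 'overallocated'}
```

In Bellubbi's framing the flagged set would then drive the "updated optimal distribution" step, e.g. re-running placement with the revised weights.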
The motivation or advantage to do so is to provide scheduling quality assurance (QA) so as to meet corresponding (quality or user) requirements. 19. Regarding claim 4, Yang and Bellubbi teach or suggest: “reconfigure the scheduling program to execute the compute graph in accordance with the updated optimal distribution” (Yang, ¶ 52: based on the schedule 224 generated by the scheduling algorithm 223, the E2E hardware recommender 208 then extracts 225 a list of recommended hardware 226 from the network topology graph in the schedule 224; ¶ 53: resource provisioning may involve the selection, deployment, configuration, and/or runtime management of the requisite resources for the workload, including hardware resources; ¶ 24: The computing infrastructure 110 may include a collection of computing resources that provides an end-to-end (E2E) environment for executing workloads; Bellubbi, ¶ 54: performing one or more operations that may adjust the execution schedule based on the analysis; ¶ 156: The finalized schedules 204 that may be generated using the instruction set 226 may be obtained and executed by the corresponding runtime system). 20. Regarding claims 19–20, they are the corresponding computer program product claims reciting similar limitations of commensurate scope as the system of claims 3–4. Therefore, they are rejected on the same basis as claims 3–4 above. D. 21. Claims 6–7 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Yang in view of (B) Govindaraju, as applied to claim 1 above, and further in view of (E) Cheruvu. 22. Regarding claim 6, Yang teaches or suggests “wherein the given hardware topology corresponds to a multiple system-on-chip (mSoC)” (¶ 157: All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including a central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. 
The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other; ¶ 147: All or part of any component of FIG. 12 may be implemented as a separate or stand-alone component or chip, or may be integrated with other components or chips such as a system-on-a-chip (SoC)). Yang and Govindaraju do not teach “a multiple system-on-chip (mSoC) comprising a central chiplet and a set of workload processing chiplets.” (E) Cheruvu, in the context of Yang and Govindaraju’s teachings, however teaches or suggests implementing: “a multiple system-on-chip (mSoC) comprising a central chiplet and a set of workload processing chiplets” (¶ 51: SoCs, multi-chip packages (MCPs), etc.; ¶ 53: examples are described in the context of example SoCs formed of a plurality of chiplets. As used herein, a chiplet is a modular integrated circuit block that has been designed to perform certain function(s) and work with other chiplets to form a larger chip or circuit; Fig. 5 and ¶ 80: FIG. 5 is an example multi-chiplet SoC 500 including a plurality of chiplets 510-560 configured to perform various functions for the SoC 500. For example, as shown in the example of FIG. 5, the set of chiplets 510-570 can include a chip manager 510, a test chiplet 520, a communication subsystem 530, an autonomous drive controller subsystem 540, one or more processor cores 550-560, a shared cache 570; ¶ 81: The example chip manager 510 provides primary control for primary-secondary interactions with the other chiplets 520-570). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (E) Cheruvu with those of Yang and Govindaraju to use multi-chiplet SoCs for deploying and executing workloads. The motivation or advantage to do so is to improve the efficiency of workload execution while reducing power consumption/costs.

23. Regarding claim 7, Yang, Govindaraju, and Cheruvu, in combination, teach or suggest: “wherein the central chiplet includes (i) a shared memory accessible by the set of workload processing chiplets, and (ii) the scheduling program to schedule the runnables of the compute graph for execution by the workload processing chiplets in accordance with the optimal distribution” (Yang, ¶ 52: based on the schedule 224 generated by the scheduling algorithm 223, the E2E hardware recommender 208 then extracts 225 a list of recommended hardware 226 from the network topology graph in the schedule 224; ¶ 53: resource provisioning may involve the selection, deployment, configuration, and/or runtime management of the requisite resources for the workload, including hardware resources; ¶ 24: The computing infrastructure 110 may include a collection of computing resources that provides an end-to-end (E2E) environment for executing workloads; ¶ 144: A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors’ local cache information may be stored in the shared cache if a processor is placed into a low power mode; Govindaraju, ¶ 58: core services provided by the VI management server include host configuration, virtual-machine configuration, virtual-machine provisioning, generation of virtual data-center alarms and events, ongoing event logging and statistics collection, a task scheduler, and a resource-management module; ¶ 105: blueprint-driven
automated-application-installation subsystems of workflow-based cloud-management facilities can now automate many of the virtual-machine acquisition, management, and scheduling operations needed to support execution of the distributed application; Cheruvu, Fig. 5 and ¶ 80: FIG. 5 is an example multi-chiplet SoC 500 including a plurality of chiplets 510-560 configured to perform various functions for the SoC 500. For example, as shown in the example of FIG. 5, the set of chiplets 510-570 can include a chip manager 510, a test chiplet 520, a communication subsystem 530, an autonomous drive controller subsystem 540, one or more processor cores 550-560, a shared cache 570; ¶ 81: The example chip manager 510 provides primary control for primary-secondary interactions with the other chiplets 520-570). E. 24. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over (A) Yang in view of (B) Govindaraju and (E) Cheruvu, as applied to claim 7 above, and further in view of (F) Roozbeh. 25. Regarding claim 8, Yang, Govindaraju, and Cheruvu do not teach “wherein the shared memory stores data required for executing the runnables and has a hierarchy including a set of caches accessible over a network, and wherein the caches and the network are associated with intrinsic latencies.” (F) Roozbeh however teaches or suggests: “wherein the shared memory stores data required for executing the runnables and has a hierarchy including a set of caches accessible over a network, and wherein the caches and the network are associated with intrinsic latencies” (Fig. 1 and ¶ 31: a memory hierarchy as previously described. 
In this example, the system has a three layered cache structure comprising Layer 1 and Layer 2 caches which are private to the individual cores, i.e., not shared with the other cores of the processor, hence Core-1 is connected to the L1 cache and the L2 cache of the structure L1/2-1, Core-2 is connected to the L1 cache and L2 cache of L1/2-2, etc., up to Core-m, thus being connected to the L1 cache and the L2 cache of L1/2-m. Further illustrated is the slicing of a shared cache layer, in this example being the L3 cache of the cache structure, and may also be denoted the Last Level Cache (LLC) …. The slices of the cache are accessible to all the cores via an interconnect (e.g., ring bus or mesh); ¶ 32: Below the layered cache structure is what is generally called the main memory, comprising a comparatively large volume of volatile memory, herein after referred to as the memory. The memory hierarchy in this example ends with the secondary memory, which in general may comprise one or more Hard Disc Drives (HDDs) and/or Solid-State Drives (SSDs), and thus being a non-volatile memory type. FIG. 1 further indicates a relative latency for accessing data and instruction of the different levels in the memory hierarchy; See ¶¶ 2–3: In modern processors, the cache is also implemented in a hierarchical manner, e.g., a Layer one cache (L1), a Layer two cache (L2), and a Layer 3 cache (L3) also known as the Last Level Cache (LLC). The L1 and L2 cache are private to each core while the LLC is often shared among all PU cores). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (F) Roozbeh with those of Yang, Govindaraju, and Cheruvu to use a memory hierarchy including a shared cache to store workload instructions and data. The motivation or advantage to do so is to improve the efficiency of workload data access, sharing, and storage.
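Roozbeh's point that each level of the hierarchy carries an intrinsic latency (Fig. 1, ¶¶ 31-32) is what makes data positioning a quantifiable input to the claimed optimization. A toy expected-latency model illustrates this; the hit rates are hypothetical and the latencies are order-of-magnitude figures, not values from the reference:

```python
# Hypothetical per-level access latencies, nanoseconds.
latency_ns = {"L1": 1, "L2": 4, "L3": 15, "DRAM": 100}

def expected_access_ns(hit_rates):
    """Expected access latency given per-level hit rates; a miss at
    one level falls through to the next, ending at DRAM."""
    total, reach = 0.0, 1.0
    for level in ("L1", "L2", "L3"):
        total += reach * hit_rates[level] * latency_ns[level]
        reach *= 1.0 - hit_rates[level]
    return total + reach * latency_ns["DRAM"]

print(expected_access_ns({"L1": 0.9, "L2": 0.5, "L3": 0.5}))  # ≈ 3.975
```

Under a model like this, placing a runnable's data closer to its assigned core lowers the expected latency term, which is exactly the kind of weighted parameter the claim's distribution step consumes.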
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN C WU whose telephone number is (571)270-5906. The examiner can normally be reached Monday through Friday, 8:30 A.M. to 5:00 P.M.. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J. Li can be reached on (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /BENJAMIN C WU/Primary Examiner, Art Unit 2195 March 5, 2026

Prosecution Timeline

Aug 03, 2023: Application Filed
Mar 05, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602258
INSTANTIATING SOFTWARE DEFINED STORAGE NODES ON EDGE INFORMATION HANDLING SYSTEMS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585508
RECONSTRUCTING AND VERIFYING PROPRIETARY CLOUD BASED ON STATE TRANSITION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579006
SYSTEMS AND METHODS FOR UNIVERSAL AUTO-SCALING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572388
COMPUTING RESOURCE SCHEDULING BASED ON EXPECTED CYCLES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566646
Accessing Critical Resource in a Non-Uniform Memory Access (NUMA) System
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+16.4%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 522 resolved cases by this examiner. Grant probability derived from career allow rate.
