Prosecution Insights
Last updated: April 19, 2026
Application No. 18/517,587

RESOURCE SCHEDULING METHODS AND APPARATUSES, AND ELECTRONIC DEVICES

Non-Final OA §102
Filed
Nov 22, 2023
Examiner
KIM, SISLEY NAHYUN
Art Unit
2196
Tech Center
2100 — Computer Architecture & Software
Assignee
Alipay (Hangzhou) Information Technology Co., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 89% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89%, above average (590 granted / 665 resolved; +33.7% vs TC avg)
Interview Lift: strong, +16.9% across resolved cases with interview
Typical Timeline: 2y 9m avg prosecution; 42 currently pending
Career History: 707 total applications across all art units

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 26.1% (-13.9% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Based on career data from 665 resolved cases; deltas are vs the Tech Center average estimate.

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3 and 7-12 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Jain et al. (US 2023/0244392, hereinafter Jain).

Regarding claim 1, Jain discloses a resource scheduling method, comprising (FIGS. 1-7): determining estimated resource consumption data of a target container group in a plurality of time periods (paragraph [0017]: The pod may support multiple containers and forms a cohesive unit of service for the applications hosted within the containers; paragraph [0036]: The vertical pod autoscaler 104 executes metric tracking functionality 112 to track memory and processor utilization by the container 134 accessing the volume 118; paragraph [0037]: The vertical pod autoscaler 104 executes recommender functionality 110 to query the monitoring system and time series database 114 for historic memory and processor utilization by the container 134 over a time period; paragraph [0041]: If the upper bound limit and/or the lower bound limit would not be exceeded, then the pod continues to run the container 134. In this way, the request 122 of the current memory and processer allocation is granted for the container 134; paragraph [0042]: if the historic memory and processor utilization exceeds a threshold (e.g., CPU utilization increased from 0.5 CPUs to 0.7 CPUs, which may exceed a 1% threshold or any other threshold set for the vertical pod autoscaler), then the container may be evicted (e.g., deconstructed and removed from memory)) in response to a resource application request of the target container group (paragraph [0041]: The vertical pod autoscaler 104 may determine whether a current memory and/or processor allocation (e.g., the request 122 for 100 milli CPU and 300 Mi memory) for the container 134 does not satisfy the recommendation 124); obtaining resource amount data of each of a plurality of cluster nodes (paragraph [0031]: The custom filter may implement a filtering algorithm to obtain I/O usage metrics of a pod (or container) and a virtual machine from the monitoring system and time series database over a particular time period.
The filtering algorithm obtains available IOPS and throughput capacity of all available pods based upon metrics retrieved from the monitoring system and time series database) in the plurality of time periods (paragraph [0036]: The monitoring system and time series database 114 supports queries that include a specified time limit (e.g., 50 seconds, 10 minutes, 5 days, 7 weeks, etc.) and one or more raw or synthetic parameters of the system), wherein the resource amount data can represent a resource headroom of the cluster node (paragraph [0070]: the scheduling scheme needs to ensure that enough headroom of free resources is left on a virtual machine while packing pods on the virtual machine); and scheduling the target container group to at least one of the plurality of cluster nodes (paragraph [0017]: The pod may support multiple containers; paragraph [0030]: The scheduler (a default kube-scheduler) is modified with a custom filter that takes IOPS and throughput as an input for filtering out virtual machines in order to select a virtual machine for hosting a pod; paragraph [0062]: During operation 308 of method 300, if the first virtual machine 402 is selected by the scheduler 420 using the custom filter 424, then the pod is deployed upon the first virtual machine 402) based on the estimated resource consumption data of the target container group in the plurality of time periods (paragraph [0017]: The pod may support multiple containers and forms a cohesive unit of service for the applications hosted within the containers; paragraph [0036]: The vertical pod autoscaler 104 executes metric tracking functionality 112 to track memory and processor utilization by the container 134 accessing the volume 118; paragraph [0037]: The vertical pod autoscaler 104 executes recommender functionality 110 to query the monitoring system and time series database 114 for historic memory and processor utilization by the container 134 over a time period) and the resource amount data of each of the plurality of cluster nodes (paragraph [0031]: The custom filter may implement a filtering algorithm to obtain I/O usage metrics of a pod (or container) and a virtual machine from the monitoring system and time series database over a particular time period. The filtering algorithm obtains available IOPS and throughput capacity of all available pods based upon metrics retrieved from the monitoring system and time series database) in the plurality of time periods (paragraph [0036]: The monitoring system and time series database 114 supports queries that include a specified time limit (e.g., 50 seconds, 10 minutes, 5 days, 7 weeks, etc.) and one or more raw or synthetic parameters of the system).

Regarding claim 11, referring to claim 1, Jain discloses an electronic device, comprising a memory and a processor, wherein the memory stores executable instructions that, in response to execution by the processor, cause the processor to: … (FIG. 6).

Regarding claim 12, referring to claim 1, Jain discloses a non-transitory computer-readable storage medium comprising instructions stored therein that, when executed by a processor of an electronic device, cause the processor to: … (FIG. 6).

Regarding claim 2, Jain discloses wherein the resource amount data comprises a first resource occupation amount and a total resource amount (paragraph [0073]: The pod placement strategy 470 includes a determination 471 of cumulative IOPS and cumulative throughput for the node … A determination 472 is executed to determine available IOPS as a difference between an IOPS limit of a virtual machine and the cumulative IOPS.
A determination 473 is executed to determine available throughput as a difference between a throughput limit of the virtual machine and the cumulative throughput), and the scheduling the target container group to at least one of the plurality of cluster nodes (paragraph [0017]: The pod may support multiple containers; paragraph [0030]: The scheduler (a default kube-scheduler) is modified with a custom filter that takes IOPS and throughput as an input for filtering out virtual machines in order to select a virtual machine for hosting a pod; paragraph [0062]: During operation 308 of method 300, if the first virtual machine 402 is selected by the scheduler 420 using the custom filter 424, then the pod is deployed upon the first virtual machine 402) based on the estimated resource consumption data of the target container group in the plurality of time periods (paragraph [0017]: The pod may support multiple containers and forms a cohesive unit of service for the applications hosted within the containers; paragraph [0036]: The vertical pod autoscaler 104 executes metric tracking functionality 112 to track memory and processor utilization by the container 134 accessing the volume 118; paragraph [0037]: The vertical pod autoscaler 104 executes recommender functionality 110 to query the monitoring system and time series database 114 for historic memory and processor utilization by the container 134 over a time period) and the resource amount data of each of the plurality of cluster nodes (paragraph [0031]: The custom filter may implement a filtering algorithm to obtain I/O usage metrics of a pod (or container) and a virtual machine from the monitoring system and time series database over a particular time period. The filtering algorithm obtains available IOPS and throughput capacity of all available pods based upon metrics retrieved from the monitoring system and time series database) in the plurality of time periods (paragraph [0036]: The monitoring system and time series database 114 supports queries that include a specified time limit (e.g., 50 seconds, 10 minutes, 5 days, 7 weeks, etc.) and one or more raw or synthetic parameters of the system) comprises: for each of the plurality of cluster nodes, adding the estimated resource consumption data of the target container group in the plurality of time periods to first resource occupation amounts of the cluster node in the plurality of time periods, to obtain second resource occupation amounts of the cluster node in the plurality of time periods (paragraph [0032]: Estimated IOPS and throughput of the pod is determined based upon the metrics retrieved from the monitoring system and time series database. A buffer value may be added to the estimated IOPS and throughput of the pod to account for potential burstiness; paragraph [0062]: The scheduler 420 combines this historic IOPS and/or throughput with the IOPS and/or throughput of the available virtual machines in order to determine how much remaining IOPS each virtual machine would have if the virtual machine hosted the pod; Note: Jain shows the scheduler computing the effect of placing the pod (i.e., combining estimated pod usage with node usage to compute remaining capacity). Jain frames this as computing "remaining capacity if placed," which is algebraically the same as forming second_occupation = first_occupation + estimate.
The disclosure emphasizes time series metrics and windowed aggregates; explicit per-time-slice storage of second_occupation arrays is not verbatim but is enabled by the time series DB); and scheduling the target container group to at least one of the plurality of cluster nodes based on the second resource occupation amounts of each of the plurality of cluster nodes in the plurality of time periods and total resource amounts of each of the plurality of cluster nodes in the plurality of time periods (paragraph [0031]: If multiple virtual machines match the estimated (required) IOPS and throughput of the pod, then a particular virtual machine may be selected and returned by the kube-scheduler. This virtual machine may be a virtual machine whose available capacity closest matches estimated (required) IOPS and throughput of the pod, which may minimize the available IOPS and throughput capacity of all available pods minus the estimated (required) IOPS and throughput of the pod; paragraph [0032]: this scheduling mechanism uses a best-fit bin packing scheme to pack pods into virtual machines in order to efficiently utilize resources with minimal increase in client latencies (e.g., assigning too many pods to a virtual machine may result in unacceptable client latency); paragraph [0036]: The monitoring system and time series database 114 supports queries that include a specified time limit (e.g., 50 seconds, 10 minutes, 5 days, 7 weeks, etc.); paragraph [0063]: The custom filter 424 may be used to filter out virtual machines that would have relatively larger amounts of IOPS remaining if the virtual machines would host the pod. In this way, the custom filter 424 may be used to identify and select a virtual machine that may have the least amount of remaining IOPS if the virtual machine hosted the pod; Note: Jain describes feasibility filtering (nodes that can meet estimated needs) and selecting a best-fit node that minimizes remaining capacity / leftover headroom. This matches the claim's scheduling based on second_occupation vs total_resource. Jain focuses on practical selection heuristics (min remaining, best fit)).

Regarding claim 3, Jain discloses wherein the scheduling the target container group to at least one of the plurality of cluster nodes based on the second resource occupation amounts of each of the plurality of cluster nodes in the plurality of time periods and total resource amounts of each of the plurality of cluster nodes in the plurality of time periods (paragraph [0031]: If multiple virtual machines match the estimated (required) IOPS and throughput of the pod, then a particular virtual machine may be selected and returned by the kube-scheduler. This virtual machine may be a virtual machine whose available capacity closest matches estimated (required) IOPS and throughput of the pod, which may minimize the available IOPS and throughput capacity of all available pods minus the estimated (required) IOPS and throughput of the pod; paragraph [0032]: this scheduling mechanism uses a best-fit bin packing scheme to pack pods into virtual machines in order to efficiently utilize resources with minimal increase in client latencies (e.g., assigning too many pods to a virtual machine may result in unacceptable client latency); paragraph [0036]: The monitoring system and time series database 114 supports queries that include a specified time limit (e.g., 50 seconds, 10 minutes, 5 days, 7 weeks, etc.); paragraph [0063]: The custom filter 424 may be used to filter out virtual machines that would have relatively larger amounts of IOPS remaining if the virtual machines would host the pod.
In this way, the custom filter 424 may be used to identify and select a virtual machine that may have the least amount of remaining IOPS if the virtual machine hosted the pod; Note: Jain describes feasibility filtering (nodes that can meet estimated needs) and selecting a best-fit node that minimizes remaining capacity / leftover headroom. This matches the claim's scheduling based on second_occupation vs total_resource. Jain focuses on practical selection heuristics (min remaining, best fit)) comprises: for each of the plurality of cluster nodes, if the second resource occupation amount of the cluster node in each time period is less than or equal to the total resource amount of the cluster node in the time period, determining that the cluster node satisfies a resource amount need of the target container group (paragraph [0062]: The scheduler 420 combines this historic IOPS and/or throughput with the IOPS and/or throughput of the available virtual machines in order to determine how much remaining IOPS each virtual machine would have if the virtual machine hosted the pod; paragraph [0073]: The pod placement strategy 470 includes a determination 471 of cumulative IOPS and cumulative throughput for the node. A determination 472 is executed to determine available IOPS as a difference between an IOPS limit of a virtual machine and the cumulative IOPS. A determination 473 is executed to determine available throughput as a difference between a throughput limit of the virtual machine and the cumulative throughput; Note: Jain performs a feasibility filter by computing "remaining if hosted" (equivalent to checking whether second occupation exceeds total)); and scheduling the target container group to at least one of cluster nodes that satisfy the resource amount need of the target container group (paragraph [0031]: If multiple virtual machines match the estimated (required) IOPS and throughput of the pod, then a particular virtual machine may be selected and returned by the kube-scheduler. This virtual machine may be a virtual machine whose available capacity closest matches estimated (required) IOPS and throughput of the pod, which may minimize the available IOPS and throughput capacity of all available pods minus the estimated (required) IOPS and throughput of the pod; paragraph [0062]: During operation 308 of method 300, if the first virtual machine 402 is selected by the scheduler 420 using the custom filter 424, then the pod is deployed upon the first virtual machine 402; paragraph [0063]: The custom filter 424 may be used to filter out virtual machines that would have relatively larger amounts of IOPS remaining if the virtual machines would host the pod. In this way, the custom filter 424 may be used to identify and select a virtual machine that may have the least amount of remaining IOPS if the virtual machine hosted the pod; Note: Jain filters/identifies nodes that can meet the pod's needs (i.e., satisfy the resource requirement) and schedules the pod to one of those nodes, typically selecting the best fit among feasible candidates).
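The examiner's mapping for claims 2-3 reduces to a per-period feasibility check plus best-fit selection. A minimal Python sketch of that logic (all names and data are hypothetical illustrations of the claimed scheduling; this is not code from Jain or the application):

```python
def schedule(pod_estimate, nodes):
    """pod_estimate: estimated consumption per time period.
    nodes: name -> (first_occupation per period, total_capacity per period)."""
    feasible = {}
    for name, (first_occ, total) in nodes.items():
        # second_occupation = first_occupation + estimate (claim 2)
        second_occ = [f + e for f, e in zip(first_occ, pod_estimate)]
        # Claim 3: node qualifies only if capacity holds in EVERY period.
        if all(s <= t for s, t in zip(second_occ, total)):
            # Worst-period leftover headroom if this node hosted the pod.
            feasible[name] = min(t - s for s, t in zip(second_occ, total))
    if not feasible:
        return None
    # Best fit, as in Jain's custom filter: minimize remaining capacity.
    return min(feasible, key=feasible.get)

nodes = {
    "vm-a": ([3, 5, 2], [8, 8, 8]),
    "vm-b": ([1, 1, 1], [8, 8, 8]),
}
print(schedule([2, 2, 2], nodes))  # vm-a: feasible and the tighter fit
```

Note the distinction the sketch makes concrete: the feasibility filter is period-wise (one overloaded period disqualifies a node), while the best-fit tie-break is the selection heuristic Jain emphasizes.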
Regarding claim 7, Jain discloses wherein the resource amount data comprises a resource headroom (paragraph [0070]: the scheduling scheme needs to ensure that enough headroom of free resources is left on a virtual machine while packing pods on the virtual machine), and the scheduling the target container group to at least one of the plurality of cluster nodes (paragraph [0017]: The pod may support multiple containers; paragraph [0030]: The scheduler (a default kube-scheduler) is modified with a custom filter that takes IOPS and throughput as an input for filtering out virtual machines in order to select a virtual machine for hosting a pod; paragraph [0062]: During operation 308 of method 300, if the first virtual machine 402 is selected by the scheduler 420 using the custom filter 424, then the pod is deployed upon the first virtual machine 402) based on the estimated resource consumption data of the target container group in the plurality of time periods (paragraph [0017]: The pod may support multiple containers and forms a cohesive unit of service for the applications hosted within the containers; paragraph [0036]: The vertical pod autoscaler 104 executes metric tracking functionality 112 to track memory and processor utilization by the container 134 accessing the volume 118; paragraph [0037]: The vertical pod autoscaler 104 executes recommender functionality 110 to query the monitoring system and time series database 114 for historic memory and processor utilization by the container 134 over a time period) and the resource amount data of each of the plurality of cluster nodes (paragraph [0031]: The custom filter may implement a filtering algorithm to obtain I/O usage metrics of a pod (or container) and a virtual machine from the monitoring system and time series database over a particular time period.
The filtering algorithm obtains available IOPS and throughput capacity of all available pods based upon metrics retrieved from the monitoring system and time series database) in the plurality of time periods (paragraph [0036]: The monitoring system and time series database 114 supports queries that include a specified time limit (e.g., 50 seconds, 10 minutes, 5 days, 7 weeks, etc.) and one or more raw or synthetic parameters of the system) comprises: for each of the plurality of cluster nodes, if the estimated resource consumption data of the target container group in each time period is less than or equal to the resource headroom of the cluster node in the time period, determining that the cluster node satisfies a resource amount need of the target container group (paragraph [0062]: During operation 306 of method 300, the scheduler 420 utilizes the custom filter 424 to determine whether to select the first virtual machine 402 for hosting the pod, the second virtual machine 406 for hosting the pod, or a different virtual machine or a new virtual machine for hosting the pod; paragraph [0063]: the custom filter 424 may be used to identify and select a virtual machine that may have the least amount of remaining IOPS if the virtual machine hosted the pod; paragraph [0070]: the scheduling scheme needs to ensure that enough headroom of free resources is left on a virtual machine while packing pods on the virtual machine); and scheduling the target container group to at least one of cluster nodes (paragraph [0017]: The pod may support multiple containers; paragraph [0030]: The scheduler (a default kube-scheduler) is modified with a custom filter that takes IOPS and throughput as an input for filtering out virtual machines in order to select a virtual machine for hosting a pod; paragraph [0062]: During operation 308 of method 300, if the first virtual machine 402 is selected by the scheduler 420 using the custom filter 424, then the pod is deployed upon the first virtual machine 402) that satisfy the resource amount need of the target container group (paragraph [0062]: During operation 306 of method 300, the scheduler 420 utilizes the custom filter 424 to determine whether to select the first virtual machine 402 for hosting the pod, the second virtual machine 406 for hosting the pod, or a different virtual machine or a new virtual machine for hosting the pod; paragraph [0063]: the custom filter 424 may be used to identify and select a virtual machine that may have the least amount of remaining IOPS if the virtual machine hosted the pod).

Regarding claim 8, Jain discloses wherein the determining estimated resource consumption data of a target container group in a plurality of time periods comprises: obtaining historical resource consumption data of the target container group; and determining the estimated resource consumption data of the target container group in the plurality of time periods based on the historical resource consumption data (paragraph [0017]: The pod may support multiple containers and forms a cohesive unit of service for the applications hosted within the containers; paragraph [0036]: The vertical pod autoscaler 104 executes metric tracking functionality 112 to track memory and processor utilization by the container 134 accessing the volume 118; paragraph [0037]: The vertical pod autoscaler 104 executes recommender functionality 110 to query the monitoring system and time series database 114 for historic memory and processor utilization by the container 134 over a time period; paragraph [0041]: If the upper bound limit and/or the lower bound limit would not be exceeded, then the pod continues to run the container 134.
In this way, the request 122 of the current memory and processer allocation is granted for the container 134; paragraph [0042]: if the historic memory and processor utilization exceeds a threshold (e.g., CPU utilization increased from 0.5 CPUs to 0.7 CPUs, which may exceed a 1% threshold or any other threshold set for the vertical pod autoscaler), then the container may be evicted (e.g., deconstructed and removed from memory)).

Regarding claim 9, Jain discloses wherein the obtaining historical resource consumption data of the target container group comprises: extracting a time-based resource consumption feature and/or resource usage configuration data of the target container group; and obtaining the historical resource consumption data of the target container group in the plurality of time periods when determining, based on the time-based resource consumption feature and/or the resource usage configuration data, that the target container group can use time-based resources (paragraph [0017]: The pod may support multiple containers and forms a cohesive unit of service for the applications hosted within the containers; paragraph [0036]: The vertical pod autoscaler 104 executes metric tracking functionality 112 to track memory and processor utilization by the container 134 accessing the volume 118; paragraph [0036]: The monitoring system and time series database 114 supports queries that include a specified time limit (e.g., 50 seconds, 10 minutes, 5 days, 7 weeks, etc.) and one or more raw or synthetic parameters of the system; paragraph [0037]: The vertical pod autoscaler 104 executes recommender functionality 110 to query the monitoring system and time series database 114 for historic memory and processor utilization by the container 134 over a time period; paragraph [0041]: If the upper bound limit and/or the lower bound limit would not be exceeded, then the pod continues to run the container 134.
In this way, the request 122 of the current memory and processer allocation is granted for the container 134; paragraph [0042]: if the historic memory and processor utilization exceeds a threshold (e.g., CPU utilization increased from 0.5 CPUs to 0.7 CPUs, which may exceed a 1% threshold or any other threshold set for the vertical pod autoscaler), then the container may be evicted (e.g., deconstructed and removed from memory)).

Regarding claim 10, Jain discloses wherein the target container group comprises a plurality of application containers, and peaks and valleys of the estimated resource consumption data of the plurality of application containers in the plurality of time periods are complementary to each other (paragraph [0017]: The pod may support multiple containers and forms a cohesive unit of service for the applications hosted within the containers; paragraph [0048]: The vertical pod autoscaler 104 may determine a current state of the pod 132, such as an active state (e.g., CPU usage is greater than about 80% of usage), a moderate state (e.g., CPU usage is between about 20% and 80% of usage, such as between 40% and 50% of usage), a dormant state (e.g., CPU usage is less than 20% of usage), or a burst state where the pod 132 is experiencing a sudden burst of traffic (bursty traffic)).

Allowable Subject Matter

Claims 4-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Torres et al. (US 2023/0280996) discloses "The rule execution engine 316 may evaluate such information to determine if the pod 304 and/or the other pods deployed within the container hosting environment 302 are operating optimally.
In some embodiments, this may be accomplished by utilizing machine learning functionality 318 that can detected patterns that may be predictive of suboptimal (degraded) performance and/or failure of the pod 304, the container 306, and/or the application 305" (paragraph [0033]) and "A rule that performs memory analysis on memory operational statistics to determine if memory headroom statistics exceed a first threshold (e.g., a 30% threshold) for a period of time, and if so, then a remedial action is recommended or automatically implemented to modify a manifest file with a reduced memory allocation for the application container" (paragraph [0052]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SISLEY N. KIM whose telephone number is (571) 270-7832. The examiner can normally be reached M-F 11:30 AM - 7:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Y. Blair, can be reached on (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SISLEY N KIM/
Primary Examiner, Art Unit 2196
3/15/2026
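Claim 10's "complementary peaks and valleys" limitation (which the OA reads onto Jain's pod-state tracking) is easy to illustrate numerically: when one container's peak periods align with another's valleys, the group's combined per-period consumption is flatter than either profile alone. A hypothetical sketch with made-up profiles:

```python
# Two containers in one pod with complementary load profiles (invented data).
web   = [9, 2, 9, 2]   # peaks in periods 0 and 2
batch = [2, 9, 2, 9]   # peaks in periods 1 and 3

# The group's estimated consumption per period is the element-wise sum.
combined = [a + b for a, b in zip(web, batch)]
print(combined)                      # [11, 11, 11, 11]

# Peak-to-valley spread of the combined profile: 0 here, i.e. perfectly
# complementary, so the pod needs far less capacity than 2 * max(peak).
print(max(combined) - min(combined))  # 0
```

This is why the claimed grouping can matter for the feasibility check above: a flat combined profile fits into headroom that neither worst-case peak would, period by period.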

Prosecution Timeline

Nov 22, 2023
Application Filed
Mar 18, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602254: JOB NEGOTIATION FOR WORKFLOW AUTOMATION TASKS (2y 5m to grant; granted Apr 14, 2026)
Patent 12602260: COMPUTER-BASED PROVISIONING OF CLOUD RESOURCES (2y 5m to grant; granted Apr 14, 2026)
Patent 12591474: BATCH SCHEDULING FUNCTION CALLS OF A TRANSACTIONAL APPLICATION PROGRAMMING INTERFACE (API) PROTOCOL (2y 5m to grant; granted Mar 31, 2026)
Patent 12585507: LOAD TESTING AND PERFORMANCE BENCHMARKING FOR LARGE LANGUAGE MODELS USING A CLOUD COMPUTING PLATFORM (2y 5m to grant; granted Mar 24, 2026)
Patent 12578994: SYSTEMS AND METHODS FOR TRANSITIONING COMPUTING DEVICES BETWEEN OPERATING STATES (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+16.9%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 665 resolved cases by this examiner. Grant probability derived from career allow rate.
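The headline figures appear to follow from the career data shown above; a short sanity check (assumed arithmetic, since the tool's exact methodology is not disclosed on this page):

```python
# 590 grants out of 665 resolved cases yields the reported grant probability.
granted, resolved = 590, 665
allow_rate = granted / resolved
print(round(allow_rate * 100))          # 89

# The 99% figure is reported as the allow rate for resolved cases with an
# interview; subtracting the +16.9% lift implies the approximate rate without.
with_interview, lift = 99.0, 16.9
print(round(with_interview - lift, 1))  # 82.1
```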
