Prosecution Insights
Last updated: April 19, 2026
Application No. 18/796,368

METHOD FOR CONTROLLING A REMOTELY OPERATED VEHICLE

Non-Final OA — §101, §103, §DP
Filed
Aug 07, 2024
Examiner
AHMED, MASUD
Art Unit
3657
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Einride Autonomous Technologies AB
OA Round
1 (Non-Final)
82%
Grant Probability
Favorable
1-2
OA Rounds
2y 10m
To Grant
96%
With Interview

Examiner Intelligence

Grants 82% — above average
82%
Career Allow Rate
969 granted / 1178 resolved
+30.3% vs TC avg
+13.2%
Interview Lift (moderate)
Based on resolved cases with interview
Typical timeline
2y 10m
Avg Prosecution
27 currently pending
Career history
1205
Total Applications
across all art units

Statute-Specific Performance

§101
10.9%
-29.1% vs TC avg
§103
36.5%
-3.5% vs TC avg
§102
21.7%
-18.3% vs TC avg
§112
10.4%
-29.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 1178 resolved cases

Office Action

§101 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 16–35 are rejected under 35 U.S.C. §101 as being directed to non-statutory subject matter. The claims are drawn to a control process for a remotely operated vehicle and to a vehicle configured with a control unit implementing that same logic. While the claims fall within the statutory classes of methods and machines, they are directed to abstract data evaluation and decision-making rather than a technological improvement to vehicle control systems, communication systems, computing hardware, or emergency braking mechanisms.

In analyzing claim 16 as representative, the claim recites a vehicle operated remotely via a communication link and includes the steps of monitoring a latency of the communication link by a control unit in the vehicle, requesting an emergency stop maneuver of the vehicle in response to the latency exceeding a dynamic predetermined latency threshold, updating the current driving scenario, and adjusting the dynamic predetermined latency threshold to match the updated driving scenario. These limitations describe data observation, comparison of communication delay measurements to a threshold value, decision execution based on that comparison, and modification of the decision threshold based on updated operating context.
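Read as control logic, the limitations the examiner summarizes amount to a simple conditional loop: compare measured latency against a scenario-dependent threshold, request a stop when exceeded, and retune the threshold when the scenario changes. A minimal illustrative sketch follows; the class name, scenario names, and threshold values are assumptions for demonstration only and appear nowhere in the application or the Office Action:

```python
# Illustrative latency thresholds in seconds, keyed by driving scenario.
# Scenario names and values are assumptions for demonstration only.
SCENARIO_THRESHOLDS = {
    "clear_highway": 0.50,
    "rain": 0.30,
    "dense_traffic": 0.20,
}

class ControlUnit:
    """Sketch of the claim-16 control flow as summarized by the examiner."""

    def __init__(self, scenario: str = "clear_highway"):
        self.scenario = scenario
        self.threshold = SCENARIO_THRESHOLDS[scenario]
        self.emergency_stop_requested = False

    def update_scenario(self, scenario: str) -> None:
        # "updating the current driving scenario" and "adjusting the
        # dynamic predetermined latency threshold" to match it
        self.scenario = scenario
        self.threshold = SCENARIO_THRESHOLDS[scenario]

    def on_latency_sample(self, latency_s: float) -> bool:
        # "requesting an emergency stop maneuver ... in response to the
        # latency exceeding a dynamic predetermined latency threshold"
        if latency_s > self.threshold:
            self.emergency_stop_requested = True
        return self.emergency_stop_requested
```

The examiner's point is visible in the sketch itself: nothing here requires non-conventional hardware, so under the cited analysis the logic alone reads as an abstract decision rule.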
Such operations constitute mathematical conditional evaluation, logical rule execution, and information processing that could be carried out mentally or by a generic processor, and therefore fall within the abstract-idea category of mental processes and fundamental decision rules.

Under the second step of the eligibility analysis, the claim must integrate the abstract idea into a practical application. The claim does not improve communication latency measurement mechanisms, braking mechanisms, or remote vehicle control hardware. Instead, it merely applies the rule that if latency exceeds a threshold then an emergency stop is requested, and then modifies that threshold based on environmental conditions. Although the claim uses the context of a vehicle, that contextual placement does not amount to a technical improvement. The claim instructs a conventional control unit to carry out conditional logic using scenario parameters such as weather, traffic, or road type without providing a non-conventional implementation of how the latency is monitored, how braking is controlled, or how the scenario is updated. The vehicle remains operated in a conventional manner, and the claim amounts to using a computer environment to perform standard information evaluation and rule-based output.

In evaluating Step 2B, the claim must include significantly more than the abstract decision logic. The additional recitations in the dependent claims, including those in claims 17 through 25, merely refine or qualify the threshold-adjustment logic already present in claim 16. For example, claim 17 recites that the dynamic predetermined latency threshold is adjusted during an ongoing trip, claim 18 identifies specific scenario changes such as traffic congestion, weather conditions, and visibility, and claim 20 adds cancellation logic based on brake reaction window timing.
These do not add an inventive concept beyond the abstract idea of adjusting decision thresholds in response to operational inputs. They do not describe unconventional hardware, a new communication protocol, a novel braking control mechanism, or a technical improvement in computing efficiency or reliability. Instead, they represent expected refinements to the abstract rule-based framework already set forth.

The same analysis applies to the system claims. Claim 26, as representative, recites a remotely operated vehicle having a control unit configured to monitor latency, request an emergency stop based on the dynamic predetermined latency threshold, update the driving scenario, and adjust the threshold to match the scenario. This system claim merely implements the abstract idea on a generic machine using routine processor functionality. It does not improve computer operation, does not provide a new hardware configuration, and does not disclose a technical innovation in network latency computation or braking-actuation control. Therefore, it does not supply an inventive concept under §101.

Accordingly, claims 16–35 are rejected under §101 as being directed to an abstract idea without an inventive concept sufficient to transform the claimed subject matter into patent-eligible subject matter. The claims may become eligible if amended to include specific technical improvements to control-loop execution, braking mechanisms, communication-latency measurement algorithms, or real-time adaptive safety logic implemented by non-conventional hardware.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 16–35 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims of U.S. Patent No. 12,077,170. Although the claims at issue are not identical, they are not patentably distinct from each other for the following reasons.

Claim 16 is rejected under the judicially created doctrine of obviousness-type double patenting over claim 1 of U.S. Patent No. 12,077,170. Claim 1 of the ’170 patent recites a method of controlling a remotely operated vehicle, including (i) monitoring a latency of a communication link by a control unit of the vehicle, (ii) requesting an emergency stop of the vehicle when the latency exceeds a predetermined latency threshold, and (iii) cancelling the emergency stop request in response to recovery of the communication link within a brake reaction time period, thereby permitting the system to safely operate under lower latency tolerance values. The present claim 16 recites substantially the same core elements, namely monitoring latency of a communication link by a control unit, and requesting an emergency stop maneuver in response to said latency exceeding a latency threshold.
The claim further recites cancelling the emergency stop request upon latency recovery, which is fully taught by the patented claim 1. Therefore, the base operational mechanism of latency monitoring, threshold-triggered emergency braking, and communication-recovery cancellation is fully anticipated by the patented claim 1.

The applicant attempts to differentiate claim 16 through the recitation of a dynamic predetermined latency threshold that varies by current driving scenario and by additionally reciting steps of updating the driving scenario and adjusting the threshold based on that scenario. However, the patented claim 1, when read in view of dependent issued claim 2, already discloses use of a dynamic latency threshold that varies depending on operating conditions. The modification of dynamically adjusting the threshold based on updated scenario inputs represents no more than an obvious refinement of the same feature already protected in claims 1 and 2 of the ’170 patent, because the purpose of threshold adaptivity—to ensure safe braking response under changing operational conditions—remains the same.

One of ordinary skill in vehicle autonomy and remote-operation control would have found it obvious to adjust latency thresholds based on updated driving conditions, such as weather, traffic, or cargo weight, particularly where the patented claims already teach dynamic threshold variability as a safety-modulation mechanism. The difference in implementation constitutes an expected optimization rather than a patentably distinct inventive concept.

Accordingly, claim 16 is not materially different in scope from the patented claim 1 and instead constitutes an obvious variation of the previously claimed method of network-latency-triggered emergency stop control in a remote vehicle. The claim is therefore rejected as not patentably distinct under the doctrine of obviousness-type double patenting.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 16–35 are rejected under 35 U.S.C. 103 as being unpatentable over Gross et al. (US 2020/0201319) in view of Rastoll et al. (US 2020/0192352).

Claims 16 and 26: Gross teaches “monitoring a latency of the communication link by a control unit in the vehicle.” ’319 teaches latency monitoring directly.
Paragraphs [0036]–[0037] explain that the internal computing system evaluates whether round-trip signal latency is within a predetermined threshold: “a determination is made… that identifies whether signal latency is within a predefined threshold… If signal latency is not within the predefined threshold… instructions from the remote computing system cannot be executed unless latency is reduced below the threshold.” ’352 does not expressly teach latency measurement; ’319 therefore supplies this feature.

Rastoll teaches “a method of controlling a remotely operated vehicle, the vehicle operated from a remote operation station via a communication link between the vehicle and the remote operation station.” ’352 explicitly discloses remote operation of a vehicle via communication links. In paragraphs [0051]–[0054], ’352 teaches that remote control system 140 receives operator input and transmits driving commands over a wireless channel to the vehicle system for remote maneuvering. The reference states: “Remote control system 140 may comprise one or more computing devices that receive input from a human operator and generate driving instructions for transmission to vehicle system 102… remote computer system 106 may comprise one or more computing devices that facilitate remote operation of the vehicle through communication with teleoperation system 104.”

Regarding “requesting an emergency stop maneuver of the vehicle by the control unit in response to the latency exceeding a dynamic predetermined latency threshold”: ’319 teaches conditional stopping/constraint deactivation based on latency exceeding a threshold. Paragraphs [0036]–[0037] explain that when latency exceeds a safe threshold, motion is prevented until latency recovers.
“If signal latency is not within the predefined threshold… commands cannot be executed… If signal latency is determined to have reduced below the predefined threshold, the identified constraint is deactivated at 314 to allow the autonomous vehicle to advance.” ’352 complements this by teaching autonomous braking control and safety-stop behavior triggered by communication issues. Paragraph [0136] discloses “automatic speed reduction could be performed in response to detecting… an interruption in the wireless communication link… until the vehicle comes to a stop.” A skilled person would find it obvious to combine ’319’s latency detection with ’352’s emergency stop capability, yielding the claimed stop request triggered by latency.

Regarding “wherein the dynamic predetermined latency threshold varies by a current driving scenario”: ’319 inherently suggests threshold values that govern whether vehicle motion is permitted or denied, which the reference frames as a “predetermined threshold duration of time (e.g., half second or less)”. ’352 discloses scenario-specific safety responses based on environmental conditions including link quality, object distance, and operational context. Paragraph [0136] expressly states that braking behavior varies based on conditions: “The rate of speed reduction could be based on various factors such as length of the interruption, current speed, and distances to nearby objects.” Modifying latency thresholds based on operational factors would have been obvious, as ’352 teaches condition-variable braking control and ’319 already enforces network-latency-based operation constraints.

Regarding “updating the current driving scenario for the vehicle in the control unit”: ’352 teaches continuously updated situational input via sensors and transmission monitoring. Paragraphs [0134]–[0137] explain that the system monitors communication quality, environmental factors, hazard proximity, and vehicle state while updating control commands.
“Transmission speed may drop below threshold… vehicle system switches representation… emergency braking may be performed in response to detected conditions.” ’319 simultaneously evaluates latency state changes over time. A person of skill would naturally update the scenario if the latency environment changes, making the limitation obvious.

Regarding “adjusting the dynamic predetermined latency threshold to match the updated current driving scenario”: ’352 teaches dynamic behavior adjustments depending on conditions, including modulation of braking force and slow-down rates based on disruption duration, visibility constraints, and proximity of objects. Paragraph [0136] teaches adaptive thresholding: “the rate of speed reduction could be based on various factors such as length of communication interruption… increasing rate over time…” ’319 similarly describes threshold-based action control that may be revisited when latency conditions change. Adapting the acceptable latency threshold to the scenario would have been an expected design choice when combining these teachings, providing a predictable safety improvement.

Therefore, the claims would have been obvious to an ordinarily skilled artisan because ’319 teaches the latency-based control logic, including threshold monitoring and stop authorization, and ’352 teaches scenario-adaptive braking and remote-operation communication-based control. Combining the references yields the claim in full under KSR as a predictable and routine integration of network latency management with remote emergency stopping.

Claims 17 and 27: Gross teaches “wherein the dynamic predetermined latency threshold is adjusted during an ongoing trip involving the vehicle in response to changing conditions that updated the current driving scenario.” ’319 inherently teaches re-evaluating and re-checking latency during continued operation.
In paragraphs [0036]–[0037], ’319 states that the system repeatedly determines whether latency is below a threshold, and only when latency improves can the vehicle proceed. The reference reads: “If… signal latency is determined to have reduced below the predefined threshold, the identified constraint is deactivated at 314 to allow the autonomous vehicle to advance… steps 310-319 are repeated until completion.” The repeated evaluation implies operation during a continuing driving session rather than a single static moment. ’352 teaches ongoing driving execution under remote supervision, continuously updating communication quality and environmental state. Paragraphs [0134]–[0137] explain dynamic, real-time operational change management: “the vehicle system may switch from video to reconstruction when the transmission speed has dropped below a threshold… [and] automated emergency braking may be performed in response to detected conditions… multiple maneuvers may be used to bring the vehicle to the location… steps may be repeated.” When read together, ’319 provides the latency-based threshold logic and ’352 provides the continuous trip-progress context. A skilled person would find it obvious to dynamically adjust latency thresholds throughout travel to maintain safe operation, since both references already disclose real-time conditional-update functionality. Therefore, the claims are rendered obvious over ’319 combined with ’352.

Claims 18 and 28: Gross teaches that the scenario updates comprise changing conditions including traffic congestion, weather conditions, visibility, cargo type, and road type. ’319 teaches adjustment of vehicle behavior based on communication-performance conditions and network degradation, providing a framework for variable threshold logic. Paragraphs [0036]–[0037] note latency variation, operator notification, and conditional authorization depending on system state. However, ’319 does not enumerate environmental inputs; ’352 directly supplies them.
In paragraph [0136], ’352 discloses: “The rate of speed reduction could be based on various factors such as the length of interruption, the current speed of the vehicle, and distances to nearby objects.” Elsewhere the system dynamically responds to loss of visibility or changing perception-feed conditions when switching representation types, as described in [0134]–[0137]. A person of ordinary skill would plainly recognize that additional operational factors such as congestion, weather, or cargo weight affect safe stop timing and therefore would adjust latency thresholds to reflect those changing conditions. Incorporating these foreseeable, well-known driving conditions into the existing latency-driven safety control taught by ’319 would be a routine engineering enhancement.

Claims 19 and 29: Gross teaches that threshold adjustment is performed by one of the vehicle automatically, a fleet management system automatically, a remote operator manually, or a fleet manager manually. ’319 inherently discloses multiple decision sources. Paragraphs [0036]–[0037] note that automated logic determines whether the vehicle may continue based on latency, but paragraph [0036] also notes human operator involvement when latency exceeds limits: “an operator of the remote computing system is notified that… latency exceeds predefined threshold and… the operator can wait for latency to reduce.” ’352 independently teaches that remote operators generate override instructions, and automated logic may independently brake when link disruption persists. Paragraphs [0134]–[0137] state that remote control generates command decisions, while automated braking overrides unsafe conditions: “driving instructions may be generated by a remote computer system… automated emergency braking can be performed in response to determining… that instructions would cause collision.” This directly equates to manual threshold change, automated vehicle change, or centralized override.
Therefore claim 19 is obvious because each actor listed is already taught or implied by the references.

Claims 20 and 30: Gross teaches “cancelling the requested emergency stop maneuver by the control unit in response to the communication link being recovered within a brake reaction time period… thereby enabling the remote operation station to set lower dynamic predetermined latency thresholds based on the brake reaction time period.” ’319 discloses latency-based authorization and resumption of vehicle movement when latency recovers below threshold. Paragraphs [0036]–[0037] state: “If… signal latency is determined to have reduced below the predefined threshold, the identified constraint is deactivated… to allow the autonomous vehicle to advance… if latency is not reduced, commands cannot be executed.” This constitutes stop-request cancellation and continuation of movement when communication quality improves. ’352 extends this by teaching recovery-timed braking release, as paragraph [0136] teaches emergency slowing until communication resumes: “automatic speed reduction may reduce speed… until wireless communication link becomes available again”. The pairing shows that stop cancellation upon communication recovery is expected engineering practice, and using reaction timing to select lower thresholds is an obvious resulting modification.

Claims 21 and 31: Gross teaches “wherein the requested emergency stop maneuver is cancelled before the vehicle starts decelerating.” ’319 inherently teaches non-activation of braking when latency returns below threshold prior to action. Paragraphs [0036]–[0037] describe a check before granting propulsion stop or deactivation, meaning a command may be aborted before mechanical actuation begins. ’352, at paragraphs [0135]–[0137], describes emergency braking performed only when risk thresholds persist. If the connection recovers immediately, braking need not begin.
A skilled person would cancel the stop command prior to deceleration exactly as claimed. Thus claim 21 is obvious in view of ’319 and ’352.

Claims 22 and 32: Gross teaches “cancelling the requested emergency stop maneuver in response to the communication link being recovered within a time period following the brake reaction time period… corresponding to delay between brake initiation and full braking power.” ’319 discloses recovery-triggered restoration of motion if latency improves after a stop-activation decision. Paragraphs [0036]–[0037] repeat evaluation of latency “after prior determination,” allowing resumption if recovered later. ’352 contributes explicit staged braking periods and delay windows. Paragraph [0136] teaches a “rate of speed reduction… based on interruption length… gradual increase as interruption continues.” This is a brake-response window matching the recited post-reaction delay until full braking. The combination yields the claimed timing condition.

Claims 23 and 33: Gross teaches “monitoring a latency of the communication link, requesting an emergency stop maneuver, cancelling the requested emergency stop maneuver, and continuing the requested emergency stop maneuver.” ’319, at paragraphs [0036]–[0037], discloses monitoring latency, then either permitting action or requiring a stop if the threshold remains exceeded, and repeating until the connection stabilizes. The text specifically reads: “If signal latency is not within threshold… operator is notified… If later reduced below threshold, vehicle advances… otherwise steps repeat until completion.” This is the same cycle: trigger stop, cancel on recovery, or continue stop until recovered. ’352 confirms braking continues until the link returns: “automatic speed reduction until wireless communication link becomes available again.” Thus no novel difference exists; claim 23 is obvious.
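The cancellation timing argued for claims 20–23 reduces to comparing the link-recovery time against two successive windows: the brake reaction period (before deceleration begins) and the ramp from brake initiation to full braking power. A sketch of that decision, under a simple elapsed-time model; the function name, parameters, and return labels are assumptions for illustration, not drawn from the claims or the cited references:

```python
def resolve_stop_request(link_recovered_at, stop_requested_at,
                         brake_reaction_time, full_braking_delay):
    """Decide the fate of a pending emergency stop request.

    Illustrative only: the time model and names are assumptions, not
    taken from the application or the cited references.
    """
    if link_recovered_at is None:
        # Link never recovered: the stop maneuver continues (claim 23).
        return "continue_stop"
    elapsed = link_recovered_at - stop_requested_at
    if elapsed <= brake_reaction_time:
        # Recovery within the brake reaction window: cancel before
        # the vehicle starts decelerating (claims 20-21).
        return "cancel_before_deceleration"
    if elapsed <= brake_reaction_time + full_braking_delay:
        # Recovery during the ramp from brake initiation to full
        # braking power: cancel mid-deceleration (claim 22).
        return "cancel_during_deceleration"
    return "continue_stop"
```

Seen this way, the examiner's combination argument is that ’319 supplies the recovery check and ’352 supplies the staged braking windows the comparison runs against.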
Claims 24 and 34: Gross teaches “requesting an emergency stop maneuver includes the control unit sending a request for an emergency stop maneuver to a brake system of the vehicle.” ’319 expressly describes the control module outputting stop-authorization signals to propulsion control. Paragraphs [0034]–[0036] state that the latency module determines whether commands may deactivate propulsion or allow movement. ’352 supplies the missing explicit mechanical-brake interface. Paragraph [0135] discloses that the “vehicle system may perform automated emergency braking… reducing acceleration or applying brakes.” Combining brake execution from ’352 with latency-triggered stop control from ’319 makes this limitation obvious.

Claims 25 and 35: Gross teaches “operating from a remote operation station includes remotely driving or remotely monitoring.” ’352 fully discloses remote teleoperation, including monitoring and driving input through a UI and live video representation. Paragraphs [0051]–[0054]: “remote operator generates driving instructions… visual representation displayed to remote operator for maneuvering.” ’319 complements monitoring by latency evaluation. Because remote monitoring and control are explicitly taught, claim 25 provides no patentable distinction.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MASUD AHMED, whose telephone number is (571) 270-1315. The examiner can normally be reached M-F 9:00-8:30 PM PST with IFP. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin, can be reached at (571) 270-3976.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MASUD AHMED/
Primary Examiner, Art Unit 3657

Prosecution Timeline

Aug 07, 2024
Application Filed
Dec 06, 2025
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596012
METHOD FOR DETERMINING POINT OF INTEREST FOR USER, ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12589729
LOAD BALANCING APPROACH TO EXECUTE COST OPTIMIZATION IN MULTI-MODE AND MULTI-GEAR HYBRID ELECTRIC VEHICLES
2y 5m to grant Granted Mar 31, 2026
Patent 12589777
VEHICLE
2y 5m to grant Granted Mar 31, 2026
Patent 12578723
VEHICLE
2y 5m to grant Granted Mar 17, 2026
Patent 12578739
Vehicle Control System
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
96%
With Interview (+13.2%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 1178 resolved cases by this examiner. Grant probability derived from career allow rate.
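The headline percentages follow directly from the career counts shown above. As a quick check (the "with interview" figure appears to be the base allow rate plus the stated +13.2% lift, rounded; that additive reading is an assumption about the page's method):

```python
# Career counts from the examiner card above.
granted, resolved = 969, 1178

# Base allow rate: 969 / 1178.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")      # 82.3%, shown as 82%

# Assumed additive interview lift of +13.2 percentage points.
with_interview = allow_rate + 0.132
print(f"With interview:    {with_interview:.1%}")  # 95.5%, shown as 96%
```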
