Prosecution Insights
Last updated: April 19, 2026
Application No. 17/799,115

A STAND-ALONE ACCELERATOR PROTOCOL (SAP) FOR HETEROGENEOUS COMPUTING SYSTEMS

Final Rejection §103

Filed: Aug 11, 2022
Examiner: HOANG, PHUONG N
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arizona Board of Regents
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 4y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70%, above average (240 granted / 345 resolved; +14.6% vs TC avg)
Interview Lift: +50.8% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 4y 4m average prosecution; 21 applications currently pending
Career History: 366 total applications across all art units
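
For context on how these headline numbers relate to one another, here is a minimal arithmetic sketch in Python. It assumes the allow rate is simply grants divided by resolved cases and that the TC comparison is a straight percentage-point difference; the tool's actual methodology is not disclosed here.

```python
# Back-of-envelope check of the examiner statistics shown above.
# The counts come from the report; the formulas are assumptions, not the
# analytics vendor's documented methodology.

granted = 240
resolved = 345

career_allow_rate = granted / resolved            # ~0.696, displayed as 70%
delta_vs_tc = 0.146                               # "+14.6% vs TC avg", read as percentage points
implied_tc_average = career_allow_rate - delta_vs_tc   # ~0.55

print(f"Career allow rate:  {career_allow_rate:.1%}")
print(f"Implied TC average: {implied_tc_average:.1%}")
```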

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center average shown for comparison is an estimate. Based on career data from 345 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner's Note

The prior art rejection below cites particular paragraphs, columns, and/or line numbers in the references for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art.

Claims 1 and 3-22 are pending for examination. Claims 1, 3, 6, 8, 13 and 21 are pending for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/09/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-7, 9-10, and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Guim et al. (US PUB 2019/0007334, hereinafter Guim) in view of Sukhomlinov et al. (US PUB 2019/0004871, hereinafter Sukhomlinov).

As to claim 1, Guim teaches a method for providing a stand-alone accelerator protocol (SAP) executing on a hardware accelerator, the method comprising: providing, through the SAP executing on the hardware accelerator, an indication of available accelerator resources to a remote computing system agent ("...Node 2 HFI may issues ACKs to the one or more accelerators. A core on Node 1 (e.g., host 602-5), issues an enquiry (ENQ©Desc) to its HFI. The HFI issues STLDiscoveryRemoteAcc, with a list of UUIDs (or a request for all UUIDs), which is forwarded via the fabric to Node 2 (e.g., host 602-3) HFI. Node 2 HFI responds with a response value revealing the available accelerators." para. 0107) and ("A host fabric interface (HFI) apparatus, including: an HFI to communicatively couple to a fabric; and a remote hardware acceleration (RHA) engine to: query an orchestrator via the fabric to identify a remote resource having an accelerator ..." abstract), wherein the SAP includes: discovering the hardware accelerator ("...In certain embodiments, the logic for providing such a discovery request may be provided on the HFI of host 602-5...." para. 0096) and ("...This flow allows an application to discover if an acceleration operation or a list of operations are currently supported by a given node..." para. 0106-0107) and caching information associated with the hardware accelerator ("Without the aid of a remote hardware accelerator, to move a large block of memory from memory bank 732-1 to memory bank 732-3, core 710 would have to fetch the memory via fabric 772 into its local cache or memory, and then write the memory out via fabric 772 to memory bank 732-3...." para. 0114) and ("At operation 6, the accelerator 906 extracts the parameters that are pointed to by the memory descriptor. The reads generated to the descriptor may hit the cache of HFI 974, which may have previously acquired ownership." para. 0146), dynamically assigning network addresses to the hardware accelerator ("Orchestrator 604 may provide a remote hardware acceleration registration (RHAR) engine 622, and a remotely accessible accelerator table (RAAT) 620.... RAAT 620 may be a local store of registered remotely accessible accelerators, including appropriate interfaces for those accelerators. For example, RAAT 620 may include a field that indicates that host 602-3 provides a network accelerator, and may also include a profile for providing an appropriate interface for accessing the network accelerator of host 602-3. In various examples, host 602-3 may be identified in RAAT 620 by any suitable identifier, such as an IP address, MAC address, hostname, or other locally unique identifier..." para. 0092-0094) and ("..Note that there are existing mechanisms for mapping and discovering remote resources, such as system address decoders that can be used to perform mapping and expose information to applications." para. 0126), wherein the SAP includes: [discovering the hardware accelerator and caching information associated with the hardware accelerator, dynamically assigning network addresses to the hardware accelerator, interpreting transport packets for tracking, managing, registering, and allocating the hardware accelerator, interpreting contents of a message packet used by the hardware accelerator, and providing an interface for a driver for the hardware accelerator]; receiving, over a network fabric, a request to execute a first computational function ("At operation 4, fabric 970 propagates the Omni-Path™ remote accelerator compute command to node 2 HFI 974." para. 0144-0145); and responding to the request to execute the first computational instruction, wherein responding to the request to execute the first computational instruction comprises executing the first computational function ("The network accelerator of host 602-3 performs the appropriate processing on the request payload, and returns results to its HFI. The HFI of host 602-3 then returns the results to host 602-5 via fabric 670...." para. 0102-0103) and ("In operation 7, the accelerator performs the requested operation..." para. 0146-0147). Guim does not teach, but Sukhomlinov teaches, interpreting transport packets ("...translate the API call to a native invocation for the microservice accelerator; and forward the native invocation to the microservice accelerator..." para. 0265) for tracking ("..tracking ID..." para. 0160), managing, registering ("Orchestrator 604 may provide a remote hardware acceleration registration (RHAR) engine 622, and a remotely accessible accelerator table (RAAT)..." para. 0092) and ("...receive a microservice instance registration for a microservice accelerator, wherein the registration includes a microservice that the microservice accelerator is configured to provide..." abstract and para. 0242) and allocating the hardware accelerator ("..an FPGA from an FPGA pool may be allocated...." para. 0020) and ("...service discovery function (SDF). The SDF can then maintain a catalog of available microservices, which may include mappings for translating the standard microservices API calls to an API call usable by a particular instance of the microservice. This architecture enables the specialization of certain architecture capabilities, such as "bump in the wire" acceleration, FPGA function sets invoked from processing cores, or purpose optimized processors with highly specialized software. These can be transparently integrated into the data center as needed." para. 0030), interpreting contents of a message packet used by the hardware accelerator ("...encrypted content..." para. 0173) and ("...translate the API call to a native invocation for the microservice accelerator; and forward the native invocation to the microservice accelerator..." para. 0265), and providing an interface for a driver for the hardware accelerator ("..The microservices driver may then program the FPGA with the gate configuration, and once the FPGA is programmed, the microservice driver may begin forwarding calls to the FPGA via the standardized microservices API mapped to the specific interface for that FPGA instance.." para. 0031). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Guim by applying the teachings of Sukhomlinov because Sukhomlinov would provide a framework for implementing discovery of microservice accelerators to service requests (para. 0027 and 0174-0177).

As to claim 6, Guim modified by Sukhomlinov teaches the method of claim 1, and Guim further teaches providing auto-discovery of the hardware accelerator to the remote computing system agent when connecting to the network fabric ("..a processor of host 602-5 may send a request to its HFI for network acceleration, and the fabric sends a discovery request to orchestrator 604" para. 0096).

As to claim 7, Guim modified by Sukhomlinov teaches the method of claim 6, and Guim teaches wherein the indication of available processing resources of the hardware accelerator is provided upon connection to the network fabric ("Upon receiving the discovery request, orchestrator 604 queries RAAT 620 to determine the availability of any network accelerators. In this case, orchestrator 604 determines that host 602-3 has an available network accelerator. Orchestrator 604 also extracts from RAAT 620 the appropriate interface information for the network accelerator. Orchestrator 604 then returns this information to host 602-5 via fabric 670." para. 0097).

As to claim 9, Guim modified by Sukhomlinov teaches the method of claim 1, and Guim further teaches determining whether to accept the request to execute the first computational function based on availability of accelerator resources ("Upon receiving the discovery request, orchestrator 604 queries RAAT 620 to determine the availability of any network accelerators. In this case, orchestrator 604 determines that host 602-3 has an available network accelerator..." para. 0097).

As to claim 10, Guim modified by Sukhomlinov teaches the method of claim 9, and Guim further teaches reconfiguring a subroutine of the hardware accelerator in response to accepting the request to execute the first computational function ("...Orchestrator 604 also extracts from RAAT 620 the appropriate interface information for the network accelerator..." para. 0097).

As to claim 12, Guim modified by Sukhomlinov teaches the method of claim 1, and Guim further teaches providing performance data of the hardware accelerator to the remote computing system agent ("The network accelerator of host 602-3 performs the appropriate processing on the request payload, and returns results to its HFI. The HFI of host 602-3 then returns the results to host 602-5 via fabric 670...." para. 0102-0103).

As to claim 13, this is an accelerator hardware claim corresponding to claim 1; see the rejection of claim 1 above. Further, Guim teaches an accelerator processor ("Node 0 208 is a processing node including a processor..." para. 0033) and a memory comprising instructions ("...memory..." para. 0038).

As to claim 14, Guim modified by Sukhomlinov teaches the hardware accelerator of claim 13, and Guim teaches wherein the hardware accelerator connects to the remote computing system agent without being provisioned by a central processing unit (CPU) ("A host fabric interface (HFI) apparatus, including: an HFI to communicatively couple to a fabric; and a remote hardware acceleration (RHA) engine to: query an orchestrator via the fabric to identify a remote resource having an accelerator ..." abstract).

As to claim 15, Guim modified by Sukhomlinov teaches the hardware accelerator of claim 13, and Guim teaches wherein the hardware accelerator supports hot-plugging into the network fabric ("...throughout data center 200, various nodes may provide different types of HFIs 272, such as onboard HFIs and plug-in HFIs..." para. 0036 and 0116).

As to claim 16, Guim modified by Sukhomlinov teaches the hardware accelerator of claim 15, and Guim teaches wherein the hardware accelerator supports one or more of a Peripheral Component Interconnect Express (PCIe), Transmission Control Protocol/Internet Protocol (TCP/IP), InfiniBand (IB), or Universal Serial Bus (USB) network fabric (element fabric 270 of figure 2).

As to claim 17, Guim modified by Sukhomlinov teaches the hardware accelerator of claim 15, and Guim teaches wherein the hardware accelerator is further configured to provide auto-discovery of the accelerator during connection time ("..Thus, host 602-5 queries orchestrator 604 via fabric 670, asking for availability of any network accelerators. In certain embodiments, the logic for providing such a discovery request may be provided on the HFI of host 602-5. Thus, a processor of host 602-5 may send a request to its HFI for network acceleration, and the fabric sends a discovery request to orchestrator 604." para. 0096).

As to claim 18, Guim modified by Sukhomlinov teaches the hardware accelerator of claim 13, and Guim teaches wherein the memory further stores a plurality of subroutine configurations ("In certain embodiments, a "discovery flow" may also be provided. In an embodiment, the HFI, fabric flow, and memory controller may be extended to provide a method for application to discover which acceleration functions each accelerator of a given node exposes. Each of the node memory agents may register to the HFI the UUID memory operations supported by the accelerator. This may be registered, for example, via a pcode. This information, stored locally in the HFIC, can be accessed by remote nodes through a novel fabric flow. This flow allows an application to discover if an acceleration operation or a list of operations are currently supported by a given node. The remote HFI may respond with the memory operations that each of the memory controllers support. Alternatively, all operations supported by a remote node can be requested." para. 0106).

As to claim 19, Guim modified by Sukhomlinov teaches the hardware accelerator of claim 18, and Guim teaches wherein responding to the request to execute the first computational function is further based on whether the first computational function corresponds to one of the plurality of subroutine configurations ("...Each of the node memory agents may register to the HFI the UUID memory operations supported by the accelerator. This may be registered, for example, via a pcode. This information, stored locally in the HFIC, can be accessed by remote nodes through a novel fabric flow. This flow allows an application to discover if an acceleration operation or a list of operations are currently supported by a given node...." para. 0106).

As to claim 20, Guim modified by Sukhomlinov teaches the hardware accelerator of claim 18, and Guim teaches wherein the memory further comprises instructions that cause the hardware accelerator to load one of the plurality of subroutine configurations and execute the first computational function ("In certain embodiments, a "discovery flow" may also be provided. In an embodiment, the HFI, fabric flow, and memory controller may be extended to provide a method for application to discover which acceleration functions each accelerator of a given node exposes. Each of the node memory agents may register to the HFI the UUID memory operations supported by the accelerator. This may be registered, for example, via a pcode. This information, stored locally in the HFIC, can be accessed by remote nodes through a novel fabric flow. This flow allows an application to discover if an acceleration operation or a list of operations are currently supported by a given node. The remote HFI may respond with the memory operations that each of the memory controllers support. Alternatively, all operations supported by a remote node can be requested." para. 0106).

Claims 3-5 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Guim in view of Sukhomlinov, as applied to claim 1, and further in view of Fan et al. (US PAT 10,742,777, hereinafter Fan).

As to claim 3, Guim modified by Sukhomlinov teaches the method of claim 1, and Guim further teaches returning results of the first computational function [to an address in accordance] with the request ("In operation 11, HFI 972 propagates the response to core 902. Software running on core 902 may then handle the response and use the return payload as is appropriate to the application..." para. 0151). Guim and Sukhomlinov do not teach, but Fan teaches, to an address in accordance ("...In Step 209, the response is forwarded to the source address src_ip, src_port of the request through the local proxy listening socket" col. 3, lines 52-62). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Guim and Sukhomlinov by applying the teachings of Fan because Fan would provide a means to return a response to a specific source address to accelerate traffic (col. 2, lines 1-30). Fan also teaches a request acceleration system (title, abstract); therefore, Fan is in the same field as the invention and can be combined.

As to claim 4, Guim modified by Sukhomlinov teaches the method of claim 3. Guim and Sukhomlinov do not teach, but Fan teaches, wherein the address is a source address of the request ("...In Step 209, the response is forwarded to the source address src_ip, src_port of the request through the local proxy listening socket" col. 3, lines 52-62). See the motivation for claim 3 above.

As to claim 5, Guim modified by Sukhomlinov teaches the method of claim 3. Guim and Sukhomlinov do not teach, but Fan teaches, wherein the address is a forwarding address indicated in the request ("...In Step 209, the response is forwarded to the source address src_ip, src_port of the request through the local proxy listening socket" col. 3, lines 52-62). See the motivation for claim 3 above.

As to claim 22, this claim recites a similar scope to claim 3; see the rejection of claim 3 above.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Guim in view of Sukhomlinov, as applied to claim 1, and further in view of Bernat et al. (US PUB 2018/0150330, hereinafter Bernat). Bernat was cited in a previous office action.

As to claim 8, Guim modified by Sukhomlinov teaches the method of claim 6, and Guim teaches wherein the indication of available processing resources of the hardware accelerator [is updated periodically] after connecting to the network fabric ("...Orchestrator 604 also extracts from RAAT 620 the appropriate interface information for the network accelerator. Orchestrator 604 then returns this information to host 602-5 via fabric 670." para. 0097) and ("...Node 2 HFI may issues ACKs to the one or more accelerators. A core on Node 1 (e.g., host 602-5), issues an enquiry (ENQ©Desc) to its HFI. The HFI issues STLDiscoveryRemoteAcc, with a list of UUIDs (or a request for all UUIDs), which is forwarded via the fabric to Node 2 (e.g., host 602-3) HFI. Node 2 HFI responds with a response value revealing the available accelerators." para. 0107). Guim and Sukhomlinov do not teach, but Bernat teaches, is updated periodically ("…Accordingly, the accelerator query manager 1436 periodically (or on demand) transmits environment discovery queries to the orchestrator server 1216 for accelerator updates. Accelerator updates may include lists of accelerator identifiers for newly connected accelerators, removed accelerators, or the like…" para. 0071-0072). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Guim and Sukhomlinov by applying the teachings of Bernat because Bernat teaches periodic update and discovery of the list of accelerators in order to provide services (para. 0071).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Guim in view of Sukhomlinov, as applied to claim 1, and further in view of Gasser et al. (US PAT 10,719,366, hereinafter Gasser). Gasser was cited in a previous office action.

As to claim 11, Guim modified by Sukhomlinov teaches the method of claim 1. Guim and Sukhomlinov do not teach, but Gasser teaches, further comprising: receiving a request to execute a second computational function while the first computational function is pending; and not accepting the request to execute the second computational function ("…For example, the indirection layer 120 may store a total amount of computation (e.g., based on the size of data associated with calls) sent to the accelerator but not yet completed. If the amount of incomplete computation at the accelerator is higher than a threshold amount, then to avoid delays in processing, the indirection layer 120 may keep calls on the CPU until the accelerator's queue of work is smaller. Based on such knowledge of the accelerator's availability, the indirection layer 120 may optimize the hardware selection 125 by occasionally keeping calls on the CPU that otherwise would have been dispatched to the accelerator 150." col. 5, lines 16-30). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Guim and Sukhomlinov by applying the teachings of Gasser because Gasser would optimize hardware acceleration operations by not accepting requests when pending services are not done (col. 5, lines 16-30).

Response to Arguments

III. INFORMATION DISCLOSURE STATEMENT

Applicant argued: "On page 2, in paragraph 3, the Office Action states that the Information Disclosure Statement is missing a copy of NPL citation # 3. Applicant has provided herewith an IDS including a copy of NPL citation # 3." (page 7 of remarks). In response, the Examiner has considered and initialed the IDS filed on 04/09/25.

IV. REJECTIONS UNDER 35 U.S.C. § 101

Applicant's arguments with respect to the §101 rejection have been fully considered and are persuasive. The rejection has been withdrawn. (pages 7-8 of remarks)

V. REJECTIONS UNDER 35 U.S.C. § 112

Applicant's arguments with respect to the §112 rejection have been fully considered and are persuasive. The rejection has been withdrawn. (page 8 of remarks)

VI. REJECTIONS UNDER 35 U.S.C. § 103

Applicant's arguments with respect to the rejection under §103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Guim and Sukhomlinov.

VII. NEW CLAIM

Applicant argued: "Applicant has added dependent claim 22, which is supported by dependent claim 3. No new matter has been added." (pages 8-11 of remarks). In response, the new claim is rejected; see the rejection above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhang (US PUB 2012/0154375) discloses a system in which a plug-and-play manager can detect an attached accelerator (GPU) and load the attached accelerator's driver (title, abstract, and figures 1-15). Kadam (US PUB 2022/0311594) discloses a computing system having an accelerator resource manager that detects each attached accelerator and assigns requests to available resources (title, abstract, and figures 1-8). Gantman (US PUB 2015/0178032) discloses a multimedia remote display system comprising a multimedia source device to discover a remote multimedia sink device, which has a graphics processing unit (GPU) (title, abstract, and figures 1-4).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUONG N HOANG, whose telephone number is (571) 272-3763. The examiner can normally be reached 9:00-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KEVIN YOUNG, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHUONG N HOANG/
Examiner, Art Unit 2194

/KEVIN L YOUNG/
Supervisory Patent Examiner, Art Unit 2194
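
For readers less familiar with the technology at issue, the discovery and registration pattern the rejection repeatedly cites from Guim (an orchestrator holding a table of registered, remotely accessible accelerators that hosts can query over a fabric, receiving back availability plus interface information) can be illustrated with a short sketch. This is a hypothetical illustration only, not the claimed SAP and not code from any cited reference; all names and fields are invented.

```python
# Illustrative sketch only. It models the orchestrator/registration/discovery
# pattern described in the Guim passages quoted above (RHAR engine, RAAT table,
# discovery query); it is NOT the claimed SAP or any reference's implementation.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AcceleratorEntry:
    host: str              # identifier: IP address, MAC address, hostname, etc.
    kind: str              # e.g. "network", "fpga"
    interface: dict        # profile needed to drive this accelerator
    busy: bool = False


@dataclass
class Orchestrator:
    """Holds a RAAT-like table of remotely accessible accelerators."""
    table: list = field(default_factory=list)

    def register(self, entry: AcceleratorEntry) -> None:
        # Hosts register the accelerators they expose when joining the fabric.
        self.table.append(entry)

    def discover(self, kind: str) -> Optional[AcceleratorEntry]:
        # A host queries for an available accelerator of a given kind; the
        # orchestrator returns the matching entry (with interface info) or None.
        for entry in self.table:
            if entry.kind == kind and not entry.busy:
                return entry
        return None


if __name__ == "__main__":
    orch = Orchestrator()
    orch.register(AcceleratorEntry(host="10.0.0.3", kind="network",
                                   interface={"api": "native-invocation-v1"}))
    match = orch.discover("network")
    print("available accelerator:", match.host if match else "none")
```

The register/discover split mirrors the flow quoted above from Guim: hosts register the accelerators they expose, and a querying host receives back both availability and the interface profile needed to invoke the accelerator.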

Prosecution Timeline

Aug 11, 2022: Application Filed
Mar 21, 2025: Non-Final Rejection (§103)
Jul 17, 2025: Interview Requested
Aug 07, 2025: Applicant Interview (Telephonic)
Aug 07, 2025: Examiner Interview Summary
Aug 20, 2025: Response Filed
Nov 07, 2025: Final Rejection (§103)
Jan 12, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536052
SYSTEMS AND METHODS FOR DEPLOYING PERMISSIONS IN A DISTRIBUTED COMPUTING SYSTEM
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12450106
AUTOMATIC ACCESS CONTROL OF CALLS MADE OVER NAMED PIPES WITH OPTIONAL CALLING CONTEXT IMPERSONATION
Granted Oct 21, 2025 (2y 5m to grant)
Patent 12430176
CONTROLLING OPERATION OF EDGE COMPUTING NODES BASED ON KNOWLEDGE SHARING AMONG GROUPS OF THE EDGE COMPUTING NODES
Granted Sep 30, 2025 (2y 5m to grant)
Patent 12386665
METHOD FOR MANAGING RESOURCES, COMPUTING DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
Granted Aug 12, 2025 (2y 5m to grant)
Patent 12373265
TECHNOLOGIES FOR RULES ENGINES ENABLING HANDOFF CONTINUITY BETWEEN COMPUTING TERMINALS
Granted Jul 29, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 99% (+50.8%)
Median Time to Grant: 4y 4m
PTA Risk: Moderate

Based on 345 resolved cases by this examiner. Grant probability derived from career allow rate.
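
As a rough sanity check on how the "With Interview" figure could relate to the interview lift reported above, here is a minimal sketch. It assumes the lift is expressed in percentage points over the without-interview allowance rate; the tool's actual model is not stated.

```python
# Illustrative only. The 99% and +50.8% values come from the report; treating
# the lift as a simple percentage-point difference is an assumption.

with_interview = 0.99     # displayed "With Interview" grant probability
interview_lift = 0.508    # displayed interview lift

implied_without_interview = with_interview - interview_lift   # ~0.48
print(f"Implied without-interview rate: {implied_without_interview:.1%}")
```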
