Prosecution Insights
Last updated: April 19, 2026
Application No. 18/619,740

DISTRIBUTABLE AI VOICE UPSCALING

Non-Final OA (§102, §103)

Filed: Mar 28, 2024
Examiner: LE, THUYKHANH
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD.
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (above average; 307 granted / 393 resolved; +16.1% vs TC avg)
Interview Lift: +37.1% among resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 19 applications currently pending
Career History: 412 total applications across all art units
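The headline figures above are simple ratios over the examiner's resolved cases. As a quick sanity check using the counts shown (307 granted of 393 resolved), the allow rate and the implied Tech Center average work out as follows; the variable names are ours, purely illustrative:

```python
# Recompute the examiner's career allow rate from the raw counts above.
granted, resolved = 307, 393

allow_rate = granted / resolved                  # ~0.781
print(f"Career allow rate: {allow_rate:.0%}")    # -> Career allow rate: 78%

# The "+16.1% vs TC avg" delta implies a Tech Center average near 62%.
tc_avg = allow_rate - 0.161
print(f"Implied TC average: {tc_avg:.0%}")       # -> Implied TC average: 62%
```

Note that "grant probability" here is simply the career allow rate applied to this application; it is a base-rate estimate, not a case-specific prediction.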

Statute-Specific Performance

§101: 18.6% (-21.4% vs TC avg)
§103: 41.8% (+1.8% vs TC avg)
§102: 20.1% (-19.9% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 393 resolved cases.

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

2. The information disclosure statement (IDS) submitted on 03/28/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

3. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

    A person shall be entitled to a patent unless –
    (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

4. Claims 1-2, 4, 6-7, and 17-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Law et al. (US 2024/0371357 A1; provisional application No. 63/500,164, filed 05/04/2023).

With respect to Claim 1, Law et al. disclose a method comprising: receiving, over a communication channel by an electronic device of a first user, a low quality voice communication from a second user (Law et al. [0012] and Fig. 1, elements 102a-n and noise source(s) 122a-n, disclose receiving speech from a second user together with noise. Note: the second user is at audio processing apparatus 104a and the first user is at audio processing apparatus 104b; the two users are in a communication session such as a teleconference, and [0064] describes the audio processing apparatus 104a transmitting the one or more audio signal(s) 106a-n to the audio processing apparatus 104b); accessing an artificial intelligence ("AI") voice upscaling model of the second user, the AI voice upscaling model trained on a voice of the second user (Law et al. [0033]: the audio transformation model(s) 110a-n can be trained as speech-to-text models and/or text-to-speech models to regenerate the audio signal; [0038]: the one or more audio signal(s) 106a-n, audio processing data 108, the one or more speech signals, one or more frequency patterns associated with the speaking voice of the speaking entity, the one or more speech primitives, the voice vector representation, the speech vector representation, one or more regenerated speech signal(s) 114a-n, one or more portions of image data captured by the video capture device(s) 124a-n, and/or one or more portions of audio localization data can be cached and/or otherwise stored in one or more datastores to be used as training data configured for training, re-training, and/or otherwise updating the audio transformation model(s) 110a-n); using the AI voice upscaling model to improve a quality of the low quality voice communication to create a higher quality voice communication of the second user (Law et al. [0017] discloses that the regenerated speech will include only the speech information and will be recreated in the speaker's voice in a high-quality audio format; [0021]: applying a machine learning model (e.g., an audio transformation model) to generate low-bandwidth, low-noise communication with high audio fidelity (e.g., such as during a teleconference); [0029] discloses the audio transformation model(s) 110a-n regenerating speech signals in the respective speaking voice of one or more speaking entities 120a-n with higher quality); and transmitting the higher quality voice communication to a speaker connected to the electronic device of the first user (Law et al. Fig. 1 describes the regenerated speech signal(s) being output via a speaker of the first user's device).

With respect to Claim 2, Law et al. disclose wherein receiving the low quality voice communication from the second user, accessing the AI voice upscaling model of the second user, using the AI voice upscaling model to improve a quality of the low quality voice communication, and transmitting the higher quality voice communication to a speaker connected to the electronic device of the first user are performed in real-time (Law et al. [0021]: applying the machine learning models, such as an audio transformation model, to generate low-bandwidth, low-noise communications with high audio fidelity (e.g., such as during a teleconference) in real-time).

With respect to Claim 4, Law et al. disclose wherein machine learning is used to continually train the AI voice upscaling model after the training period (Law et al. [0038]: the audio transformation model(s) is/are trained, re-trained, and updated).

With respect to Claim 6, Law et al. disclose wherein the AI voice upscaling model is accessible via a connection to a cloud computing system (Law et al. [0040] and Fig. 1 disclose the audio transformation model(s) being accessible via a network).

With respect to Claim 7, Law et al. disclose wherein the communication channel is of limited bandwidth such that the low quality voice communication from the second user loses quality while being transmitted to the first user (Law et al. [0017] discloses a bandwidth-limited channel).

With respect to Claim 17, Law et al. disclose an apparatus comprising: a processor (Law et al. [0045] describes a processor); and non-transitory computer readable storage media storing code, the code being executable by the processor to perform operations (Law et al. [0045] describes a processor and a computer readable storage medium) comprising the same receiving, accessing, using, and transmitting steps mapped in the rejection of Claim 1 above (Law et al. [0012], [0017], [0021], [0029], [0033], [0038], [0064], and Fig. 1).

With respect to Claim 18, Law et al. disclose wherein the operations recited in Claim 17 are performed in real-time (Law et al. [0021]: applying the machine learning models, such as an audio transformation model, to generate low-bandwidth, low-noise communications with high audio fidelity (e.g., such as during a teleconference) in real-time).

With respect to Claim 19, Law et al. disclose wherein the AI voice upscaling model is trained on the voice of the second user via machine learning during a training period, and machine learning is used to continually train the AI voice upscaling model after the training period (Law et al. [0021]: applying the machine learning models, such as an audio transformation model, to generate low-bandwidth, low-noise communications with high audio fidelity; [0038]: the audio transformation model(s) is/are trained, re-trained, and updated).

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 3 and 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Law et al. (US 2024/0371357 A1) in view of Arora (US 2021/0352347 A1).
With respect to Claim 3, Law et al. disclose wherein the AI voice upscaling model is trained on the voice of the second user via machine learning during a training period (Law et al. [0021]: applying machine learning models such as, for example, an audio transformation model to generate low-bandwidth, low-noise communications with high audio fidelity (e.g., such as during a teleconference); [0038] and [0040]: training the audio transformation model on the voice of the speaker, so the regenerated speech signal is in the respective voice of the speaking entity), and wherein the AI voice upscaling model is uploaded to a computing device accessible to the first user. Law et al. fail to explicitly teach wherein the AI voice upscaling model is uploaded to a computing device accessible to the first user. However, Arora teaches wherein the AI voice upscaling model is uploaded to a computing device accessible to the first user (Arora [0003] describes streaming audio, video, and related content to a client device; [0006] describes sending an upscaling model to the client device). Law et al. in view of Arora are analogous art because they are from a similar field of endeavor in speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of regenerating the speech signal in the voice of the target speaker as taught by Law et al. with the teaching of sending the upscaling model to the client device as taught by Arora, for the benefit of upscaling the video stream for display by the client device (Arora [0005] describes upscaling the video stream).

With respect to Claim 10, Law et al. disclose a method comprising: training an artificial intelligence ("AI") voice upscaling model using a voice of a second user located remotely from a first user (Law et al. [0038]: the audio signal(s) 106a-n, audio processing data 108, speech signals, frequency patterns associated with the speaking voice of the speaking entity, speech primitives, voice and speech vector representations, regenerated speech signal(s) 114a-n, image data, and/or audio localization data can be cached and/or otherwise stored in one or more datastores to be used as training data for training, re-training, and/or otherwise updating the audio transformation model(s) 110a-n; [0012] discloses a teleconference between two or more participants (e.g., a voice or video call)); initiating a voice communication between an electronic device of the second user and an electronic device of the first user over a communication channel (Law et al. [0040] discloses transmitting the audio processing data over the network to the first user's device; the first user's device (e.g., audio processing apparatus 104b) regenerates the speech signals in the respective voice of the second user), wherein the electronic device of the first user uses the AI voice upscaling model to create a higher quality voice communication of the second user prior to transmitting the higher quality voice communication to the first user (Law et al. [0017] discloses that the regenerated speech will include only the speech information and will be recreated in the speaker's voice in a high-quality audio format; [0021]: applying a machine learning model (e.g., an audio transformation model) to generate low-bandwidth, low-noise communication with high audio fidelity (e.g., such as during a teleconference); [0029] discloses the audio transformation model(s) 110a-n regenerating speech signals in the respective speaking voice of one or more speaking entities 120a-n with higher quality; [0042] discloses that a single audio processing apparatus 104a can be configured to perform all of the methods described herein in order to regenerate one or more portions of speech signals associated with one or more speaking entities 120a-n). Law et al. disclose using the audio transformation model trained on the voice of the second user to improve the quality of the audio signal, and disclose the audio transformation model being integrated with the audio processing apparatus (Law et al. Fig. 1). Law et al. fail to explicitly teach uploading the upscaling model to a computing device accessible to the first user. However, Arora teaches uploading the AI voice upscaling model to a computing device accessible to the first user (Arora [0006] describes sending an upscaling model to the client device). Law et al. in view of Arora are analogous art because they are from a similar field of endeavor in speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of regenerating the speech signal in the voice of the target speaker as taught by Law et al. with the teaching of sending the upscaling model to the client device as taught by Arora, for the benefit of upscaling the video stream for display by the client device (Arora [0005] describes upscaling the video stream).

With respect to Claim 11, Law et al. in view of Arora teach wherein, during the voice communication, the electronic device of the first user accesses the AI voice upscaling model of the second user, uses the AI voice upscaling model to create the higher quality voice communication, and transmits the higher quality voice communication to a speaker connected to the electronic device of the first user in real time (Law et al. Fig. 1, right side: the electronic device of the first user accesses the audio transformation model(s) 110a-n to regenerate speech signal(s) from the second user and outputs the regenerated speech signal(s) via a speaker).

With respect to Claim 12, Law et al. in view of Arora teach wherein the AI voice upscaling model is trained on the voice of the second user via machine learning during a training period (Law et al. [0029]: the audio transformation model(s) 110a-n can be artificial intelligence (AI) models trained to perform one or more AI techniques and/or one or more machine learning (ML) techniques for regenerating speech signals in the respective speaking voice of one or more speaking entities; [0038] describes using the training data to train the model).

With respect to Claim 13, Law et al. in view of Arora teach wherein machine learning is used to continually train the AI voice upscaling model on the voice of the second user after the training period (Law et al. [0038] describes the audio transformation model(s) being trained, re-trained, and updated).

7. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Law et al. (US 2024/0371357 A1) in view of Dai et al. (US 2024/0274120 A1).

With respect to Claim 8, Law et al. disclose further comprising: training an AI voice upscaling model on the voice of the first user (Law et al. [0038] describes the audio transformation model being trained on the voice of the teleconference participant). Law et al. train the model on the voice of the speaker but fail to explicitly teach sending the model to a cloud computing system. However, Dai et al. teach uploading the AI voice upscaling model trained on the voice of the first user to a cloud computing system (Dai et al. [0038] describes training the synthesis model with the target timbre; [0028] describes sending the trained speech synthesis model to a terminal device/server device). Law et al. in view of Dai et al. are analogous art because they are from a similar field of endeavor in speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of regenerating the speech signal in the voice of the target speaker as taught by Law et al. with the teaching of sending the trained synthesis model to the server as taught by Dai et al., for the benefit of performing the speech synthesis service on the terminal device/server device according to the speech synthesis model (Dai et al. [0038], [0028]).

8. Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Law et al. (US 2024/0371357 A1) in view of Arora (US 2021/0352347 A1) and Dai et al. (US 2024/0274120 A1).

With respect to Claim 15, Law et al. in view of Arora teach all the limitations of Claim 10, upon which Claim 15 depends. Law et al. in view of Arora teach the audio transformation model regenerating the audio signal in the voice of the target speaker, but fail to explicitly teach transmitting the model to a cloud computing system. However, Dai et al. teach wherein uploading the AI voice upscaling model to a computing device accessible to the first user comprises uploading the AI voice upscaling model to a cloud computing system (Dai et al. [0038] describes training the synthesis model with the target timbre; [0028] describes sending the trained speech synthesis model to a terminal device/server device). Law et al. in view of Arora and Dai et al. are analogous art because they are from a similar field of endeavor in speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of regenerating the speech signal in the voice of the target speaker as taught by Law et al. with the teaching of sending the upscaling model to the client device as taught by Arora, for the benefit of upscaling the video stream for display by the client device, and with the teaching of sending the trained synthesis model to the server as taught by Dai et al., for the benefit of performing the speech synthesis service on the terminal device/server device according to the speech synthesis model (Dai et al. [0038], [0028]).

With respect to Claim 16, Law et al. in view of Arora teach further comprising: training an AI voice upscaling model on the voice of the first user (Law et al. [0038] describes training data that includes the speech signals from the participants of the teleconference). Law et al. fail to explicitly teach sending the model to a cloud computing system. However, Dai et al. teach uploading the AI voice upscaling model trained on the voice of the first user to a cloud computing system (Dai et al. [0038] describes training the synthesis model with the target timbre; [0028] describes sending the trained speech synthesis model to a terminal device/server device). Law et al. in view of Arora and Dai et al. are analogous art because they are from a similar field of endeavor in speech processing techniques and applications.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the steps of regenerating the speech signal in the voice of the target speaker as taught by Law et al. with the teaching of sending the upscaling model to the client device as taught by Arora, for the benefit of upscaling the video stream for display by the client device, and with the teaching of sending the trained synthesis model to the server as taught by Dai et al., for the benefit of performing the speech synthesis service on the terminal device/server device according to the speech synthesis model (Dai et al. [0038] describes training the synthesis model with the target timbre; [0028] describes sending the trained speech synthesis model to a terminal device/server device).

Allowable Subject Matter

9. Claims 5, 9, 14, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art, taken alone or in combination, fails to teach the following elements in combination with the other recited elements in the claims:

"wherein training the AI voice upscaling model and uploading the AI voice upscaling model occur simultaneously," as recited in Claim 5.

"wherein training the AI voice upscaling model on the voice of the second user and uploading the AI voice upscaling model occur simultaneously," as recited in Claim 14.
"wherein accessing the AI voice upscaling model includes downloading the AI voice upscaling model from a cloud computing system and storing the AI voice upscaling model locally on one of the electronic device of the first user and a local electronic device accessible to the electronic device of the first user prior to receiving the low quality voice communication from the second user," as recited in Claims 9 and 20.

The closest prior art found is as follows.

a. Law et al. (US 2024/0371357 A1). Law et al. disclose the audio transformation model regenerating the speech signal in the voice of the target speaker (Law et al. [0017] discloses that the regenerated speech will include only the speech information and will be recreated in the speaker's voice in a high-quality audio format; [0021]: applying a machine learning model (e.g., an audio transformation model) to generate low-bandwidth, low-noise communication with high audio fidelity (e.g., such as during a teleconference); [0029] discloses the audio transformation model(s) 110a-n regenerating speech signals in the respective speaking voice of one or more speaking entities 120a-n with higher quality; [0038]: the audio signal(s), speech signals, frequency patterns, speech primitives, voice and speech vector representations, regenerated speech signal(s), image data, and/or audio localization data can be cached and/or otherwise stored in one or more datastores to be used as training data for training, re-training, and/or otherwise updating the audio transformation model(s) 110a-n). Law et al. disclose training the audio transformation model. However, Law et al. do not teach or disclose training and uploading/downloading/transmitting/sending the model simultaneously, as recited in Claims 5 and 14, and do not teach uploading/downloading/transmitting/sending the model as recited in Claims 9 and 20.

b. Arora (US 2021/0352347 A1). Arora discloses training and transmitting the upscaling model (Arora [0003] describes streaming audio, video, and related content to a client device; [0006] describes sending an upscaling model to the client device; [0055] describes training the upscaling model). Arora discloses training and transmitting the upscaling model for upscaling the video content. However, Arora does not teach or suggest that training and transmitting the upscaling model occur simultaneously, as recited in Claims 5 and 14. Arora discloses sending the downscaled video content as a video stream along with the corresponding upscaling model to a client device; Arora does not teach or suggest sending the upscaling model prior to sending the video content, as recited in Claims 9 and 20.

c. Dai et al. (US 2024/0274120 A1). Dai et al. disclose training and sending the synthesis model to the terminal device/server device (Dai et al. [0038] describes training the synthesis model with the target timbre; [0028] describes sending the trained speech synthesis model to a terminal device/server device). However, Dai et al. do not teach or suggest that training and transmitting the synthesis model occur simultaneously, as recited in Claims 5 and 14. Dai et al. disclose sending the synthesis model to the terminal device/server device; however, Dai et al. do not teach or suggest sending the synthesis model to one or more electronic devices and storing the synthesis model locally on one of the electronic device of the user and a local electronic device accessible to the electronic device of the first user prior to receiving the speech signal from the second user, as claimed in Claims 9 and 20.

Conclusion

10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

a. Shamir et al. (US 2025/0292098 A1) disclose processing speech data to generate an upscaled speech output (e.g., speech data that is higher quality than the input speech data).

b. Calle et al. (US 2018/0358003 A1) disclose recovering speech quality by regenerating the speech signal to eliminate room echo and channel problems.

c. Chen (US 2014/0088968 A1) discloses modifying speech segments and regenerating an output speech with high quality.

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to THUYKHANH LE, whose telephone number is (571) 272-6429. The examiner can normally be reached Mon-Fri, 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew C. Flanders, can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THUYKHANH LE/
Primary Examiner, Art Unit 2655
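For orientation, the four steps recited in independent Claim 1 (receive the low-quality communication, access the speaker's model, upscale, output to the listener's speaker) can be sketched in Python. Every name below (VoiceUpscaler, handle_incoming_audio, the model store) is a hypothetical illustration; neither the application nor the cited references disclose an implementation at this level:

```python
# Hypothetical sketch of the pipeline recited in Claim 1.
# All identifiers are illustrative, not from the application or prior art.
from dataclasses import dataclass


@dataclass
class VoiceUpscaler:
    """Stands in for a per-speaker AI voice upscaling model."""
    speaker_id: str

    def upscale(self, samples: list[float]) -> list[float]:
        # Placeholder "quality improvement": a real model would regenerate
        # the speech in this speaker's voice at higher fidelity.
        return [round(s, 4) for s in samples]


def handle_incoming_audio(speaker_id: str, low_quality: list[float],
                          model_store: dict[str, VoiceUpscaler]) -> list[float]:
    # Step 1: the low-quality voice communication arrives (`low_quality`).
    # Step 2: access the AI voice upscaling model of the second user.
    model = model_store[speaker_id]
    # Step 3: use the model to create a higher-quality communication.
    improved = model.upscale(low_quality)
    # Step 4: "transmit" to the first user's speaker (here, just return it).
    return improved


store = {"alice": VoiceUpscaler("alice")}
out = handle_incoming_audio("alice", [0.11119, -0.22221], store)
print(out)  # -> [0.1112, -0.2222]
```

The dependent claims then vary where this model lives (cloud vs. local download, Claims 6 and 9/20) and when it is trained (continual re-training, Claims 4 and 13), which is exactly the axis the examiner probes with Arora and Dai.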

Prosecution Timeline

Mar 28, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597413: ELECTRONIC DEVICE AND CONTROL METHOD THEREOF
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12592218: COMMUNICATION DEVICE, COMMUNICATION METHOD, AND NON-TRANSITORY STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12592239: ACTIVE VOICE LIVENESS DETECTION SYSTEM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586577: AUTOMATIC SPEECH RECOGNITION USING MULTIPLE LANGUAGE MODELS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579365: INFORMATION ACQUISITION METHOD AND APPARATUS, DEVICE, AND MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+37.1%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 393 resolved cases by this examiner. Grant probability derived from career allow rate.
