Prosecution Insights
Last updated: April 19, 2026
Application No. 18/431,412

POINT CLOUD PROCESSING METHOD, DECODER AND STORAGE MEDIUM

Non-Final OA: §101, §102, §103
Filed: Feb 02, 2024
Examiner: CHEN, HUO LONG
Art Unit: 2682
Tech Center: 2600 — Communications
Assignee: Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 53% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 53% (314 granted / 590 resolved; -8.8% vs TC avg)
Interview Lift: strong, +30.3% on resolved cases with an interview
Typical Timeline: 3y 2m avg prosecution; 37 currently pending
Career History: 627 total applications across all art units

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 64.3% (+24.3% vs TC avg)
§102: 12.5% (-27.5% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 590 resolved cases.

Office Action

Grounds: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception in the form of an abstract idea, without significantly more.

Beginning with independent claim 1, a process claim, which recites: A point cloud processing method based on a generative adversarial network (GAN), wherein the GAN comprises a generator, wherein the generator comprises a feature extraction module, a coarse feature expansion module, a geometry generation module and a geometry offset prediction module, and the method comprises: inputting a first point cloud into the feature extraction module to obtain a first feature of a point of the first point cloud; inputting the first feature into the coarse feature expansion module to perform up-sampling on the first feature to obtain a second feature; inputting the second feature into the geometry generation module to map the second feature from a feature space to a geometry space to obtain a second point cloud; inputting the second feature and the second point cloud into the geometry offset prediction module to determine an offset value of a point in the second point cloud in the geometry space; and obtaining a third point cloud based on the second point cloud and the offset value.

The claim recites abstract ideas: the generative adversarial network (GAN) is used to generally apply the abstract idea without limiting how the trained generative adversarial network functions.
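For orientation, the generator pipeline recited in claim 1 (feature extraction, coarse feature expansion, geometry generation, offset prediction) can be sketched at shape level. This is purely illustrative: every function body below is a placeholder assumption standing in for a learned network, not the applicant's or any cited reference's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_extraction(points):
    # (N, 3) point cloud -> (N, C) per-point "first feature"
    return np.tanh(points @ rng.standard_normal((3, 8)))

def coarse_feature_expansion(feats, r=4):
    # Up-sample features r-fold: duplicate r copies of each feature and
    # attach a regular 2D grid code (the common expansion operation).
    side = int(np.sqrt(r))
    assert side * side == r, "sketch assumes r is a perfect square"
    g = np.linspace(-1.0, 1.0, side)
    grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(r, 2)
    dup = np.repeat(feats, r, axis=0)          # (r*N, C)
    codes = np.tile(grid, (feats.shape[0], 1))  # one grid code per copy
    return np.concatenate([dup, codes], axis=1)  # (r*N, C+2) "second feature"

def geometry_generation(feats):
    # Map from feature space to geometry space: the "second point cloud".
    return feats @ rng.standard_normal((feats.shape[1], 3))

def geometry_offset_prediction(feats, coarse_cloud):
    # Takes both the second feature and the second point cloud, as recited,
    # and regresses a small per-point offset in geometry space.
    joint = np.concatenate([feats, coarse_cloud], axis=1)
    return 0.1 * np.tanh(joint @ rng.standard_normal((joint.shape[1], 3)))

first_cloud = rng.standard_normal((16, 3))      # first point cloud
f1 = feature_extraction(first_cloud)            # first feature
f2 = coarse_feature_expansion(f1, r=4)          # second feature (up-sampled)
second_cloud = geometry_generation(f2)          # second point cloud
offset = geometry_offset_prediction(f2, second_cloud)
third_cloud = second_cloud + offset             # third point cloud
print(third_cloud.shape)                        # (64, 3): 4x up-sampled
```

The point of the sketch is only the data flow and shapes: 16 input points become 64 output points, with the final cloud obtained as coarse coordinates plus a predicted residual offset.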
The generative adversarial network is described at a high level such that it amounts to using a computer with a generic generative adversarial network to apply the abstract idea. These limitations only recite the outcomes of “obtaining a first feature of a point of the first point cloud”, “performing up-sampling on the first feature to obtain a second feature”, “mapping the second feature from a feature space to a geometry space to obtain a second point cloud”, “determining an offset value of a point in the second point cloud in the geometry space” and “obtaining a third point cloud based on the second point cloud and the offset value”, without any details about how the outcomes are accomplished. The “determining” step falls within the “mental processes” grouping of abstract ideas.

Beginning with independent claim 11, a decoder claim, which recites: A decoder, comprising a feature extraction module, a coarse feature expansion module, a geometry generation module, a geometry offset prediction module and a processing module, wherein the feature extraction module is configured to perform feature extraction on a first point cloud input into the feature extraction module, to obtain a first feature of a point of the first point cloud; the coarse feature expansion module is configured to up-sample the first feature input into the coarse feature expansion module, to obtain a second feature; the geometry generation module is configured to map the second feature input into the geometry generation module from a feature space to a geometry space, to obtain a second point cloud; the geometry offset prediction module is configured to determine an offset value of a point in the second point cloud in the geometry space after the second feature and the second point cloud are input into the geometry offset prediction module; and the processing module is configured to obtain a third point cloud based on the second point cloud and the offset value.
The claim recites abstract ideas: “obtaining a first feature of a point of the first point cloud”, “performing up-sampling on the first feature to obtain a second feature”, “mapping the second feature from a feature space to a geometry space to obtain a second point cloud”, “determining an offset value of a point in the second point cloud in the geometry space” and “obtaining a third point cloud based on the second point cloud and the offset value”, recited as being performed by a decoder (computer). The decoder (computer) is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer. The “determining” step falls within the “mental processes” grouping of abstract ideas.

This judicial exception is not integrated into a practical application because there are no recited additional elements that amount to a practical application, such as but not limited to the examples noted in MPEP 2106. [Chart from MPEP 2106 omitted.]

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception for the same reason: there are no additional elements other than the abstract idea. Independent claims 1 and 11 are merely generic computer implementations of the abstract ideas and likewise do not amount to significantly more. See MPEP 2106. [Chart from MPEP 2106 omitted.]

Likewise, the following dependent claims have been analyzed and do not recite elements that recite a practical application or significantly more, and remain rejected under 35 USC 101: claims 2-10 and 12-20.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. A computer program is merely a set of instructions capable of being implemented by a computer; by itself, without being encoded on a non-transitory computer-readable medium, it is not realizable. Hence, claim 20 contains merely non-statutory functional descriptive material. See MPEP 2106, IV(B)(1)(a), last paragraph.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a feature extraction module” in claim 1, “a coarse feature expansion unit” in claim 1, “a geometry generation module” in claim 1, “a geometry offset module” in claim 1, and “processing module” in claim 1.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. “A feature extraction module”, “a coarse feature expansion unit”, “a geometry generation module”, “a geometry offset module” and “processing module” in claim 1 are each read as the processor 1020 shown in Fig. 10.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid such interpretation.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 11-13, 15 and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Point Cloud Upsampling via Disentangled Refinement.

With respect to claim 11, Point Cloud Upsampling via Disentangled Refinement teaches a decoder [regarding the system shown in Fig. 2], comprising a feature extraction module (Fig. 2), a coarse feature expansion module [regarding the feature expansion shown in Fig. 2], a geometry generation module [regarding the Coordinate Regression shown in Fig. 2], a geometry offset prediction module [regarding the Spatial Refiner shown in Fig. 2] and a processing module [a processing module is inherently disclosed in the system shown in Fig. 2 to control the Dense Generator and the Spatial Refiner to perform their desired functions], wherein the feature extraction module is configured to perform feature extraction on a first point cloud input into the feature extraction module, to obtain a first feature of a point of the first point cloud [a feature extraction unit to embed the feature map FP ∈ R^(N×C) from P (Fig. 2 and section 3.2)]; the coarse feature expansion module is configured to up-sample the first feature input into the coarse feature expansion module, to obtain a second feature [feeding FP into a feature expansion unit to generate the expanded feature map FE ∈ R^(rN×C) (Fig. 2 and section 3.2)]; the geometry generation module is configured to map the second feature input into the geometry generation module from a feature space to a geometry space, to obtain a second point cloud [Q′ is generated by regressing the point coordinates from FE via multi-layer perceptrons (MLPs) (section 3.2)]; the geometry offset prediction module is configured to determine an offset value of a point in the second point cloud in the geometry space after the second feature and the second point cloud are input into the geometry offset prediction module; and the processing module is configured to obtain a third point cloud based on the second point cloud and the offset value [considering that Q′ may still be noisy and non-uniform, a spatial refiner is designed to further fine-tune the spatial location of each point in Q′ and generate a high-quality dense point set Q, adopting residual learning to regress the per-point offset ∆Q (offset value of a point) (section 3.3)].

With respect to claim 12, which further limits claim 11, Point Cloud Upsampling via Disentangled Refinement teaches wherein the geometry offset prediction module (Fig. 2, Spatial Refiner) is specifically configured to: determine k nearest neighbor points of the point in the second point cloud in the geometry space, where k is a positive integer [employ KNN grouping on Q′ to search K-nearest neighbors and group the associated neighbor points together to obtain a stacked rN × K × 3 point volume (section 3.3)]; determine k nearest neighbor points of the second feature in the feature space (section 3.3); concatenate, in a feature dimension, the second feature and the k nearest neighbor points of the second feature in the feature space, to obtain a third feature (section 3.3); concatenate, in a geometry dimension, the point in the second point cloud and the k nearest neighbor points of the point in the second point cloud in the geometry space, to obtain a fourth feature (section 3.3); and determine, based on the third feature and the fourth feature, the offset value of the point in the second point cloud in the geometry space (section 3.3).
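The claim-12 refinement steps just mapped (KNN grouping in geometry space, neighborhood concatenation in feature and geometry dimensions, offset regression) can be sketched as follows. This is a shape-level illustration only; the final linear "regressor" is a random placeholder for what would be a learned network in either the application or the cited reference.

```python
import numpy as np

rng = np.random.default_rng(1)

def knn_indices(points, k):
    # Pairwise squared distances, then each point's k nearest neighbors
    # (excluding the point itself).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]                   # (N, k)

def predict_offsets(cloud, feats, k=4):
    idx = knn_indices(cloud, k)
    geo_group = cloud[idx].reshape(len(cloud), -1)         # (N, k*3) neighbor coords
    feat_group = feats[idx].reshape(len(cloud), -1)        # (N, k*C) neighbor features
    third = np.concatenate([feats, feat_group], axis=1)    # feature-dim concat: "third feature"
    fourth = np.concatenate([cloud, geo_group], axis=1)    # geometry-dim concat: "fourth feature"
    fused = np.concatenate([third, fourth], axis=1)        # combined input to the regressor
    w = 0.01 * rng.standard_normal((fused.shape[1], 3))    # placeholder weights
    return np.tanh(fused @ w)                              # (N, 3) per-point offsets

cloud = rng.standard_normal((32, 3))    # coarse "second point cloud"
feats = rng.standard_normal((32, 8))    # "second feature"
offsets = predict_offsets(cloud, feats)
refined = cloud + offsets               # refined cloud after offset correction
print(offsets.shape)                    # (32, 3)
```

Note how each point's offset depends on both its local geometric neighborhood and its local feature neighborhood, which is the structural point of the claim-12 limitations.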
With respect to claim 13, which further limits claim 12, Point Cloud Upsampling via Disentangled Refinement teaches wherein the geometry offset prediction module is specifically configured to: concatenate, in the feature dimension, the third feature and the fourth feature to obtain a fifth feature (section 3.3); and input the fifth feature as a query value Q and a key value K into a first autocorrelation attention mechanism, and input the fourth feature as a value vector V into the first autocorrelation attention mechanism, to obtain the offset value of the point in the second point cloud in the geometry space (section 3.3).

With respect to claim 15, which further limits claim 11, Point Cloud Upsampling via Disentangled Refinement teaches wherein the coarse feature expansion module is specifically configured to: attach, in a feature dimension, an m-dimensional vector to the first feature, to obtain a sixth feature, where m is a positive integer (section 3.2); and input the sixth feature into a first multilayer perceptron (MLP), and input a result output by the first MLP into a second autocorrelation attention mechanism, to obtain the second feature (section 3.2).

With respect to claim 16, which further limits claim 15, Point Cloud Upsampling via Disentangled Refinement teaches wherein vector values of the m-dimensional vector are equally spaced [in the feature expansion unit, the commonly-used expansion operation is adopted by duplicating FP with r copies and concatenating with a regular 2D grid to obtain FE (section 3.2); therefore, vector values of the m-dimensional vector are equally spaced during the expansion operation].

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Point Cloud Upsampling via Disentangled Refinement, and further in view of PU-GAN: a Point Cloud Upsampling Adversarial Network.

Claims 1-3, 5 and 6 are method claims and are analyzed and rejected for the same reasons set forth in the rejection of claims 11-13, 15 and 16. Point Cloud Upsampling via Disentangled Refinement does not teach a point cloud processing method based on a generative adversarial network (GAN). PU-GAN: a Point Cloud Upsampling Adversarial Network teaches a point cloud processing method based on a generative adversarial network (GAN) (abstract).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Point Cloud Upsampling via Disentangled Refinement according to the teaching of PU-GAN: a Point Cloud Upsampling Adversarial Network to use the generative adversarial network (GAN) to output the point cloud data, because this would allow the point cloud data to be obtained more effectively.

With respect to claim 9, which further limits claim 1, Point Cloud Upsampling via Disentangled Refinement does not teach wherein the GAN further comprises a discriminator, and the method further comprises: inputting the third point cloud into the discriminator to obtain a confidence value of a point in the third point cloud, wherein the confidence value indicates whether the third point cloud is ground truth. PU-GAN: a Point Cloud Upsampling Adversarial Network teaches wherein the GAN further comprises a discriminator (section 3.2.2), and the method further comprises: inputting the third point cloud into the discriminator to obtain a confidence value of a point in the third point cloud, wherein the confidence value indicates whether the third point cloud is ground truth (sections 3.2.2, 4.2 and 4.3). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Point Cloud Upsampling via Disentangled Refinement according to the teaching of PU-GAN: a Point Cloud Upsampling Adversarial Network to use the generative adversarial network (GAN) to output the point cloud data, because this would allow the point cloud data to be obtained more effectively.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Point Cloud Upsampling via Disentangled Refinement, PU-GAN: a Point Cloud Upsampling Adversarial Network, and further in view of BIN’541 (CN 111680541).
With respect to claim 4, which further limits claim 3, Point Cloud Upsampling via Disentangled Refinement does not teach wherein the first autocorrelation attention mechanism is a multi-head autocorrelation attention mechanism. BIN’541 teaches wherein the first autocorrelation attention mechanism is a multi-head autocorrelation attention mechanism (Fig. 2). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Point Cloud Upsampling via Disentangled Refinement and PU-GAN: a Point Cloud Upsampling Adversarial Network according to the teaching of BIN’541 to use a multi-head autocorrelation attention mechanism, because this will allow a model to focus on different parts of the input simultaneously, leading to a richer, more nuanced understanding and better performance on complex tasks.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Point Cloud Upsampling via Disentangled Refinement, PU-GAN: a Point Cloud Upsampling Adversarial Network, and further in view of Xiong’845 (CN 202110151845).

With respect to claim 7, which further limits claim 5, Point Cloud Upsampling via Disentangled Refinement does not teach wherein the m-dimensional vector is a 2-dimensional vector. Xiong’845 teaches wherein the m-dimensional vector is a 2-dimensional vector [the geometric modelling target is a 2-dimensional vector (page 13)]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Point Cloud Upsampling via Disentangled Refinement and PU-GAN: a Point Cloud Upsampling Adversarial Network according to the teaching of Xiong’845 to have the 2-dimensional vector as the geometric modelling target, because geometric modeling in two dimensions (2D) remains a vital, efficient, and foundational approach in engineering.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Point Cloud Upsampling via Disentangled Refinement, PU-GAN: a Point Cloud Upsampling Adversarial Network, and further in view of Cai’654 (US 2022/0335654).

With respect to claim 8, which further limits claim 1, Point Cloud Upsampling via Disentangled Refinement does not teach wherein the geometry generation module is specifically configured to: input the second feature into a first full connection (FC) layer to obtain the second point cloud. Cai’654 teaches wherein the geometry generation module is specifically configured to: input the second feature into a first full connection (FC) layer to obtain the second point cloud (paragraphs 146 and 147). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Point Cloud Upsampling via Disentangled Refinement and PU-GAN: a Point Cloud Upsampling Adversarial Network according to the teaching of Cai’654 to output the point cloud data through the full connection (FC) layer, because this would allow the point cloud data to be obtained more effectively.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Point Cloud Upsampling via Disentangled Refinement, and further in view of BIN’541 (CN 111680541).

With respect to claim 14, which further limits claim 13, Point Cloud Upsampling via Disentangled Refinement does not teach wherein the first autocorrelation attention mechanism is a multi-head autocorrelation attention mechanism. BIN’541 teaches wherein the first autocorrelation attention mechanism is a multi-head autocorrelation attention mechanism (Fig. 2).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Point Cloud Upsampling via Disentangled Refinement according to the teaching of BIN’541 to use a multi-head autocorrelation attention mechanism, because this will allow a model to focus on different parts of the input simultaneously, leading to a richer, more nuanced understanding and better performance on complex tasks.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Point Cloud Upsampling via Disentangled Refinement, and further in view of Xiong’845 (CN 202110151845).

With respect to claim 17, which further limits claim 15, Point Cloud Upsampling via Disentangled Refinement does not teach wherein the m-dimensional vector is a 2-dimensional vector. Xiong’845 teaches wherein the m-dimensional vector is a 2-dimensional vector [the geometric modelling target is a 2-dimensional vector (page 13)]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Point Cloud Upsampling via Disentangled Refinement according to the teaching of Xiong’845 to have the 2-dimensional vector as the geometric modelling target, because geometric modeling in two dimensions (2D) remains a vital, efficient, and foundational approach in engineering.

Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Point Cloud Upsampling via Disentangled Refinement, and further in view of Cai’654 (US 2022/0335654).

With respect to claim 18, which further limits claim 11, Point Cloud Upsampling via Disentangled Refinement does not teach wherein the geometry generation module is specifically configured to: input the second feature into a first full connection (FC) layer to obtain the second point cloud.
Cai’654 teaches wherein the geometry generation module is specifically configured to: input the second feature into a first full connection (FC) layer to obtain the second point cloud (paragraphs 146 and 147). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Point Cloud Upsampling via Disentangled Refinement according to the teaching of Cai’654 to output the point cloud data through the full connection (FC) layer, because this would allow the point cloud data to be obtained more effectively.

With respect to claim 19, which further limits claim 11, Point Cloud Upsampling via Disentangled Refinement does not teach wherein the first point cloud is a point cloud obtained by a decoder through a decoding process. Cai’654 teaches wherein the first point cloud is a point cloud obtained by a decoder through a decoding process (Fig. 7). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Point Cloud Upsampling via Disentangled Refinement according to the teaching of Cai’654 to obtain the first point cloud through a decoding process, because this would allow the point cloud data to be obtained more effectively.
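For orientation on the discriminator path discussed in the claim 9 rejection above (and detailed further in objected claim 10: per-point MLP features, a max-pooled global vector, concatenation, and an FC head producing a per-point confidence), a shape-level mock-up follows. All weights are random placeholders, not a trained network from the application or from PU-GAN.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_confidence(cloud):
    # Two per-point "MLP" layers (placeholder linear + tanh stand-ins).
    h = np.tanh(cloud @ rng.standard_normal((3, 16)))
    h = np.tanh(h @ rng.standard_normal((16, 16)))
    # Max-pool over points to get a global feature vector, then concatenate
    # it back onto every per-point feature.
    g = h.max(axis=0)                                   # (16,)
    fused = np.concatenate([h, np.broadcast_to(g, h.shape)], axis=1)  # (N, 32)
    # FC head -> per-point confidence in (0, 1), indicating how "real"
    # (ground-truth-like) each point of the generated cloud looks.
    return sigmoid(fused @ rng.standard_normal((32, 1))).ravel()

third_cloud = rng.standard_normal((64, 3))   # generated "third point cloud"
conf = discriminator_confidence(third_cloud)
print(conf.shape)                            # (64,)
```

The max-pool-then-concatenate step is the structurally interesting part: it lets each point's confidence depend on the cloud as a whole, not just on that point.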
Claim Objections

Claim 10 is objected to as being dependent upon rejected base claim 1, because the prior art of record does not teach “wherein inputting the third point cloud into the discriminator to obtain the confidence value of the point in the third point cloud comprises: inputting the third point cloud into at least two consecutive second multilayer perceptron (MLP) layers; performing a maximum pool operation on a feature output by a last second MLP layer in the at least two consecutive second MLP layers to obtain a global feature vector of a third feature; concatenating the global feature vector and features output by the at least two consecutive second MLP layers to determine a concatenated feature; and inputting the concatenated feature into a third autocorrelation attention mechanism, and inputting a result output by the third autocorrelation attention mechanism into a second full connection (FC) layer to obtain the confidence value.” Claim 10 would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the 35 U.S.C. 101 rejection is overcome.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUO LONG CHEN, whose telephone number is (571) 270-3759. The examiner can normally be reached M-F, 9am - 5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Tieu, can be reached at (571) 272-7490. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUO LONG CHEN/
Primary Examiner, Art Unit 2682

Prosecution Timeline

Feb 02, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603178: APPARATUS AND METHODS FOR SUPPORTING MEDICAL DECISIONS (2y 5m to grant; granted Apr 14, 2026)
Patent 12597162: SYSTEM CALIBRATION USING REMOTE SENSOR DATA (2y 5m to grant; granted Apr 07, 2026)
Patent 12592095: METHOD AND SYSTEM OF DETERMINING SHAPE OF A TABLE IN A DOCUMENT (2y 5m to grant; granted Mar 31, 2026)
Patent 12586398: Detecting a Homoglyph in a String of Characters (2y 5m to grant; granted Mar 24, 2026)
Patent 12567271: PICTURE RECOGNITION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND MEDIUM (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 53%
With Interview: 84% (+30.3%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 590 resolved cases by this examiner. Grant probability derived from career allow rate.
