Prosecution Insights
Last updated: April 19, 2026
Application No. 18/793,553

KEYPOINT DETECTION METHOD, TRAINING METHOD, APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT

Non-Final OA (§102, §103)
Filed: Aug 02, 2024
Examiner: TAHA, AHMED
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (5 granted / 8 resolved; +0.5% vs TC avg)
Interview Lift: +75.0% among resolved cases with an interview
Avg Prosecution: 2y 5m typical timeline; 35 applications currently pending
Total Applications: 43, across all art units

Statute-Specific Performance

§101: 6.5% (-33.5% vs TC avg)
§103: 59.8% (+19.8% vs TC avg)
§102: 29.9% (-10.1% vs TC avg)
§112: 3.8% (-36.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 8 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3, 5, 9, 11, 13, 17, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bronstein et al. (U.S. Patent Publication No. 2021/0350620).
Regarding claim 1, Bronstein discloses a keypoint detection method performed by an electronic device, the method comprising (interpreted as a computer-implemented process that detects keypoints) [Bronstein: 0280 "and detection tasks"]:

obtaining a three-dimensional mesh configured for representing a target object [Bronstein: 0011 "a triangular mesh"] (teaches geometric structure data that includes a triangular mesh, which is a 3D mesh);

performing feature extraction on vertices of the three-dimensional mesh, to obtain a vertex feature of the three-dimensional mesh [Bronstein: 0011 "a triangular mesh"] [Bronstein: 0018 "extracting features associated with each of said data points"] (teaches that the data points are the mesh/graph vertices, and further teaches that the method extracts features associated with each point/vertex);

performing global feature extraction on the target object based on the vertex feature, to obtain a global feature of the target object, and performing local feature extraction on the target object based on the vertex feature and connection relationship between the vertices, to obtain a local feature of the target object (interpreted as computing two feature types: a global feature summarizing the whole object, derived from the vertex features, and a local feature capturing neighborhood/adjacency relationships, derived from the vertex features plus the vertex connectivity) [Bronstein: 0061 "The hierarchical features of the first set of geometric domain data may be extracted from a third set of geometric domain data"] [Bronstein: 0290 "The composition of locally supported filters can therefore yield globally supported mappings"] (teaches feature extraction at the global level); [Bronstein: 0018 "extracting features associated with each of said data points; applying a set of weights to the extracted features using the consistent local ordering of data points to compute a new set of output features"] [Bronstein: 0045 "Using the connectivity of the geometric domain data (e.g. a graph) with a filter comprising a consistent local ordering of vertices (e.g. a spiral convolutional filter), may allow for local processing of each geometric domain data"] (teaches feature extraction at the local level); [Bronstein: 0016 "selecting a second point adjacent to said first point"] (teaches adjacent points, which encode a connection relationship among vertices; using adjacent/local neighbors for convolution means the extracted features incorporate neighborhood connectivity, which corresponds to local feature extraction); [Bronstein: 0055 "At least one of the down-sampling layers may be a pooling layer"] (teaches pooling/down-sampling over the mesh/graph hierarchy; pooling aggregates information across multiple vertices and therefore produces a higher-level representation, i.e., a global feature derived from per-vertex features); and

obtaining a position of a keypoint of the target object on the target object based on the vertex feature, the global feature, and the local feature (interpreted as producing the keypoint location on the object using the combined learned features: per-vertex, global context, and local neighborhood context) [Bronstein: 0128 "we report performance using pseudo-coordinates computed from the vertex positions"] [Bronstein: 0327 "MoNet using pseudo-coordinates computed from relative Cartesian coordinates which considering vertex positions as well as globally normalized degree of target nodes for the sake of the fairness"] [Bronstein: 0018 "local ordering of data points on the geometric domain; extracting features associated with each of said data points; applying a set of weights to the extracted features using the consistent local ordering of data points to compute a new set of output features, and outputting the output features"] (teaches utilizing vertex positions, which necessarily requires obtaining the vertex (keypoint) position on the target object, and further teaches output features, which are the vertex/global/local evidence used to determine the task outputs).

Regarding claim 3, Bronstein discloses the method according to claim 1, wherein the performing local feature extraction on the target object based on the vertex feature and connection relationship between the vertices, to obtain a local feature of the target object comprises (the connection relationship is interpreted as vertices being neighbors/connected in the mesh/graph) [Bronstein: 0045 "Using the connectivity of the geometric domain data (e.g. a graph) with a filter comprising a consistent local ordering of vertices (e.g. a spiral convolutional filter), may allow for local processing of each geometric domain data"] (teaches that local processing uses connectivity, which is the basis for local feature extraction):

determining a local feature of each of the vertices based on the vertex feature and the connection relationship between the vertices (interpreted as, for each vertex, computing a local feature using that vertex's feature data and the adjacency of connected/neighboring vertices) [Bronstein: 0014 "Determining the consistent local ordering of data points may comprise determining the local neighbours of each data point on the geometric domain and ordering said local neighbours in a consistent way"] [Bronstein: 0056 "Applying the pooling layer may further comprise determining a subset of vertices of the output of the intrinsic convolutional layer; and for each vertex of the subset, determining the neighbouring vertices in the geometric domain; and aggregating input data of the neighbours for all the vertices of the subset"] (teaches determining the local neighbors of each data point (the connection relationship) and aggregating neighbor input data for each vertex; aggregating neighbor input data is exactly computing a per-vertex feature that depends on the input features and the neighbor connectivity); and

determining the local feature of the target object based on the local feature of each of the vertices (interpreted as combining/aggregating the per-vertex local features to produce an object-level local feature, i.e., a feature representing the target object derived from the local features) [Bronstein: 0259 "Referring to FIG. 26, when the system 1 is running as an encoder 9, the processor 2 receives the geometric domain data D, applies an intrinsic convolutional layer to the data D and generates a set of extracted features 4 which are a representation of the geometric domain data D by applying at least an intrinsic convolution layer 12 on the geometric domain data D. The intrinsic convolutional layer includes a consistent local ordering of data points on the geometric domain"] (teaches generating a set of extracted features that are a representation of the geometric domain data, i.e., an object/shape-level feature representation derived from the vertex-wise processing of the geometric domain).

Regarding claim 5, Bronstein discloses the method according to claim 3, wherein the determining the local feature of the target object based on the local feature of each of the vertices comprises: performing feature fusion on the local feature of each of the vertices based on the local feature of each of the vertices, to obtain a fused feature (interpreted as combining the per-vertex local features across vertices to produce a single combined feature) [Bronstein: 0056 "aggregating input data of the neighbours for all the vertices of the subset"]; and using the fused feature as the local feature of the target object [Bronstein: 0259 "a set of extracted features 4 which are a representation of the geometric domain data D by applying at least an intrinsic convolution layer 12 on the geometric domain data D"] (teaches the extracted/aggregated features as a representation of the geometric domain data, i.e., the whole shape/object represented by the mesh/graph; the fused/aggregated feature produced by pooling is therefore used as the object-level feature representation, which corresponds to using the fused feature as the local feature of the target object).

Claims 9 and 17 are device and non-transitory computer-readable storage medium claims corresponding to claim 1 without any additional limitations. Thus, claims 9 and 17 are rejected for the same reasons as claim 1 above. Claims 11 and 19 are device and non-transitory computer-readable storage medium claims corresponding to claim 3 without any additional limitations. Thus, claims 11 and 19 are rejected for the same reasons as claim 3 above. Claim 13 is a device claim corresponding to claim 5 without any additional limitations. Thus, claim 13 is rejected for the same reasons as claim 5 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 10, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bronstein et al. (U.S. Patent Publication No. 2021/0350620), in view of Black et al. (CA 2734143).
Regarding claim 2, Bronstein discloses the method according to claim 1, but fails to explicitly disclose wherein the obtaining a three-dimensional mesh configured for representing a target object comprises: scanning the target object by using a three-dimensional scanning apparatus, to obtain point cloud data of a geometric surface of the target object; and constructing the three-dimensional mesh corresponding to the target object based on the point cloud data.

However, Black discloses scanning the target object by using a three-dimensional scanning apparatus, to obtain point cloud data of a geometric surface of the target object, and constructing the three-dimensional mesh corresponding to the target object based on the point cloud data (interpreted as using a 3D scanner to scan the object's surface geometry, producing point cloud data (a set of 3D points corresponding to locations on the surface) and creating a 3D mesh representation of the object from the point cloud) (Black: Page 95, Lines 2-3 "Typical scanners return a 'cloud' of points, which is then triangulated to produce a 3D mesh model").

Bronstein and Black are considered analogous to the claimed invention because both are in the field of computer-implemented processing of 3D geometric data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bronstein to incorporate Black's teaching of a scanner that returns a cloud of points. Such a combination would provide the benefit of a straightforward acquisition pipeline for obtaining the required mesh representation. Claims 10 and 18 are device and non-transitory computer-readable storage medium claims corresponding to claim 2 without any additional limitations. Thus, claims 10 and 18 are rejected for the same reasons as claim 2 above.
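Black's scan-then-triangulate pipeline is simplest to see for a structured scan, where the scanner returns depth samples on a regular grid. The sketch below (a hedged illustration; the grid layout and all names are assumptions, not from Black) simulates such a scan and triangulates each grid cell into two mesh faces.

```python
import numpy as np

# Simulated "scan": an H x W grid of depth samples over the object surface,
# as a structured-light or time-of-flight scanner might return.
H, W = 4, 5
ys, xs = np.mgrid[0:H, 0:W]
zs = np.sin(xs * 0.5) * np.cos(ys * 0.5)          # toy surface heights

# Point cloud: one 3D point per grid sample, flattened to (H*W, 3).
points = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

# Triangulate the grid: each cell yields two triangles over the vertex
# indices of its four corners, producing the 3D mesh faces.
def grid_faces(h, w):
    faces = []
    for i in range(h - 1):
        for j in range(w - 1):
            a, b = i * w + j, i * w + j + 1
            c, d = (i + 1) * w + j, (i + 1) * w + j + 1
            faces.append((a, b, c))   # upper-left triangle
            faces.append((b, d, c))   # lower-right triangle
    return np.array(faces)

faces = grid_faces(H, W)
print(points.shape, faces.shape)   # (20, 3) (24, 3)
```

Unstructured point clouds would instead need a surface-reconstruction step (e.g. Delaunay-based or Poisson methods), but the output is the same: vertices plus triangle faces, i.e. the mesh claim 2 recites.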
Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Bronstein et al. (U.S. Patent Publication No. 2021/0350620), in view of Tang et al. (U.S. Patent Publication No. 2021/0406475).

Regarding claim 4, Bronstein discloses the method according to claim 3, wherein the determining a local feature of each of the vertices based on the vertex feature and the connection relationship between the vertices comprises: determining the vertex as a reference vertex, and determining a vertex feature of the reference vertex and a vertex feature of another vertex based on a vertex feature of each vertex in the three-dimensional mesh, the another vertex being any vertex other than the reference vertex (interpreted as picking one vertex as the reference vertex (the vertex whose local feature is being computed), identifying another vertex, and retrieving the vertex features of both from the set of vertex features for all mesh vertices) [Bronstein: 0046 "selecting a second vertex adjacent to the first vertex"] [Bronstein: 0056 "for each vertex of the subset, determining the neighbouring vertices in the geometric domain; and aggregating input data of the neighbours for all the vertices of the subset"]; but fails to explicitly disclose determining a correlation value between the reference vertex and the another vertex based on the vertex feature of the reference vertex, the vertex feature of the another vertex, and the connection relationship between the vertices, the correlation value being configured for indicating a magnitude of a correlation degree between the reference vertex and the another vertex; and determining a local feature of the reference vertex based on the correlation value and the vertex feature of the another vertex.

However, Tang discloses determining a correlation value between the reference vertex and the another vertex based on the vertex feature of the reference vertex, the vertex feature of the another vertex, and the connection relationship between the vertices, the correlation value being configured for indicating a magnitude of a correlation degree between the reference vertex and the another vertex (interpreted as computing a correlation value (an attention weight/similarity/importance score) between the reference vertex and the other vertex using the reference vertex's feature, the other vertex's feature, and the vertex connectivity/edge relationship; the value indicates how strongly related the other vertex is to the reference vertex) [Tang: 0054 "compute attention coefficient via equation (4): eij = a(Whi, Whj), wherein eij denotes the importance of evidence node j to the text node i"]; and determining a local feature of the reference vertex based on the correlation value and the vertex feature of the another vertex (interpreted as using the correlation value as a weight applied to the other vertex's feature to compute the reference vertex's local feature as a weighted aggregation of neighbor features) [Tang: 0055 "normalize eij to obtain normalized aij"] [Tang: 0056 "calculate a text-centric evidence representation X = [X1, ..., Xn'] using the weighted sum over H via equation (6)"] (teaches producing normalized weights and calculating a node-specific representation using a weighted sum over node representations (features), which corresponds to determining a local feature).

Bronstein and Tang are considered analogous to the claimed invention because both are in the field of computer-implemented processing of 3D geometric data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bronstein to incorporate Tang's teaching of computing a correlation/attention value between a reference node and another node. Such a combination would provide the benefit of more accurate local features. Claim 12 is a device claim corresponding to claim 4 without any additional limitations. Thus, claim 12 is rejected for the same reasons as claim 4 above.

Claims 6, 8, 14, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bronstein et al. (U.S. Patent Publication No. 2021/0350620), in view of Schoessler et al. (U.S. Patent Publication No. 2022/0371202).

Regarding claim 6, Bronstein discloses the method according to claim 1, but fails to explicitly disclose wherein the obtaining a position of a keypoint of the target object on the target object based on the vertex feature, the global feature, and the local feature comprises: performing feature splicing on the vertex feature, the global feature, and the local feature, to obtain a spliced feature of the target object; and performing detection on the keypoint of the target object based on the spliced feature, to obtain the position of the keypoint of the target object on the target object.
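The correlation value Tang is cited for in claim 4 above is a graph-attention-style weight: score each connected neighbor against the reference vertex, normalize, then take a weighted sum of neighbor features. A minimal numpy sketch of that pattern (a dot-product score stands in for Tang's learned function a(., .); all names and shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-vertex features h and a shared projection W, as in attention layers.
h = rng.normal(size=(4, 6))        # 4 vertices, 6-dim features
W = rng.normal(size=(6, 6))
Wh = h @ W

# Connection relationship: the reference vertex 0 is connected to 1 and 3.
ref = 0
neighbors = [1, 3]

# Correlation value e_ij between reference i and neighbor j; a simple
# dot product here, where Tang uses a learned scoring function.
e = np.array([Wh[ref] @ Wh[j] for j in neighbors])

# Normalize across the connected neighbors (softmax) to get weights a_ij.
a = np.exp(e - e.max())
a = a / a.sum()

# Local feature of the reference vertex: weighted sum of neighbor features.
local_ref = sum(a_j * Wh[j] for a_j, j in zip(a, neighbors))
print(local_ref.shape)   # (6,)
```

Only connected vertices enter the score and the sum, which is how the "connection relationship between the vertices" shapes the resulting local feature.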
However, Schoessler discloses wherein the obtaining a position of a keypoint of the target object on the target object based on the vertex feature, the global feature, and the local feature comprises: performing feature splicing on the vertex feature, the global feature, and the local feature, to obtain a spliced feature of the target object (feature splicing is interpreted as combining multiple feature sets (vertex-level features, a global object feature, and local/neighborhood features) into a single combined (spliced) feature representation used downstream) [Schoessler: 0021 "This MLP outputs per point feature vectors at step 340"] [Schoessler: 0021 "At step 360, the 3DSN uses a symmetric function, for example and not by way of limitation, max pooling, to transform the per point feature vectors into global feature vectors at step 370."] [Schoessler: 0021 "Both the local feature vector 340 output from MLP 330 and the global feature vector 370 from MLP 350 and max pooling 360 may be subsequently concatenated for use in point segmentation."] (teaches vertex/point features, global features produced by max pooling, and feature splicing by concatenation); and performing detection on the keypoint of the target object based on the spliced feature, to obtain the position of the keypoint of the target object on the target object (interpreted as using the spliced feature as input to a detector that outputs a keypoint location, i.e., a position on the object) [Schoessler: 0021 "At step 390 the 3DSN utilizes a third MLP to generate a second per point feature vector, which describes each point with respect to the whole object."] [Schoessler: 0021 "The second per point feature vector may be processed through a machine learning vector, for example and not by way of limitation, a fully connected layer or an SVM, which will result in a per point output score 395 for each part. The per point output score 395 permit the 3DSN to determine which part each point belongs to."] [Schoessler: 0023 "the computing system may segment and classify at least a portion of individual limb segments 105, joints 110, end effectors 115 or fingers 120 on robotic limb 100."] [Schoessler: 0024 "the computing system further segments and classifies areas of coffee mug 410 that should be grasped by robotic arm 100, such as the center of the outside of the base of the mug 415 or the center of the coffee mug handle 420"] (teaches performing downstream detection from the concatenation-derived per-point representation by applying a third MLP and then a fully connected layer to generate per-point output scores that determine which points correspond to particular parts, including joints; because each point inherently has a 3D position in the point cloud, identifying the points belonging to a joint/part yields the position of a keypoint on the object).

Bronstein and Schoessler are considered analogous to the claimed invention because both are in the field of computer-implemented processing of 3D geometric data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bronstein to incorporate Schoessler's teaching of combining multiple features. Such a combination would provide the benefit of improved efficiency.
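Schoessler's splice-then-score pattern (broadcast the pooled global vector to every point, concatenate it with each point's local vector, then produce a per-point score) can be sketched as follows. The shapes, the random untrained weights, and the softmax "heatmap" over points are all illustrative assumptions, not details from Schoessler.

```python
import numpy as np

rng = np.random.default_rng(2)

n_pts, d_local, d_global = 6, 4, 8
local = rng.normal(size=(n_pts, d_local))       # per-point feature vectors
global_vec = rng.normal(size=(d_global,))       # pooled global feature vector

# Feature splicing: attach the same global vector to every point and
# concatenate it with that point's local vector.
spliced = np.concatenate(
    [local, np.broadcast_to(global_vec, (n_pts, d_global))], axis=1)

# Per-point output scores from a (random, untrained) fully connected layer.
w = rng.normal(size=(d_local + d_global,))
scores = spliced @ w

# Softmax over points turns the scores into a probability per point (a
# heatmap over the cloud); the argmax point's 3D coordinates would give
# the keypoint position on the object.
prob = np.exp(scores - scores.max())
prob /= prob.sum()
best = int(np.argmax(prob))
print(spliced.shape, best in range(n_pts))   # (6, 12) True
```

Because every point carries the same global context after splicing, the downstream scorer can judge each point "with respect to the whole object", which is exactly the role Schoessler assigns to the concatenated representation.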
Regarding claim 8, Bronstein discloses the method according to claim 1, wherein: the performing feature extraction on vertices of the three-dimensional mesh, to obtain a vertex feature of the three-dimensional mesh comprises: performing feature extraction on the vertices of the three-dimensional mesh via a first feature extraction layer, to obtain the vertex feature of the three-dimensional mesh (interpreted as a first processing layer performing feature extraction on mesh vertices to produce per-vertex feature vectors) [Bronstein: 0042 "extracting hierarchical features from the first set of geometric domain data by applying an intrinsic convolution layer on the first set of geometric domain data"] [Bronstein: 0044 "The intrinsic convolutional layer may be applied directly on the first set of geometric domain data, that is, on the vertices of the data themselves."] (teaches applying an intrinsic convolution layer directly on the vertices to extract features; that convolution layer is the claimed first feature extraction layer producing the mesh's per-vertex features); and the performing global feature extraction on the target object based on the vertex feature, to obtain a global feature of the target object, and performing local feature extraction on the target object based on the vertex feature and the connection relationship between the vertices, to obtain a local feature of the target object comprises: performing global feature extraction on the target object based on the vertex feature via a second feature extraction layer, to obtain the global feature of the target object, and performing local feature extraction on the target object based on the vertex feature and the connection relationship between the vertices via a third feature extraction layer, to obtain the local feature of the target object (interpreted as a second layer extracting a global/object feature from the vertex features and a third layer extracting local features using the vertex features plus connectivity among vertices) [Bronstein: 0055 "At least one of the down-sampling layers may be a pooling layer"] [Bronstein: 0045 "Using the connectivity of the geometric domain data (e.g. a graph) with a filter comprising a consistent local ordering of vertices (e.g. a spiral convolutional filter), may allow for local processing of each geometric domain data"] [Bronstein: 0056 "determining the neighbouring vertices in the geometric domain; and aggregating input data of the neighbours for all the vertices of the subset"] (teaches a pooling/down-sampling layer that aggregates vertex-level outputs into a higher-level representation (a global feature), and local processing that is explicitly based on connectivity and neighboring vertices (a local feature)); but fails to explicitly disclose and the obtaining a position of a keypoint of the target object on the target object based on the vertex feature, the global feature, and the local feature comprises: performing detection on the keypoint of the target object via an output layer based on the vertex feature, the global feature, and the local feature, to obtain the position of the keypoint of the target object on the target object.

However, Schoessler discloses performing detection on the keypoint of the target object via an output layer based on the vertex feature, the global feature, and the local feature, to obtain the position of the keypoint of the target object on the target object (interpreted as a final/output layer performing keypoint detection using the vertex/global/local features and outputting the keypoint position) [Schoessler: 0021 "Both the local feature vector 340 output from MLP 330 and the global feature vector 370 from MLP 350 and max pooling 360 may be subsequently concatenated for use in point segmentation."] [Schoessler: 0021 "per point output score 395 for each part."] [Schoessler: 0017 "tracks multiple components of a robotic limb, such as joints"] (teaches an output stage where local and global features are concatenated and a downstream classifier produces a per-element output score used to identify parts, including joints; this is a routine neural-network design choice).

Bronstein and Schoessler are considered analogous to the claimed invention because both are in the field of computer-implemented processing of 3D geometric data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bronstein to incorporate Schoessler's teaching of a layered network pipeline in which per-element features and global features are combined. Such a combination would provide the benefit of a clear layer-wise implementation. Claims 14 and 20 are device and non-transitory computer-readable storage medium claims corresponding to claim 6 without any additional limitations. Thus, claims 14 and 20 are rejected for the same reasons as claim 6 above.
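Claim 8's layer-wise framing (a first layer for vertex features, a second for the global feature, a third for local features, and an output layer for detection) maps naturally onto a small module with one weight matrix per claimed layer. The sketch below is a hedged, untrained illustration; the class name, weights, and toy mesh are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def linear(d_in, d_out):
    # Small random weights standing in for a trained layer.
    return rng.normal(size=(d_in, d_out)) * 0.1

class KeypointNet:
    """Layer-wise sketch of claim 8: distinct feature-extraction layers
    feeding an output layer that scores each vertex."""

    def __init__(self, d_in=3, d=8):
        self.W1 = linear(d_in, d)       # first layer: per-vertex features
        self.W2 = linear(d, d)          # second layer: global feature
        self.W3 = linear(d, d)          # third layer: local features
        self.Wo = linear(3 * d, 1)      # output layer: per-vertex scores

    def forward(self, verts, A):
        v = verts @ self.W1                          # per-vertex features
        g = (v @ self.W2).max(axis=0)                # pooled global feature
        deg = A.sum(axis=1, keepdims=True).clip(1)
        l = (A @ (v @ self.W3)) / deg                # neighbor-averaged local
        spliced = np.concatenate(
            [v, np.tile(g, (len(v), 1)), l], axis=1)
        return (spliced @ self.Wo).ravel()           # one score per vertex

verts = rng.normal(size=(5, 3))
A = np.ones((5, 5)) - np.eye(5)          # fully connected toy mesh adjacency
scores = KeypointNet().forward(verts, A)
print(scores.shape)   # (5,)
```

The argmax over `scores` would select the keypoint vertex; keeping the four weight matrices separate mirrors the first/second/third/output layer structure the claim recites.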
Claim 16 is a device claim corresponding to claim 8 without any additional limitations. Thus, claim 16 is rejected for the same reasons as claim 8 above.

Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bronstein et al. (U.S. Patent Publication No. 2021/0350620), in view of Schoessler et al. (U.S. Patent Publication No. 2022/0371202), and further in view of Haslam et al. (CA 3126205).

Regarding claim 7, Bronstein in view of Schoessler discloses the method according to claim 1, wherein the obtaining a position of a keypoint of the target object on the target object based on the vertex feature, the global feature, and the local feature comprises: performing detection on the keypoint of the target object based on the vertex feature, the global feature, and the local feature, to obtain a probability of the keypoint being at each of the vertices in the three-dimensional mesh (interpreted as running a detector using per-vertex features, an object-level global feature, and local features to output a per-vertex probability distribution indicating where the keypoint is located) [Bronstein: 0044 "The intrinsic convolutional layer may be applied directly on the first set of geometric domain data, that is, on the vertices of the data themselves."] [Bronstein: 0045 "Using the connectivity of the geometric domain data (e.g. a graph) with a filter comprising a consistent local ordering of vertices (e.g. a spiral convolutional filter), may allow for local processing of each geometric domain data"] [Bronstein: 0055 "At least one of the down-sampling layers may be a pooling layer"] (teaches vertex-level feature processing on vertices, local feature processing using connectivity, and pooling/down-sampling that aggregates into a higher-level (global) representation; these are the vertex, local, and global features); but fails to explicitly disclose generating a three-dimensional heatmap corresponding to the three-dimensional mesh based on the probability; and determining the position of the keypoint of the target object on the target object based on the three-dimensional heatmap.

However, Haslam discloses generating a three-dimensional heatmap corresponding to the three-dimensional mesh based on the probability (interpreted as converting the per-vertex probabilities into a 3D heatmap aligned to the mesh, where each mesh location/vertex receives a heat value based on its probability) (Haslam: Page 19, Lines 16-17 "The output of this stage will be a heatmap of the mesh, which provides a score of the strength of the mesh at a given point."); and determining the position of the keypoint of the target object on the target object based on the three-dimensional heatmap (Haslam: Page 19, Lines 14-15 "determine the weakest and strongest position on the mesh").

Bronstein, Schoessler, and Haslam are considered analogous to the claimed invention because all three are in the field of computer-implemented processing of 3D geometric data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Bronstein and Schoessler to incorporate Haslam's teaching of heatmaps. Such a combination would provide the benefit of a straightforward probabilistic localization workflow. Claim 15 is a device claim corresponding to claim 7 without any additional limitations. Thus, claim 15 is rejected for the same reasons as claim 7 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED TAHA, whose telephone number is (571) 272-6805. The examiner can normally be reached 8:30 am - 5 pm, Mon - Fri.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, XIAO WU, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AHMED TAHA/
Examiner, Art Unit 2613

/XIAO M WU/
Supervisory Patent Examiner, Art Unit 2613

Prosecution Timeline

Aug 02, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12565101
WINDSHIELD AND VISIBILITY IMPROVEMENTS FOR DRIVERS IN ADVERSE WEATHER AND LIGHTING CONDITIONS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561880
AUGMENTED REALITY TATTOO
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62% (99% with interview, a +75.0% lift)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
