Prosecution Insights
Last updated: April 19, 2026
Application No. 18/453,891

SIGN LANGUAGE RECOGNITION USING PARAMETER KEYPOINTS

Final Rejection (§103)
Filed: Aug 22, 2023
Examiner: HON, MING Y
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: LENOVO (SINGAPORE) PTE. LTD.
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 82%, above average (624 granted / 760 resolved; +20.1% vs TC avg)
Interview Lift: +13.8% (moderate), comparing resolved cases with an interview vs. without
Typical Timeline: 2y 9m average prosecution; 23 applications currently pending
Career History: 783 total applications across all art units

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 760 resolved cases.

Office Action

Final Rejection under §103, mailed Mar 24, 2026
DETAILED ACTION

Response to Arguments

Applicant's amendment filed on October 30, 2025 is acknowledged. Claims 1-20 are currently pending. Claim 19 has been amended.

On page 8 of the remarks, the applicant alleges that the references fail to teach the claimed features:

"Because neither reference discloses (i) any facial keypoint extraction nor (ii) 3-D (x, y, z) coordinates derived from the image data, at least the following limitations remain unmet: 'keypoints comprise ... a face of the person' (claims 1, 10, 19) and 'each keypoint comprises an x, y and z coordinate' (claims 9 & 18). The quotation of Kelly [0025] in the Office Action cannot cure this deficiency. A normalized vector is not evidence that the original detection was 3-D. Under Ex parte Clapp (227 USPQ 972 (1985)), a rejection must show where and how the prior art teaches every limitation. Here the rejection does not. Consequently, no prima facie case exists and the rejection of all claims that include the above features must be withdrawn. Additionally, the claims require providing the key-points 'to a parameter analyzer for detecting ... handshape, palm orientation, articulation point, and movement.' The Office Action concedes Kelly lacks this element and relies solely on Jawahar [0003] to cure it. However, that paragraph merely recites that sign language semantically involves 'hand-shape, location, movement, orientation ....' Jawahar does not take key-points as input, contain a module that outputs the four parameters, or teach any algorithm for computing those parameters from images. Because the proposed combination is non-analogous and lacks any articulated motivation consistent with KSR, the obviousness rejection fails on Graham factor (1) (scope/content) and factor (3) (skill/motivation). Accordingly, the rejection of all claims relying on the Kelly plus Jawahar combination must be withdrawn."

The examiner respectfully disagrees. The examiner asserts that the combination of Kelly and Jawahar teaches providing the plurality of keypoints to a parameter analyzer for detecting a plurality of characteristics of the sign, wherein the plurality of characteristics of the sign comprises a handshape, a palm orientation, an articulation point and a movement. Kelly teaches in Paragraph [0019]: "In real time, or after the signing is completed, the sign language information is sent to 12, which extracts out features (e.g. body pose keypoints, hand keypoints, hand pose, thresholded image, etc. . . .). The features produced by 12 are then transmitted to component 13 which extracts sign language information (e.g. detecting if an individual is signing, transcribing that signing into gloss, or translating that signing into a target language) from a sequence of these per-frame features."

Kelly does not disclose that the claimed characteristics of the sign comprise a handshape, a palm orientation, an articulation point and a movement. Jawahar teaches that interpretation of sign language requires a human to look at certain features of a sign language giver in order to determine what the sign language giver is trying to convey. Therefore a human is able to extract the features of what they see, as disclosed in Jawahar, Paragraph [0003]: "Sign language is a language that are used by most of the deaf people to convey anything to the other person.
The sign language is expressed through manual articulations including hand gestures, hand-shape, location, movement, orientation in combination with non-manual elements including facial expressions such as eye gaze, eyebrows, and mouth movement," to determine what the sign language giver is trying to convey. It would have been obvious to one of ordinary skill in the art at the time the invention was made to incorporate the additional features of sign language such as "hand gestures, hand-shape, location, movement, orientation in combination with non-manual elements including facial expressions such as eye gaze, eyebrows, and mouth movement" into the features extracted by Kelly's algorithm, because using additional parameters would improve the algorithm's ability to interpret sign language accurately. For the above reasons, Kelly in view of Jawahar teaches the claim limitations of Claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6-7, 9-11, 15-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kelly (US 2022/0327961) in view of Jawahar et al. (US 2023/0290371), hereinafter referred to as Jawahar.

As per Claim 1, Kelly teaches a computerized process comprising:

receiving image data, the image data comprising a person executing a sign of a sign language (Kelly, Paragraph [0019]: "A signer signs into 11 an input device (e.g. minimally a single lens camera). In real time, or after the signing is completed, the sign language information is sent to 12");

extracting a plurality of keypoints from the image data, wherein the keypoints comprise a plurality of locations on a hand of the person, an arm of the person, a trunk of the person and a face of the person; providing the plurality of keypoints to a parameter analyzer for detecting a plurality of characteristics of the sign (Kelly, Paragraph [0019]: "In real time, or after the signing is completed, the sign language information is sent to 12, which extracts out features (e.g. body pose keypoints, hand keypoints, hand pose, thresholded image, etc. . . .). The features produced by 12 are then transmitted to component 13 which extracts sign language information (e.g. detecting if an individual is signing, transcribing that signing into gloss, or translating that signing into a target language) from a sequence of these per-frame features. Finally, the output is displayed on 14");

generating from an output of the parameter analyzer a description of the sign; filtering and aggregating the description of the sign, thereby creating a condensed description of the sign (Kelly, Paragraph [0019]: "In real time, or after the signing is completed, the sign language information is sent to 12, which extracts out features (e.g. body pose keypoints, hand keypoints, hand pose, thresholded image, etc. . . .).
The features produced by 12 are then transmitted to component 13 which extracts sign language information (e.g. detecting if an individual is signing, transcribing that signing into gloss, or translating that signing into a target language) from a sequence of these per-frame features. Finally, the output is displayed on 14" and Paragraph [0023]: "An image train is captured on 201 and streamed, either real time or after capturing is finished. Specifically, within our embodiment of 12, our system performs pose detection via Convolutional Pose Machines in 206 and hand localization via a RCNN in 205. These results are combined to find the bounding box of both the dominant and non-dominant hand by iterating through all bounding boxes found from 205 and finding the one closest to each wrist joint produced by 206. A CPM extracts the hands' poses from the dominant and non-dominant hands' bounding boxes in 207. Finally, all this information is merged into a flattened feature vector. These feature vectors are then normalized in 208");

querying a database of known signs with the condensed description using a dynamic time warping algorithm; and retrieving from the database a communication that is associated with the sign (Kelly, Paragraph [0028]: "The comparator in 211 then first determines if the entire signing region of the feature vector is contained within the list of pre-recorded sentences in the sentence base 214 (a database of sentences) via K Nearest-Neighbors (KNN) with a Dynamic Time Warping (DTW) distance metric. If the feature vector does not correspond to a sentence, the comparator 211 then goes through each signs' corresponding region in the feature queue and determines if that sign was fingerspelled (done through a binary classifier). If so, the sign is processed by the fingerspelling module in 210 (done through a seq2seq RNN model). If not, the sign is determined by comparing with signs in the signbase in 213 (a database of individual signs) and choosing the most likely candidate (done through KNN with a distance metric of DTW). Finally, a string of sign language gloss is output (the signs which constituted the feature queue). As the sign transcribed output is not yet in English, the grammar module in 213 translates the gloss to English via a Seq2Seq RNN. The resulting English text is returned to the device for visual display 201").

Kelly does not explicitly teach wherein the plurality of characteristics of the sign comprises a handshape, a palm orientation, an articulation point and a movement. Jawahar teaches wherein the plurality of characteristics of the sign comprises a handshape, a palm orientation, an articulation point and a movement (Jawahar, Paragraph [0003]: "Sign language is a language that are used by most of the deaf people to convey anything to the other person. The sign language is expressed through manual articulations including hand gestures, hand-shape, location, movement, orientation in combination with non-manual elements including facial expressions such as eye gaze, eyebrows, and mouth movement.").

Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Jawahar into Kelly, because utilizing common characteristics for identifying sign language would assist the system of Kelly in translating sign language into written text.
It would have been obvious to one of ordinary skill in the art at the time the invention was made to incorporate the additional features of sign language such as "hand gestures, hand-shape, location, movement, orientation in combination with non-manual elements including facial expressions such as eye gaze, eyebrows, and mouth movement" into the features extracted by Kelly's algorithm, because using additional parameters would improve the algorithm's ability to interpret sign language accurately. Therefore it would have been obvious to one of ordinary skill to combine the two references to obtain the invention in Claim 1.

As per Claim 2, Kelly in view of Jawahar teaches the computerized process of claim 1, wherein the image data comprise video data; and wherein the plurality of keypoints is extracted from a plurality of frames of the video data (Kelly, Paragraph [0007]: "FIG. 2 is a block diagram of our embodiment for Sign Language Translation, which takes as input a video stream and outputs (either simultaneously while receiving the videostream, or after the videostream input has finished) a translation of what was signed into a target language"). The rationale applied to the rejection of claim 1 has been incorporated herein.

As per Claim 6, Kelly in view of Jawahar teaches the computerized process of claim 1, wherein the parameter analyzer for determining the movement comprises: tracking a plurality of positions of the hand and the arm using the keypoints of the hand and the arm; providing the plurality of positions of the hand and the arm to a trained machine learning algorithm; and determining the movement from an output of the machine learning algorithm (Kelly, Paragraph [0028]: "The comparator in 211 then first determines if the entire signing region of the feature vector is contained within the list of pre-recorded sentences in the sentence base 214 (a database of sentences) via K Nearest-Neighbors (KNN) with a Dynamic Time Warping (DTW) distance metric. If the feature vector does not correspond to a sentence, the comparator 211 then goes through each signs' corresponding region in the feature queue and determines if that sign was fingerspelled (done through a binary classifier). If so, the sign is processed by the fingerspelling module in 210 (done through a seq2seq RNN model). If not, the sign is determined by comparing with signs in the signbase in 213 (a database of individual signs) and choosing the most likely candidate (done through KNN with a distance metric of DTW). Finally, a string of sign language gloss is output (the signs which constituted the feature queue). As the sign transcribed output is not yet in English, the grammar module in 213 translates the gloss to English via a Seq2Seq RNN. The resulting English text is returned to the device for visual display 201"). The rationale applied to the rejection of claim 1 has been incorporated herein.

As per Claim 7, Kelly in view of Jawahar teaches the computerized process of claim 6, comprising: generating mean curves for a plurality of known signs of the sign language; storing the mean curves in the database; creating a trajectory of the sign of the person by normalizing and interpolating the keypoints of the sign of the person; and calculating a distance between the trajectory of the sign of the person and the mean curves using Dynamic Time Warping (DTW) and Euclidean distances to find a sign in the database with the smallest distance from the sign of the person
(Kelly, Paragraph [0028]: "The comparator in 211 then first determines if the entire signing region of the feature vector is contained within the list of pre-recorded sentences in the sentence base 214 (a database of sentences) via K Nearest-Neighbors (KNN) with a Dynamic Time Warping (DTW) distance metric. If the feature vector does not correspond to a sentence, the comparator 211 then goes through each signs' corresponding region in the feature queue and determines if that sign was fingerspelled (done through a binary classifier). If so, the sign is processed by the fingerspelling module in 210 (done through a seq2seq RNN model). If not, the sign is determined by comparing with signs in the signbase in 213 (a database of individual signs) and choosing the most likely candidate (done through KNN with a distance metric of DTW). Finally, a string of sign language gloss is output (the signs which constituted the feature queue). As the sign transcribed output is not yet in English, the grammar module in 213 translates the gloss to English via a Seq2Seq RNN. The resulting English text is returned to the device for visual display 201"). The rationale applied to the rejection of claim 6 has been incorporated herein.

As per Claim 9, Kelly in view of Jawahar teaches the computerized process of claim 1, wherein each keypoint comprises an x, y and z coordinate in the image data (Kelly, Paragraph [0025]: "Setting the mean coordinates of each hand to be (0, 0, 0) and the standard deviation in each dimension for the coordinates of each hand to be an average of 1 unit via an affine transformation", using x, y, z coordinates to normalize the feature vector). The rationale applied to the rejection of claim 1 has been incorporated herein.

As per Claim 10, Claim 10 recites a non-transitory machine-readable medium comprising instructions that, when executed by a processor, perform the computerized process claimed in Claim 1. Therefore the rejection and rationale are analogous to those made for Claim 1.

As per Claim 11, Claim 11 recites the same limitation as Claim 2 and depends on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made for Claim 2.

As per Claim 15, Claim 15 recites the same limitation as Claim 6 and depends on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made for Claim 6.

As per Claim 16, Claim 16 recites the same limitation as Claim 7 and depends on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made for Claim 7.

As per Claim 18, Claim 18 recites the same limitation as Claim 9 and depends on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made for Claim 9.

Claims 3, 5, 8, 10, 12, 14 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kelly (US 2022/0327961) in view of Jawahar et al. (US 2023/0290371), hereinafter referred to as Jawahar, as applied to Claims 1 and 10 respectively, and further in view of Gupta et al. (US 2021/0397266), hereinafter referred to as Gupta.

As per Claim 3, Kelly in view of Jawahar teaches the computerized process of claim 1. Kelly in view of Jawahar does not explicitly teach wherein the parameter analyzer for determining the handshape uses the plurality of keypoints of the handshape in the image data as input to a trained machine learning algorithm.
Gupta teaches wherein the parameter analyzer for determining the handshape uses the plurality of keypoints of the handshape in the image data as input to a trained machine learning algorithm (Gupta, Paragraph [0050]: "The system 100 further includes a concept decomposition module 120 that includes a collection of neural network-based engines dedicated for extraction of each gesture component of a performed gesture. In particular, for ASL, the concept decomposition module 120 includes a handshape concept engine 122 for identifying a shape of a hand within a frame, a movement concept engine 123 for characterizing a physical movement of the hand relative to the body, and a location concept engine 124 for extracting a location of the hand relative to the body. For a plurality of frames indicative of a given gesture, the handshape concept engine 122 extracts a first set of gesture components 125 related to a shape of the hand, movement concept engine 122 extracts a second set of gesture components 126 related to the movement of the hand, and location concept engine 124 extracts a set of gesture components 127 related to the location of the hand" and Paragraph [0062]: "Handshape Recognition: ASL is a visual language and hand shape is an important part of identifying any sign. In the wild, videos produced by ASL users can have different brightness conditions, camera motion blurriness, and low-quality video frames. Deep learning models have shown to exceed human performances in many visual tasks like object recognition, reading medical imaging, and many other visual tasks. Referring to FIGS. 2, 3E and 6, the system 100 includes the handshape concept engine 122 for identifying a hand shape of the gesture. In one embodiment of the present system 100, a convolutional neural network (CNN) 135 is trained to recognize a shape formed by a hand featured in each frame. In some embodiments, the CNN 135 is a GoogleNet Inception v3 that has been trained on over 1.28 million of images with over 1,000 object categories. FIG. 6 shows a layout of the handshape recognition pipeline. From the experiments, it was concluded that cropping handshapes from frames before supplying to the CNN 135 give a better generalization and accuracies, allows the CNN 135 to converge faster to expected results. To reliably detect handshapes out of busy frame images, a simple algorithm is deployed to extrapolate potential hand palm location using key body positions acquired from pose estimation (block 228)").

Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Gupta into Kelly in view of Jawahar, because utilizing machine learning to analyze common components in identifying and translating sign language into text would expedite the process and increase the accuracy of Kelly. Therefore it would have been obvious to one of ordinary skill to combine the three references to obtain the invention in Claim 3.

As per Claim 5, Kelly in view of Jawahar teaches the computerized process of claim 1. Kelly in view of Jawahar does not explicitly teach wherein the parameter analyzer for determining the articulation point uses the keypoints of the hand, the keypoints of the arm, the keypoints of the trunk and the keypoints of the face as input to a trained machine learning algorithm.
Gupta teaches wherein the parameter analyzer for determining the articulation point uses the keypoints of the hand, the keypoints of the arm, the keypoints of the trunk and the keypoints of the face as input to a trained machine learning algorithm (Gupta, Paragraph [0050]: "The system 100 further includes a concept decomposition module 120 that includes a collection of neural network-based engines dedicated for extraction of each gesture component of a performed gesture. In particular, for ASL, the concept decomposition module 120 includes a handshape concept engine 122 for identifying a shape of a hand within a frame, a movement concept engine 123 for characterizing a physical movement of the hand relative to the body, and a location concept engine 124 for extracting a location of the hand relative to the body. For a plurality of frames indicative of a given gesture, the handshape concept engine 122 extracts a first set of gesture components 125 related to a shape of the hand, movement concept engine 122 extracts a second set of gesture components 126 related to the movement of the hand, and location concept engine 124 extracts a set of gesture components 127 related to the location of the hand" and Paragraph [0062]: "Handshape Recognition: ASL is a visual language and hand shape is an important part of identifying any sign. In the wild, videos produced by ASL users can have different brightness conditions, camera motion blurriness, and low-quality video frames. Deep learning models have shown to exceed human performances in many visual tasks like object recognition, reading medical imaging, and many other visual tasks. Referring to FIGS. 2, 3E and 6, the system 100 includes the handshape concept engine 122 for identifying a hand shape of the gesture. In one embodiment of the present system 100, a convolutional neural network (CNN) 135 is trained to recognize a shape formed by a hand featured in each frame. In some embodiments, the CNN 135 is a GoogleNet Inception v3 that has been trained on over 1.28 million of images with over 1,000 object categories. FIG. 6 shows a layout of the handshape recognition pipeline. From the experiments, it was concluded that cropping handshapes from frames before supplying to the CNN 135 give a better generalization and accuracies, allows the CNN 135 to converge faster to expected results. To reliably detect handshapes out of busy frame images, a simple algorithm is deployed to extrapolate potential hand palm location using key body positions acquired from pose estimation (block 228)").

Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Gupta into Kelly in view of Jawahar, because utilizing machine learning to analyze common components in identifying and translating sign language into text would expedite the process and increase the accuracy of Kelly. Therefore it would have been obvious to one of ordinary skill to combine the three references to obtain the invention in Claim 5.

As per Claim 8, Kelly in view of Jawahar teaches the computerized process of claim 1. Kelly in view of Jawahar does not explicitly teach wherein the parameter analyzer comprises one or more of a handshape analyzer, a palm orientation analyzer, an articulation point analyzer and a movement analyzer. Gupta teaches wherein the parameter analyzer comprises one or more of a handshape analyzer, a palm orientation analyzer, an articulation point analyzer and a movement analyzer
(Gupta, Paragraph [0050]: "The system 100 further includes a concept decomposition module 120 that includes a collection of neural network-based engines dedicated for extraction of each gesture component of a performed gesture. In particular, for ASL, the concept decomposition module 120 includes a handshape concept engine 122 for identifying a shape of a hand within a frame, a movement concept engine 123 for characterizing a physical movement of the hand relative to the body, and a location concept engine 124 for extracting a location of the hand relative to the body. For a plurality of frames indicative of a given gesture, the handshape concept engine 122 extracts a first set of gesture components 125 related to a shape of the hand, movement concept engine 122 extracts a second set of gesture components 126 related to the movement of the hand, and location concept engine 124 extracts a set of gesture components 127 related to the location of the hand").

Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Gupta into Kelly in view of Jawahar, because utilizing machine learning to analyze common components in identifying and translating sign language into text would expedite the process and increase the accuracy of Kelly. Therefore it would have been obvious to one of ordinary skill to combine the three references to obtain the invention in Claim 8.

As per Claim 12, Claim 12 recites the same limitation as Claim 3 and depends on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made for Claim 3.

As per Claim 14, Claim 14 recites the same limitation as Claim 5 and depends on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made for Claim 5.

As per Claim 17, Claim 17 recites the same limitation as Claim 8 and depends on a similarly rejected independent claim. Therefore the rejection and rationale are analogous to those made for Claim 8.

Allowable Subject Matter

Claims 19-20 are allowed. Claims 4 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MING HON whose telephone number is (571) 270-5245. The examiner can normally be reached M-F, 9am - 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached on 571-270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MING Y HON/
Primary Examiner, Art Unit 2666
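
Kelly [0023], quoted above, localizes each hand by taking the detected hand bounding box closest to each wrist joint produced by pose estimation. That step reduces to a nearest-box search; a minimal sketch in Python, assuming boxes are (x, y, w, h) tuples and wrists are (x, y) pixel coordinates (the helper names are illustrative, not Kelly's):

    import numpy as np

    def box_center(box):
        # box is assumed to be (x, y, w, h) in image pixels
        x, y, w, h = box
        return np.array([x + w / 2.0, y + h / 2.0])

    def hand_box_for_wrist(wrist, boxes):
        """Pick the hand detection whose center lies closest to a wrist
        keypoint, mirroring Kelly [0023]'s dominant/non-dominant assignment."""
        wrist = np.asarray(wrist, dtype=float)
        return min(boxes, key=lambda b: np.linalg.norm(box_center(b) - wrist))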
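
Kelly [0025], the paragraph the parties dispute for claims 9 and 18, normalizes hand keypoints so that the mean coordinate is (0, 0, 0) and the per-dimension standard deviations average 1 unit. One plausible reading of that affine transformation, as a sketch (the function name and epsilon guard are our assumptions):

    import numpy as np

    def normalize_hand_keypoints(kps):
        """Center keypoints at (0, 0, 0) and scale so the average
        per-dimension standard deviation is 1 (cf. Kelly [0025]).

        kps: array of shape (num_keypoints, 3) holding x, y, z coordinates.
        """
        centered = kps - kps.mean(axis=0)      # mean coordinate -> (0, 0, 0)
        scale = centered.std(axis=0).mean()    # average std across x, y, z
        return centered / (scale + 1e-8)       # epsilon avoids divide-by-zero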

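The matching step behind claim 7 and Kelly [0028] compares a keypoint trajectory against stored mean curves using a Dynamic Time Warping distance with Euclidean per-frame costs, then picks the closest sign. A minimal sketch, simplified to 1-nearest-neighbor (the function names and dict-based sign base are illustrative assumptions, not the application's actual implementation):

    import numpy as np

    def dtw_distance(a, b):
        """Classic O(n*m) Dynamic Time Warping with Euclidean frame cost.
        a, b: arrays of shape (frames, features)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])  # Euclidean distance
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return float(D[n, m])

    def nearest_sign(trajectory, sign_base):
        """Return the gloss whose stored mean curve is nearest the trajectory."""
        return min(sign_base, key=lambda g: dtw_distance(trajectory, sign_base[g]))
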
Prosecution Timeline

Aug 22, 2023: Application Filed
Jul 29, 2025: Non-Final Rejection under §103
Oct 30, 2025: Response Filed
Mar 24, 2026: Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602904: METHOD AND ELECTRONIC DEVICE FOR RECOGNIZING OBJECT BASED ON MASK UPDATES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12567244: METHOD AND APPARATUS FOR FUSING MULTI-SENSOR DATA (granted Mar 03, 2026; 2y 5m to grant)
Patent 12555240: BRUCH'S MEMBRANE SEGMENTATION IN OCT VOLUME (granted Feb 17, 2026; 2y 5m to grant)
Patent 12555411: Facial Emotion Recognition System (granted Feb 17, 2026; 2y 5m to grant)
Patent 12536838: PATCH-BASED ADVERSARIAL ATTACK DETECTION AND MITIGATION (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 96% (+13.8%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 760 resolved cases by this examiner; grant probability is derived from the career allow rate.
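
The projection model itself is not published; the figures shown are, however, consistent with a simple additive reading of the career data, sketched below under that assumption (the additive model and the cap at 100% are ours):

    granted, resolved = 624, 760
    base_rate = granted / resolved          # 0.8211 -> the 82% shown above
    interview_lift = 0.138                  # the +13.8-point interview lift
    with_interview = min(base_rate + interview_lift, 1.0)
    print(f"base {base_rate:.1%}, with interview {with_interview:.1%}")
    # base 82.1%, with interview 95.9% (rounds to the 96% shown)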