Prosecution Insights
Last updated: April 19, 2026
Application No. 16/940,241

Label Generation Using Neural Networks

Non-Final OA (§103)
Filed: Jul 27, 2020
Examiner: TUCKER, WESLEY J
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 7 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 1m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 83% (above average); 596 granted / 715 resolved; +21.4% vs TC avg
Interview Lift: +6.1% in resolved cases with interview (moderate lift)
Avg Prosecution: 3y 1m (typical timeline)
Total Applications: 734 across all art units; 19 currently pending

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 35.7% (-4.3% vs TC avg)
§102: 39.4% (-0.6% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 715 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 2, 2026 has been entered.

Response to Amendment

Applicant’s amendment filed February 2, 2026 has been entered and made of record. Claims 1-5, 7-9, 13-16, 18, 20-24 and 31 are amended. Claims 1-31 are pending. Applicant’s remarks in view of the newly presented amendments have been considered and found to be persuasive; however, a new rejection is presented in view of the primary reference to Xu and a new secondary reference, USPN 2019/0325243 to Sikka et al., which teaches foreground and background image region labels. The newly presented claim limitations are also addressed accordingly below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, 10-17, 19-29 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of the publication titled “Missing Labels in Object Detection” to Xu et al. and USPN 2019/0325243 to Sikka et al.

With regard to claim 1, Xu discloses one or more processors (see abstract; object detection in computer vision, and a processor is present in the operations presented in the publication) comprising: circuitry to: receive one or more images and two or more annotations associated with one or more objects in the one or more images (section 1, Introduction; section 3.1, Missing Label Datasets; and section 3.3, More Adaptations to Missing Label Training; a dataset of images is collected that includes images with both instance-level annotations and image-level annotations for objects in the images; bounding boxes are also considered part of the annotations); generate, using the two or more annotations, two or more pseudolabels indicating, for at least a portion of the one or more images, [an estimated foreground and background] (see Sections 2.3 and 3.3 and Figure 2, and reference the online version of the publication for the color image; pseudolabels are generated on an ongoing basis and are refined and updated in order to better train the model.
In the example given, pseudolabels include several different annotations, including image-level labels and partial instance-level labels. Bounding boxes are also considered part of the annotations. Each of the pseudolabels is updated in accordance with the ground truth annotations. From section 2.3, “Apply this method on the hybrid supervised learning domain, our innovation is to generate pseudo labels from different levels of annotations and update the generator in every training cycle”); modify the two or more pseudolabels by comparing information corresponding to the two or more pseudolabels with information indicated by one or more pseudolabels associated with different annotation types (see Figure 2 and the description: “When the detection network is ready, it takes the place of the teacher models and generates more accurate pseudo label (e.g. blue bounding boxes in the bottom image) , the updated instance-level pseudo bounding boxes are utilized to retrain the model.” Updated pseudolabels are used to refine the training model after the pseudo labels are compared to ground truth labels and refined to generate more accurate pseudo labels. The annotation types in this case are the image-level annotations and instance-level annotations); and use one or more neural networks to label the one or more objects within the one or more images based, at least in part, on one or more updates to the modified two or more pseudolabels (section 2.3, “Apply this method on the hybrid supervised learning domain, our innovation is to generate pseudo labels from different levels of annotations and update the generator in every training cycle.” The pseudolabels are enhanced with each iteration of the training cycle and are generated based on a combination of the predicted bounding boxes or prediction maps as shown in Fig. 2. The objects such as the sheep in Fig. 2 are labeled with pseudolabels and are iteratively refined). Regarding foreground and background, see Section 3.1.
Xu mentions that training examples are used which will bias the detector toward the background, which indicates that foreground and background are considerations when attempting to label images. In the example shown in Fig. 2, the detected objects such as sheep are clearly intended to be distinguished from the background of the image. However, Xu does not explicitly disclose generating pseudolabels indicating an estimated foreground and background. Sikka discloses neural network object detection similar to that of Xu and further explicitly teaches that foreground and background bounding boxes are determined as predicted object labels (paragraphs [0011] and [0079]). Therefore, it would have been obvious to one of ordinary skill in the art before the time of filing to use the foreground and background predicted bounding box label generation taught by Sikka in combination with the pseudolabel bounding box object detection of Xu in order to identify foreground and background sections of the image for effective detection and labeling.

With regard to claim 2, Xu discloses the one or more processors of claim 1, wherein: one or more partial labels are generated based, at least in part, on the different annotation types (sections 2.2 and 2.3 and the Fig. 2 description: partial pseudo labels are shown in the block where GT and pseudo labels are merged); one or more prediction maps about the one or more objects are determined by the one or more neural networks (Section 2.3, “In object detection domain, people use high scoring predicted bounding boxes generated by a weakly supervised detector as pseudo labels.” The bounding boxes are predicted using weakly supervised neural networks); one or more feature maps are generated based, at least in part, on [[the]] one or more partial labels and the one or more prediction maps (Section 2.3, “Pseudo label method has been recently used in a fully-supervised setup to compensate for the absence of all instance-level box annotations.
The cascade detector in Diba’s work [11] and OICR [35] both use pseudo object labels to train Fast-RCNN and achieve eminent WSOD performance.” The feature maps or bounding boxes are updated to compensate for instance-level box annotations), wherein information contained in the one or more pseudolabels is adjusted about objects contained in the one or more prediction maps (Section 2.3, “Mining pseudo labels can also increase the success of fully-supervised detectors. Zhang et al. [41] determine the most accurate bounding box using pseudo ground-truth excavation (PGE) and pseudo groundtruth adaption (PGA) algorithm from predictions.” The most accurate bounding boxes are determined by using pseudo ground-truth excavation to improve the bounding boxes); and the modified two or more pseudolabels for the one or more objects within the one or more images are generated based, at least in part, on a combination of the one or more feature maps (section 2.3, “Apply this method on the hybrid supervised learning domain, our innovation is to generate pseudo labels from different levels of annotations and update the generator in every training cycle.” The pseudolabels are enhanced with each iteration of the training cycle and are generated based on a combination of the predicted bounding boxes or prediction maps as shown in Fig. 2).

With regard to claim 3, Xu discloses the one or more processors of claim 2, wherein the two or more pseudolabels are generated by performing a weak supervision technique on the different annotation types (see section 1 at the top of page 2: “After analyzing the limitations of these existing FSOD detectors and realizing that state-of the-art WSOD (Weakly Supervised Object Detection) methods register much lower detection performance, we propose a hybrid supervised learning framework for missing label object detection. This framework firstly uses a WSOD detector[41] as a teacher model to generate pseudo labels.
Then these labels are merged with existing annotations to train a novel object detector.”).

With regard to claim 4, Xu discloses the one or more processors of claim 2, wherein the modified two or more pseudolabels for the one or more objects within the one or more images is generated by concatenating the one or more feature maps into a combined feature map and determining, using a fusion neural network, the two or more pseudolabels from the combined feature map (Fig. 2 displays a fusion neural network. The training images with ground truth are merged or fused with the updated pseudo labels, as shown, to generate a concatenated feature map and a fused image with the most accurate labels).

With regard to claim 5, Xu discloses the one or more processors of claim 2, wherein the one or more neural networks are trained to determine the one or more objects in the one or more images based, at least in part, on the one or more prediction maps and the two or more pseudolabels (see Figure 2 and the description: “When the detection network is ready, it takes the place of the teacher models and generates more accurate pseudo label (e.g. blue bounding boxes in the bottom image) , the updated instance-level pseudo bounding boxes are utilized to retrain the model.” Updated pseudolabels are used to refine the training model after the pseudo labels are compared to ground truth labels and refined to generate more accurate pseudo labels for the bounding boxes/prediction maps).

With regard to claim 6, Xu discloses the one or more processors of claim 1, wherein the one or more neural networks to determine the one or more objects in a training image is a convolutional neural network (see Fig. 2 and discussion; the CNN portions are shown in blue, with CNN+loss shown in green).

With regard to claim 7, the discussion of claim 1 applies.
The method disclosed by Xu requires a system comprising one or more processors (see section 1, computer vision; the publication is from the Institute of Software, Chinese Academy of Sciences).

With regard to claim 8, the discussion of claim 2 applies.

With regard to claim 10, Xu discloses the system of claim 8, wherein a contextual loss is calculated based, at least in part, on the one or more prediction maps and the one or more neural networks are trained based, at least in part, on the contextual loss (see Fig. 2 and description; CNN+loss is displayed in green. The loss is taken in the context of the training data and the pseudolabeled images).

With regard to claim 11, Xu discloses the system of claim 8, wherein the one or more feature maps are generated by using the one or more prediction maps to determine information in the one or more pseudolabels indicating the one or more objects in the one or more images (see Fig. 2; the bounding boxes shown in the merged ground truth and pseudo labels are considered prediction maps mapping the predicted location of the detected objects. The prediction map bounding boxes shown are used as the pseudo labels are updated to be more accurate/refined for the detected objects).

With regard to claim 12, Xu discloses the system of claim 8, wherein the one or more feature maps are combined by concatenating the one or more feature maps into a concatenated feature map and using a convolutional neural network to determine the one or more pseudolabels (Fig. 2 displays a CNN. The training images with ground truth are merged or fused with the updated pseudo labels, as shown, to generate a concatenated feature map and a fused image with the most accurate labels).
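The concatenate-and-fuse operation that the rejection maps onto claims 4 and 12 can be illustrated with a toy sketch. Nothing below comes from Xu's actual implementation: the grids, the equal fusion weights, and the 0.5 threshold are invented, and a per-pixel weighted sum stands in for the trained fusion CNN.

```python
# Toy sketch: feature maps from different annotation types are
# concatenated along the channel axis, then a 1x1 "fusion convolution"
# (a per-pixel weighted sum across channels) produces one score map,
# which is thresholded into a foreground/background label map.

def fuse_feature_maps(maps, weights):
    """maps: list of HxW score grids (one channel per annotation type);
    weights: one fusion weight per channel. Returns the fused HxW map."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[sum(wt * m[r][c] for wt, m in zip(weights, maps))
             for c in range(w)] for r in range(h)]

fm_image_level = [[0.2, 0.9], [0.1, 0.8]]      # from image-level labels
fm_instance_level = [[0.4, 0.7], [0.0, 0.9]]   # from instance-level boxes
fused = fuse_feature_maps([fm_image_level, fm_instance_level], [0.5, 0.5])
labels = [[1 if v > 0.5 else 0 for v in row] for row in fused]
```

In a real detector the fusion weights would themselves be learned, which is what distinguishes a fusion neural network from this fixed averaging.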
With regard to claim 13, Xu discloses the system of claim 12, wherein one or more loss values for the one or more neural networks is calculated based, at least in part, on the modified two or more pseudolabels, and the one or more loss values are used to train the one or more neural networks (see Fig. 2 and description; CNN+loss is displayed in green. The loss is taken in the context of the training data and the pseudolabeled images).

With regard to claim 14, Xu discloses the system of claim 7, wherein the different annotation types comprise indications approximating boundaries or locations of the one or more objects in the one or more images (Fig. 2; annotations are shown in the ground truth training images in red).

With regard to claim 15, the discussions of claims 1 and 7 apply. Xu discloses a computer program for computer vision in the operation of the steps discussed with regard to claim 1 (see section 1, computer vision; the publication is from the Institute of Software, Chinese Academy of Sciences).

With regard to claim 16, Xu discloses the non-transitory machine-readable medium of claim 15, wherein the set of instructions, if performed by the one or more processors, further cause the one or more processors to: generate the two or more pseudolabels using one or more weak supervision techniques based, at least in part, on the different annotation types and the one or more images (Section 2.3, “In object detection domain, people use high scoring predicted bounding boxes generated by a weakly supervised detector as pseudo labels.” The bounding boxes are predicted using weakly supervised neural networks), the two or more pseudolabels indicating the estimated foreground and the background in the one or more images (Section 3.1; Xu mentions that training examples are used which will bias the detector toward the background, which indicates that foreground and background are considerations when attempting to label images. In the example shown in Fig.
2, the detected objects such as sheep are clearly intended to be distinguished from the background of the image. However, Xu does not explicitly disclose generating pseudolabels indicating an estimated foreground and background. Sikka discloses neural network object detection similar to that of Xu and further explicitly teaches that foreground and background bounding boxes are determined as predicted object labels (paragraphs [0011] and [0079]). Therefore, it would have been obvious to one of ordinary skill in the art before the time of filing to use the foreground and background predicted bounding box label generation taught by Sikka in combination with the pseudolabel bounding box object detection of Xu in order to identify foreground and background sections of the image for effective detection and labeling); generate one or more prediction maps using the one or more neural networks based, at least in part, on the one or more images, wherein information contained in the one or more pseudolabels is adjusted about objects contained in the one or more prediction maps (Section 2.3, “In object detection domain, people use high scoring predicted bounding boxes generated by a weakly supervised detector as pseudo labels.” The bounding boxes are predicted using weakly supervised neural networks); update the one or more pseudolabels using the one or more prediction maps into one or more feature maps (see Figure 2 and reference the online version of the publication for the color image; pseudolabels are generated on an ongoing basis and are refined and updated in order to better train the model. In the example given, pseudolabels include several different annotations, including image-level labels and partial instance-level labels. Bounding boxes are also considered part of the annotations and are shown as the prediction maps that are refined as the system is trained.
Each of the pseudolabels is updated in accordance with the ground truth annotations); and combine the one or more feature maps into a label for the one or more objects within the one or more images (Fig. 2 displays a fusion neural network. The training images with ground truth are merged or fused with the updated pseudo labels, as shown, to generate a concatenated feature map and a fused image with the most accurate labels).

With regard to claim 17, Xu discloses the non-transitory machine-readable medium of claim 16, wherein the one or more neural networks comprise a convolutional neural network (see Fig. 2 and discussion; the CNN portions are shown in blue, with CNN+loss shown in green) and the one or more prediction maps comprise information indicating an estimation of the one or more objects in the one or more images (see Figs. 2 and 5 and discussions; the prediction maps or bounding boxes are considered estimates of the objects and their locations that are refined as the system is trained. See also Section 3.2, top right of page 3: “Our teacher detector is a decent object detection model that forward-passes an image and gives pseudo label for object categories and localization predictions…”).

With regard to claim 19, Xu discloses the non-transitory machine-readable medium of claim 16, wherein the one or more feature maps are combined by concatenating the one or more feature maps into a combined feature map and determining a label for the one or more objects within the one or more images based, at least in part, on the combined feature map (section 2.3, “Apply this method on the hybrid supervised learning domain, our innovation is to generate pseudo labels from different levels of annotations and update the generator in every training cycle.” The pseudolabels are enhanced with each iteration of the training cycle and are generated based on a combination of the predicted bounding boxes or prediction maps as shown in Fig. 2. Fig.
2 displays a fusion neural network. The training images with ground truth are merged or fused with the updated pseudo labels, as shown, to generate a concatenated feature map and a fused image with the most accurate labels).

With regard to claim 20, Xu discloses the non-transitory machine-readable medium of claim 19, wherein the modified two or more pseudolabels are determined using a convolutional neural network, the convolutional neural network trained based, at least in part, on shared information between the one or more feature maps (see Fig. 2 and discussion; the CNN portions are shown in blue, with CNN+loss shown in green. The CNN is continually trained using training images, ground truth images and the updated pseudolabels).

With regard to claim 21, Xu discloses the non-transitory machine-readable medium of claim 15, wherein the labeled one or more objects within the one or more images each comprise a label determined based, at least in part, on the one or more images and the different annotation types, and the one or more neural networks are trained based, at least in part, on information contained in the modified two or more pseudolabels (see Fig. 2 and discussion; the CNN portions are shown in blue, with CNN+loss shown in green. The CNN is continually trained using training images, ground truth images and the updated pseudolabels. Pseudo labels are determined for annotations of both image-level and instance-level annotation types).

With regard to claim 22, the discussion of claim 1 applies.
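The teacher/student pseudolabel cycle quoted repeatedly above from Xu section 2.3 ("generate pseudo labels from different levels of annotations and update the generator in every training cycle") can be caricatured in a few lines. This is a deliberately tiny stand-in: the 1-D "image", the thresholding "teacher", and the threshold-fitting "student" are invented for illustration and bear no relation to the actual WSOD detector.

```python
# Toy teacher/student cycle: a weak "teacher" proposes pseudolabels,
# a "student" model is fit to them, and the student's own predictions
# replace the pseudolabels on the next cycle.

def teacher(pixel):
    # weak image-level cue: bright pixels are probably foreground
    return 1 if pixel > 0.5 else 0

def train_student(pixels, labels):
    # "training" here just picks the threshold that best separates
    # pseudolabeled foreground from background
    fg = [p for p, l in zip(pixels, labels) if l == 1]
    bg = [p for p, l in zip(pixels, labels) if l == 0]
    return (min(fg) + max(bg)) / 2 if fg and bg else 0.5

pixels = [0.1, 0.3, 0.55, 0.6, 0.9]               # a 1-D "image"
pseudolabels = [teacher(p) for p in pixels]        # cycle 0: teacher labels
for _ in range(3):                                 # refinement cycles
    threshold = train_student(pixels, pseudolabels)
    pseudolabels = [1 if p > threshold else 0 for p in pixels]
```

In Xu's framework the same loop runs with a WSOD detector as the teacher and a full detection network as the student; the structural point is only that the labels being trained on are themselves regenerated each cycle.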
With regard to claim 23, Xu discloses the method of claim 22, further comprising: generating one or more feature maps about the one or more images using the one or more neural networks, the one or more feature maps generated based, at least in part, on the one or more images and two or more pseudolabels determined from the different annotation types (Section 2.3, “Pseudo label method has been recently used in a fully-supervised setup to compensate for the absence of all instance-level box annotations. The cascade detector in Diba’s work [11] and OICR [35] both use pseudo object labels to train Fast-RCNN and achieve eminent WSOD performance.” The feature maps or bounding boxes are updated to compensate for instance-level box annotations); and combining the one or more feature maps into a label for the one or more objects within the one or more images (Fig. 2 displays a fusion neural network. The training images with ground truth are merged or fused with the updated pseudo labels, as shown, to generate a concatenated feature map and a fused image with the most accurate labels and feature maps).

With regard to claim 24, Xu discloses the method of claim 23, wherein the two or more pseudolabels are determined from the different annotation types using one or more weak supervision techniques (see section 1 at the top of page 2: “After analyzing the limitations of these existing FSOD detectors and realizing that state-of the-art WSOD (Weakly Supervised Object Detection) methods register much lower detection performance, we propose a hybrid supervised learning framework for missing label object detection. This framework firstly uses a WSOD detector[41] as a teacher model to generate pseudo labels.
Then these labels are merged with existing annotations to train a novel object detector.”), wherein the two or more pseudolabels comprise information to indicate the estimated foreground and the estimated background in the one or more images (Section 3.1; Xu mentions that training examples are used which will bias the detector toward the background, which indicates that foreground and background are considerations when attempting to label images. In the example shown in Fig. 2, the detected objects such as sheep are clearly intended to be distinguished from the background of the image. However, Xu does not explicitly disclose generating pseudolabels indicating an estimated foreground and background. Sikka discloses neural network object detection similar to that of Xu and further explicitly teaches that foreground and background bounding boxes are determined as predicted object labels (paragraphs [0011] and [0079]). Therefore, it would have been obvious to one of ordinary skill in the art before the time of filing to use the foreground and background predicted bounding box label generation taught by Sikka in combination with the pseudolabel bounding box object detection of Xu in order to identify foreground and background sections of the image for effective detection and labeling).

With regard to claim 25, Xu discloses the method of claim 23, wherein the one or more feature maps are further generated based, at least in part, on updating the one or more pseudolabels based on one or more prediction maps determined by the one or more neural networks, the one or more prediction maps indicating an estimation of the one or more objects in the one or more images (see Fig. 2; the bounding boxes shown in the merged ground truth and pseudo labels are considered prediction maps mapping the predicted location of the detected objects. The prediction map bounding boxes shown are used as the pseudo labels are updated to be more accurate/refined for the detected objects.
The predicted bounding box maps are refined through training and use of the pseudo labels in an iterative process until the bounding boxes become the final prediction maps or feature maps that correctly identify image objects with the updated pseudolabels).

With regard to claim 26, Xu discloses the method of claim 25, wherein one or more context loss values are calculated based, at least in part, on the one or more prediction maps and the one or more context loss values are used to train the one or more neural networks (see Fig. 2 and description; CNN+loss is displayed in green. The loss is taken in the context of the training data and the pseudolabeled images).

With regard to claim 27, Xu discloses the method of claim 23, wherein the one or more feature maps are combined into the label by concatenating the one or more feature maps into a concatenated feature map and using a fusion neural network to determine the label from the concatenated feature map (Fig. 2 displays a fusion neural network. The training images with ground truth are merged or fused with the updated pseudo labels, as shown, to generate a concatenated feature map and a fused image with the most accurate labels).

With regard to claim 28, Xu discloses the method of claim 27, wherein the fusion neural network is a convolutional neural network (Fig. 2, CNN).

With regard to claim 29, Xu discloses the method of claim 27, wherein one or more loss values are calculated based, at least in part, on the one or more feature maps and the one or more loss values are utilized to train the fusion neural network (see Fig. 2 and description; CNN+loss is displayed in green. The loss is taken in the context of the training data and the pseudolabeled images).
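Claims 13, 26, and 29 all turn on a loss computed from prediction maps and pseudolabels that is then used to train a network. As an illustration only, a per-pixel squared-error form is sketched below; this is a generic choice, not the CNN+loss that Xu's Fig. 2 actually depicts, and all the numbers are invented.

```python
# Minimal sketch of a "prediction map vs. pseudolabel" loss: the mean
# squared error between a predicted per-pixel foreground score map and
# the current pseudolabels. Lower loss means the prediction maps agree
# with the (refined) pseudolabels.

def map_loss(prediction_map, pseudolabel_map):
    n = len(prediction_map)
    return sum((p - t) ** 2 for p, t in zip(prediction_map, pseudolabel_map)) / n

pred = [0.9, 0.2, 0.7, 0.1]     # per-pixel foreground scores
pseudo = [1, 0, 1, 0]           # current pseudolabels
loss = map_loss(pred, pseudo)   # ≈ 0.0375
```

Gradient descent on a loss of this shape is what "the one or more loss values are used to train the one or more neural networks" amounts to in practice; a cross-entropy form is the more common choice for binary foreground/background maps.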
With regard to claim 31, Xu discloses the one or more processors of claim 1, wherein the one or more neural networks are to combine different annotation types associated with an object within one or more images, and to label the objects based, at least in part, on the combined different annotation types, wherein the combined different annotation types comprise one or more updates to the two or more pseudolabels (see Figure 2 and reference the online version of the publication for the color image; pseudolabels are generated on an ongoing basis and are refined and updated in order to better train the model. In the example given, pseudolabels include several different annotations, including image-level labels and partial instance-level labels (see Abstract and section 1(2)); see section 3.3 for a description of combining the instance-level and image-level pseudo labels. Bounding boxes are also considered part of the annotations. Each of the pseudolabels is updated in accordance with the ground truth annotations as the neural network repeats the recognition steps).

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of the publication titled “Missing Labels in Object Detection” to Xu et al. and USPN 2020/0226748 to Kaufman et al.

With regard to claim 9, Xu discloses the system of claim 8, but does not explicitly disclose wherein the one or more weak supervision techniques comprise a random walk operation and a region grow operation to determine the two or more pseudolabels indicating at least a foreground and a background for the one or more images. Kaufman discloses a system for training convolutional neural networks to perform image segmentation and classification (paragraphs [0069]-[0078]), similar to the recognition and classification of Xu.
Kaufman further discloses that a combination of region-growing and random-walker procedures is used to perform segmentation of medical images and to designate objects from the background (paragraphs [0080]-[0086]). Therefore, it would have been obvious to one of ordinary skill in the art before the time of filing to use the random walk and region-growing procedures of Kaufman in combination with the image segmentation and recognition of Xu in order to determine specific objects of interest in the image and to further train the image recognition neural network.

With regard to claim 18, the discussion of claim 9 applies.

Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of the publication titled “Missing Labels in Object Detection” to Xu et al. and USPN 2019/0102878 to Zhang et al.

With regard to claim 30, Xu discloses the method of claim 22, but does not explicitly disclose wherein at least one or more neural networks is a 3D U-Net neural network. 3D U-Net neural networks are well known in the art as a useful type of neural network. Zhang teaches the use of a 3D U-Net neural network and lists several types of neural networks that can be used in processing image object identification (paragraph [0045]). Therefore, it would have been obvious to one of ordinary skill in the art before the time of filing to use a 3D U-Net neural network as taught by Zhang in combination with the object identification of Xu.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY J TUCKER, whose telephone number is (571) 272-7427. The examiner can normally be reached 9 AM-5 PM, Monday-Friday. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN VILLECCO, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WESLEY J TUCKER/
Primary Examiner, Art Unit 2661
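For orientation, the region grow operation that the claim 9/18 rejection draws from Kaufman can be sketched in a few lines; the random-walker half of Kaufman's combination, which solves a linear system over pixel affinities, is omitted here. The image values, seed, and tolerance are invented for illustration.

```python
# Minimal region-growing sketch: starting from a seed pixel, absorb
# 4-connected neighbors whose intensity stays within a tolerance of
# the seed value, separating a foreground blob from the background.

def region_grow(image, seed, tol=0.2):
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    grown, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in grown or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - seed_val) <= tol:
            grown.add((r, c))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return grown

img = [
    [0.1, 0.1, 0.9],
    [0.1, 0.9, 0.9],
    [0.1, 0.1, 0.1],
]
foreground = region_grow(img, seed=(0, 2))   # grows over the 0.9 blob
```

The grown set plays the role of a foreground pseudolabel; everything outside it is the background pseudolabel that the claim 9 limitation recites.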

Prosecution Timeline

Jul 27, 2020: Application Filed
Jun 15, 2022: Non-Final Rejection (§103)
Dec 21, 2022: Response Filed
Mar 10, 2023: Final Rejection (§103)
Jun 29, 2023: Applicant Interview (Telephonic)
Jun 29, 2023: Examiner Interview Summary
Sep 15, 2023: Notice of Allowance
Sep 27, 2023: Request for Continued Examination
Oct 03, 2023: Response after Non-Final Action
Nov 17, 2023: Non-Final Rejection (§103)
Feb 01, 2024: Applicant Interview (Telephonic)
Feb 02, 2024: Examiner Interview Summary
Apr 22, 2024: Response Filed
Aug 02, 2024: Final Rejection (§103)
Aug 19, 2024: Interview Requested
Sep 11, 2024: Examiner Interview Summary
Sep 11, 2024: Applicant Interview (Telephonic)
Feb 07, 2025: Request for Continued Examination
Feb 10, 2025: Response after Non-Final Action
Mar 05, 2025: Non-Final Rejection (§103)
Apr 15, 2025: Applicant Interview (Telephonic)
Apr 15, 2025: Examiner Interview Summary
Jun 11, 2025: Response Filed
Sep 26, 2025: Final Rejection (§103)
Dec 31, 2025: Examiner Interview Summary
Dec 31, 2025: Applicant Interview (Telephonic)
Feb 02, 2026: Request for Continued Examination
Feb 10, 2026: Response after Non-Final Action
Mar 16, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597221
IMAGE PROCESSING APPARATUS AND ELECTRONIC APPARATUS
2y 5m to grant; granted Apr 07, 2026
Patent 12597222
METHOD AND SYSTEM FOR DETERMINING A REGION OF WATER CLEARANCE OF A WATER SURFACE
2y 5m to grant; granted Apr 07, 2026
Patent 12592057
SYSTEM AND METHOD FOR DETECTING AND CLASSIFYING RETINAL MICROANEURYSMS
2y 5m to grant; granted Mar 31, 2026
Patent 12585939
SYSTEMS AND METHODS FOR DISTRIBUTED DATA ANALYTICS
2y 5m to grant; granted Mar 24, 2026
Patent 12586410
Method and Device for Dynamic Recognition of Emotion Based on Facial Muscle Movement Monitoring
2y 5m to grant; granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 83%
With Interview: 90% (+6.1%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 715 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month