Prosecution Insights
Last updated: April 19, 2026
Application No. 18/110,501

METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR GENERATING SEGMENTED IMAGES

Status: Non-Final OA (§103)
Filed: Feb 16, 2023
Examiner: CHOI, TIMOTHY WING HO
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: DELL PRODUCTS, L.P.
OA Round: 3 (Non-Final)

Grant Probability: 60% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 2m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 60% (199 granted / 331 resolved; -1.9% vs TC avg)
Interview Lift: +35.1% higher allow rate in resolved cases with an interview (strong)
Avg Prosecution (typical timeline): 3y 2m
Currently Pending: 21
Total Applications (career history): 352, across all art units
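The headline figures above are mutually consistent under a simple additive model; a quick editorial check in Python (this is an assumption about how the numbers fit together, not documentation of the vendor's method):

    # Editorial arithmetic check (not the vendor's formula): the headline
    # figures are consistent with an additive model in percentage points.
    granted, resolved = 199, 331
    career_allow_rate = granted / resolved               # 0.601 -> shown as 60%
    interview_lift = 0.351                               # +35.1 points
    with_interview = career_allow_rate + interview_lift  # 0.952 -> shown as 95%
    print(f"{career_allow_rate:.1%} baseline, {with_interview:.1%} with interview")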

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 331 resolved cases

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 29 December 2025 has been entered.

Response to Amendment

Applicant's response, filed 15 December 2025, to the last Office action has been entered and made of record. The cancellation of claims 2-3 and 12-13 is acknowledged and made of record. The amendments to the claims are acknowledged, are supported by the original disclosure, and add no new matter. The amendments specifically addressing the rejections of claims 1-20 under 35 U.S.C. § 112(a) in the previous Office action have removed the at-issue subject matter and obviated the respective rejections, and those rejections have been withdrawn. Amendments to independent claims 1, 11, and 20 have necessitated an updated ground of rejection over the applied prior art. Please see below for the updated interpretations and rejections.

Response to Arguments

Applicant's arguments (see pp. 9-10 of Applicant's reply, filed 15 December 2025) with respect to the objections to claims 1, 11, and 20 for appearing to recite a typographical error have been fully considered and are persuasive. The respective objections of 14 October 2025 have been withdrawn.

Applicant's arguments filed 15 December 2025 have been fully considered, but they are not fully persuasive. In regards to Applicant's remarks (see pp. 10-11 of Applicant's reply) that the combined teachings of Mondal, Zhou, Singh, and Pantofaru fail to disclose or suggest each and every limitation of cancelled claims 2-3 and corresponding claims 12-13, which are presently recited in amended independent claims 1, 11, and 20, the Examiner respectfully disagrees.

The Examiner notes that the claims are treated with their broadest reasonable interpretations consistent with the specification. See MPEP 2111. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Furthermore, the test for obviousness is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).

The combined teachings of Mondal and Zhou are relied upon to teach a method for performing semi-supervised segmentation using a neural network architecture composed of two generator and discriminator networks, where pre-trained classifiers are used to initially train the first and second generators and discriminators; the pre-trained classifiers extract features and perform the classification based on the extracted features during the training process, and the internal weights and parameters of the pre-trained classifier remain unchanged during the training process (see Mondal Fig. 1 and sect. 3.1 CycleGAN for semi-supervised segmentation; see Zhou [0026]-[0027], [0033]-[0034]). Notably, during the training process, the internal weights and parameters of the generator and discriminator models are refined while the pre-trained classifier remains unchanged (see Zhou [0034]). Singh is relied upon to teach a known technique for using a classification neural network to classify digital images into one or more classes, the network including a feature extractor to extract features associated with the digital image and a corresponding trained classifier that further processes the extracted features to classify digital images into a digital image class (see Singh Fig. 2 and [0046]-[0048]); during training of the classification system, the parameters of the feature extractor are frozen when performing backpropagation to adjust the parameters of the classifier (see Singh [0083]).

The combined teachings of the prior art suggest the use of the classification neural networks of Singh as the pre-trained classifiers for training the first and second generators and discriminators of the semi-supervised segmentation neural network architecture, where the classification neural networks include feature extractors connected to corresponding classifiers. While Singh teaches that the parameters of the feature extractor are frozen during the backpropagation of training the classification neural network, the Examiner notes that, in the combined suggested teachings, the trained classification neural network is applied as the pre-trained classifier used to train the first and second generators and discriminators, in which the pre-trained classification neural network remains unchanged and comprises feature extractors. Therefore, as the pre-trained classifier is taught by Zhou to be unchanged during training of the generator and discriminator models, the combined teachings further suggest that the parameters of the feature extractor of the classification neural network used as the pre-trained classifier also remain unchanged. Thus, the combined suggested teachings of Mondal, Zhou, and Singh for using trained classification neural networks as pre-trained classifiers to initially train the first and second generators and discriminators of the semi-supervised neural network architecture provide the broadest reasonable interpretation for the proposed amended limitations of "wherein before training the first generator by using the cycle generative adversarial network, the first discriminator and the first generator are obtained from pre-training based on a pre-trained set of feature extractors and a corresponding set of classifiers, and when training the first discriminator, parameters in the pre-trained set of feature extractors are fixed; wherein before training the first generator by using the cycle generative adversarial network, the second discriminator and the second generator are obtained from pre-training based on the pre-trained set of feature extractors and the corresponding set of classifiers, and when training the second discriminator, parameters in the pre-trained set of feature extractors are fixed".

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1, 4-11, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mondal et al. ("Revisiting CycleGAN for semi-supervised segmentation"), herein Mondal, in view of Zhou et al. (US 2022/0180203, effectively filed 3 December 2020), herein Zhou, and Singh et al. (US 2021/024993), herein Singh.

Regarding claim 1, Mondal discloses a method comprising: inputting a to-be-processed image to a first generator in a processor-based neural network system (see Mondal sect. 4.4. Implementation details, where the code for implementing the disclosed CycleGAN model was implemented in PyTorch 3.3 and experiments were run on a server equipped with an NVIDIA Titan V GPU), wherein the first generator is obtained from training by using a first discriminator for the first generator, the training further using a second generator and a second discriminator for the second generator, and further wherein the first generator, the first discriminator, the second generator, and the second discriminator are each implemented in the processor-based neural network system and collectively form a cycle generative adversarial network of the processor-based neural network system (see Mondal Fig. 1 and sect. 3.1. CycleGAN for semi-supervised segmentation, where the proposed model architecture is composed of two conditional generators and two discriminators, which are trained simultaneously, and the first generator learns a mapping from an image to its segmentation labels and generates generated labels; see also Mondal sect. 4.4. Implementation details); acquiring segmented images of the to-be-processed image that are generated by the first generator in the processor-based neural network system (see Mondal Fig. 1 and sect. 3.1, where the first generator (GIS) generates corresponding generated labels from the image; see also Mondal sect. 4.4. Implementation details); and processing the segmented images (see Mondal Fig. 1 and sect. 3.1, where the labels generated by the first generator (GIS) are fed into the second generator (GSI) to reconstruct the first image).

While Mondal does not explicitly disclose inputting a to-be-processed image to a first generator, Mondal does disclose that the first generator learns a mapping from an image to its segmentation labels and generates generated labels (see Mondal Fig. 1 and sect. 3.1. CycleGAN). At the time of filing, one of ordinary skill in the art, given Mondal's teaching that the first generator learns a mapping from an image to its segmentation labels, would have found it obvious that an image is input to the first generator to learn the mapping, suggesting that an image is likewise input to the first generator to generate the generated labels.
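For context, the cycle just described maps onto a few lines of PyTorch. This is a minimal editorial sketch: the layer stacks, shapes, and discriminator heads are assumptions for illustration, not Mondal's actual networks.

    import torch
    import torch.nn as nn

    # Illustrative two-generator / two-discriminator cycle, as the Office
    # Action reads Mondal Fig. 1 and sect. 3.1. The layer stacks are
    # placeholders; only the data flow tracks the claim language.
    class Generator(nn.Module):
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, out_ch, 3, padding=1))

        def forward(self, x):
            return self.net(x)

    n_classes = 4                    # assumed number of label channels
    G_IS = Generator(3, n_classes)   # first generator: image -> segmentation labels
    G_SI = Generator(n_classes, 3)   # second generator: labels -> reconstructed image
    D_S = nn.Sequential(nn.Conv2d(n_classes, 1, 4, 2), nn.Flatten())  # first discriminator (label space)
    D_I = nn.Sequential(nn.Conv2d(3, 1, 4, 2), nn.Flatten())          # second discriminator (image space)

    image = torch.randn(1, 3, 128, 128)   # to-be-processed image
    labels = G_IS(image)                  # acquired segmented images
    reconstructed = G_SI(labels)          # fed back through G_SI for cycle consistency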
This modification is rationalized as some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference, or to combine prior art reference teachings, to arrive at the claimed invention. In this instance, Mondal teaches that the first generator learns a mapping from an image to its segmentation labels and generates corresponding generated labels. One of ordinary skill in the art would understand that an image is necessarily input to the first generator to learn and generate the corresponding labels. Thus, it would have been within the general knowledge of one of ordinary skill in the art to reasonably expect that an image is input to the first generator to learn and generate the corresponding labels.

Mondal does not explicitly disclose that the training utilizes the plurality of feature extractors and their respective classifiers in a first training phase to train the first generator with the first discriminator and in a second training phase to train the second generator with the second discriminator; wherein before training the first generator by using the cycle generative adversarial network, the first discriminator and the first generator are obtained from pre-training based on a pre-trained set of feature extractors and a corresponding set of classifiers; and wherein before training the first generator by using the cycle generative adversarial network, the second discriminator and the second generator are obtained from pre-training based on the pre-trained set of feature extractors and the corresponding set of classifiers.

Zhou teaches a related and pertinent method for training a generative adversarial network (see Zhou Abstract), where a pre-trained classifier is used to train a generative adversarial model including a generator and discriminator model (see Zhou [0026]-[0027]); generated data is provided to the pre-trained classifier to generate a classifier loss used in iterative training of the generator and discriminator models (see Zhou [0033]-[0034]); the pre-trained classifier relies on features from generated simulated data when classifying a target class (see Zhou Fig. 2, [0037]-[0040], Fig. 5, and [0053]-[0058]); and, during the training process, the internal weights and parameters of the generator and discriminator models are refined while the pre-trained classifier remains unchanged (see Zhou [0034]).

At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Zhou to the teachings of Mondal, such that pre-trained classifiers are used to initially train the first and second generators and discriminators of the semi-supervised segmentation neural network architecture, where the pre-trained classifiers are suggested to implicitly extract features and perform the classification based on the extracted features during the training process. This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results. In this instance, Mondal discloses a base method for performing semi-supervised segmentation using a neural network architecture composed of two generator and discriminator networks. Zhou teaches a known technique of training a generative adversarial network using a pre-trained classifier to train the generator and discriminator models, where the pre-trained classifier is suggested to extract features and perform the classification based on the extracted features, and the internal weights and parameters of the pre-trained classifier remain unchanged during the training process. One of ordinary skill in the art would have recognized that applying Zhou's technique would allow the method of Mondal to similarly use pre-trained classifiers to initially train the first and second generators and discriminators of the semi-supervised segmentation neural network architecture, which extract features and perform the classification based on the extracted features during the training process while the internal weights and parameters of the pre-trained classifier remain unchanged, predictably leading to an improved initialization of the generator and discriminator networks of the semi-supervised segmentation neural network architecture.

Mondal and Zhou do not explicitly disclose wherein the processor-based neural network system further comprises a plurality of feature extractors each cascaded with a corresponding classifier; wherein before training the first generator by using the cycle generative adversarial network, the first discriminator and the first generator are obtained from pre-training based on a pre-trained set of feature extractors, and when training the first discriminator, parameters in the pre-trained set of feature extractors are fixed; and wherein before training the first generator by using the cycle generative adversarial network, the second discriminator and the second generator are obtained from pre-training based on the pre-trained set of feature extractors, and when training the second discriminator, parameters in the pre-trained set of feature extractors are fixed.

Singh teaches related and pertinent systems and methods for training a classification neural network to classify digital images in few-shot tasks (see Singh Abstract), where a classification neural network that classifies digital images into one or more classes includes a feature extractor to extract features associated with the digital image and further processes the extracted features using a corresponding trained classifier to classify digital images into a digital image class (see Singh Fig. 2 and [0046]-[0048]); the predicted classes are compared to ground truth classes using a loss function for training the classification neural network, during which the parameters of the feature extractor are frozen when performing backpropagation to adjust the parameters of the classifier, and the trained neural networks perform various different few-shot learning tasks, e.g., classification, tagging, and segmentation (see Singh [0067]-[0071], [0080]-[0083], and [0084]).
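For context, the freezing behavior attributed to Singh [0083] and Zhou [0034] can be pictured with a short PyTorch sketch. Module shapes, the class count, and the optimizer are assumptions; only the frozen-extractor pattern is the point.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Freezing pattern the rejection draws from Singh [0083] and Zhou [0034]:
    # the feature extractor's parameters are fixed, so backpropagation only
    # adjusts the downstream classifier.
    feature_extractor = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())
    classifier = nn.Linear(16, 4)      # 4 illustrative classes

    for p in feature_extractor.parameters():
        p.requires_grad_(False)        # "parameters ... are fixed"

    optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    images = torch.randn(8, 3, 32, 32)
    targets = torch.randint(0, 4, (8,))

    optimizer.zero_grad()
    logits = classifier(feature_extractor(images))  # extract features, then classify
    loss = F.cross_entropy(logits, targets)
    loss.backward()                    # gradients reach the classifier only
    optimizer.step()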
At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Singh to the teachings of Mondal and Zhou, such that the pre-trained classifiers for initially training the first and second generators and discriminators are classification neural networks as taught by Singh, which comprise feature extractors to extract features from the processed images, connected to corresponding classifiers, which process the extracted features to classify the digital images into a predicted class. This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results. In this instance, Mondal and Zhou disclose a base method for performing semi-supervised segmentation using a neural network architecture composed of two generator and discriminator networks, where pre-trained classifiers are used to initially train the first and second generators and discriminators, which extract features and perform the classification based on the extracted features during the training process, and the internal weights and parameters of the pre-trained classifier remain unchanged during the training process. Singh teaches a known technique of using a classification neural network to classify digital images into one or more classes, which includes a feature extractor to extract features associated with the digital image and further processes the extracted features using a corresponding trained classifier to classify digital images into a digital image class, and the trained neural networks perform various different few-shot learning tasks, e.g., classification, tagging, and segmentation. One of ordinary skill in the art would have recognized that applying Singh's technique would allow the method of Mondal and Zhou to use classification neural networks, as taught by Singh, as the pre-trained classifiers for initially training the first and second generators and discriminators, where the classification neural networks comprise feature extractors connected to corresponding classifiers, which extract features from the images and process the extracted features to classify the images into a predicted class, thus further suggesting that the parameters of the feature extractor of the classification neural network used as the pre-trained classifier also remain unchanged during the training process, predictably leading to an improved classifier for the initialization of the generator and discriminator networks of the semi-supervised segmentation neural network architecture.

Regarding claim 4, please see the above rejection of claim 1. Mondal, Zhou, and Singh disclose the method according to claim 1, further comprising: inputting the segmented images to the second generator so as to generate a reconstructed image of the to-be-processed image by the second generator (see Mondal Fig. 1 and sect. 3.1, where the labels generated by the first generator (GIS) are fed into the second generator (GSI) to reconstruct the first image); and acquiring the reconstructed image of the to-be-processed image that is generated by the second generator (see Mondal Fig. 1 and sect. 3.1, where the second generator (GSI) reconstructs the first image from the labels generated by the first generator (GIS)).

Regarding claim 5, please see the above rejection of claim 1.
Mondal, Zhou, and Singh disclose the method according to claim 1, wherein training the first generator by using the cycle generative adversarial network comprises: training the first generator, the first discriminator, the second generator, and the second discriminator simultaneously so as to obtain a trained cycle generative adversarial network (see Mondal Fig. 1 and sect. 3.1, where the proposed model architecture is composed of two conditional generators and two discriminators, which are trained simultaneously).

Regarding claim 6, please see the above rejection of claim 1. Mondal, Zhou, and Singh disclose the method according to claim 1, wherein training the first generator by using the cycle generative adversarial network comprises: acquiring a labeled sample image, a sample segmented image corresponding to the labeled sample image, and an unlabeled sample image (see Mondal Fig. 1 and sect. 3.2. Loss functions, where the loss functions employed to train the model use data from labeled images, ground truth masks of labeled images (ground truth labels), and unlabeled images); constructing an integrated training loss function based on the labeled sample image, the sample segmented image, and the unlabeled sample image (see Mondal sect. 3.2 and Eq. (1)-(8), where a total loss function combines the segmentation, cycle consistency, and adversarial loss functions, which use the labeled images, ground truth labeled images, and unlabeled images); and training the cycle generative adversarial network based on the integrated training loss function (see Mondal Fig. 1 and sect. 3.2, where the total loss function is used to train the model).

Regarding claim 7, please see the above rejection of claim 6. Mondal, Zhou, and Singh disclose the method according to claim 6, wherein the integrated training loss function comprises a supervised segmentation loss function constructed by using the first generator and the second generator, a cycle consistency loss function constructed by using the first generator and the second generator, and an adversarial loss function constructed by using the first generator, the second generator, the first discriminator, and the second discriminator (see Mondal sect. 3.2 and Eq. (8), where a total loss function combines the segmentation loss functions based on the first and second generators, the cycle consistency loss functions based on the first and second generators, and the adversarial loss functions based on the first and second generators and discriminators).

Regarding claim 8, please see the above rejection of claim 7. Mondal, Zhou, and Singh disclose the method according to claim 7, wherein a value of the integrated training loss function decreases with an increase in a value of the adversarial loss function (see Mondal sect. 3.2 and Eq. (8), where the adversarial loss functions are subtracted from the other loss terms of the total loss function).

Regarding claim 9, please see the above rejection of claim 7. Mondal, Zhou, and Singh disclose the method according to claim 7, wherein the cycle consistency loss function is constructed in the following manner: constructing a first cycle consistency loss function by using the first generator and the second generator and based on the unlabeled sample image (see Mondal Fig. 1, sect. 3.2, and Eq. (6), where a first cycle consistency loss function is based on the first and second generators and uses unlabeled images); constructing a second cycle consistency loss function by using the first generator and the second generator and based on the sample segmented image (see Mondal Fig. 1, sect. 3.2, and Eq. (7), where a second cycle consistency loss function is based on the first and second generators and uses ground truth label images); and constructing the cycle consistency loss function based on the first cycle consistency loss function and the second cycle consistency loss function (see Mondal sect. 3.2 and Eq. (8), where the first and second cycle consistency loss functions are combined in the total loss function).

Regarding claim 10, please see the above rejection of claim 7. Mondal, Zhou, and Singh disclose the method according to claim 7, wherein the adversarial loss function is constructed in the following manner: constructing a first adversarial loss function by using the first discriminator and the first generator and based on the sample segmented image and the unlabeled sample image (see Mondal Fig. 1, sect. 3.2, and Eq. (4), where a first adversarial loss function is based on the first generator and first discriminator and uses unlabeled images and ground truth label images); constructing a second adversarial loss function by using the second discriminator and the second generator and based on the sample segmented image and the unlabeled sample image (see Mondal Fig. 1, sect. 3.2, and Eq. (5), where a second adversarial loss function is based on the second generator and second discriminator and uses unlabeled images and ground truth label images); and constructing the adversarial loss function based on the first adversarial loss function and the second adversarial loss function (see Mondal sect. 3.2 and Eq. (8), where the first and second adversarial loss functions are combined in the total loss function).

Regarding claim 11, it recites an electronic device performing the method of claim 1. Mondal, Zhou, and Singh teach an electronic device performing the method of claim 1 (see Mondal sect. 3.3. Implementation details, where the disclosed teachings are implemented in code and run on a server equipped with an NVIDIA Titan V GPU (12 GB)). Please see above for detailed claim analysis, with the exception of the following further limitations: at least one processor (see Mondal sect. 3.3. Implementation details, where an NVIDIA Titan V GPU (12 GB) is disclosed); and at least one memory, the at least one memory being coupled to the at least one processor and storing instructions used for execution by the at least one processor, wherein when executed by the at least one processor, the instructions cause the electronic device to perform the method of claim 1 (see Mondal sect. 3.3. Implementation details, where the disclosed teachings are implemented in code and run on a server equipped with an NVIDIA Titan V GPU (12 GB), suggesting that the code is stored on corresponding memory of the server and executed by the GPU). Please see the above rejection for claim 1, as the rationale to modify the teachings of Mondal, Zhou, and Singh is similar, mutatis mutandis.

Regarding claim 14, see the above rejection for claim 11. It is a device claim reciting similar subject matter as claim 4. Please see claim 4 above for detailed claim analysis, as the limitations of claim 14 are similarly rejected.
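For context, the integrated loss recited in claims 6-10 reduces to a signed sum of the terms the rejection maps to Mondal's Eq. (4)-(8). A hedged sketch follows; the lambda weights are placeholders, not Mondal's published values, and only the subtracted sign of the adversarial term is taken from the Office Action's description (claim 8).

    # Shape of the integrated training loss as the rejection reads Mondal
    # Eq. (4)-(8): segmentation and cycle-consistency terms added, adversarial
    # terms subtracted, so the total decreases as the adversarial loss grows.
    lambda_cyc = 10.0   # assumed weight
    lambda_adv = 1.0    # assumed weight

    def integrated_loss(l_seg, l_cyc1, l_cyc2, l_adv1, l_adv2):
        l_cyc = l_cyc1 + l_cyc2    # first/second cycle losses, cf. Eq. (6)-(7)
        l_adv = l_adv1 + l_adv2    # first/second adversarial losses, cf. Eq. (4)-(5)
        return l_seg + lambda_cyc * l_cyc - lambda_adv * l_adv   # cf. Eq. (8)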
Regarding claim 15, see the above rejection for claim 11. It is a device claim reciting similar subject matter as claim 5. Please see claim 5 above for detailed claim analysis, as the limitations of claim 15 are similarly rejected.

Regarding claim 16, see the above rejection for claim 11. It is a device claim reciting similar subject matter as claim 6. Please see claim 6 above for detailed claim analysis, as the limitations of claim 16 are similarly rejected.

Regarding claim 17, see the above rejection for claim 16. It is a device claim reciting similar subject matter as claim 7. Please see claim 7 above for detailed claim analysis, as the limitations of claim 17 are similarly rejected.

Regarding claim 18, see the above rejection for claim 17. It is a device claim reciting similar subject matter as claim 9. Please see claim 9 above for detailed claim analysis, as the limitations of claim 18 are similarly rejected.

Regarding claim 19, see the above rejection for claim 17. It is a device claim reciting similar subject matter as claim 10. Please see claim 10 above for detailed claim analysis, as the limitations of claim 19 are similarly rejected.

Regarding claim 20, it recites a computer program product tangibly stored on a non-transitory computer-readable medium performing the method of claim 1. Mondal, Zhou, and Singh teach a computer program product tangibly stored on a non-transitory computer-readable medium performing the method of claim 1 (see Mondal sect. 3.3. Implementation details, where the disclosed teachings are implemented in code and run on a server equipped with an NVIDIA Titan V GPU (12 GB), suggesting that the code is stored on corresponding memory of the server and executed by the GPU). Please see above for detailed claim analysis, and see the above rejection for claim 1, as the rationale to modify the teachings of Mondal, Zhou, and Singh is similar, mutatis mutandis.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lee et al. (US 2022/0284584) is pertinent in teaching a method for training an algorithm to identify structural anatomical features, where a GAN comprising a generator and discriminator network can be trained to generate a pseudo-contrast computed tomography image from a non-contrast computed tomography image, and the discriminator may comprise any suitable classifier such as a random forest (see Lee [0056]-[0059]); radiomic features can be used to train a random forest classification algorithm to identify structural features, which comprises extracting radiomic feature values for a set of radiomic features and training a random forest classification algorithm using the extracted radiomic feature values (see Lee Fig. 19 and [0179]-[0184]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY WING HO CHOI, whose telephone number is (571) 270-3814. The examiner can normally be reached 9:00 AM to 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, VINCENT RUDOLPH, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TIMOTHY CHOI/
Examiner, Art Unit 2671

/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671

Prosecution Timeline

Feb 16, 2023: Application Filed
Apr 04, 2025: Non-Final Rejection (§103)
Jul 11, 2025: Response Filed
Oct 09, 2025: Final Rejection (§103)
Dec 15, 2025: Response after Non-Final Action
Dec 29, 2025: Request for Continued Examination
Jan 13, 2026: Response after Non-Final Action
Jan 23, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12497051: APPARATUSES, SYSTEMS, AND METHODS FOR DETERMINING VEHICLE OPERATOR DISTRACTIONS AT PARTICULAR GEOGRAPHIC LOCATIONS (granted Dec 16, 2025; 2y 5m to grant)
Patent 12488569: UNPAIRED IMAGE-TO-IMAGE TRANSLATION USING A GENERATIVE ADVERSARIAL NETWORK (GAN) (granted Dec 02, 2025; 2y 5m to grant)
Patent 12475992: SYSTEM AND METHOD FOR NAVIGATING A TOMOSYNTHESIS STACK INCLUDING AUTOMATIC FOCUSING (granted Nov 18, 2025; 2y 5m to grant)
Patent 12469300: SYSTEMS, DEVICES, AND METHODS FOR VEHICLE CAMERA CALIBRATION (granted Nov 11, 2025; 2y 5m to grant)
Patent 12469190: X-RAY TOMOGRAPHIC RECONSTRUCTION METHOD AND ASSOCIATED DEVICE (granted Nov 11, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 60%
With Interview: 95% (+35.1%)
Median Time to Grant: 3y 2m
PTA Risk: High

Based on 331 resolved cases by this examiner. Grant probability derived from career allow rate.
