Prosecution Insights
Last updated: April 19, 2026
Application No. 16/443,549

CELL IMAGE SYNTHESIS USING ONE OR MORE NEURAL NETWORKS

Status: Final Rejection (§103)
Filed: Jun 17, 2019
Examiner: BITAR, NANCY
Art Unit: 2664
Tech Center: 2600 (Communications)
Assignee: Nvidia Corporation
OA Round: 6 (Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 2y 11m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 83% (above average; 786 granted / 946 resolved; +21.1% vs TC avg)
Interview Lift: +8.2% (moderate) for resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 32 applications currently pending
Career History: 978 total applications across all art units
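The headline figures above can be cross-checked with a few lines of arithmetic; this sketch assumes the dashboard rounds to the nearest whole percent and that the interview lift is an absolute (not relative) increase:

```python
# Quick consistency check of the dashboard figures (rounding assumed).
granted, resolved = 786, 946
allow_rate = granted / resolved            # career allow rate
assert round(allow_rate * 100) == 83       # matches the 83% shown

interview_lift = 0.082                     # +8.2% absolute lift with interview
assert round((allow_rate + interview_lift) * 100) == 91  # matches the 91% shown
```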

Statute-Specific Performance

§101: 13.3% (-26.7% vs TC avg)
§103: 62.1% (+22.1% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)
Tech Center averages are estimates; based on career data from 946 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Remarks/Arguments

Applicant's Response to the Final Rejection is acknowledged but is moot in view of the new ground(s) of rejection necessitated by the amendments. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Gelbman et al. (US 2018/0353072).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 7-8, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Gelbman et al. (US 2018/0353072) in view of Ostyakov et al. (US 2021/0383242).
Regarding claim 1, Gelbman et al. discloses one or more processing units, comprising circuitry to: cause a first portion of one or more neural networks (the images may be de-identified, e.g., by using one or more neural networks before transmission to system 100 and/or at system 100; paragraph [0035]) to receive genetic information (engine 115 may receive gene variants 101; gene variants 101 may comprise genetic variants that are representations of gene sequences, e.g., stored as text or another format that captures the sequence of cytosine (C), guanine (G), adenine (A), or thymine (T) that form different genes; paragraph [0032]) and generate one or more segmentation masks and one or more images of one or more cells exhibiting one or more features associated with the genetic information (feature extraction 109 may output features (e.g., vectors) to predictive engine 111; predictive engine 111 may comprise a machine learned model that accepts one or more features from one or more external soft tissue images as input and outputs one or more possible pathogens (pathogens 113) based on the one or more features; paragraphs [0040]-[0043]).

While Gelbman et al. teaches the limitations above, Gelbman et al. fails to teach "use a second portion of the one or more neural networks to update the one or more neural networks based, at least in part, on a loss function to compare the one or more images of the one or more cells, the one or more segmentation masks, and the genetic information with each other."

Ostyakov et al. teaches performing automated image processing, comprising: a first neural network for forming a coarse image z by segmenting an object O from an original image x containing the object O and background Bx by a segmentation mask m and, using the mask, cutting the segmented object O from image x and pasting it onto an image y containing only background By; a second neural network for constructing an enhanced version of an image (Image I) with the pasted segmented object O by enhancing the coarse image z based on the original images x and y and the mask m; and a third neural network for restoring the background-only image (Image II) without the removed segmented object O by inpainting the image obtained by zeroing out pixels of image x using the mask m; wherein the first, second, and third neural networks are combined into a common neural network architecture for sequentially performing segmentation, enhancing, and inpainting and for simultaneous learning, wherein the common architecture accepts images and outputs processed images of the same dimensions (abstract).

Ostyakov teaches the loss function by using a first discriminator that is a background discriminator, which attempts to distinguish between a reference real background image and the inpainted background image, and a second discriminator that is an object discriminator, which attempts to distinguish between a reference real object O image and the enhanced object O image. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to use the loss function employing these discriminators as taught by Ostyakov et al. in order to achieve better results on unsupervised object segmentation, inpainting, and image blending (paragraph [0008]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
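The two-discriminator loss structure the examiner relies on can be sketched in a few lines. This is an illustrative reconstruction, not code from either cited reference; the scalar sigmoid-score interface and the binary cross-entropy formulation are assumptions:

```python
import math

# Illustrative two-discriminator adversarial loss: one discriminator scores
# background realism (inpainted vs. real background), the other scores object
# realism (enhanced vs. real object); the generator is penalized by both.

def bce(prediction: float, target: float, eps: float = 1e-7) -> float:
    """Binary cross-entropy for a single sigmoid output in (0, 1)."""
    p = min(max(prediction, eps), 1.0 - eps)
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

def discriminator_loss(real_score: float, fake_score: float) -> float:
    """Each discriminator wants real samples scored 1 and generated samples 0."""
    return bce(real_score, 1.0) + bce(fake_score, 0.0)

def generator_loss(fake_bg_score: float, fake_obj_score: float) -> float:
    """The generator wants both discriminators to score its outputs as real (1)."""
    return bce(fake_bg_score, 1.0) + bce(fake_obj_score, 1.0)

# A discriminator that separates real from fake well incurs lower loss
# than one that cannot tell them apart.
assert discriminator_loss(0.99, 0.01) < discriminator_loss(0.5, 0.5)
```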
Regarding claim 2, Gelbman discloses wherein the one or more neural networks accept, as input, background image data and genetic expression data, the genetic expression data associated with visual features of the one or more cells (images 107b may comprise visual representations of one or more of users 105 (or portions thereof, such as faces or other external soft tissues). As depicted in FIG. 1, images 107b may undergo feature extraction 109. As used in the context of images, the term "feature" refers to any property of images 107b (such as points, edges, gradients, or the like) or to any property of a face or other tissue representable by an image (such as a phenotypic feature). More broadly, "feature" may refer to any numerical representation of characteristics of a set of data, such as characteristics of text (e.g., based on words or phrases of the text), characteristics of genes (e.g., the presence of one or more gene variants, locations of particular genes), characteristics of images (as explained above), or the like; paragraph [0039]).

Regarding claims 7-8, claims 7-8 are analogous and correspond to claims 1-2. See the rejection of claims 1-2 for further explanation.

Regarding claims 13-14, claims 13-14 are analogous and correspond to claims 1-2. See the rejection of claims 1-2 for further explanation.

Claim Rejections - 35 USC § 103

Claims 3-6, 9-12, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Gelbman et al. (US 2018/0353072) in view of Ostyakov et al. (US 2021/0383242) and in further view of Mahmood et al. ("Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images").

Regarding claim 3, Gelbman discloses all the previous claim limitations. However, Gelbman et al. does not disclose wherein the one or more ALUs are further to be configured to: infer the one or more images, and the one or more neural networks are a multi-conditional generative adversarial network (GAN) trained using medical image data and genetic expression data. Mahmood discloses wherein the one or more ALUs are further to be configured to infer the one or more images using a multi-conditional generative adversarial network (GAN) trained using medical image data and genetic expression data (Mahmood discloses that "[t]he cycle GAN framework learns a mapping between randomly generated polygon masks and unpaired pathology images" when "[t]he size, location and shape of the nuclei can vary significantly based on patients, clinical condition, organs, cell-cycle phase and aberrant phenotypes"; Fig. 1 and chapter III-D). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the cycle GAN of Mahmood to Gelbman's machine learning module. The suggestion/motivation would have been to provide "advantages in that a reproducibility of the input image is improved and boundary artifacts are reduced" (Kang; ¶0007).
Regarding claim 4, the combination applied in claim 3 discloses wherein the one or more neural networks are trained in part by encoding the medical image data and the genetic expression data and fusing the encoded data to generate a synthetic image and a segmentation mask, the synthetic image including a representation of a group of cells blended with a background portion of the medical image data (Mahmood discloses normalizing pathology images; Fig. 1 and chapter III-B).

Regarding claim 5, the combination applied in claim 3 discloses wherein the one or more neural networks are further trained by passing the synthetic image, the segmentation mask, and a gene code for the genetic expression data to a discriminator for determining a set of loss values, wherein one or more network parameters of the GAN were updated using the set of loss values (Mahmood discloses that "[t]he cycle GAN framework learns a mapping between randomly generated polygon masks and unpaired pathology images. Since cycle GAN is based on consistency loss, the setup also learns a reverse mapping from pathology images to corresponding segmentation or polygon masks . . . To train this framework for synthetic data generation with unpaired data, the cycle GAN objective consists of an adversarial loss term LGAN and a cycle consistency loss term Lcyc. The adversarial loss is used to match the distribution of translated samples to that of the target distribution and can be expressed for both mapping functions"; Fig. 1 and chapters III-D and III-E).

Regarding claim 6, the combination applied in claim 3 discloses wherein the one or more neural networks are trained utilizing a learned genomic map between visual features of the one or more cells and the genetic expression data (Mahmood discloses three ground-truth-based evaluation methods, namely Average Pompeiu-Hausdorff distance (aHD), F1 Score, and Aggregated Jaccard Index (AJI); all of these methods utilize the ground truth corresponding to the segmentation mask(s); chapter IV-B).

Regarding claims 9-12, claims 9-12 are analogous and correspond to claims 3-6, respectively. See the rejection of claims 3-6 for further explanation.

Regarding claims 15-18, claims 15-18 are analogous and correspond to claims 3-6, respectively. See the rejection of claims 3-6 for further explanation.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR, whose telephone number is (571) 270-1041. The examiner can normally be reached Monday-Friday from 8:00 am to 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mahmoud, can be reached at 571-272-2976. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NANCY BITAR/
Primary Examiner, Art Unit 2664
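The cycle GAN objective quoted in the claim 3-5 rejections combines an adversarial term LGAN with a cycle-consistency term Lcyc. The sketch below is a hedged reconstruction of that structure, not Mahmood's code; the L1 reconstruction penalty and the weight lambda = 10 are assumptions drawn from common cycle GAN practice:

```python
# Illustrative cycle GAN objective: L = L_GAN + lambda * L_cyc, where L_cyc
# penalizes the round trip mask -> image -> mask (and image -> mask -> image)
# so the reverse mapping to segmentation masks is learned from unpaired data.

def l1(a, b):
    """Mean absolute error between two equal-length vectors."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_gan_objective(adv_loss: float, mask, reconstructed_mask,
                        image, reconstructed_image, lam: float = 10.0) -> float:
    """Total objective: adversarial term plus weighted two-way cycle consistency."""
    l_cyc = l1(mask, reconstructed_mask) + l1(image, reconstructed_image)
    return adv_loss + lam * l_cyc

# With perfect round-trip reconstruction the objective reduces to the
# adversarial term alone.
assert cycle_gan_objective(0.5, [0, 1], [0, 1], [0.2, 0.8], [0.2, 0.8]) == 0.5
```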

Prosecution Timeline

Jun 17, 2019
Application Filed
Jun 11, 2022
Non-Final Rejection — §103
Dec 15, 2022
Response Filed
Feb 18, 2023
Final Rejection — §103
Aug 24, 2023
Notice of Allowance
Mar 25, 2024
Request for Continued Examination
Apr 01, 2024
Response after Non-Final Action
Oct 17, 2024
Non-Final Rejection — §103
Apr 22, 2025
Response Filed
Apr 28, 2025
Final Rejection — §103
Jun 04, 2025
Interview Requested
Jul 02, 2025
Interview Requested
Jul 03, 2025
Applicant Interview (Telephonic)
Jul 03, 2025
Examiner Interview Summary
Aug 01, 2025
Request for Continued Examination
Aug 05, 2025
Response after Non-Final Action
Sep 05, 2025
Non-Final Rejection — §103
Nov 24, 2025
Interview Requested
Dec 19, 2025
Response Filed
Mar 05, 2026
Final Rejection — §103
Mar 27, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599437
PRE-PROCEDURE PLANNING, INTRA-PROCEDURE GUIDANCE FOR BIOPSY, AND ABLATION OF TUMORS WITH AND WITHOUT CONE-BEAM COMPUTED TOMOGRAPHY OR FLUOROSCOPIC IMAGING
2y 5m to grant Granted Apr 14, 2026
Patent 12597132
IMAGE PROCESSING METHOD AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12597240
METHOD AND SYSTEM FOR AUTOMATED CENTRAL VEIN SIGN ASSESSMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12597189
METHODS AND APPARATUS FOR SYNTHETIC COMPUTED TOMOGRAPHY IMAGE GENERATION
2y 5m to grant Granted Apr 07, 2026
Patent 12591982
MOTION DETECTION ASSOCIATED WITH A BODY PART
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 83%
With Interview: 91% (+8.2%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 946 resolved cases by this examiner. Grant probability derived from the career allow rate.
