DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Remarks/Arguments
Applicant's Response to the Final Rejection is acknowledged; however, the arguments are moot in view of the new ground(s) of rejection necessitated by the amendments. The previous rejection has therefore been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Gelbman et al. (US 2018/0353072).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 7-8, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Gelbman et al. (US 2018/0353072) in view of Ostyakov et al. (US 2021/0383242).
Regarding claim 1, Gelbman et al. discloses one or more processing units, comprising circuitry to:
cause a first portion of one or more neural networks (the images may be de-identified, e.g., by using one or more neural networks, before transmission to system 100 and/or at system 100; paragraph [0035]) to receive genetic information (engine 115 may receive gene variants 101; gene variants 101 may comprise genetic variants that are representations of gene sequences, e.g., stored as text or another format that captures the sequence of cytosine (C), guanine (G), adenine (A), or thymine (T) that form different genes; paragraph [0032]) and generate one or more segmentation masks and one or more images of one or more cells exhibiting one or more features associated with the genetic information (feature extraction 109 may output features (e.g., vectors) to predictive engine 111; predictive engine 111 may comprise a machine-learned model that accepts one or more features from one or more external soft tissue images as input and outputs one or more possible pathogens (pathogens 113) based on the one or more features; paragraphs [0040]-[0043]).
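Solely for orientation, the mapped claim elements follow the shape of a conditional generator that receives an encoded gene sequence and emits an image together with a segmentation mask. The following is a minimal, hypothetical sketch (the PyTorch framing, class name, and dimensions are assumptions for illustration, not Gelbman's disclosed implementation):

    import torch
    import torch.nn as nn

    class GeneConditionedGenerator(nn.Module):
        # Hypothetical illustration: maps an encoded gene sequence to an
        # image and a segmentation mask, mirroring the claimed data flow.
        def __init__(self, gene_dim=256, img_channels=3):
            super().__init__()
            self.gene_encoder = nn.Linear(gene_dim, 512)
            self.backbone = nn.Sequential(nn.Linear(512, 8 * 64 * 64), nn.ReLU())
            self.image_head = nn.Conv2d(8, img_channels, kernel_size=1)
            self.mask_head = nn.Conv2d(8, 1, kernel_size=1)

        def forward(self, gene_code):
            h = self.backbone(self.gene_encoder(gene_code))
            h = h.view(-1, 8, 64, 64)
            # One output head per claimed artifact: an image and a mask.
            return torch.tanh(self.image_head(h)), torch.sigmoid(self.mask_head(h))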
While Gelbman et al. teaches the limitations above, Gelbman et al. fails to teach "use a second portion of the one or more neural networks to update the one or more neural networks based, at least in part, on a loss function to compare the one or more images of the one or more cells, the one or more segmentation masks, and the genetic information with each other."
Ostyakov et al. teaches performing automated image processing comprising: a first neural network for forming a coarse image z by segmenting an object O from an original image x containing the object O and a background Bx with a segmentation mask m and, using the mask, cutting the segmented object O out of image x and pasting it onto an image y containing only a background By; a second neural network for constructing an enhanced version of an image (Image I) with the pasted segmented object O by enhancing the coarse image z based on the original images x and y and the mask m; and a third neural network for restoring the background-only image (Image II) without the removed segmented object O by inpainting the image obtained by zeroing out pixels of image x using the mask m; wherein the first, second, and third neural networks are combined into a common neural network architecture for sequentially performing segmentation, enhancement, and inpainting and for simultaneous learning, and wherein the common architecture accepts images and outputs processed images of the same dimensions (abstract).
Ostyakov et al. teaches the claimed loss function by using two discriminators: a first, background discriminator that attempts to distinguish between a reference real background image and the inpainted background image, and a second, object discriminator that attempts to distinguish between a reference real object O image and the enhanced object O image. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a loss function employing the discriminators as taught by Ostyakov et al. in order to achieve better results on unsupervised object segmentation, inpainting, and image blending (paragraph [0008]). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
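For illustration only, a two-discriminator adversarial loss of the kind Ostyakov et al. describes can be sketched as follows (a minimal PyTorch-style sketch; the function names and the binary cross-entropy formulation are assumptions, not the reference's actual code):

    import torch
    import torch.nn as nn

    bce = nn.BCEWithLogitsLoss()

    def discriminator_loss(disc, real, fake):
        # Each discriminator is trained to score real samples as 1
        # and generated samples as 0.
        real_logit = disc(real)
        fake_logit = disc(fake.detach())
        return (bce(real_logit, torch.ones_like(real_logit))
                + bce(fake_logit, torch.zeros_like(fake_logit)))

    # Combined objective over the two discriminators described in the reference:
    #   loss_D = discriminator_loss(d_background, real_background, inpainted_background)
    #          + discriminator_loss(d_object, real_object, enhanced_object)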
Regarding claim 2, Gelbman discloses wherein the one or more neural networks: accept, as input, background image data and genetic expression data, the genetic expression data associated with visual features of the one or more cells (images 107b may comprise visual representations of one or more of users 105, or portions thereof, such as faces or other external soft tissues; as depicted in FIG. 1, images 107b may undergo feature extraction 109; as used in the context of images, the term "feature" refers to any property of images 107b (such as points, edges, gradients, or the like) or to any property of a face or other tissue representable by an image (such as a phenotypic feature); more broadly, "feature" may refer to any numerical representation of characteristics of a set of data, such as characteristics of text (e.g., based on words or phrases of the text), characteristics of genes (e.g., the presence of one or more gene variants, locations of particular genes), characteristics of images (as explained above), or the like; paragraph [0039]).
Regarding claims 7-8, claims 7-8 are analogous and correspond to claims 1-2. See the rejection of claims 1-2 for further explanation.
Regarding claims 13-14, claims 13-14 are analogous and correspond to claims 1-2. See the rejection of claims 1-2 for further explanation.
Claim Rejections - 35 USC § 103
Claims 3-6, 9-12, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Gelbman et al. (US 2018/0353072) in view of Ostyakov et al. (US 2021/0383242) and further in view of Mahmood et al. ("Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images").
Regarding claim 3, Gelbman discloses all the previous claim limitations. However, Gelbman et al. does not disclose wherein the one or more ALUs are further configured to: infer the one or more images, and the one or more neural networks are a multi-conditional generative adversarial network (GAN) trained using medical image data and genetic expression data.
Mahmood discloses wherein the one or more ALUs are further configured to: infer the one or more images using a multi-conditional generative adversarial network (GAN) trained using medical image data and genetic expression data (Mahmood discloses that "[t]he cycle GAN framework learns a mapping between randomly generated polygon masks and unpaired pathology images" where "[t]he size, location and shape of the nuclei can vary significantly based on patients, clinical condition, organs, cell-cycle phase and aberrant phenotypes"; Fig. 1 and Section III-D).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the cycle GAN of Mahmood to Gelbman's machine learning module. The suggestion/motivation would have been to provide "advantages in that a reproducibility of the input image is improved and boundary artifacts are reduced" (Kang, ¶ [0007]).
Regarding claim 4, the combination applied in claim 3 discloses wherein the one or more neural networks are trained in part by encoding the medical image data and the genetic expression data and fusing the encoded data to generate a synthetic image and a segmentation mask, the synthetic image including a representation of a group of cells blended with a background portion of the medical image data (Mahmood discloses normalizing pathology images at Fig. 1 and Section III-B).
Regarding claim 5, the combination applied in claim 3 discloses wherein the one or more neural networks are further trained by passing the synthetic image, the segmentation mask, and a gene code for the genetic expression data to a discriminator for determining a set of loss values, wherein one or more network parameters of the GAN were updated using the set of loss values (Mahmood discloses that "[t]he cycle GAN framework learns a mapping between randomly generated polygon masks and unpaired pathology images. Since cycle GAN is based on consistency loss, the setup also learns a reverse mapping from pathology images to corresponding segmentation or polygon masks . . . To train this framework for synthetic data generation with unpaired data, the cycle GAN objective consists of an adversarial loss term LGAN and a cycle consistency loss term Lcyc. The adversarial loss is used to match the distribution of translated samples to that of the target distribution and can be expressed for both mapping functions" at Fig. 1 and Sections III-D and III-E).
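For reference, the cycle GAN objective quoted above follows the standard formulation (notation assumed from the general cycle GAN literature rather than copied from Mahmood: G and F are the two mapping functions, D_X and D_Y the corresponding discriminators, and lambda a weighting hyperparameter):

    \mathcal{L}_{GAN}(G, D_Y, X, Y) = \mathbb{E}_{y \sim p_{data}(y)}[\log D_Y(y)] + \mathbb{E}_{x \sim p_{data}(x)}[\log(1 - D_Y(G(x)))]
    \mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x}[\lVert F(G(x)) - x \rVert_1] + \mathbb{E}_{y}[\lVert G(F(y)) - y \rVert_1]
    \mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{GAN}(G, D_Y, X, Y) + \mathcal{L}_{GAN}(F, D_X, Y, X) + \lambda\,\mathcal{L}_{cyc}(G, F)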
Regarding claim 6, the combination applied in claim 3 discloses wherein the one or more neural networks are trained utilizing a learned genomic map between visual features of the one or more cells and the genetic expression data (Mahmood discloses three ground-truth-based evaluation methods, namely Average Pompeiu-Hausdorff distance (aHD), F1 Score, and Aggregated Jaccard Index (AJI), all of which utilize the ground truth corresponding to the segmentation mask(s); Section IV-B).
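For clarity, the Aggregated Jaccard Index cited above is conventionally defined as follows (standard definition from the nuclei-segmentation literature, not reproduced from Mahmood: G_i are ground-truth instances, P_j predicted instances, and U the set of predicted instances matched to no ground-truth instance):

    AJI = \frac{\sum_i |G_i \cap P_{j^*(i)}|}{\sum_i |G_i \cup P_{j^*(i)}| + \sum_{k \in U} |P_k|}, \qquad j^*(i) = \arg\max_j \frac{|G_i \cap P_j|}{|G_i \cup P_j|}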
Regarding claims 9-12, claims 9-12 are analogous and correspond to claims 3-6, respectively. See rejection of claims 3-6 for further explanation.
Regarding claims 15-18, claims 15-18 are analogous and correspond to claims 3-6, respectively. See rejection of claims 3-6 for further explanation.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR, whose telephone number is (571) 270-1041. The examiner can normally be reached Monday-Friday from 8:00 a.m. to 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mahmoud, can be reached at 571-272-2976. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NANCY BITAR
Examiner
Art Unit 2664
/NANCY BITAR/Primary Examiner, Art Unit 2664