Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
In claims 1, 5, 16, 17, 21, and 27, the phrase “and/or” will be interpreted as “or” to follow the broadest reasonable interpretation of the claims.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-28 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “substantially equivalent” in claims 1, 3, 17, 18, and 27 is a relative term which renders the claim indefinite. The term “substantially equivalent” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The aspect of the claim in which a digitally stained image is substantially equivalent to its corresponding real stained tissue is indefinite as it is not clear how equivalent the image must be and by what standard of measure the equivalency is determined.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 14, 17-19, 24, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Combalia, Marc et al., “Digitally Stained Confocal Microscopy through Deep Learning,” International Conference on Medical Imaging with Deep Learning (2018) (hereinafter referred to as Combalia), in view of Richards-Kortum et al. (US 6,187,289 B1) (hereinafter referred to as Richards-Kortum).
Regarding claim 1, Combalia teaches a method of using reflectance confocal microscopy (RCM) images of unstained tissue to generate digitally histological-stained microscopic images of tissue, comprising: providing a first trained, deep neural network that is executed by image processing software, wherein the first trained, deep neural network receives as input(s) a plurality of RCM images of tissue and outputs a digitally stained image that is substantially equivalent to an image of actual stained tissue; providing a second trained, deep neural network that is executed by image processing software, wherein the second trained, deep neural network receives as input(s) a plurality of RCM images of tissue and outputs digitally histological-stained images that are substantially equivalent to the images achieved by actual histological staining of tissue (see Combalia Abstract: “In this paper we propose a combination of deep learning and computer vision techniques to digitally stain confocal microscopy images into H&E-like slides” and “We… stain them using a Cycle Consistency Generative Adversarial Network”); obtaining a plurality of RCM images of the tissue (see Combalia sections 2.3.1, Confocal Microscopy, and 2.3.2, H&E Histology, which indicate how the plurality of RCM images were obtained); inputting the plurality of RCM images of the tissue to the first trained, deep neural network to obtain digitally stained images of the tissue; and inputting the plurality of RCM images to the second trained, deep neural network, wherein the trained, deep neural network outputs the digitally histological-stained microscopic images of the tissue (see Combalia Figure 1, which shows an unstained RCM image being input to a Generative Adversarial Network to output a digitally histological-stained RCM image; see also Combalia section 2, Materials and Methods: “the Generative Adversarial Network used to create the (H&E)-like digitally stained image”).
It would be predictable to clone the neural network disclosed in Combalia and adjust its output by modifying its training data.
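For illustration of the claimed input/output relationship only: the staining step maps a grayscale RCM image to an H&E-like color image. The sketch below is not drawn from Combalia's implementation; the stain colors and the simple linear mapping are hypothetical stand-ins for a trained CycleGAN generator, chosen only to mirror the input and output shapes at issue.

```python
import numpy as np

# Illustrative stand-in for a trained generator G: RCM grayscale -> H&E-like RGB.
# The actual system uses a Cycle Consistency GAN; this linear colorization only
# mimics the claimed input/output shapes (stain colors are hypothetical).
HEMATOXYLIN_RGB = np.array([0.29, 0.18, 0.59])  # purple-ish nuclear tone
EOSIN_RGB = np.array([0.93, 0.55, 0.69])        # pink cytoplasmic tone

def digitally_stain(rcm: np.ndarray) -> np.ndarray:
    """Map a grayscale RCM image (H, W) in [0, 1] to an RGB image (H, W, 3)."""
    rcm = np.clip(rcm, 0.0, 1.0)
    # Bright reflectance regions render eosin-like, dark regions hematoxylin-like.
    stained = rcm[..., None] * EOSIN_RGB + (1.0 - rcm[..., None]) * HEMATOXYLIN_RGB
    return np.clip(stained, 0.0, 1.0)

rcm_image = np.random.default_rng(0).random((64, 64))
stained = digitally_stain(rcm_image)
```

The output is a three-channel image the same height and width as the RCM input, which is the shape relationship the claim recites.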
Combalia does not disclose in vivo methods or obtaining acetic acid-stained images of the tissue.
However, Richards-Kortum discloses in vivo RCM imaging as well as obtaining acetic acid-stained images of the tissue (see Richards-Kortum claims 1, 4, and 9: “A method of using acetic acid as a contrast agent for confocal imaging of cells”, “wherein the cells are in vivo”, and “using a reflectance confocal imaging system”.)
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing in vivo imaging system of Richards-Kortum to the existing deep neural network of Combalia because it is predictable that, having knowledge of Richards-Kortum’s in vivo imaging system and the importance of acetic acid staining, one could clone the neural network to then implement the known acetic acid staining technique in a digital format by modifying the training data of the neural network. Thus, it would allow the medical benefits of RCM digital staining to apply to medical contexts where normal acetic acid staining is crucial. (Richards-Kortum column 1 lines 60-67 and column 2 lines 1-9 explain the importance of in vivo reflectance confocal microscopy imaging, and column 2 lines 24-33 explain the importance of acetic acid staining in a medical context. Richards-Kortum teaches that acetic acid staining is vital in a medical context. This would make it obvious to apply the principle of acetic acid staining to a digital staining neural network to evoke the same medical benefits.)
Claims 17 and 27 are rejected under the same analysis as claim 1 above.
Regarding claim 2, Combalia discloses wherein the tissue comprises one of: skin tissue, cervical tissue, mucosal tissue, epithelial tissue (see Combalia 2.3.2. on pg. 125 “Our H&E Histology dataset consists of 29 skin tissue samples”).
Claim 19 is rejected under the same analysis as claim 2 above.
Regarding claim 3, Combalia discloses wherein the digitally histological-stained image is substantially equivalent to an image of the same tissue that is chemically/histologically stained with one of the following histology stains: Hematoxylin and Eosin (H&E) stain, hematoxylin, eosin, Jones silver stain, Masson's Trichrome stain, Periodic acid-Schiff (PAS) stains, Congo Red stain, Alcian Blue stain, Blue Iron, Silver nitrate, trichrome stains, Ziehl Neelsen, Grocott's Methenamine Silver (GMS) stains, Gram Stains, acidic stains, basic stains, Silver stains, Nissl, Weigert's stains, Golgi stain, Luxol fast blue stain, Toluidine Blue, Genta, Mallory's Trichrome stain, Gomori Trichrome, van Gieson, Giemsa, Sudan Black, Perls' Prussian, Best's Carmine, Acridine Orange, immunofluorescent stains, immunohistochemical stains, Kinyoun's-cold stain, Albert's staining, Flagellar staining, Endospore staining, Nigrosin, or India Ink stain (see Combalia pg. 124 “we use Cycle Consistency Generative Adversarial Networks … to transfer the H&E stain appearance to the CM images”).
Claim 18 is rejected under the same analysis as claim 3 above.
Regarding claim 14, Combalia discloses wherein the first and second trained, deep neural networks are trained using a Generative Adversarial Network (GAN) model (see Combalia Figure 1 and pg. 124 “we use Cycle Consistency Generative Adversarial Networks … to transfer the H&E stain appearance to the CM images”)
Claim 24 is rejected under the same analysis as claim 14 above.
Claims 4, 5, 11-13, 16, 20, 21, 23, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Combalia in view of Richards-Kortum as applied to claims 1-3, 14, 17-19, 24, and 27 above, and further in view of Jackson et al. (US 12,249,063 B1) (hereinafter referred to as Jackson).
Regarding claim 4, Combalia and Richards-Kortum fail to disclose wherein the first trained, deep neural network is trained with matched acetic acid-stained images or image patches serving as ground truth images and their corresponding reflectance confocal microscopy (RCM) images or image patches of unstained tissue samples serving as network input.
However, Jackson teaches the deep neural network is trained with matched acetic acid-stained images or image patches serving as ground truth images and their corresponding reflectance confocal microscopy (RCM) images or image patches of unstained tissue samples serving as network input (see Jackson col. 6 line 65 “For the training set 132, the slide data 134 is initially produced via H & E staining. … The data in these slide pairs can then be aligned/registered by the processor as described below for comparison.”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing neural network training method using image patches and RCM images of Jackson to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would give the neural network a training method, which most neural networks require.
Claim 20 is rejected under the same analysis as claim 4 above.
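For illustration of the training-data arrangement recited in claim 4 (matched ground-truth patches paired index-for-index with corresponding RCM input patches): the sketch below assumes the image pair is already registered; the patch size and non-overlapping tiling scheme are illustrative assumptions, not details taken from Jackson.

```python
import numpy as np

def make_training_pairs(rcm, stained_gt, patch=32):
    """Tile a registered (RCM, stained) image pair into matched patch pairs.

    rcm:        grayscale RCM image (H, W), the network-input side.
    stained_gt: co-registered stained image (H, W, 3), the ground-truth side.
    Returns two arrays whose i-th entries are a matched input/target pair.
    """
    h, w = rcm.shape
    inputs, targets = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            inputs.append(rcm[y:y + patch, x:x + patch])
            targets.append(stained_gt[y:y + patch, x:x + patch])
    return np.stack(inputs), np.stack(targets)

rng = np.random.default_rng(1)
rcm = rng.random((128, 128))        # stand-in RCM image
gt = rng.random((128, 128, 3))      # stand-in registered stained image
x_train, y_train = make_training_pairs(rcm, gt)
```

Because both patch lists are cut from the same grid positions, each input patch remains spatially matched to its ground-truth patch, which is the pairing the claim limitation describes.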
Regarding claim 5, Combalia and Richards-Kortum fail to disclose wherein the second trained, deep neural network is trained with matched chemically/histologically stained images and/or pseudo-stained images serving as ground truth images and acetic acid-stained images or image patches and/or their corresponding reflectance confocal microscopy (RCM) images or image patches of unstained tissue samples, serving as network input.
However, Jackson discloses wherein the trained, deep neural network is trained with matched chemically/histologically stained images and/or pseudo-stained images serving as ground truth images and acetic acid-stained images or image patches and/or their corresponding reflectance confocal microscopy (RCM) images or image patches of unstained tissue samples, serving as network input (interpreted as: matched histologically stained images and their corresponding reflectance confocal microscopy (RCM) images; see Jackson col. 6 line 65 “For the training set 132, the slide data 134 is initially produced via H & E staining. … The data in these slide pairs can then be aligned/registered by the processor as described below for comparison.”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing neural network training method using image patches and RCM images of Jackson to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would give the neural network a training method, which most neural networks require.
Claim 21 is rejected under the same analysis as claim 5 above.
Regarding claim 11, Combalia and Richards-Kortum do not disclose wherein the matched ground truth images comprise at least some images that include melanocytes.
However, Jackson discloses wherein the matched ground truth images comprise at least some images that include melanocytes (see Jackson col. 4 line 35).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing melanocyte images of Jackson to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would allow for more detail and precision of the images as well as increased utility for image analysis, as melanocytes are essential for research of various skin conditions.
Regarding claim 12, Combalia discloses a series of trained neural networks configured to register pairs of images or image patches for training of the first trained, deep neural network (see Combalia Figures 8 and 9; Figure 8 caption: “Results from the CycleGAN trained with RCM noisy images. Top row represents the input images of the CycleGAN, which have been digitally stained”, indicating that the CycleGAN was trained on images that had already been digitally stained by a previous network).
Combalia and Richards-Kortum do not disclose the matched acetic acid-stained images or image patches and their corresponding reflectance confocal microscopy (RCM) images or image patches of unstained tissue samples.
However, Jackson discloses the matched acetic acid-stained images or image patches and their corresponding reflectance confocal microscopy (RCM) images or image patches of unstained tissue samples (see Jackson col. 6 line 65 “For the training set 132, the slide data 134 is initially produced via H & E staining. … The data in these slide pairs can then be aligned/registered by the processor as described below for comparison.”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing image patches and RCM images of Jackson to the existing series of networks of Combalia and Richards-Kortum because it is predictable that doing so would provide a method for the images to be more heavily trained and cause the system to be more accurate.
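For illustration of the aligned/registered slide pairs Jackson describes: alignment can be reduced to estimating the translation that best superimposes one image on the other. The brute-force integer-shift search below is a deliberately simple stand-in, not Jackson's algorithm; the search window is an illustrative assumption, and real registration pipelines use sub-pixel or deformable methods.

```python
import numpy as np

def estimate_shift(fixed, moving, max_shift=5):
    """Brute-force integer translation that best aligns `moving` to `fixed`.

    Scores every candidate (dy, dx) shift by mean squared error and returns
    the best one. Circular np.roll keeps the sketch self-contained.
    """
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            err = np.mean((fixed - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(2)
fixed = rng.random((40, 40))
moving = np.roll(fixed, (-3, 2), axis=(0, 1))  # synthetic known misalignment
dy, dx = estimate_shift(fixed, moving)
```

Applying the recovered (dy, dx) shift to the moving image superimposes it on the fixed image, after which matched patches can be cut as in the training-pair step.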
Regarding claim 13, Combalia and Richards-Kortum do not disclose wherein the first and second trained, deep neural networks comprise convolutional neural networks.
However, Jackson discloses wherein the first and second trained, deep neural networks comprise convolutional neural networks (see Jackson col. 4 lines 31-33 “A convolutional neural network (CNN) is generated from the image data at training time.”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing convolutional neural networks of Jackson to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would allow for more versatility and variety in how the neural network operates, establishing more usefulness.
Claim 23 is rejected under the same analysis as claim 13 above.
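As context for the convolutional-neural-network limitation of claims 13 and 23, the elementary operation of a CNN layer is a learned 2D convolution. The minimal valid-mode convolution below is illustrative only; the edge-detecting kernel and test image are assumptions, not content from Jackson.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation), the basic CNN building block."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output pixel is the windowed dot product with the kernel.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # simple vertical-edge filter
image = np.zeros((8, 8)); image[:, 4:] = 1.0    # step edge at column 4
response = conv2d(image, edge_kernel)
```

In a trained CNN the kernel values are learned from the matched training pairs rather than hand-chosen as here.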
Regarding claim 16, Combalia and Richards-Kortum do not disclose wherein the digitally histological-stained microscopic images and/or the RCM images of the tissue are displayed on a display.
However, Jackson discloses wherein the digitally histological-stained microscopic images and/or the RCM images of the tissue are displayed on a display (see Jackson col. 7 lines 31-35 “The computing device 170 can handle or manage system settings, user inputs and result outputs. The computing device 170 herein includes an exemplary graphical user interface (GUI) having a display (e.g. a touchscreen) 172, mouse 174 and keyboard 176.”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing display of Jackson to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would give the user an interface to input data and view the results, therefore increasing usefulness and practicality.
Regarding claim 25, Combalia and Richards-Kortum do not disclose a display for displaying the digitally histological-stained microscopic images of unstained tissue.
However, Jackson discloses a display for displaying the digitally histological-stained microscopic images of unstained tissue.
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing display of Jackson to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would give the user an interface to input data and view the results, therefore increasing usefulness and practicality.
Claims 6, 7, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Combalia and Richards-Kortum as applied to claims 1-3, 14, 17-19, 24, and 27 above, and further in view of Tearney et al. (US 7,843,572 B2) (hereinafter referred to as Tearney).
Regarding claim 6, Combalia and Richards-Kortum do not disclose wherein the plurality of in vivo RCM images of the tissue comprise a plurality of RCM images obtained at different depths within the tissue.
However, Tearney discloses wherein the plurality of in vivo RCM images of the tissue comprise a plurality of RCM images obtained at different depths within the tissue (see Tearney col. 10 lines 10-13 “images of the phantom sample were acquired at five discrete focal depths over a range of 120 micrometers”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing method of gathering RCM images at five depths of Tearney to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would allow multiple tissue layers of a patient to be scanned and analyzed, thereby enabling screening for conditions, such as deep-layered skin diseases, that may not present directly on the skin's surface.
Claims 7 and 22 are rejected according to the same analysis as claim 6 above.
Claims 8 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Combalia and Richards-Kortum as applied to claims 1-3, 14, 17-19, 24, and 27 above, and further in view of Gareau, Daniel S., “Feasibility of digitally stained multimodal confocal mosaics to simulate histopathology,” Journal of Biomedical Optics, vol. 14, no. 3 (2009): 034050 (hereinafter referred to as Gareau).
Regarding claim 8, Combalia and Richards-Kortum do not disclose wherein the second trained, deep neural network or image processing software outputs a mosaic of a plurality of digitally histological-stained microscopic images of the tissue that represent multiple fields of view (FOVs).
However, Gareau discloses wherein the second trained, deep neural network or image processing software outputs a mosaic of a plurality of digitally histological-stained microscopic images of the tissue that represent multiple fields of view (FOVs) (see Gareau Figure 1 and pg. 3 “color correlation was achieved by digital staining of the multimodal confocal mosaics”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing mosaic of Gareau to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would provide a larger field of view for the images, therefore increasing efficiency by allowing fewer images to be processed for the same quantity of data.
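For illustration of the multi-FOV mosaic output addressed in claim 8: same-size stained tiles can be abutted into a grid to form one larger image. Real confocal mosaicking (as in Gareau) involves overlapping FOVs and blending, so the non-overlapping grid below is an illustrative simplification with assumed tile and grid sizes.

```python
import numpy as np

def build_mosaic(tiles, grid_rows, grid_cols):
    """Assemble non-overlapping FOV tiles (all the same shape) into one mosaic.

    Tiles are placed row-major: tiles[0] at top-left, filling each grid row
    left to right before moving down.
    """
    rows = [np.concatenate(tiles[r * grid_cols:(r + 1) * grid_cols], axis=1)
            for r in range(grid_rows)]
    return np.concatenate(rows, axis=0)

rng = np.random.default_rng(3)
tiles = [rng.random((16, 16, 3)) for _ in range(6)]  # six stand-in stained FOVs
mosaic = build_mosaic(tiles, grid_rows=2, grid_cols=3)
```

The mosaic's extent grows with the number of FOVs, which is the larger-field benefit the rationale above relies on.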
Regarding claim 28, Combalia and Richards-Kortum do not disclose wherein the acquired raw images comprise multiphoton microscopy images, fluorescence confocal microscopy images, fluorescence lifetime microscopy (FLIM) images, fluorescence microscopy images, hyperspectral microscopy images, Raman microscopy images, structured illumination microscopy images, or polarization microscopy images.
However, Gareau discloses wherein the acquired raw images comprise multiphoton microscopy images, fluorescence confocal microscopy images, fluorescence lifetime microscopy (FLIM) images, fluorescence microscopy images, hyperspectral microscopy images, Raman microscopy images, structured illumination microscopy images, or polarization microscopy images (see Gareau pg. 2 under Digital Staining: “The grayscale fluorescence-only and reflectance-only mosaics were digitally stained with color and combined”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing fluorescence microscopy of Gareau to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would increase robustness of the system and increase cost-effectiveness and ease of use, as fluorescence microscopy is cheaper and simpler to set up as compared to reflectance confocal microscopy.
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Combalia and Richards-Kortum as applied to claims 1-3, 14, 17-19, 24, and 27 above, and further in view of Zeng et al. (US 12,310,698 B2) (hereinafter referred to as Zeng).
Regarding claim 9, Combalia and Richards-Kortum fail to disclose wherein the second trained, deep neural network or image processing software outputs a three-dimensional volumetric image of the tissue that is digitally histological-stained.
However, Zeng discloses wherein the second trained, deep neural network or image processing software outputs a three-dimensional volumetric image of the tissue that is digitally histological-stained (see Zeng col. 4 lines 23-24 “The method processes the set of xz plane images to provide volumetric image data for the tissue”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing volumetric image generation method of Zeng to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would increase robustness of the neural network by allowing the image data to be more visually appealing and easier to understand as a three-dimensional image takes into account depth information.
Regarding claim 10, Combalia and Richards-Kortum fail to disclose wherein the second trained, deep neural network or image processing software outputs an image of tissue in a vertical plane.
However, Zeng discloses wherein the second trained, deep neural network or image processing software outputs an image of tissue in a vertical plane (see Zeng col. 9 lines 63-65 “the data may be viewed in any plane. For example, the data may be viewed in xz planes” and col. 9 lines 57-58 “An xz plane is an example of a vertical plane”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing vertical plane image generation method of Zeng to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would increase robustness of the neural network by allowing the image data to be received in different planes and allow more details of the sample to be understood.
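For illustration of the volumetric and vertical-plane outputs addressed in claims 9 and 10: en-face (xy) images acquired at successive depths can be stacked into a volume, and a vertical (xz) plane is then a fixed-y slice of that volume. The array shapes and slice index below are illustrative assumptions, not Zeng's implementation.

```python
import numpy as np

# Stack en-face (xy) images acquired at successive depths into a (z, y, x)
# volume, then slice out a vertical (xz) plane at a fixed y row.
rng = np.random.default_rng(4)
depth_slices = [rng.random((32, 32)) for _ in range(10)]  # 10 stand-in depths
volume = np.stack(depth_slices, axis=0)                   # shape (z, y, x)

def vertical_plane(vol, y_index):
    """Return the xz plane at a fixed y row: shape (z, x)."""
    return vol[:, y_index, :]

xz = vertical_plane(volume, y_index=16)
```

Because the volume retains depth as an axis, any xz slice reconstructs a cross-section resembling a conventional vertically sectioned histology view.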
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Combalia and Richards-Kortum as applied to claims 1-3, 14, 17-19, 24, and 27 above, and further in view of Lau et al. (US 11,995,810 B2) (hereinafter referred to as Lau).
Regarding claim 15, Combalia and Richards-Kortum do not disclose wherein the second trained, deep neural network outputs digitally histological-stained microscopic images of the tissue in real-time.
However, Lau discloses wherein the second trained, deep neural network outputs digitally histological-stained microscopic images of the tissue in real-time (see Lau “the method for generating a stained image … may result in faster tissue processing time that is particularly advantageous in surgical procedures where fast diagnoses are required for treatment”, indicating that the images will be processed fast enough to be used during surgical treatment, which implies near-instantaneous output).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing real-time image generation method of Lau to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would allow for the neural network to be used instantaneously which, as mentioned in the reference, would be crucial for uses mid-surgical procedure as doctors can get instant diagnoses on the patient.
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Combalia and Richards-Kortum as applied to claims 1-3, 14, 17-19, 24, and 27 above, and further in view of Tearney and Bouma et al. (US 2011/0137178 A1) (hereinafter referred to as Bouma).
Regarding claim 26, Combalia and Richards-Kortum do not disclose wherein the RCM images are obtained from a bench-top or portable RCM device.
However, Bouma discloses wherein the RCM images are obtained from a bench-top or portable RCM device (see Bouma par. 0023 “This exemplary need has led to a development of an exemplary embodiment of an apparatus and a method of RCM according to the present disclosure… that can be configured to rapidly obtain higher-resolution CVM images via, e.g., a small-diameter probe”).
It would have been obvious for one of ordinary skill in the art before the effective filing date to add the existing portable RCM device of Bouma to the existing RCM deep neural network of Combalia and Richards-Kortum because it is predictable that doing so would allow the neural network to operate on RCM images that were from harder to reach areas of the body, increasing the scope of treatment areas to sensitive internal regions of the body like the esophagus, stomach, or other internal organs.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIO A. RODIN whose telephone number is (571)272-8003. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer can be reached at 571-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARIO ANTHONY RODIN/Examiner, Art Unit 2675 /ANDREW M MOYER/Supervisory Patent Examiner, Art Unit 2675