DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 4-5 and 9 are objected to because of the following informalities:
The dependency of claims 4-5 on claim 1 is believed to be incorrect. The examiner believes claims 4-5 should depend on claim 2.
The dependency of claim 9 on claim 1 is believed to be incorrect. The examiner believes claim 9 should depend on claim 8. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wu et al. (US 2019/0188446).
Regarding claim 1, Wu discloses a computer-implemented method for predicting digital fluorescence images (fig. 2; para 0005 and 0030; a computer-implemented method for generating virtually stained images (i.e., digital fluorescence images) of unstained samples), the method comprising:
capturing a first digital image of a tissue sample (fig. 2, element 220; para 0026, 0028 and 0034; the imaging system 135 may be a microscope that may be used to acquire the images that are stored in the image sample data storage 140. An image of the unstained second tissue sample may be accessed. The image includes a plurality of spectral images of the unstained second tissue sample. For example, the image may be accessed from the image sample data storage 140 or the image sample data storage 141) by means of a microsurgical optical system (figs. 3(a)-3(b); para 0039; a microscope) with a first digital image capturing unit (figs. 3(a)-3(b), element 310; para 0040; a camera) with a first plurality of color channel information using white light (figs. 3(a)-3(b), element 342; para 0041; white light) and at least one optical filter (figs. 3(a)-3(b), elements 331 and 332; para 0039-0040; optical filters),
predicting a second digital image in the form of a digital fluorescence representation of the captured first digital image by means of a trained machine learning system comprising a trained learning model for predicting a corresponding digital fluorescence representation of an input image (fig. 2, elements 225-230; para 0034; the trained artificial neural network 130 may be used to generate a virtually stained image of a second unstained tissue sample; The virtually stained image may then be output),
wherein the first captured digital image is used as input image for the trained machine learning system (fig. 2, element 220; para 0034; An image of the unstained second tissue sample may be accessed. The image includes a plurality of spectral images of the unstained second tissue sample (i.e., the image is inputted into the trained artificial neural network)), and
wherein parameter values of the at least one optical filter were determined during training of the machine learning system (fig. 2, element 210; para 0006, 0008, and 0031; accessing a set of parameters for an artificial neural network. The set of parameters includes weights associated with artificial neurons within the artificial neural network; An output layer of the artificial neural network may include three artificial neurons that respectively predict red, blue, and green channels (i.e., parameter values of the optical filter) of the virtually stained image).
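For context, the flow mapped above can be summarized in a short sketch (an illustrative sketch only; Wu publishes no code, so the function name, the Keras-style predict() call, and the channel layout are all assumptions):

```python
# Hypothetical sketch of the mapped claim 1 flow: a multispectral image of an
# unstained sample, captured under white light through optical filters, is fed
# to a trained model that predicts a virtually stained (fluorescence-like) image.
import numpy as np

def predict_fluorescence(multispectral_image: np.ndarray, model) -> np.ndarray:
    """multispectral_image: (H, W, C) spectral stack from the first image
    capturing unit; model: trained learning model mapping C channels to RGB."""
    x = multispectral_image[np.newaxis, ...]  # add a batch dimension
    virtual_stain = model.predict(x)[0]       # assumed Keras-style inference API
    return np.clip(virtual_stain, 0.0, 1.0)   # (H, W, 3) predicted image
```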
Regarding claim 2, the method according to claim 1, Wu further discloses wherein training the learning model of the trained machine learning system comprises:
providing a plurality of first digital training images of tissue samples (fig. 2, element 205; para 0028, 0030, and 0037-0038; the imaging system 135 may be a microscope that may be operated in various modes in order to acquire different images of a sample. For example, the imaging system 135 may be used to acquire the images 120 and the images 125; The image training dataset includes a plurality of image pairs, each of which includes a first image 120 of an unstained first tissue sample and a second image 125 of the first tissue sample after staining), which were captured under white light (figs. 3(a)-3(b), element 342; para 0041; white light) by means of a microsurgical optical system (figs. 3(a)-3(b); para 0039; a microscope) with a second image capturing unit (figs. 3(a)-3(b), element 310; para 0040; a camera), wherein a second plurality of color channel information for different spectral ranges is available for each first digital training image,
providing a plurality of second digital training images each representing the same tissue samples as the first set of digital training images, wherein the second digital training images have indications of diseased elements of the tissue samples (fig. 2, element 205; para 0028, 0030, 0034, and 0037-0038; the imaging system 135 may be a microscope that may be operated in various modes in order to acquire different images of a sample. For example, the imaging system 135 may be used to acquire the images 120 and the images 125; The image training dataset includes a plurality of image pairs, each of which includes a first image 120 of an unstained first tissue sample and a second image 125 of the first tissue sample after staining; The second tissue sample may include the same tissue type as the first tissue sample that was used to train the artificial neural network 130…such as whether the tissue is healthy or diseased with various types of disease and/or severity of disease),
training the machine learning system for forming the trained machine learning model for predicting a digital image of a type of the plurality of second digital training images, wherein use is made of the following as input values for the machine learning system (fig. 2, element 215; para 0032; The artificial neural network 130 may then be trained by using the image training data set and the parameter set to adjust some or all of the parameters associated with the artificial neurons within the artificial neural network 130, including the weights within the parameter set):
the plurality of first digital training images in the form of the second plurality of color channel information (fig. 2, element 210; para 0006, 0008, and 0031; accessing a set of parameters for an artificial neural network. The set of parameters includes weights associated with artificial neurons within the artificial neural network; An output layer of the artificial neural network may include three artificial neurons that respectively predict red, blue, and green channels of the virtually stained image),
the plurality of second digital training images as ground truth (fig. 2, element 215; para 0032-0033; The artificial neural network 130 may then be trained by using the image training data set and the parameter set (i.e., ground truth)),
parameter values for reducing the second plurality of color channel information by means of at least one digitally simulated optical filter for forming the first plurality of color channel information (fig. 2, element 210; para 0006, 0008, and 0031; accessing a set of parameters for an artificial neural network. The set of parameters includes weights associated with artificial neurons within the artificial neural network; An output layer of the artificial neural network may include three artificial neurons that respectively predict red, blue, and green channels (i.e., parameter values of the optical filter) of the virtually stained image),
wherein the plurality of first digital training images are used as training data for predicting a digital image of the type of the plurality of second digital training images after the second plurality of color channel information has been reduced to the first plurality of color channel information by means of the digitally simulated optical filter (fig. 2, element 215; para 0032; The artificial neural network 130 may then be trained by using the image training data set and the parameter set to adjust some or all of the parameters associated with the artificial neurons within the artificial neural network 130, including the weights within the parameter set. For example, the weights may be adjusted to reduce or minimize a loss function of the artificial neural network 130), and
wherein at least one portion of the parameter values of the at least one optical filter are output as output values of the machine learning system after the training of the machine learning system has ended (fig. 2, element 210; para 0006, 0008, and 0031; accessing a set of parameters for an artificial neural network. The set of parameters includes weights associated with artificial neurons within the artificial neural network; An output layer of the artificial neural network may include three artificial neurons that respectively predict red, blue, and green channels (i.e., parameter values of the optical filter) of the virtually stained image).
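The channel-reduction scheme mapped in this claim can likewise be sketched (a hypothetical illustration, assuming PyTorch; the channel counts, the 1x1-convolution formulation of the digitally simulated optical filter, and the stand-in prediction network are all assumptions, not taken from Wu):

```python
import torch
import torch.nn as nn

n_spectral, n_reduced = 16, 3                     # assumed channel counts
sim_filter = nn.Conv2d(n_spectral, n_reduced, 1)  # digitally simulated optical filter (learnable)
net = nn.Sequential(                              # stand-in for the full prediction model
    nn.Conv2d(n_reduced, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 3, 3, padding=1),
)
opt = torch.optim.Adam(list(sim_filter.parameters()) + list(net.parameters()))
loss_fn = nn.MSELoss()

def train_step(unstained: torch.Tensor, stained_gt: torch.Tensor) -> float:
    """unstained: (B, n_spectral, H, W) first training images;
    stained_gt: (B, 3, H, W) second training images used as ground truth."""
    opt.zero_grad()
    reduced = sim_filter(unstained)   # reduce the second plurality of channels to the first
    pred = net(reduced)               # predict an image of the type of the second images
    loss = loss_fn(pred, stained_gt)  # loss minimized during training (cf. Wu para 0032)
    loss.backward()
    opt.step()
    return loss.item()

# After training, sim_filter.weight holds the learned parameter values of the
# simulated filter, i.e., the output values the claim recites.
```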
Regarding claim 3, the method according to claim 1, Wu further discloses wherein the parameter values of the at least one optical filter comprise: the plurality of first color channel information and/or a filter shape of the digitally simulated optical filter (para 0006, 0008, 0031, and 0039-0040).
Regarding claim 4, the method according to claim 1, Wu further discloses wherein the second plurality of color channel information is greater than the first plurality of color channel information (para 0036-0038).
Regarding claim 5, the method according to claim 1, Wu further discloses wherein the parameter values for reducing the second number of color channels are at least one selected from the group consisting of a filter shape and a respective central frequency of the first plurality of color channel information (fig. 2, element 210; para 0006, 0008, and 0031).
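A minimal sketch of such a parameterization, assuming a Gaussian filter shape (the shape, names, and wavelength values here are assumptions for illustration):

```python
import numpy as np

def gaussian_filter_response(wavelengths_nm, center_nm, width_nm):
    """Transmission curve of one simulated filter channel: the filter shape is
    Gaussian and the central frequency is set by its center wavelength."""
    return np.exp(-0.5 * ((wavelengths_nm - center_nm) / width_nm) ** 2)

wl = np.linspace(400, 700, 31)                        # sampled spectral axis (nm)
response = gaussian_filter_response(wl, 550.0, 25.0)  # a green-centered channel

# Applying the curve collapses a spectral stack (H, W, 31) to one channel:
# reduced_channel = (stack * response).sum(axis=-1)
```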
Regarding claim 6, the method according to claim 2, Wu further discloses wherein parameter values for controlling the source of the white light during the capturing of the first digital image are generated as additional output values of the machine learning system after the training of the machine learning system has ended (fig. 2, element 210; para 0006, 0008, and 0031).
Regarding claim 7, the method according to claim 1, Wu further discloses wherein the digital fluorescence representation corresponds to a representation such as would be generated using a light source in the UV range (figs. 3(a)-3(b), element 301; para 0039).
Regarding claim 8, the method according to claim 1, Wu further discloses wherein the learning model corresponds to an encoder-decoder model in terms of its set-up (para 0031 and 0033; U-Net and/or Convolutional Neural Network).
Regarding claim 9, the method according to claim 1, Wu further discloses wherein the encoder-decoder model is a convolutional network in the form of a U-Net architecture (para 0031 and 0033).
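A minimal sketch of such an encoder-decoder with a U-Net-style skip connection, assuming PyTorch (a single-level toy model for illustration; a real U-Net stacks several such levels):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Conv2d(32, out_ch, 3, padding=1)  # 32 = 16 upsampled + 16 skip

    def forward(self, x):
        e = self.enc(x)                        # encoder features (full resolution)
        m = self.mid(self.down(e))             # bottleneck at half resolution
        u = self.up(m)                         # decoder upsampling
        return self.dec(torch.cat([u, e], 1))  # skip connection, then output
```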
Regarding claim 10, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.
Regarding claim 11, this claim recites substantially the same limitations as claim 1 above and is rejected for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ozcan et al. (US 2021/0043331) discloses a deep learning-based digital staining method and system that enable the creation of digitally/virtually stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope.
Valdes et al. (US 2016/0278678) discloses an imaging system, such as a surgical microscope, laparoscope, or endoscope, or one integrated with these devices, that includes an illuminator providing patterned white light and/or fluorescent stimulus light.
Johnson et al. (US 2023/0281825) discloses performing image analysis on 3D microscopy images to predict the localization and/or labeling of various structures or objects of interest by predicting the locations in such images at which a dye or other marker associated with such structures would appear.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH whose telephone number is (571) 270-1937. The examiner can normally be reached 8 AM-6 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VAN D HUYNH/Primary Examiner, Art Unit 2665