DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: "Classifying Cells Based on Machine Learning of a Composite Image Combining Images at Mutually Different Planes of Focus".
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 4 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 4 recites "wherein out of the plurality of outputs, one output having a highest appearance frequency is the final output". This claim element lacks necessary context for "appearance frequency," as it is wholly unclear how this element relates to the "classification model outputs" of antecedent claim 3. For example, if the sole output classification is a count of four cells, would this output be considered to have the highest appearance frequency? Just what is meant by "appearance frequency," and how does it relate to the outputs of the classification model?
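For illustration only, and not as a claim construction adopted by the examiner: one plausible reading of claim 4 is a majority vote, in which the class label appearing most often among the plurality of per-composite outputs is selected as the final output. A minimal Python sketch of that reading, with all names hypothetical:

    from collections import Counter

    def select_final_output(outputs):
        # Pick the class label that appears most often among the
        # per-composite classification outputs (a simple majority vote).
        # `outputs` is assumed to be a list of discrete class labels,
        # e.g. predicted cell counts, one per composite image.
        label, _ = Counter(outputs).most_common(1)[0]  # ties: first seen wins
        return label

    # Example: five composites classified as cell counts 4, 4, 8, 4, 2
    print(select_final_output([4, 4, 8, 4, 2]))  # -> 4

Under such a reading, a single output (e.g., a lone count of four cells) would trivially have the highest appearance frequency, which is why the relationship between claim 4 and the outputs of claim 3 requires clarification.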
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 6, 7, 9, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Raudonis {Raudonis, Vidas, Agne Paulauskaite-Taraseviciene, and Kristina Sutiene. "Fast multi-focus fusion based on deep learning for early-stage embryo image enhancement." Sensors 21.3 (28 January 2021): 863} and Yun (CN-111724379-A).
Claim 1
In regards to claim 1, Raudonis discloses an image processing method for analyzing a specimen including a cell {see title, abstract, and cites below; as to computer implementations, see the Introduction discussing computer-assisted algorithms and Section 3.2 including an Nvidia GPU processor running a machine learning model}, the image processing method comprising:
obtaining a plurality of original images at mutually different depths of focus captured by imaging the specimen {Section 3.2, Data Preparation, including obtaining images taken at seven different focal planes};
generating a composite image including images of the specimen included in each of the plurality of original images in one image plane {Section 3.2, focus stacked image; Section 3.3, Multi-Focus Image Fusion, which employs a U-Net architecture to take the seven focal planes as input and generate a composite image (focus stacked, a.k.a. extended depth-of-field composite image). See also Section 3.4, Fig. 5, for an alternative method of image fusion/compositing. See also Section 4 and Fig. 6 illustrating the fused/composite image (h) generated from several focal stack images (a)-(g)} (an illustrative sketch of generic focus stacking follows this claim mapping); and
inputting the composite image to a classification model constructed in advance and obtaining an output of the classification model
{see abstract; Section 1: embryo assessment applying convolutional neural networks (CNNs) to classify into two classes, good and poor quality; "Embryo selection on Day 2 or 3 is usually based on morphological appearance, assessing the size of cells in blastomere, morphokinetics and the degree of fragmentation. For instance, if the number of cells is four, the fragmentation percent is less than 10 percent and the cells are symmetrical, then the quality of embryo is considered as the best one.", i.e., cells are classified based on the measurements},
{section 4 including "All three approaches have been tested on the same image pairs including 1-cell, 2-cell, 4-cell and 8-cell embryos (see Figure 7). No obvious difference is seen when looking at the fused images generated by U-Net approach and LP method. Both these methods provide images with sharp edges of cells, clearly visible fragmentation and surrounding artefacts. Comparatively, the fused images generated by ECC differ significantly, with strong blurriness visible relative to the previous set of images. To validate the proposed approach, the quantitative metrics (see Section 3.5) are computed to assess the similarity between two images, that is, image ILP generated by inverse Laplacian pyramid transform (see Section 3.4.1) and image IU generated using the proposed approach employing deep learning technique-U-Net convolutional network architecture."}
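For context on the general technique at issue, the following is a minimal per-pixel sharpness-selection focus-stacking sketch in Python; it is an illustration under stated assumptions (aligned grayscale focal planes as equal-shape NumPy arrays), not the U-Net or Laplacian-pyramid fusion actually used by Raudonis:

    import numpy as np
    from scipy.ndimage import laplace

    def fuse_focal_stack(planes):
        # Fuse aligned grayscale focal-plane images into one all-in-focus
        # composite by picking, per pixel, the plane with the strongest
        # local sharpness (Laplacian magnitude). Generic sketch only.
        stack = np.stack(planes)                            # (n_planes, H, W)
        sharpness = np.abs(np.stack([laplace(p) for p in planes]))
        best = np.argmax(sharpness, axis=0)                 # sharpest plane per pixel
        return np.take_along_axis(stack, best[None], axis=0)[0]

    # Example: seven synthetic focal planes, mirroring Raudonis' seven-plane stacks
    composite = fuse_focal_stack([np.random.rand(64, 64) for _ in range(7)])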
Although Raudonis discloses a classification model for determining embryo quality using the composite image, the classification model is not constructed by performing machine learning in advance using teacher images including a plurality of images of same cell or cell mass at mutually different depths of focus in one image plane.
Yun is a highly analogous reference from the same field of image processing for analyzing specimens including cells, blastomeres, and embryos. Yun teaches obtaining a plurality of original images at mutually different depths of focus captured by imaging the specimen {Fig. 1, pg. 8, S101: obtaining multiple images using Hoffman modulation contrast to shoot a group of 7 images taken at different focal lengths}; generating a composite image including images of the specimen included in each of the plurality of original images in one image plane {combination strategy, pgs. 8-9, which superimposes the images taken at different focal lengths}; and
inputting the composite image to a classification model constructed in advance and obtaining an output of the classification model, wherein the classification model is constructed by performing machine learning in advance using teacher images including a plurality of images of same cell or cell mass at mutually different depths of focus in one image plane {pgs. 3-4, 6-7 multi-view data fusion enhances effect of cell counting, classification model training by obtaining multiple images shot at different focal lengths, marking the number of cells manually counted in each image and using these marked images to train a deep neural network to count the number of cells. S103, pg. 8, construct deep neural network cell number prediction model}.
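To illustrate the kind of training Yun describes (fused teacher images labeled with manually counted cell numbers used to train a deep network), a minimal PyTorch-style sketch follows; the network shape, the four count classes, and the toy tensors are assumptions for illustration, not details taken from Yun:

    import torch
    import torch.nn as nn

    # Hypothetical counting classifier: classes 0..3 standing in for
    # 1-, 2-, 4-, and 8-cell embryos (an assumed label mapping).
    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, 4),
    )
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # `composites` stand in for fused teacher images; `labels` for the
    # manually counted cell numbers mapped to class indices (toy data).
    composites = torch.randn(8, 1, 64, 64)
    labels = torch.randint(0, 4, (8,))

    for _ in range(5):  # a few illustrative training steps
        opt.zero_grad()
        loss = loss_fn(model(composites), labels)
        loss.backward()
        opt.step()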
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified Raudonis, which obtains a plurality of original images at mutually different depths of focus captured by imaging the specimen and generates a composite image including images of the specimen included in each of the plurality of original images in one image plane, such that the composite image is input to a classification model constructed in advance and an output of the classification model is obtained, wherein the classification model is constructed by performing machine learning in advance using teacher images including a plurality of images of same cell or cell mass at mutually different depths of focus in one image plane, as taught by Yun. Raudonis motivates doing so in the abstract, stating that the proposed deep-learning-based multi-focus image fusion approach allows reducing the amount of data up to seven times without losing the spectral information required for embryo enhancement in the microscopic image. Raudonis further explains (pg. 2) that all cells of an embryo may not be visible in a single image because the cells are focused at different distances from one another, which can lead to classification errors; to identify the exact number of cells, each image from the different focal planes must be analyzed separately, and a solution is to capture a sequence of images focused at different positions and fuse them into a single all-in-focus image. Raudonis therefore strongly motivates image fusion (the composite image as claimed) to improve classification models such as cell counting models. Moreover, Yun also employs an image combination strategy and trains a neural network to count the number of cells in the combined image, further contributing to a reasonable expectation of success. Still further, the combination is obvious because it merely combines prior art elements according to known methods to yield predictable results (more reliable cell counting).
Claim 2
In regards to claim 2, Raudonis is not relied upon to disclose but Yun teaches wherein:
the machine learning is performed based on a plurality of the teacher images and one type of classification class taught for each teacher image; and the classification model outputs the classification class corresponding to inputted composite image.
{pgs. 3-4, 6-7 multi-view data fusion enhances effect of cell counting, classification model training by obtaining multiple images shot at different focal lengths, marking the number of cells manually counted in each image and using these marked images to train a deep neural network to count the number of cells. S103, pg. 8, construct deep neural network cell number prediction model}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have further modified the combination of Raudonis and Yun set forth for claim 1 such that the machine learning is performed based on a plurality of the teacher images and one type of classification class taught for each teacher image, and the classification model outputs the classification class corresponding to the inputted composite image, as also taught by Yun, for the reasons given in the rejection of claim 1: Raudonis strongly motivates image fusion to improve classification models such as cell counting models; Yun trains a neural network to count the number of cells in the combined image, providing a reasonable expectation of success; and the combination merely combines prior art elements according to known methods to yield predictable results (more reliable cell counting).
Claim 6
In regards to claim 6, Raudonis is not relied upon to disclose but Yun teaches wherein the classification model is constructed by a deep learning algorithm {see above cites which include a deep neural network}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have further modified the combination of Raudonis and Yun set forth for claim 1 such that the classification model is constructed by a deep learning algorithm, as also taught by Yun, for the same reasons given in the rejection of claim 1 above.
Claim 7
In regards to claim 7, Raudonis is not relied upon to disclose but Yun teaches wherein the teacher images include images of same type of cell as the cell included in the specimen {see above cites, wherein the same embryo cell type is used in the teacher images}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have further modified the combination of Raudonis and Yun set forth for claim 1 such that the teacher images include images of same type of cell as the cell included in the specimen, as also taught by Yun, for the same reasons given in the rejection of claim 1 above.
Claim 9
In regards to claim 9, Raudonis discloses wherein the specimen is an embryo {see title, abstract, and cites above}. Raudonis is not relied upon to disclose, but Yun teaches, wherein the specimen is an embryo and the classification model outputs the number of cells included in the embryo.
{pgs. 3-4, 6-7 multi-view data fusion enhances effect of cell counting, classification model training by obtaining multiple images shot at different focal lengths, marking the number of cells manually counted in each image and using these marked images to train a deep neural network to count the number of cells. S103, pg. 8, construct deep neural network cell number prediction model}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have further modified the combination of Raudonis and Yun set forth for claim 1 such that the specimen is an embryo and the classification model outputs the number of cells included in the embryo, as also taught by Yun, for the same reasons given in the rejection of claim 1 above.
Claim 10
In regards to claim 10, Raudonis is not relied upon to disclose but Yun teaches
wherein the machine learning is performed based on: a plurality of the teacher images generated based on images of embryos having mutually different numbers of blastomeres; and classification classes represented by taught values as the numbers of blastomeres corresponding to the respective teacher images {pgs. 3-4, 6-7 multi-view data fusion enhances effect of cell counting, classification model training by obtaining multiple images shot at different focal lengths, marking the number of cells manually counted in each image and using these marked images to train a deep neural network to count the number of cells. S103, pg. 8, construct deep neural network cell number prediction model}.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have further modified the combination of Raudonis and Yun set forth for claims 1 and 9 such that the machine learning is performed based on: a plurality of the teacher images generated based on images of embryos having mutually different numbers of blastomeres; and classification classes represented by taught values as the numbers of blastomeres corresponding to the respective teacher images, as also taught by Yun, for the same reasons given in the rejection of claim 1 above.
Claim 12
In regards to claim 12, Raudonis discloses a computer-readable recording medium, storing non-transitorily a computer program for causing a computer device to perform each processing of the image processing method according to claim 1 {see above cites including, as to computer implementations, see Introduction discussing computer-assisted algorithms, Section 3.2 including an Nvidia GPU processor running a machine learning model}.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Raudonis and Yun as applied to claim 1 above, and further in view of Dave (US 20230127698 A1).
Claim 8
In regards to claim 8, Raudonis discloses wherein the original images are captured by imaging the specimen with an optical microscope {see above cites}, but Raudonis is not relied upon to disclose that the original images are bright field images of the specimen.
Dave is a highly analogous reference from the same field of image processing for analyzing specimens including cells, in which a plurality of original images at mutually different depths of focus are obtained by imaging the specimen and cells are counted. See abstract, Figs. 12, 19, 21, 22a, 22b, and 24, and corresponding text.
Dave also teaches that capturing bright field images of the specimen using an optical microscope is highly conventional (see [0202]-[0212]).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the z-stack image capture of Raudonis, performed using an optical microscope, to use bright field illumination as taught by Dave, because bright field microscopes are simple, do not change the color of the specimen, and are commonly used to image cells; because there is a reasonable expectation of success; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Allowable Subject Matter
Claims 3-5 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Although Raudonis forms a composite image, none of the prior art discloses or fairly suggests (claim 3) that the composite image is generated based on original images selected from the plurality of the original images and that a plurality of the composite images are generated by changing a combination of the original images, and that one of a plurality of outputs obtained by inputting each of the plurality of composite images to the classification model is selected as a final output.
Claim 4 would be allowable if rewritten to overcome the 112(b) rejection set forth above while remaining dependent upon claim 3. Claim 5 depends from claim 3 and would likewise be allowable.
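For illustration of the examiner's understanding of claims 3-5 (a plurality of composite images generated by changing the combination of original images, with one of the resulting outputs selected as the final output), a minimal Python sketch reusing the hypothetical fuse_focal_stack and select_final_output helpers sketched above:

    from itertools import combinations

    def classify_with_composite_ensemble(originals, fuse, classify, subset_size):
        # Generate a composite for each subset of the original images,
        # classify each composite, then select the final output by
        # majority vote (one plausible reading of claims 3 and 4).
        outputs = [classify(fuse(list(subset)))
                   for subset in combinations(originals, subset_size)]
        return select_final_output(outputs)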
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
K. Liimatainen, P. Ruusuvuori, L. Latonen and H. Huttunen, "Supervised method for cell counting from bright field focus stacks," 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 2016, pp. 391-394, doi: 10.1109/ISBI.2016.7493290 discloses a cell counting method using Z-stack images from a bright field microscope. See Fig. 2 copied below.
[media_image1.png: Liimatainen et al. (2016), Fig. 2, greyscale]
S. Wang, C. Zhou, D. Zhang, L. Chen and H. Sun, "A Deep Learning Framework Design for Automatic Blastocyst Evaluation With Multifocal Images," in IEEE Access, vol. 9, pp. 18927-18934, 2021, doi: 10.1109/ACCESS.2021.3053098 discloses machine learning of blastocysts using multi-focus images. See Figs. 3 and 4 below.
[media_image2.png and media_image3.png: Wang et al. (2021), Figs. 3 and 4, greyscale]
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael R Cammarata whose telephone number is (571)272-0113. The examiner can normally be reached M-Th 7am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL ROBERT CAMMARATA/Primary Examiner, Art Unit 2667