DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-10 are pending.
Examiner's Notes
The method of claim 1 does not explicitly specify that the method is performed by a machine. While claim 1 is not being rejected under 35 U.S.C. § 101, the Examiner notes that the claims could be strengthened by explicitly reciting that the method is, e.g., computer-implemented or performed by a data processing system. The claims currently recite steps (acquiring, generating, filtering, estimating) without specifying whether these operations are performed by a machine or processor.
To avoid potential future § 101 issues and to clarify the scope of the claimed invention, the Examiner suggests that Applicant consider amending claim 1 to recite, for example:
"A computer-implemented cell image analysis method...",
"A cell image analysis method performed by a computer processor...", etc.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Wilson et al. (US20210278655) in view of Ma et al. (US20210116380).
Regarding claim 1, Wilson teaches a cell image analysis method comprising:
a step of acquiring a cell image including a cell;
(Wilson, Fig. 1, “The set of operations 100 can comprise, at 110, accessing an optical microscopy image comprising a set of corneal endothelial cells of a patient of a keratoplasty”, [0043])
Wilson does not expressly disclose, but Ma teaches:
a step of generating a background component image that extracts a distribution of a brightness component of a background from the cell image by filtering the acquired cell image;
(Ma, “a background correction method for a fluorescence microscopy system is provided that includes receiving a raw image stack generated by the fluorescence microscopy system, the raw image stack having a plurality of pixel locations associated therewith and comprising a plurality of temporally spaced image frames of a sample, determining a number of temporal minimum intensity values for each pixel location from the raw image stack, and calculating an expected background value for each pixel location based on the number of temporal minimum intensity values for the pixel location”, [0012]; extracting background by statistical filtering of a cell image stack (filtering for obtaining “temporal minimum intensity values”), generating a background component image ("expected background value") per pixel for microscope data)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ma into the method of Wilson in order to use Ma's background correction approach to reduce uneven illumination and background noise in cell images, thereby providing clearer input for Wilson's cell analysis and improving the accuracy and reliability of cell detection and measurement.
The combination of Wilson and Ma further teaches:
a step of generating a corrected cell image that is acquired by correcting the cell image based on the cell image and the background component image to reduce unevenness of brightness; and
(Ma, “generating a background corrected raw image stack based on the raw image stack and each expected background value”, [claim 2]; the expected background values (see above) are subtracted from each pixel of the raw image stack to generate a "background corrected raw image stack", i.e., a corrected cell image with reduced unevenness of brightness due to background)
a first estimation step of estimating whether the cell in the image is a normal cell or an abnormal cell by using the corrected cell image and a learned model that has learned to analyze the cell.
(Wilson, “segmenting, based at least in part on a trained deep learning (DL) model, a plurality of corneal endothelial cells of the set of corneal endothelial cells in the pre-processed optical microscopy image” ([claim 16]) ... “analyzing the segmented plurality of corneal endothelial cells to determine whether one or more cells of the segmented plurality of corneal endothelial cells are potentially under-segmented or potentially over-segmented” ([claim 17]) ... “identifying, on the GUI, the one or more cells of the segmented plurality of corneal endothelial cells that are potentially under-segmented or potentially over-segmented” ([claim 18]); using a trained DL model to segment and analyze cell images, implicitly classifying potentially under-segmented or over-segmented cells as “abnormal cells” and the remaining cells as “normal cells”)
Regarding claim 10, the combination of Wilson and Ma teaches the limitations of base claim 1 as set forth above.
The combination further teaches the cell image analysis method according to claim 1 further comprising a step of producing the learned model by training a learning model by using corrected cell images.
(Wilson, Fig. 2, “at 220, pre-processing each optical microscopy image of the training set to correct for at least one of shading or illumination artifacts”, [0050]; “at 230, training a model via deep learning based on the training set of images and the associated ground truth segmentations of the endothelial cells of each image of the training set”, [0051]; training a learning model using cell images that are pre-processed, i.e., corrected for shading/illumination artifacts, matching the claimed “corrected cell images”)
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Wilson et al. (US20210278655) in view of Ma et al. (US20210116380) and further in view of Iga et al. (US20180032787).
Regarding claim 2, the combination of Wilson and Ma teaches the limitations of base claim 1 as set forth above.
The combination does not expressly disclose, but Iga teaches, the cell image analysis method according to claim 1, wherein the cell image is an image including a cell that
is cultivated in a cultivation solution with which the cultivation container is filled, and
is located in a near-edge area of the cultivation container as the cell.
(Iga, “An aspect of the present invention provides a cell analysis device including a cell-image acquiring unit that acquires an image of cells within a culture container in which the cells are cultured”, [0007]; “performs the determination on the acquired image with respect to ... whether an edge of the culture container 2 appears in the image”, [0027]; “detecting whether or not an edge of the culture container appears in the image and determining that the image is usable if the edge of the culture container does not appear in the image”, [0014]; acquisition of a cell image of cells cultured in a culture container; Iga's determination of whether an edge of the culture container appears in the image addresses imaging of cells located in the near-edge area of the container)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Iga into the modified method of Wilson and Ma in order to enable robust cell image acquisition and analysis for cells located near the edge of cultivation containers filled with solution, thereby ensuring the combined system addresses real-world scenarios where cells are distributed throughout the entire vessel, including edge regions that may affect imaging quality and biological behavior.
Claims 3-5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Wilson et al. (US20210278655) in view of Ma et al. (US20210116380) and further in view of Hooper (US20130022287).
Regarding claim 3, the combination of Wilson and Ma teaches the limitations of base claim 1 as set forth above.
The combination does not expressly disclose, but Hooper teaches, the cell image analysis method according to claim 1, wherein the filtering is a process of applying a median filter to the cell image to generate the background component image.
(Hooper, “a modified median filter is applied to a source image, I_source, to derive a filtered image I_filter. The median filter ... is a hybrid multi-stage median filter, which is a modified version of the finite-impulse response (FIR) median hybrid filter”, [0170]; a median filter may be applied to the background filtering of Ma ([0012]) for preserving edge information while filtering impulse noise of the background image)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Hooper into the modified method of Wilson and Ma in order to apply a median filter in the background image filtering, preserving edge information while filtering impulse noise of the background image.
Regarding claim 4, the combination of Wilson and Ma teaches the limitations of base claim 1 as set forth above.
The combination of Wilson, Ma and Hooper teaches the cell image analysis method according to claim 1 further
comprising a step of reducing the cell image,
wherein in the step of generating a background component image, a reduced background component image is generated as the background component image by filtering the reduced cell image; and
the cell image analysis method further comprises a step of increasing the reduced background component image.
(Hooper, “For images having color channels with more than eight bits per color, the present invention down-samples the images to eight bits per color prior to application of the filter” ([0354]) ... “enhancing, say, a six mega-pixel image and then sub-sampling to a one mega-pixel image produces an image that is nearly identical to the image produced by first sub-sampling and then enhancing. Such invariance to scale is an important advantage of the present invention, since enhancement can be performed on a sub-sampled image used for previewing while a user is adjusting enhancement parameters. When the user then commits the parameters, for example, by clicking on an “Apply” button, the full-resolution image can be enhanced, and the resulting enhanced image will appear as the user expects” ([0206]); “If a user employs magnification to zoom in on a photo, then the contrast-enhanced image of reduced resolution may be missing much of the detail in the photo”, [0529]; downsampling (reducing) images for preview, performing filtering/enhancement at the reduced resolution, then applying the committed parameters to the full-resolution (increased) image for the final result)
Regarding claim 5, the combination of Wilson and Ma teaches the limitations of base claim 1 as set forth above.
The combination of Wilson, Ma and Hooper teaches the cell image analysis method according to claim 1 further
comprising a step of accepting a user input instruction to select whether to generate the corrected cell image or not,
wherein if the corrected cell image is to be generated,
the step of generating a corrected cell image is executed, and
the first estimation step is executed by using corrected cell images and the learned model; and
the cell image analysis method further comprises a second estimation step of estimating whether the cell in the image is a normal cell or an abnormal cell by using the cell image and the learned model without executing the step of generating a background component image if the corrected cell image is not to be generated.
(Hooper, “a user interface that enables a user to adjust the brightening and darkening response curves using two user parameters; namely, a brighten and a darken parameter”, [0136]; “a user interface that enables a user to directly modify the brightening and darkening response curves by dragging them upwards or downwards, or by clicking on a pixel location with the image or by clicking within a response curve visualization panel”, [0138]; enables a user to select, via GUI, whether and how to apply enhancement and corrections, including bypassing steps and directly controlling filtered/corrected result versus unprocessed image; this supports a workflow wherein user input controls the method branch)
Regarding claim 7, the combination of Wilson and Ma teaches the limitations of base claim 1 as set forth above.
The combination of Wilson, Ma and Hooper teaches the cell image analysis method according to claim 1, wherein in the step of generating a corrected cell image, the corrected cell image is generated by subtracting the background component image from the cell image and by adding a predetermined brightness value.
(Hooper, “The enhancement process subtracts the local offset values from color values of the original source image, and multiplies the resulting differences by the local brightening and darkening multipliers”, [0159]; eqs. 5(A) –(5D), [0210 -0213]; a process of subtracting a background component (offset), then adding a user-determined value or offset (brightness))
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wilson et al. (US20210278655) in view of Ma et al. (US20210116380) and further in view of Imakubo (US20200311927).
Regarding claim 8, the combination of Wilson and Ma teaches the limitations of base claim 1 as set forth above.
The combination does not expressly disclose, but Imakubo teaches, the cell image analysis method according to claim 1,
wherein in the step of estimating whether the cell included in the corrected cell image is a normal cell or an abnormal cell, a normal cell area, which is an area of the normal cell, and an abnormal cell area, which is an area of the abnormal cell, are estimated based on an estimation result estimated by the learned model; and
the cell image analysis method further comprises a step of displaying the normal cell area and the abnormal cell area discriminatively from each other.
(Imakubo, “The processing unit ... causes the display unit to display information of at least one of the number of abnormal cells included in the sample, a proportion of the number of the abnormal cells, the number of normal cells included in the sample, and a proportion of the normal cells”, [abstract]; “the processing unit (11) may cause the display unit (13) to display a graph image indicating at least one of a proportion of the abnormal cells included in the sample (10) or a proportion of the normal cells included in the sample (10), together with text information indicating at least one of the proportion of the abnormal cells or the proportion of the normal cells”, [0018]; display of normal and abnormal areas discriminatively (“portion...with a first marker...second marker”; “display...proportion of abnormal/normal cells”))
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Imakubo into the modified method of Wilson and Ma in order to clearly display and distinguish normal and abnormal cell areas in analyzed images, allowing users to visually interpret cell health status at a glance and thereby improving the clarity, usability, and diagnostic value of cell image analysis results.
Allowable Subject Matter
Claims 6 and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claims 6 and 9 recite limitations related to model selection for cell image analysis based on whether the image is corrected or not, and to overlaying marks indicating normal and abnormal cell areas on a corrected cell image. No explicit teaching of these limitations was found in the prior art cited in this Office action or in the prior art search.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANXUN YANG whose telephone number is (571) 272-9874. The examiner can normally be reached MON-FRI, 8AM-5PM Pacific Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIANXUN YANG/
Primary Examiner, Art Unit 2662
2/15/2026