Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to under 37 CFR 1.83(a) because Figure 2 fails to show “202B”, as described in the specification (Page 19 line 12), and Figure 4B fails to show “415a”, as described in the specification (Page 54 line 11). Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d).
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 18, 20-22 and 24-29 are rejected under 35 U.S.C. 103 as being unpatentable over Bera (US Patent No.: US 11,487,967 B2), hereinafter Bera, in view of Kumar (PCT Patent Pub. No.: WO 2020/081582 A1), hereinafter Kumar.
Regarding claim 1, Bera teaches a method for determining image filters for classifying particles of a sample in a particle analyzer, the method comprising: inputting into a machine learning algorithm one or more training data sets comprising a plurality of images (In some embodiments, the image data and the environmental data may be inputted into the machine learning algorithm while it is still being trained (therefore the image data and the environmental data may be training data, in this instance). Column 6 line 57) and quantified parameters (The classifier may help predict outputs of the machine learning algorithm, which in this instance may be the selection parameters to be used in the filtering algorithm. Column 3 line 47. To build a machine learning algorithm for predicting optimal selection parameters for the filtering algorithm (i.e., selection parameters that accurately and efficiently filter the image), the previously stored cloud environmental data and cloud selection parameters may be requested and received from the cloud on which they were stored. Column 4 line 46) of a plurality of image filters (To filter the image and capture its features, one or more image filters may be applied to the image in order to better learn and identify the various contents of the image. Column 4 line 23); generating a dynamic (Method 100 includes operation 170 to predict optimal selection parameters. Column 7 line 4) particle classification algorithm (In some embodiments, the machine learning algorithm includes a classifier ( or a classification model). Column 5 line 37) based on the training data sets (In some embodiments, the image data and the environmental data may be inputted into the machine learning algorithm while it is still being trained (therefore the image data and the environmental data may be training data, in this instance). Column 6 line 57) and the quantified parameters of the image filters (Once the machine learning algorithm begins training, the determined selection parameters (determined using the machine learning algorithm) should become more customized/tailored to different environmental data and image data. Column 5 line 63); and calculating an adjustment to one or more of the quantified parameters of the image filters (The method may also include predicting, using the machine learning algorithm, optimal selection parameters for the image. The method may also include applying the optimal selection parameters to a filtering algorithm for the image. Abstract).
Bera does not teach the following limitations as further recited, but Kumar further teaches inputting into a machine learning algorithm one or more training data sets comprising a plurality of images of particles (2D image data 1409 such as a lung X-ray or other X-ray or other 2D image data may be provided to a 2D convolutional neural network input 1410. The 2D CNN 1410 may be trained to recognize diagnostically useful features in x-rays, skin photographs, or other 2D image data. [0256]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bera to incorporate the teachings of Kumar to determine image filters for classifying particles of a sample in a particle analyzer by inputting into a machine learning algorithm one or more training data sets comprising a plurality of images of particles in order to identify target cells reliably, economically, and with the required specificity and sensitivity.
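For purposes of illustration only, the pipeline recited in claim 1 (inputting training images and quantified image-filter parameters into a machine learning algorithm, generating a classification algorithm, and calculating an adjustment to the filter parameters) may be sketched in Python as below. The filter bank, classifier, and numerical values are assumptions made for this sketch and are not taken from Bera, Kumar, or the claims.

import numpy as np

rng = np.random.default_rng(0)

# Training data set: toy particle images (flattened) and ground-truth labels (assumed values).
images = rng.random((100, 16 * 16))                   # 100 hypothetical 16x16 particle images
labels = rng.integers(0, 2, size=100).astype(float)   # two particle classes

# Quantified parameters of a small bank of image filters (hypothetical values).
filter_params = np.array([1.0, 0.5, 0.2, 0.8])

def apply_filters(x, params):
    # Stand-in filter bank: each parameter scales one block of image features.
    blocks = np.array_split(x, len(params), axis=1)
    return np.hstack([p * b for p, b in zip(params, blocks)])

def fit_classifier(x, y):
    # Stand-in "dynamic particle classification algorithm": a linear least-squares model.
    return np.linalg.lstsq(x, y, rcond=None)[0]

def loss(params):
    filtered = apply_filters(images, params)
    w = fit_classifier(filtered, labels)
    preds = filtered @ w
    return float(np.mean((preds - labels) ** 2))

# Calculate an adjustment to the quantified filter parameters by estimating
# the loss gradient numerically and taking one step against it.
eps, lr = 1e-3, 0.1
base = loss(filter_params)
grad = np.array([(loss(filter_params + eps * np.eye(len(filter_params))[i]) - base) / eps
                 for i in range(len(filter_params))])
adjusted_params = filter_params - lr * grad
print("adjusted filter parameters:", adjusted_params)

The numerical gradient step stands in for whatever adjustment rule the cited references actually employ; it is shown only to make the claimed sequence of operations concrete.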
Regarding claim 2, Kumar in the combination teaches the method according to claim 1, wherein each training data set comprises a plurality of unfiltered images of particles (2D image data 1409 such as a lung X-ray or other X-ray or other 2D image data may be provided to a 2D convolutional neural network input 1410. The 2D CNN 1410 may be trained to recognize diagnostically useful features in x-rays, skin photographs, or other 2D image data. [0256]).
Regarding claim 3, Kumar in the combination teaches the method according to claim 1, wherein each training data set comprises a plurality of ground-truth images of particles (After a large number of training iterations, the ANN output will closely match the desired target for each sample in the input training set. [0089]. The data set is conventionally divided into a training set, a test set, and, in some cases, a validation set. A target is specified that contains the correct classification of each sample in the data set. [0089]).
Regarding claim 4, Bera in the combination teaches the method according to claim 1, wherein the machine learning algorithm comprises a neural network (Image detection system 200 includes an image module 210, an environmental module 215, a machine learning module 220, a filtering module 230, a regions of interest (ROI) module 240, and a convolutional neural network (CNN) module 250. Column 8 line 12).
Regarding claim 5, Kumar in the combination teaches the method according to claim 4, wherein the neural network is selected from the group consisting of an artificial neural network (FIG. 4A illustrates an example of an artificial neuron in an artificial neural network (ANN). [0017]), a convolutional neural network (2D image data 1409 such as a lung X-ray or other X-ray or other 2D image data may be provided to a 2D convolutional neural network input 1410. The 2D CNN 1410 may be trained to recognize diagnostically useful features in x-rays, skin photographs, or other 2D image data. [0256]) and a recurrent neural network (Note: the claim language is interpreted as disjunctive according to specification. Page 2 line 29).
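For illustration of the convolutional neural network cited above, a minimal sketch follows; it assumes PyTorch is available, and the layer sizes and input dimensions are arbitrary choices for the sketch, not an architecture disclosed by Kumar.

import torch
import torch.nn as nn

class ParticleCNN(nn.Module):
    # Toy convolutional classifier over 2D greyscale particle images.
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # single greyscale input channel
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8, n_classes)  # assumes 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Example: classify a batch of four hypothetical 32x32 single-particle images.
logits = ParticleCNN()(torch.randn(4, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 2])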
Regarding claim 18, Bera in the combination teaches the method according to claim 1, wherein the quantified parameters of the image filters are inputted into the machine learning algorithm in a predetermined order (Referring to FIG. 3, a schematic diagram of an example machine learning decision tree 300 for predicting selection parameters for image processing is depicted, according to some embodiments. Machine learning decision tree 300 is an example machine learning algorithm used to predict the optimal selection parameters for the image. The machine learning decision tree 300 reviews the image data and the environmental data of the specific image, and makes decisions based on the data. Column 8 line 44).
Regarding claim 20, Bera in the combination teaches the method according to claim 1, wherein calculating an adjustment to one or more of the quantified parameters of the image filters comprises determining accuracy (In some embodiments, determining whether the machine learning algorithm is sufficiently trained further includes determining whether the accuracy of the optimal selection parameters is above a threshold accuracy value. Column 7 line 59) and loss statistics (Determining an accuracy of the optimal selection parameters may include analyzing the effectiveness of the filtering mechanism with the current predicted optimal selection parameters compared to the effectiveness of the filtering mechanism with older predicted selection parameters, and maybe even the initial predicted selection parameters (i.e., loss statistics). Column 7 line 48) of the generated dynamic (Method 100 includes operation 170 to predict optimal selection parameters. Column 7 line 4) particle classification algorithm (The machine learning algorithm may include a classifier and/or classification model. Column 3 line 45).
Regarding claim 21, Bera in the combination teaches the method according to claim 20, wherein the accuracy (In some embodiments, determining whether the machine learning algorithm is sufficiently trained further includes determining whether the accuracy of the optimal selection parameters is above a threshold accuracy value. Column 7 line 59) and loss statistics (Determining an accuracy of the optimal selection parameters may include analyzing the effectiveness of the filtering mechanism with the current predicted optimal selection parameters compared to the effectiveness of the filtering mechanism with older predicted selection parameters, and maybe even the initial predicted selection parameters (i.e., loss statistics). Column 7 line 48) of the dynamic (Method 100 includes operation 170 to predict optimal selection parameters. Column 7 line 4) particle classification algorithm (The machine learning algorithm may include a classifier and/or classification model. Column 3 line 45) is calculated by an iterative optimization approach (Determining an accuracy of the optimal selection parameters may include analyzing the effectiveness of the filtering mechanism with the current predicted optimal selection parameters compared to the effectiveness of the filtering mechanism with older predicted selection parameters, and maybe even the initial predicted selection parameters. Column 7 line 48).
Regarding claim 22, Kumar in the combination teaches the method according to claim 21, wherein the iterative optimization approach is a first-order optimization algorithm (In particular, a type of neural network called a feed-forward back-propagation classifier (i.e., a first-order optimization algorithm) can be trained on an input data set to classify input samples as belonging to a pre-defined category according to a target. [0089]).
Regarding claim 24, Kumar in the combination teaches the method according to claim 20, wherein the accuracy and loss statistics of the dynamic particle classification algorithm is calculated by backpropagation (In particular, a type of neural network called a feed-forward back-propagation classifier (i.e., a first-order optimization algorithm) can be trained on an input data set to classify input samples as belonging to a pre-defined category according to a target. [0089]).
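By way of illustration of a first-order, iterative optimization that yields accuracy and loss statistics, a minimal sketch follows. The logistic loss, learning rate, and toy data are assumptions made for this sketch and are not drawn from Bera or Kumar.

import numpy as np

rng = np.random.default_rng(1)
x = rng.random((200, 5))                      # 200 samples, 5 hypothetical features per particle
y = (x[:, 0] + x[:, 1] > 1.0).astype(float)   # toy binary labels
w = np.zeros(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(100):
    p = sigmoid(x @ w)
    # Loss and accuracy statistics of the current classifier.
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    accuracy = np.mean((p > 0.5) == y)
    # First-order (gradient) update, i.e. the backpropagation-style step.
    grad = x.T @ (p - y) / len(y)
    w -= lr * grad
print(f"final loss {loss:.3f}, accuracy {accuracy:.3f}")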
Regarding claim 25, Bera in the combination teaches the method according to claim 20, wherein the method further comprises adjusting one or more of the image filters (Referring to FIG. 3, a schematic diagram of an example machine learning decision tree 300 for predicting selection parameters for image processing is depicted, according to some embodiments. Machine learning decision tree 300 is an example machine learning algorithm used to predict the optimal selection parameters for the image. The machine learning decision tree 300 reviews the image data and the environmental data of the specific image, and makes decisions based on the data. Column 8 line 44) based on the calculated accuracy (In some embodiments, determining whether the machine learning algorithm is sufficiently trained further includes determining whether the accuracy of the optimal selection parameters is above a threshold accuracy value. Column 7 line 59) and loss statistics (Determining an accuracy of the optimal selection parameters may include analyzing the effectiveness of the filtering mechanism with the current predicted optimal selection parameters compared to the effectiveness of the filtering mechanism with older predicted selection parameters, and maybe even the initial predicted selection parameters. Column 7 line 48).
Regarding claim 26, Bera in the combination teaches the method according to claim 25, wherein each one of the image filters is iteratively adjusted to converge on an optimized set of image filters (Referring to FIG. 3, a schematic diagram of an example machine learning decision tree 300 for predicting selection parameters for image processing is depicted, according to some embodiments. Machine learning decision tree 300 is an example machine learning algorithm used to predict the optimal selection parameters for the image. The machine learning decision tree 300 reviews the image data and the environmental data of the specific image, and makes decisions based on the data. Column 8 line 44) for the dynamic (Method 100 includes operation 170 to predict optimal selection parameters. Column 7 line 4) particle classification algorithm (The machine learning algorithm may include a classifier and/or classification model. Column 3 line 45).
Kumar in the combination teaches in each photodetector channel (FIG. 3 is a simplified illustration of a flow cytometer. Cells can be labeled with one or more fluorescent probes and passed single-file in a stream of fluid past a laser light source. Fluorescence detectors measure the fluorescence emitted from labeled cells. [0016]).
Regarding claim 27, Bera in the combination teaches the method according to claim 26, wherein the method further comprises applying the determined image filters (The classifier may help predict outputs of the machine learning algorithm, which in this instance may be the selection parameters to be used in the filtering algorithm. Column 3 line 47) to a plurality of images (The method may include receiving an image. Abstract).
Kumar in the combination further teaches single cell images (Characterization of a single cell can comprise a set of measured light intensities that may be represented as a coordinate position in a multidimensional space (e.g., a feature coordinate space). [0063]) generated for cells in a flow stream (In general, flow cytometry involves the passage of individual cells through the path of one or more laser beams. [0062]).
Regarding claim 28, Kumar in the combination teaches the method according to claim 1, wherein the method comprises irradiating the particles of the sample with a light source and detecting light from the particles with a light detection system (FIG. 3 is a simplified illustration of a flow cytometer. Cells can be labeled with one or more fluorescent probes and passed single-file in a stream of fluid past a laser light source. Fluorescence detectors measure the fluorescence emitted from labeled cells. [0016]).
Regarding claim 29, Kumar in the combination teaches the method according to claim 28, wherein the method comprises generating an image of each particle based on the detected light (In some instances, cell analysis by flow cytometry on the basis of fluorescent level is combined with a determination of other flow cytometry readable outputs, such as granularity or cell size to provide a correlation between the activation level of a multiplicity of elements and other cell qualities measurable by flow cytometry for single cells.).
Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Bera (US Patent No.: US 11,487,967 B2), hereinafter Bera, in view of Kumar (PCT Patent Pub. No.: WO 2020/081582 A1), hereinafter Kumar, further in view of Gorthi (Fluorescence imaging of flowing cells using a temporally coded excitation, Opt. Express, 2013, 21(4), 5164–5170), hereinafter Gorthi.
Regarding claim 6, Bera and Kumar teach all of the elements of the claimed invention as stated in claim 1 except for the following limitations as further recited. However, Gorthi teaches wherein the image filters are quantified (Using the known code sequence, the velocity and the decoded image are found computationally using a parametric Wiener filter. Page 5167 1st paragraph) in photodetector channels (Fluorescence images of moving cells can be captured without motion-blur by using short camera exposure times, but the image quality suffers dramatically. Page 5165 3rd paragraph).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bera and Kumar to incorporate the teachings of Gorthi to utilize quantified image filters in photodetector channels in order to identify target cells reliably, economically, and with the required specificity and sensitivity.
Kumar in the combination further teaches a plurality of photodetector channels (FIG. 3 is a simplified illustration of a flow cytometer. Cells can be labeled with one or more fluorescent probes and passed single-file in a stream of fluid past a laser light source. Fluorescence detectors measure the fluorescence emitted from labeled cells. [0016]).
Regarding claim 7, Gorthi in the combination teaches the method according to claim 6, wherein the image filters are quantified (Using the known code sequence, the velocity and the decoded image are found computationally using a parametric Wiener filter. Page 5167 1st paragraph) in one or more fluorescence photodetector channels (Fluorescence images of moving cells can be captured without motion-blur by using short camera exposure times, but the image quality suffers dramatically. Page 5165 3rd paragraph).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Bera (US Patent No.: US 11,487,967 B2), hereinafter Bera, in view of Kumar (PCT Patent Pub. No.: WO 2020/081582 A1), hereinafter Kumar, further in view of Walsh (Great Britain Patent Pub. No.: GB 2377349 A), hereinafter Walsh.
Regarding claim 8, Bera and Kumar teach all of the elements of the claimed invention as stated in claim 1 except for the following limitations as further recited. However, Walsh teaches wherein for each photodetector channel an enabled image filter is quantified as a 1 and a not-enabled image filter is quantified as a 0 (Updating filter coefficients, for use in a digital adaptive filter, only when a signal quality parameter exceeds a predetermined threshold (i.e., an enabled image filter is quantified as a 1 and a not-enabled image filter is quantified as a 0 if a signal quality parameter does not exceed a predetermined threshold). Title).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bera and Kumar to incorporate the teachings of Walsh to specify, for each photodetector channel, an enabled image filter being quantified as a 1 and a not-enabled image filter being quantified as a 0 in order to determine whether the filter coefficients are optimal and can therefore be applied, or are not optimal and should therefore not be applied.
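For illustration of quantifying, on a per-photodetector-channel basis, an enabled image filter as a 1 and a not-enabled image filter as a 0, a minimal sketch follows; the channel names (FL1, FL2) and filter names are hypothetical and chosen only for this sketch.

# Hypothetical per-channel enable flags for a small set of image filters.
enabled_filters = {
    "FL1": {"blur": True,  "invert": False, "gamma": True},
    "FL2": {"blur": False, "invert": False, "gamma": True},
}
# Quantify each flag: enabled -> 1, not enabled -> 0.
quantified = {
    channel: {name: int(on) for name, on in filters.items()}
    for channel, filters in enabled_filters.items()
}
print(quantified)  # {'FL1': {'blur': 1, 'invert': 0, 'gamma': 1}, 'FL2': {...}}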
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Bera (US Patent No.: US 11,487,967 B2), hereinafter Bera, in view of Kumar (PCT Patent Pub. No.: WO 2020/081582 A1), hereinafter Kumar, further in view of Setiawan (HISTOPATHOLOGY OF LUNG CANCER CLASSIFICATION USING CONVOLUTIONAL NEURAL NETWORK WITH GAMMA CORRECTION, Commun. Math. Biol. Neurosci. 2022, 2022:81), hereinafter Setiawan.
Regarding claim 9, Bera teaches the method according to claim 1, wherein the plurality of image filters comprise one or more image filter parameters selected from: smooth, sharpen, blur (To filter the image and capture its features, one or more image filters may be applied to the image in order to better learn and identify the various contents of the image. Some example filters include edge detecting, sobel, blurring, sharpening, embossing, reducing Gaussian noise (i.e., smooth), etc. Column 4 line 23), threshold (As discussed herein, through the filtering of the image, adaptive threshold value(s) may be determined and used to distinguish regions of interest (ROI) of the image. Column 7 line 36), edges (To filter the image and capture its features, one or more image filters may be applied to the image in order to better learn and identify the various contents of the image. Some example filters include edge detecting, sobel, blurring, sharpening, embossing, reducing Gaussian noise, etc. Column 4 line 23), and intensity (In this decision tree 300, the brightness value 310 can be either high 311 or low 329. Column 8 line 58).
Kumar teaches invert (For example, when training a neural network in image recognition, a set of images can be processed by having images translated and/or rotated through a plurality of rotation angles and a plurality of translation distances and directions. [0185]).
The combination of Bera and Kumar does not teach the following limitations as further recited, but Setiawan further teaches gamma correction (In this study, Convolutional Neural Network (CNN) with gamma correction was implemented. Gamma correction is a process to adjust the image light, while CNN is for feature extraction and classification. Abstract).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bera and Kumar to incorporate the teachings of Setiawan for gamma correction to be selected as one of the image filter parameters in order to obtain better results when the gamma correction process is carried out.
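For illustration of the listed image filter parameters, a minimal sketch applying several of them (smooth/blur, threshold, invert, edges, and gamma correction) to a toy greyscale image follows; it assumes NumPy and SciPy are available, and the sigma, threshold, and gamma values are arbitrary choices for the sketch.

import numpy as np
from scipy import ndimage

# Toy greyscale image with pixel values in [0, 1].
img = np.random.default_rng(2).random((64, 64))

smoothed    = ndimage.gaussian_filter(img, sigma=1.0)  # smooth / blur
thresholded = (img > 0.5).astype(float)                # threshold
inverted    = 1.0 - img                                # invert
edges       = ndimage.sobel(img)                       # edge (Sobel) filter
gamma       = 0.8
gamma_corr  = np.power(img, gamma)                     # gamma correction (adjusts image light)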
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Bera (US Patent No.: US 11,487,967 B2), hereinafter Bera, in view of Kumar (PCT Patent Pub. No.: WO 2020/081582 A1), hereinafter Kumar, further in view of Setiawan (HISTOPATHOLOGY OF LUNG CANCER CLASSIFICATION USING CONVOLUTIONAL NEURAL NETWORK WITH GAMMA CORRECTION, Commun. Math. Biol. Neurosci. 2022, 2022:81), hereinafter Setiawan, further in view of Walsh (Great Britain Patent Pub. No.: GB 2377349 A), hereinafter Walsh.
Regarding claim 19, Bera in the combination teaches the method according to claim 18, wherein the quantified parameters of the image filters are inputted into the machine learning algorithm (Referring to FIG. 3, a schematic diagram of an example machine learning decision tree 300 for predicting selection parameters for image processing is depicted, according to some embodiments. Machine learning decision tree 300 is an example machine learning algorithm used to predict the optimal selection parameters for the image. The machine learning decision tree 300 reviews the image data and the environmental data of the specific image, and makes decisions based on the data. Column 8 line 44) in the order of (Note: examiner does not interpret “in the order of” as referring to a sequence as the specific order listed is not disclosed in the specification as having any particular benefit.): 2) smooth; 3) sharpen; 4) blur (To filter the image and capture its features, one or more image filters may be applied to the image in order to better learn and identify the various contents of the image. Some example filters include edge detecting, sobel, blurring, sharpening, embossing, reducing Gaussian noise (i.e., smooth), etc. Column 4 line 23); 5) threshold (As discussed herein, through the filtering of the image, adaptive threshold value(s) may be determined and used to distinguish regions of interest (ROI) of the image. Column 7 line 36); 7) edges (To filter the image and capture its features, one or more image filters may be applied to the image in order to better learn and identify the various contents of the image. Some example filters include edge detecting, sobel, blurring, sharpening, embossing, reducing Gaussian noise, etc. Column 4 line 23); and 9) intensity (In this decision tree 300, the brightness value 310 can be either high 311 or low 329. Column 8 line 58).
Kumar in the combination teaches 8) invert (For example, when training a neural network in image recognition, a set of images can be processed by having images translated and/or rotated through a plurality of rotation angles and a plurality of translation distances and directions. [0185]).
Setiawan in the combination teaches 6) gamma correction (In this study, Convolutional Neural Network (CNN) with gamma correction was implemented. Gamma correction is a process to adjust the image light, while CNN is for feature extraction and classification. Abstract).
The combination of Bera, Kumar, and Setiawan does not teach the following limitations as further recited, but Walsh further teaches 1) enabled (Updating filter coefficients, for use in a digital adaptive filter, only when a signal quality parameter exceeds a predetermined threshold (i.e., an enabled image filter is quantified as a 1 and a not-enabled image filter is quantified as a 0 if a signal quality parameter does not exceed a predetermined threshold). Title).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Bera, Kumar and Setiawan to incorporate the teachings of Walsh to input the quantified parameters of the image filters into the machine learning algorithm in the order of 1) enabled in order to determine whether the filter coefficients are optimal and can therefore be applied, or are not optimal and should therefore not be applied.
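For illustration of inputting the quantified filter parameters in a fixed, predetermined order with "enabled" first, a minimal sketch follows; the concrete parameter values are assumptions made for this sketch and are not taken from the cited references.

# Predetermined input order, "enabled" first, then the remaining filters as listed in claim 19.
FILTER_ORDER = ["enabled", "smooth", "sharpen", "blur", "threshold",
                "gamma_correction", "edges", "invert", "intensity"]

# Hypothetical quantified values for each filter parameter.
params = {"enabled": 1, "smooth": 0.3, "sharpen": 0.0, "blur": 0.5,
          "threshold": 0.6, "gamma_correction": 0.8, "edges": 1,
          "invert": 0, "intensity": 0.7}

# Assemble the ordered vector that would be inputted into the machine learning algorithm.
feature_vector = [params[name] for name in FILTER_ORDER]
print(feature_vector)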
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEI ZHAO whose telephone number is (703)756-1922. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VU LE can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEI ZHAO/Examiner, Art Unit 2668
/VU LE/Supervisory Patent Examiner, Art Unit 2668