Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is responsive to the patent application filed on 2/20/2024.
This action is made Non-Final.
Claims 1-20 are pending in the case. Claims 1 and 13 are independent claims.
Drawings
The drawings filed on 2/20/2024 have been accepted by the Examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 9 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 9 recites the limitation "the majority phenotype attributes are phenotypes that have a high frequency of occurrence in a dataset of images" while also reciting "the minority phenotype attributes are phenotypes that have a low frequency of occurrence in a dataset of images". It is unclear if Applicant intended for there to be a first and a second dataset of images, a single dataset of images (in which case the second recitation would recite "the dataset of images"), or something else entirely. There is insufficient antecedent basis for this limitation in the claim. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 9-14, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 1 and 13 recite at least "receiving an image, wherein the image is a microscopic image of an organism; determining, using a first machine learning model, whether the image is usable for processing; identifying, using a second machine learning model, whether majority phenotype attributes are in the image in response to the first machine learning model determining the image is usable for processing; identifying, using a third machine learning model, whether minority phenotype attributes are in the image in response to the first machine learning model determining the image is usable for processing; and providing an image output with the image and any detected majority phenotype attributes and any minority phenotype attributes identified in the image". These limitations are construed as abstract ideas for being performable in the human mind and/or on paper. A human can certainly observe and determine whether an image is usable for processing, identify whether majority phenotype attributes are in the image, identify whether minority phenotype attributes are in the image, and provide an output with any detected majority and minority phenotype attributes identified in the image.
This judicial exception is not integrated into a practical application because the additional limitations of “a processor” and “memory” (from claim 13; no additional limitations in claim 1) are merely generic computing components on which the instructions to implement the abstract idea are applied. Additional limitations directed toward mere instructions to apply the exception to generic computing components, alone or in combination, do not integrate the judicial exception into a practical application (See MPEP§2106.05(f)).
As per the limitations using ML technology for data processing (from claim 1; no additional limitations in claim 13), said steps are nothing more than an attempt to recycle preexisting artificial intelligence or machine-learning (AI/ML) technologies and apply them to phenotype attribute identification. There are no improvements in said ML techniques, such as advances in the field of computer science itself or the design of a new neural network, and there is no controlling of a technological process using the outcome of said AI/ML operations.
Further, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually; there is no indication that the combination of elements improves the functioning of a computer or improves any other technology, including AI/ML technology; their collective functions merely provide conventional computer implementation. None of the additional elements "offers a meaningful limitation beyond generally linking 'the use of the [method] to a particular technological environment,' that is, implementation via computers." Alice Corp., slip op. at 16 (citing Bilski v. Kappos, 561 U.S. 610, 611 (U.S. 2010)).
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements identified above, being directed toward mere instructions to apply the exception to generic computing components, alone or in combination, are well-understood routine and conventional, do not provide an inventive concept, and thus, do not amount to significantly more than the judicial exception. Therefore, independent claims 1 and 13 are directed toward ineligible subject matter.
Dependent claims 2, 9-12, 14, and 18-20 recite additional limitations that are also construed as additional abstract ideas, mere instructions to apply the judicial exception to generic computing components, or insignificant extra solution activity, and are, therefore, also directed toward ineligible subject matter.
The analysis of dependent claims 2, 9-12, 14, and 18-20 has resulted in the determination that these claims recite ineligible subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 9-11, 13, 14, 19, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mavropoulos (USPUB 20250218201 A1).
Claim 1:
Mavropoulos discloses A method comprising: receiving an image, wherein the image is a microscopic image of an organism (0066: flowing a cell through an imaging area of a fluidic channel and generating an image of that cell; generating an image of a cell and extracting features from that image); determining, using a first machine learning model, whether the image is usable for processing (0180: the model(s) may determine that the cells are in-focus or out-of-focus in the images/videos); identifying, using a second machine learning model, whether majority phenotype attributes are in the image in response to the first machine learning model determining the image is usable for processing; identifying, using a third machine learning model, whether minority phenotype attributes are in the image in response to the first machine learning model determining the image is usable for processing (0100: The machine learning algorithm as disclosed herein may be configured to extract one or more morphological features of a cell from the image data of the cell. 
The machine learning algorithm may form a new data set based on the extracted morphological features, and the new data set need not contain the original image data of the cell...A cell analysis platform as disclosed herein may be operatively coupled to one or more databases comprising non-morphological data of cells processed (e.g., genomics data, transcriptomics data, proteomics data, metabolomics data), a selected population of cells exhibiting the newly extracted morphological feature(s) may be further analyzed by their non-morphometric features to identify proteins or genes of interest that are common in the selected population of cells but not in other cells, thereby determining such proteins or genes of interest to be new molecular markers that may be used to identify such selected population of cells); and providing an image output with the image and any detected majority phenotype attributes and any minority phenotype attributes identified in the image (0143: When the image data is processed, e.g., to extract one or more morphological features of a cell, each cell image may be annotated with the extracted one or more morphological features and/or with information that the cell image belongs to a particular cluster).
Claim 2:
Mavropoulos discloses an output of the first machine learning model is provided as input to the second machine learning model and the output of the first machine learning model is provided as input to the third machine learning model (0168: the cell analysis platform may be used to process image data comprising tag-free images of single cells to compare the cell to pre-determined (e.g., pre-analyzed) images of known cells or cell morphology map(s), such that the single cells from the image data may be classified, e.g., for cell sorting. FIG. 7 illustrates an example cell analysis platform (e.g., machine learning/artificial intelligence platform) for analyzing image data of one or more cells. The cell analysis platform 700 may comprise a cell morphology atlas (CMA) 705. The CMA 705 may comprise a database 710 having a plurality of annotated single cell images that are grouped into morphologically-distinct clusters (e.g., represented a texts, as cell morphology map(s), or cell morphological ontology(ies)) corresponding to a plurality of classifications (e.g., predefined cell classes). The CMA 705 may comprise a modeling unit comprising one or more models (e.g., modeling library 720 comprising, such as, one or more machine learning algorithms disclosed herein) that are trained and validated using datasets from the CMA 705, to process image data comprising images/videos of one or more cells to identify different cell types and/or states based at least on morphological features. The CMA 705 may comprise an analysis module 730 comprising one or more classifiers as disclosed herein. The classifier(s) may use one or more of the models from the modeling library 720 to, e.g., (1) classify one or more images taken from a sample, (2) assess a quality or state of the sample based on the one or more images, (3) map one or more datapoints representing such one or more images onto a cell morphology map (or cell morphological ontology) using a mapping module 740. 
The CMA 705 may be operatively coupled to one or more additional database 770 to receive the image data comprising the images/videos of one or more cells. For example, the image data from the database 770 may be obtained from an imaging module 792 of a cartridge 790, which may also be operatively coupled to the CMA 705).
Claim 9:
Mavropoulos discloses the majority phenotype attributes are phenotypes that have a high frequency of occurrence in a dataset of images and the minority phenotype attributes are phenotypes that have a low frequency of occurrence in a dataset of images (0100: The machine learning algorithm as disclosed herein may be configured to extract one or more morphological features of a cell from the image data of the cell. The machine learning algorithm may form a new data set based on the extracted morphological features, and the new data set need not contain the original image data of the cell...A cell analysis platform as disclosed herein may be operatively coupled to one or more databases comprising non-morphological data of cells processed (e.g., genomics data, transcriptomics data, proteomics data, metabolomics data), a selected population of cells exhibiting the newly extracted morphological feature(s) may be further analyzed by their non-morphometric features to identify proteins or genes of interest that are common in the selected population of cells but not in other cells, thereby determining such proteins or genes of interest to be new molecular markers that may be used to identify such selected population of cells).
Claim 10:
Mavropoulos discloses the image output includes a location where a majority phenotype attribute is in the image and a score indicating a confidence level of the second machine learning model that the majority phenotype attribute is at the location in the image (0143 and 0168: When the image data is processed, e.g., to extract one or more morphological features of a cell, each cell image may be annotated with the extracted one or more morphological features and/or with information that the cell image belongs to a particular cluster (e.g., a probability)...the cell analysis platform may be used to process image data comprising tag-free images of single cells to compare the cell to pre-determined (e.g., pre-analyzed) images of known cells or cell morphology map(s), such that the single cells from the image data may be classified, e.g., for cell sorting. FIG. 7 illustrates an example cell analysis platform (e.g., machine learning/artificial intelligence platform) for analyzing image data of one or more cells. The cell analysis platform 700 may comprise a cell morphology atlas (CMA) 705. The CMA 705 may comprise a database 710 having a plurality of annotated single cell images that are grouped into morphologically-distinct clusters (e.g., represented a texts, as cell morphology map(s), or cell morphological ontology(ies)) corresponding to a plurality of classifications (e.g., predefined cell classes). The CMA 705 may comprise a modeling unit comprising one or more models (e.g., modeling library 720 comprising, such as, one or more machine learning algorithms disclosed herein) that are trained and validated using datasets from the CMA 705, to process image data comprising images/videos of one or more cells to identify different cell types and/or states based at least on morphological features. The CMA 705 may comprise an analysis module 730 comprising one or more classifiers as disclosed herein. 
The classifier(s) may use one or more of the models from the modeling library 720 to, e.g., (1) classify one or more images taken from a sample, (2) assess a quality or state of the sample based on the one or more images, (3) map one or more datapoints representing such one or more images onto a cell morphology map (or cell morphological ontology) using a mapping module 740. The CMA 705 may be operatively coupled to one or more additional database 770 to receive the image data comprising the images/videos of one or more cells. For example, the image data from the database 770 may be obtained from an imaging module 792 of a cartridge 790, which may also be operatively coupled to the CMA 705).
Claim 11:
Mavropoulos discloses the image output includes a location where a minority phenotype attribute is in the image and a score indicating a confidence level of the third machine learning model that the minority phenotype attribute is at the location in the image (0085, 0143 and 0168: For example, a heatmap may be used as colorimetric scale to represent the classifier prediction percentages for each cell against a cell class, cell type, or cell state....When the image data is processed, e.g., to extract one or more morphological features of a cell, each cell image may be annotated with the extracted one or more morphological features and/or with information that the cell image belongs to a particular cluster (e.g., a probability)...the cell analysis platform may be used to process image data comprising tag-free images of single cells to compare the cell to pre-determined (e.g., pre-analyzed) images of known cells or cell morphology map(s), such that the single cells from the image data may be classified, e.g., for cell sorting. FIG. 7 illustrates an example cell analysis platform (e.g., machine learning/artificial intelligence platform) for analyzing image data of one or more cells. The cell analysis platform 700 may comprise a cell morphology atlas (CMA) 705. The CMA 705 may comprise a database 710 having a plurality of annotated single cell images that are grouped into morphologically-distinct clusters (e.g., represented a texts, as cell morphology map(s), or cell morphological ontology(ies)) corresponding to a plurality of classifications (e.g., predefined cell classes). The CMA 705 may comprise a modeling unit comprising one or more models (e.g., modeling library 720 comprising, such as, one or more machine learning algorithms disclosed herein) that are trained and validated using datasets from the CMA 705, to process image data comprising images/videos of one or more cells to identify different cell types and/or states based at least on morphological features. 
The CMA 705 may comprise an analysis module 730 comprising one or more classifiers as disclosed herein. The classifier(s) may use one or more of the models from the modeling library 720 to, e.g., (1) classify one or more images taken from a sample, (2) assess a quality or state of the sample based on the one or more images, (3) map one or more datapoints representing such one or more images onto a cell morphology map (or cell morphological ontology) using a mapping module 740. The CMA 705 may be operatively coupled to one or more additional database 770 to receive the image data comprising the images/videos of one or more cells. For example, the image data from the database 770 may be obtained from an imaging module 792 of a cartridge 790, which may also be operatively coupled to the CMA 705).
Claim 13:
Mavropoulos discloses A system, comprising: a memory to store data and instructions; and a processor operable to communicate with the memory (0287), wherein the processor is operable to: receive an image, wherein the image is a microscopic image of an organism (0066: flowing a cell through an imaging area of a fluidic channel and generating an image of that cell; generating an image of a cell and extracting features from that image); determine, using a first machine learning model, whether the image is usable for processing (0180: the model(s) may determine that the cells are in-focus or out-of-focus in the images/videos); identify, using a second machine learning model, whether majority phenotype attributes are in the image in response to the first machine learning model determining the image is usable for processing; identify, using a third machine learning model, whether minority phenotype attributes are in the image in response to the first machine learning model determining the image is usable for processing (0100: The machine learning algorithm as disclosed herein may be configured to extract one or more morphological features of a cell from the image data of the cell. 
The machine learning algorithm may form a new data set based on the extracted morphological features, and the new data set need not contain the original image data of the cell...A cell analysis platform as disclosed herein may be operatively coupled to one or more databases comprising non-morphological data of cells processed (e.g., genomics data, transcriptomics data, proteomics data, metabolomics data), a selected population of cells exhibiting the newly extracted morphological feature(s) may be further analyzed by their non-morphometric features to identify proteins or genes of interest that are common in the selected population of cells but not in other cells, thereby determining such proteins or genes of interest to be new molecular markers that may be used to identify such selected population of cells); and provide an image output with the image and any detected majority phenotype attributes identified in the image and any minority phenotype attributes identified in the image (0143: When the image data is processed, e.g., to extract one or more morphological features of a cell, each cell image may be annotated with the extracted one or more morphological features and/or with information that the cell image belongs to a particular cluster).
Claim 14:
Mavropoulos discloses an output of the first machine learning model is provided as input to the second machine learning model and the output of the first machine learning model is provided as input to the third machine learning model (0168: the cell analysis platform may be used to process image data comprising tag-free images of single cells to compare the cell to pre-determined (e.g., pre-analyzed) images of known cells or cell morphology map(s), such that the single cells from the image data may be classified, e.g., for cell sorting. FIG. 7 illustrates an example cell analysis platform (e.g., machine learning/artificial intelligence platform) for analyzing image data of one or more cells. The cell analysis platform 700 may comprise a cell morphology atlas (CMA) 705. The CMA 705 may comprise a database 710 having a plurality of annotated single cell images that are grouped into morphologically-distinct clusters (e.g., represented a texts, as cell morphology map(s), or cell morphological ontology(ies)) corresponding to a plurality of classifications (e.g., predefined cell classes). The CMA 705 may comprise a modeling unit comprising one or more models (e.g., modeling library 720 comprising, such as, one or more machine learning algorithms disclosed herein) that are trained and validated using datasets from the CMA 705, to process image data comprising images/videos of one or more cells to identify different cell types and/or states based at least on morphological features. The CMA 705 may comprise an analysis module 730 comprising one or more classifiers as disclosed herein. The classifier(s) may use one or more of the models from the modeling library 720 to, e.g., (1) classify one or more images taken from a sample, (2) assess a quality or state of the sample based on the one or more images, (3) map one or more datapoints representing such one or more images onto a cell morphology map (or cell morphological ontology) using a mapping module 740. 
The CMA 705 may be operatively coupled to one or more additional database 770 to receive the image data comprising the images/videos of one or more cells. For example, the image data from the database 770 may be obtained from an imaging module 792 of a cartridge 790, which may also be operatively coupled to the CMA 705).
Claim 19:
Mavropoulos discloses the first machine learning model is a classifier machine learning model, the second machine learning model is a classifier machine learning model, and the third machine learning model is an object detection machine learning model (0075, 0092, 0101, 0104, 0283).
Claim 20:
Mavropoulos discloses the second machine learning model is trained using a large dataset of images to accurately detect majority phenotype attributes and the third machine learning model is trained using a small dataset of images to accurately detect minority phenotype attributes (0095).
Allowable Subject Matter
Claims 3-8 and 15-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Note
The Examiner cites particular columns, line numbers, and/or paragraph numbers in the references as applied to the claims above for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2123.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTOL-892 form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED-IBRAHIM ZUBERI whose telephone number is (571) 270-7761. The examiner can normally be reached M-Th 8-6, Fri 7-12/OFF.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steph Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED H ZUBERI/Primary Examiner, Art Unit 2178