DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-13 are currently pending and under examination herein.
Claims 1-13 are rejected.
Priority
Foreign priority is acknowledged to EP21184360.2, filed 07/07/2021. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. In this action, claims 1-13 are examined as though they had an effective filing date of 07/07/2021. In future actions, the effective filing date of one or more claims may change due to amendments to the claims or further analysis of the disclosures of the priority applications.
Information Disclosure Statement
The information disclosure statements (IDS) filed on 07/06/2022 and 07/20/2022 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The drawings filed on 07/06/2022 are accepted.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“An apparatus for predicting a future state of a biological system, configured to” in claim 11
“the apparatus configured to” in claim 12
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
Specific structure for the apparatuses recited in claims 11 and 12 was not found in the specification beyond mention of generic computer components. For the purposes of examination, these apparatuses are therefore considered to be generic computers.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 3 and 5 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 3 recites the limitation “wherein the metadata comprises information on at least one of a configuration of a microscope, used for generating the microscope image, a surrounding.” The metes and bounds of “surrounding” are unclear, and no definition was found in the specification. No claim depends from claim 3.
Claim 5 recites the limitation “by means of a further trained artificial neural network, the further trained artificial neural network being trained based on a sequence of microscope images, depicting biological systems over time, and a corresponding sequence of metadata over the time.” It is unclear what “further” in “further trained artificial neural network” refers to. In the other claims, “further” refers to additional images, metadata, and features introduced in the dependent claims. Claim 5 depends from claim 1, which does not recite any neural network, while other claims recite trained artificial neural networks. No definition of a “further trained artificial neural network” was found in the specification. No claim depends from claim 5.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea and a law of nature without significantly more. In accordance with MPEP § 2106, claims found to recite statutory subject matter (Step 1: YES) are then analyzed to determine if the claims recite any concepts that equate to an abstract idea (Step 2A, Prong 1). In the instant application, the claims recite the following limitations that equate to an abstract idea:
Claim 1 recites the limitation “extracting features from the microscope image having information on a state of the biological system.” Based on the broadest reasonable interpretation, extracting features could practically be done by the human mind, which draws the limitation to the mental process grouping of abstract ideas. Claim 1 also recites the limitation “using the features and the metadata to predict the future state of the biological system.” Based on the broadest reasonable interpretation, predicting the future state could involve the use of equations or practically be done by the human mind, which draws the limitation to the mental process grouping of abstract ideas. Claims 2-10 and 13 depend on claim 1, and thus contain the above issues due to said dependence.
Claim 6 recites the limitation “extracting further features from the further microscope image having information on the state of the biological system.” Based on the broadest reasonable interpretation, extracting features could practically be done by the human mind, which draws the limitation to the mental process grouping of abstract ideas. Claim 6 also recites the limitation “using the features and the further features and the metadata and the further metadata to predict the future state by detecting anomalies based on a temporal development between the features and the further features and between the metadata and further metadata.” Based on the broadest reasonable interpretation, predicting the future state could involve equations or practically be done by the human mind, which draws the limitation to the mental process grouping of abstract ideas.
Claim 8 recites the limitation “identifying a risk parameter of the metadata, the risk parameter having a positive correlation with a degradation of the state of the biological system with respect to the future state.” Based on the broadest reasonable interpretation, identifying a risk parameter could practically be done by the human mind, which draws the limitation to the mental process grouping of abstract ideas. Additionally, this limitation describes a natural correlation, which draws the limitation to a natural law. Claims 9-10 depend on claim 8, and thus contain the above issues due to said dependence.
Claim 11 recites the limitation “an apparatus configured to extract features from the microscope image.” Based on the broadest reasonable interpretation, extracting features could practically be done by the human mind, which draws the limitation to the mental process grouping of abstract ideas. Claim 11 also recites the limitation “an apparatus configured to use the features and the metadata to predict the future state of the biological system.” Based on the broadest reasonable interpretation, making the prediction could involve the use of equations or practically be done by the human mind, which draws the limitation to the mental process grouping of abstract ideas. Claim 12 depends on claim 11, and thus contains the above issues due to said dependence.
These limitations recite concepts of extracting, predicting, and identifying information that are so generically recited that they can be practically performed in the human mind as claimed, which falls under the “Mental processes” and “Mathematical concepts” groupings of abstract ideas. These recitations are similar to the concepts of collecting information, analyzing it, and displaying certain results of the collection and analysis in Electric Power Group, LLC v. Alstom (830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016)), organizing and manipulating information through mathematical correlations in Digitech Image Techs., LLC v. Electronics for Imaging, Inc. (758 F.3d 1344, 111 USPQ2d 1717 (Fed. Cir. 2014)), and comparing information regarding a sample or test to a control or target data in Univ. of Utah Research Found. v. Ambry Genetics Corp. (774 F.3d 755, 113 USPQ2d 1241 (Fed. Cir. 2014)) and Association for Molecular Pathology v. USPTO (689 F.3d 1303, 103 USPQ2d 1681 (Fed. Cir. 2012)), which the courts have identified as concepts that can be practically performed in the human mind or as mathematical relationships. Therefore, these limitations fall under the “Mental processes” and “Mathematical concepts” groupings of abstract ideas. Additionally, the limitations describe natural correlations, which fall under natural laws. This is similar to a correlation that is the consequence of how a certain compound is metabolized by the body (Mayo Collaborative Servs. v. Prometheus Labs., 566 U.S. 66, 75-77, 101 USPQ2d 1961, 1967-68 (2012)), which the courts have identified as a law of nature. As such, claims 1-13 recite an abstract idea and a natural law (Step 2A, Prong 1: YES).
Claims found to recite a judicial exception under Step 2A, Prong 1 are then further analyzed to determine if the claims as a whole integrate the recited judicial exception into a practical application (Step 2A, Prong 2). These judicial exceptions are not integrated into a practical application because the claims do not recite an additional element that reflects an improvement to technology (MPEP § 2106.04(d)(1)). Rather, the claims recite insignificant extra-solution activity (MPEP § 2106.05(g)) and mere instructions to apply a judicial exception (MPEP § 2106.05(f)). Specifically, the claims recite the following additional elements:
Claim 1 recites a method for predicting a future state of a biological system, comprising: receiving a microscope image depicting the biological system at an associated time; and receiving metadata corresponding to the microscope image.
Claim 2 recites wherein the state of the biological system is related to at least one of a health, an activity, and a growth of the biological system.
Claim 3 recites wherein the metadata comprises information on at least one of a configuration of a microscope, used for generating the microscope image, a surrounding, an agent interacting with the biological system at the associated time, a temperature, a pH, a partial pressure of carbon dioxide, a partial pressure of oxygen, a humidity, a culture condition of the biological system, a type or amount of a buffer solution, nutrient, antibiotic or growth factor of the biological system at the associated time.
Claim 4 recites wherein extracting features from the microscope image comprises using an encoder of a trained artificial neural network having an encoder-decoder architecture.
Claim 5 recites wherein using the features and the metadata comprises detecting anomalies by means of a further trained artificial neural network, the further trained artificial neural network being trained based on a sequence of microscope images, depicting biological systems over time, and a corresponding sequence of metadata over the time.
Claim 6 recites receiving a further microscope image depicting the biological system at another associated time; and receiving further metadata corresponding to the further microscope image.
Claim 7 recites using a decoder of the trained artificial neural network having the encoder-decoder architecture to reconstruct a segmented image based on the future state being predicted, the segmented image depicting the biological system as one or more segments according to the future state.
Claim 9 recites generating data for an external entity having an influence on the risk parameter, the data comprises a command for the external entity to adapt a configuration related to the risk parameter for mitigating the degradation.
Claim 10 recites wherein the risk parameter relates to an illumination property, a temperature, a humidity, an oxygen level, a carbon dioxide level or an agent having an influence on the state of the biological system.
Claim 11 recites an apparatus for predicting a future state of a biological system, configured to receive a microscope image depicting a biological system at an associated time; and receive metadata corresponding to the microscope image.
Claim 12 recites a system, comprising a microscope configured to generate a microscope image depicting a biological system at an associated time.
Claim 13 recites a non-transitory, computer-readable medium comprising a program code.
There are no limitations that indicate that the claimed extracting, predicting, and identifying require anything other than generic computing systems. As such, these limitations equate to mere instructions to implement the abstract idea on a generic computer, which the courts have stated does not render an abstract idea eligible (Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983; see also 573 U.S. at 224, 110 USPQ2d at 1984). There is no indication that these steps are affected by the judicial exception in any way, and thus they do not integrate the recited judicial exception into a practical application. As such, claims 1-13 are directed to an abstract idea and a natural law (Step 2A, Prong 2: NO).
Claims found to be directed to a judicial exception are then further evaluated to determine if the claims recite an inventive concept that provides significantly more than the judicial exception itself (Step 2B). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims recite conventional additional elements that equate to mere instructions to apply the recited exception in a generic way or in a generic computing environment. The claims also recite conventional additional elements that represent insignificant extra-solution activities. The instant claims recite the following additional elements:
Claim 1 recites a method for predicting a future state of a biological system, comprising: receiving a microscope image depicting the biological system at an associated time; and receiving metadata corresponding to the microscope image.
Claim 2 recites wherein the state of the biological system is related to at least one of a health, an activity, and a growth of the biological system.
Claim 3 recites wherein the metadata comprises information on at least one of a configuration of a microscope, used for generating the microscope image, a surrounding, an agent interacting with the biological system at the associated time, a temperature, a pH, a partial pressure of carbon dioxide, a partial pressure of oxygen, a humidity, a culture condition of the biological system, a type or amount of a buffer solution, nutrient, antibiotic or growth factor of the biological system at the associated time.
Claim 4 recites wherein extracting features from the microscope image comprises using an encoder of a trained artificial neural network having an encoder-decoder architecture.
Claim 5 recites wherein using the features and the metadata comprises detecting anomalies by means of a further trained artificial neural network, the further trained artificial neural network being trained based on a sequence of microscope images, depicting biological systems over time, and a corresponding sequence of metadata over the time.
Claim 6 recites receiving a further microscope image depicting the biological system at another associated time; and receiving further metadata corresponding to the further microscope image.
Claim 7 recites using a decoder of the trained artificial neural network having the encoder-decoder architecture to reconstruct a segmented image based on the future state being predicted, the segmented image depicting the biological system as one or more segments according to the future state.
Claim 9 recites generating data for an external entity having an influence on the risk parameter, the data comprises a command for the external entity to adapt a configuration related to the risk parameter for mitigating the degradation.
Claim 10 recites wherein the risk parameter relates to an illumination property, a temperature, a humidity, an oxygen level, a carbon dioxide level or an agent having an influence on the state of the biological system.
Claim 11 recites an apparatus for predicting a future state of a biological system, configured to receive a microscope image depicting a biological system at an associated time; and receive metadata corresponding to the microscope image.
Claim 12 recites a system, comprising a microscope configured to generate a microscope image depicting a biological system at an associated time.
Claim 13 recites a non-transitory, computer-readable medium comprising a program code.
As discussed above, there are no additional limitations to indicate that the claimed extracting, predicting, and identifying require anything other than generic computer components in order to carry out the recited abstract idea or natural law in the claims. Claims that amount to nothing more than an instruction to apply the abstract idea or natural law using a generic computer are not rendered eligible by that instruction (Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983; see also 573 U.S. at 224, 110 USPQ2d at 1984). MPEP 2106.05(f) explains that mere instructions to apply the judicial exception cannot provide an inventive concept to the claims. As specified in MPEP 2106.05(g), extra-solution activities are activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. Insignificant extra-solution activities include mere data gathering, selecting a particular data source or type of data to be manipulated, and displaying information. Additionally, Choi et al. (2020, Translational Vision Science and Technology, Vol. 9, No. 2: 1-12) teaches that using neural networks for image analysis and identifying disease, especially within the biomedical field, is a conventional technique (Page 10, Column 1, Paragraph 2).
The additional elements do not comprise an inventive concept when considered individually or as an ordered combination that transforms the claimed judicial exception into a patent-eligible application of the judicial exception. Therefore, the claims do not amount to significantly more than the judicial exception itself (Step 2B: NO). As such, claims 1-13 are not patent eligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4, 7, and 11-13 are rejected under 35 U.S.C. 102(a)(1) as anticipated by Haan et al. (2020, Digital Medicine, Vol. 3, No. 76: 1-9). Text from the reference art is italicized.
Below the applicable claims are listed:
Claim 1. A method for predicting a future state of a biological system, comprising: i. receiving a microscope image depicting the biological system at an associated time; ii. receiving metadata corresponding to the microscope image; iii. extracting features from the microscope image having information on a state of the biological system; and iv. using the features and the metadata to predict the future state of the biological system.
Claim 2. The method according to claim 1, wherein the state of the biological system is related to at least one of a health, an activity, and a growth of the biological system.
Claim 4. The method according to claim 1, wherein extracting features from the microscope image comprises using an encoder of a trained artificial neural network having an encoder-decoder architecture.
Claim 7. The method according to claim 4, further comprising: using a decoder of the trained artificial neural network having the encoder-decoder architecture to reconstruct a segmented image based on the future state being predicted, the segmented image depicting the biological system as one or more segments according to the future state.
Claim 11. An apparatus for predicting a future state of a biological system, configured to: i. receive a microscope image depicting a biological system at an associated time; ii. receive metadata corresponding to the microscope image; iii. extract features from the microscope image having information on a state of the biological system; and iv. use the features and the metadata to predict the future state of the biological system.
Claim 12. A system, comprising: i. a microscope configured to generate a microscope image depicting a biological system at an associated time; and ii. an apparatus for predicting a future state of a biological system according to claim 11, the apparatus configured to receive the microscope image to predict a future state of the biological system.
Claim 13. A non-transitory, computer-readable medium comprising a program code for performing the method according to claim 1 when the program code is executed by a processor.
Regarding Claim 1, Haan et al. teaches (Claim 1.i) receiving a microscope image depicting the biological system at an associated time (Page 5, Column 2, Paragraph 1: Thin blood smear slides were used for image analysis. Microscope images were obtained using a scanning benchtop microscope and a smartphone-based microscope). Haan et al. also teaches (Claim 1.ii) receiving metadata corresponding to the microscope image (Page 5, Column 2, Paragraph 4: co-registration between the smartphone microscope images and those taken by the clinical benchtop microscope (i.e., microscope configuration data was maintained); Page 7, Column 2, Paragraph 1: field of view data were used in subsequent analyses). Haan et al. also teaches (Claim 1.iii) extracting features from the microscope image having information on a state of the biological system (Page 7, Column 1, Paragraph 2: A second deep neural network is used to perform semantic segmentation of the blood cells imaged by our smartphone microscope). Haan et al. also teaches (Claim 1.iv) using the features and the metadata to predict the future state of the biological system (Page 7, Column 2, Paragraph 1: a slide is classified as being positive for sickle cell disease; Page 1, Column 1, Paragraph 2: Red blood cells characterized by sickle cell disease are predicted to be less deformable, have one-tenth the life span of a healthy cell, and form occlusions in blood vessels).
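For illustration only, the claimed four-step flow, as generically recited, can be sketched as follows. The names, the toy feature extractor, and the decision rule below are assumptions made for exposition; they are not the implementation of Haan et al.

```python
# Hypothetical sketch of the four claimed steps; all names and the toy
# feature extractor/predictor are assumptions, not Haan et al.'s method.
from dataclasses import dataclass
from typing import Dict

import numpy as np


@dataclass
class Observation:
    image: np.ndarray            # (i) microscope image at an associated time
    metadata: Dict[str, float]   # (ii) metadata corresponding to the image


def extract_features(image: np.ndarray) -> np.ndarray:
    # (iii) toy feature extractor; a trained encoder would be used in practice
    return np.array([image.mean(), image.std()])


def predict_future_state(features: np.ndarray, metadata: Dict[str, float]) -> str:
    # (iv) toy predictor combining image features with metadata
    score = float(features.sum()) + 0.01 * metadata.get("temperature_c", 0.0)
    return "degrading" if score > 1.0 else "stable"


obs = Observation(image=np.random.rand(64, 64), metadata={"temperature_c": 37.0})
print(predict_future_state(extract_features(obs.image), obs.metadata))
```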
Regarding Claim 2, Haan et al. teaches the state of the biological system is related to at least one of a health, an activity, and a growth of the biological system (Page 7, Column 1, Paragraph 2: A second deep neural network is used to perform semantic segmentation of the blood cells imaged by our smartphone microscope (i.e., characterizing the extracted red blood cells determines the health of the blood cells and of the individual)).
Regarding Claim 4, Haan et al. teaches extracting features from the microscope image using an encoder of a trained artificial neural network having an encoder-decoder architecture (Page 7, Column 1, Paragraph 2: The deep neural network used to perform semantic segmentation had a U-net architecture containing down-blocks and up-blocks (the U-net architecture's down-blocks and up-blocks correspond to the encoder and decoder) (see Page 6, Figure 5); Page 7, Column 1, Paragraph 4: the segmentation model was trained for 80,000 iterations using a batch size of 20).
Regarding Claim 7, Haan et al. teaches using a decoder of the trained artificial neural network having the encoder-decoder architecture to reconstruct a segmented image based on the future state predicted, the segmented image depicting the biological system as one or more segments according to the future state (Page 7, Column 2, Paragraph 2: as this network performs segmentation, it uses the SoftMax cross entropy loss function to differentiate between the three classes (sickle cell (i.e., future state of sickle cell disease complications), normal red blood cell, and background); see regarding claim 4 of the current rejection for the segmentation model training and architecture details; see Page 2, Figure 1 for generated segmented images indicating the future state).
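By way of illustration of the encoder-decoder (U-net-like) segmentation approach generically described above, the following is a minimal PyTorch sketch trained with a three-class softmax cross-entropy loss. The layer sizes, shapes, and class labels are assumptions and do not reproduce the network of Haan et al.

```python
# Minimal encoder-decoder ("down-block"/"up-block") segmentation sketch;
# an assumed illustration, not the network of Haan et al.
import torch
import torch.nn as nn


class TinyUNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        # Encoder ("down-block"): extracts features from the input image
        self.down = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Decoder ("up-block"): reconstructs a per-pixel class map
        self.up = nn.Sequential(
            nn.ConvTranspose2d(16, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv2d(16, n_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


model = TinyUNet()
images = torch.rand(4, 1, 64, 64)          # batch of microscope images
labels = torch.randint(0, 3, (4, 64, 64))  # sickle cell / normal / background
loss = nn.CrossEntropyLoss()(model(images), labels)  # softmax cross entropy
loss.backward()
```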
Regarding Claim 11, Haan et al. teaches each limitation (Claim 11.i-iv) as the limitations are repeated from Claim 1 (see regarding claim 1 of the current rejection). The apparatus used for executing each limitation is described on Page 7, Column 2, Paragraph 5: The networks were trained and test images were processed on a desktop computer.
Regarding Claim 12, Haan et al. teaches (Claim 12.i) a microscope configured to generate a microscope image depicting a biological system at an associated time (Page 5, Column 1, Paragraph 5: Design of the smartphone-based brightfield microscope - We used a Nokia Lumia 1020 smartphone attached to a custom-designed 3D-printed unit to capture images of the blood smear slides). Haan et al. also teaches (Claim 12.ii) an apparatus for predicting a future state of a biological system according to claim 11, the apparatus configured to receive the microscope image to predict a future state of the biological system (see regarding claim 11 of the current rejection for a description of the apparatus).
Regarding Claim 13, Haan et al. teaches a non-transitory, computer-readable medium comprising a program code for performing the method according to claim 1 when the program code is executed by a processor (Page 7, Column 2, Paragraph 5: The networks were trained and test images were processed on a desktop computer). A standard computer inherently contains a non-transitory, computer-readable medium.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Haan et al. (2020, Digital Medicine, Vol. 3, No. 76: 1-9), as applied to Claims 1-2, 4, 7, and 11-13 in the 35 USC 102 rejection above, in view of Vicar et al. (2020, Scientific Reports, Vol. 10: 1-12). Text from the reference art is italicized.
Below the applicable claims are listed:
Claim 1. A method for predicting a future state of a biological system, comprising: i. receiving a microscope image depicting the biological system at an associated time; ii. receiving metadata corresponding to the microscope image; iii. extracting features from the microscope image having information on a state of the biological system; and iv. using the features and the metadata to predict the future state of the biological system.
Claim 2. The method according to claim 1, wherein the state of the biological system is related to at least one of a health, an activity, and a growth of the biological system.
Claim 3. The method according to claim 1, wherein the metadata comprises information on at least one of a configuration of a microscope, used for generating the microscope image, a surrounding, an agent interacting with the biological system at the associated time, a temperature, a pH, a partial pressure of carbon dioxide, a partial pressure of oxygen, a humidity, a culture condition of the biological system, a type or amount of a buffer solution, nutrient, antibiotic or growth factor of the biological system at the associated time.
Claim 4. The method according to claim 1, wherein extracting features from the microscope image comprises using an encoder of a trained artificial neural network having an encoder-decoder architecture.
Claim 5. The method according to claim 1, wherein using the features and the metadata comprises detecting anomalies by means of a further trained artificial neural network, the further trained artificial neural network being trained based on a sequence of microscope images, depicting biological systems over time, and a corresponding sequence of metadata over the time.
Claim 6. The method according to claim 1, further comprising: i. receiving a further microscope image depicting the biological system at another associated time; ii. receiving further metadata corresponding to the further microscope image; iii. extracting further features from the further microscope image having information on the state of the biological system; and iv. using the features and the further features and the metadata and the further metadata to predict the future state by detecting anomalies based on a temporal development between the features and the further features and between the metadata and further metadata.
Claim 7. The method according to claim 4, further comprising: using a decoder of the trained artificial neural network having the encoder-decoder architecture to reconstruct a segmented image based on the future state being predicted, the segmented image depicting the biological system as one or more segments according to the future state.
Claim 8. The method according to claim 1, further comprising: identifying a risk parameter of the metadata, the risk parameter having a positive correlation with a degradation of the state of the biological system with respect to the future state.
Claim 9. The method according to claim 8, further comprising: generating data for an external entity having an influence on the risk parameter, the data comprises a command for the external entity to adapt a configuration related to the risk parameter for mitigating the degradation.
Claim 10. The method according to claim 8, wherein the risk parameter relates to an illumination property, a temperature, a humidity, an oxygen level, a carbon dioxide level or an agent having an influence on the state of the biological system.
Claim 11. An apparatus for predicting a future state of a biological system, configured to: i. receive a microscope image depicting a biological system at an associated time; ii. receive metadata corresponding to the microscope image; iii. extract features from the microscope image having information on a state of the biological system; and iv. use the features and the metadata to predict the future state of the biological system.
Claim 12. A system, comprising: i. a microscope configured to generate a microscope image depicting a biological system at an associated time; and ii. an apparatus for predicting a future state of a biological system according to claim 11, the apparatus configured to receive the microscope image to predict a future state of the biological system.
Claim 13. A non-transitory, computer-readable medium comprising a program code for performing the method according to claim 1 when the program code is executed by a processor.
Regarding Claim 1, Haan et al. teaches (Claim 1.i) receiving a microscope image depicting the biological system at an associated time (Page 5, Column 2, Paragraph 1: Thin blood smear slides were used for image analysis. Microscope images were obtained using a scanning benchtop microscope and a smartphone-based microscope). Haan et al. also teaches (Claim 1.ii) receiving metadata corresponding to the microscope image (Page 5, Column 2, Paragraph 4: co-registration between the smartphone microscope images and those taken by the clinical benchtop microscope (i.e., microscope configuration data was maintained); Page 7, Column 2, Paragraph 1: field of view data were used in subsequent analyses). Haan et al. also teaches (Claim 1.iii) extracting features from the microscope image having information on a state of the biological system (Page 7, Column 1, Paragraph 2: A second deep neural network is used to perform semantic segmentation of the blood cells imaged by our smartphone microscope). Haan et al. also teaches (Claim 1.iv) using the features and the metadata to predict the future state of the biological system (Page 7, Column 2, Paragraph 1: a slide is classified as being positive for sickle cell disease; Page 1, Column 1, Paragraph 2: Red blood cells characterized by sickle cell disease are predicted to be less deformable, have one-tenth the life span of a healthy cell, and form occlusions in blood vessels).
Regarding Claim 2, Haan et al. teaches the state of the biological system is related to at least one of a health, an activity, and a growth of the biological system (Page 7, Column 1, Paragraph 2: A second deep neural network is used to perform semantic segmentation of the blood cells imaged by our smartphone microscope (i.e., characterizing the extracted red blood cells determines the health of the blood cells and of the individual)).
Regarding Claim 4, Haan et al. teaches extracting features from the microscope image using an encoder of a trained artificial neural network having an encoder-decoder architecture (Page 7, Column 1, Paragraph 2: The deep neural network used to perform semantic segmentation had a U-net architecture containing down-blocks and up-blocks (the U-net architecture's down-blocks and up-blocks correspond to the encoder and decoder) (see Page 6, Figure 5); Page 7, Column 1, Paragraph 4: the segmentation model was trained for 80,000 iterations using a batch size of 20).
Regarding Claim 7, Haan et al. teaches using a decoder of the trained artificial neural network having the encoder-decoder architecture to reconstruct a segmented image based on the future state predicted, the segmented image depicting the biological system as one or more segments according to the future state (Page 7, Column 2, Paragraph 2: as this network performs segmentation, it uses the SoftMax cross entropy loss function to differentiate between the three classes (sickle cell (i.e., future state of sickle cell disease complications), normal red blood cell, and background); see regarding claim 4 of the current rejection for the segmentation model training and architecture details; see Page 2, Figure 1 for generated segmented images indicating the future state).
Regarding Claim 11, Haan et al. teaches each limitation (Claim 11.i-iv), which are repeated from Claim 1 (see regarding claim 1 of the current rejection). The apparatus used for executing each limitation is described on Page 7, Column 2, Paragraph 5: The networks were trained and test images were processed on a desktop computer.
Regarding Claim 12, Haan et al. teaches (Claim 12.i) a microscope configured to generate a microscope image depicting a biological system at an associated time (Page 5, Column 1, Paragraph 5: Design of the smartphone-based brightfield microscope - We used a Nokia Lumia 1020 smartphone attached to a custom-designed 3D-printed unit to capture images of the blood smear slides). Haan et al. also teaches (Claim 12.ii) an apparatus for predicting a future state of a biological system according to claim 11, the apparatus configured to receive the microscope image to predict a future state of the biological system (see regarding claim 11 of the current rejection for a description of the apparatus).
Regarding Claim 13, Haan et al. teaches a non-transitory, computer-readable medium comprising a program code for performing the method according to claim 1 when the program code is executed by a processor (Page 7, Column 2, Paragraph 5: The networks were trained and test images were processed on a desktop computer). A standard computer inherently contains a non-transitory, computer-readable medium.
Haan et al. does not teach the additional specific metadata recited in claim 3. Haan et al. also does not teach that using the features and metadata comprises detecting anomalies by a further trained artificial neural network, the further artificial neural network being trained based on a sequence of microscope images, depicting biological systems over time, and a corresponding sequence of metadata over the time (Claim 5). Haan et al. also does not teach receiving a further microscope image depicting the biological system at another associated time; receiving further metadata corresponding to the further microscope image; extracting further features from the further microscope image having information on the state of the biological system; and using the features and the further features and the metadata and the further metadata to predict the future state by detecting anomalies based on a temporal development between the features and the metadata (Claim 6). Haan et al. also does not teach identifying a risk parameter of the metadata that is correlated with the degradation of the biological system (Claim 8). Haan et al. also does not teach generating data for an external entity, the data comprising a command for the external entity to adapt a configuration related to mitigating the risk of degradation (Claim 9). Haan et al. also does not teach that the risk parameter relates to an illumination property, a temperature, a humidity, an oxygen level, a carbon dioxide level or an agent having an influence on the state of the biological system (Claim 10).
Regarding Claim 3, Vicar et al. teaches metadata associated with images taken from a microscope, including a temperature (37 °C), a partial pressure of carbon dioxide (5% CO2), a humidity (60%), and a culture condition of the biological system (To maintain standard cultivation conditions during time-lapse experiments, cells were placed in the gas chamber) (Page 5, Paragraphs 1-2).
Regarding Claim 5, Vicar et al. teaches detecting anomalies by means of a further trained artificial neural network, the further trained artificial neural network being trained based on a sequence of microscope images, depicting biological systems over time, and a corresponding sequence of metadata over the time (Page 9, Paragraph 1: Long-short term memory network has been trained for the regression of Gaussian curves created on the time of cell death, where the whole method is summarized in Fig. 1; Page 2, Figure 1: the figure shows using multiple images of cells taken over time (videos) to train a model to predict cell death timing).
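As an illustration of the technique summarized above (a long short-term memory network regressing a Gaussian curve centered on the cell death time), the following is a minimal PyTorch sketch. The sequence shapes, Gaussian width, and synthetic target are assumptions, not the code of Vicar et al.

```python
# Hedged sketch: an LSTM regresses a Gaussian curve centered on the event
# (cell death) time; shapes and values are assumptions for exposition.
import torch
import torch.nn as nn

T, F = 120, 8                              # frames; per-frame features + metadata
seq = torch.rand(1, T, F)                  # one cell's feature/metadata sequence
death_frame = 80
t = torch.arange(T, dtype=torch.float32)
target = torch.exp(-0.5 * ((t - death_frame) / 5.0) ** 2)  # Gaussian at the event

lstm = nn.LSTM(input_size=F, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)

out, _ = lstm(seq)                         # (1, T, 32)
pred = head(out).squeeze(-1)               # per-frame network response, (1, T)
loss = nn.MSELoss()(pred, target.unsqueeze(0))
loss.backward()
```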
Regarding Claim 6, Vicar et al. teaches (Claim 6.i) receiving a further microscope image depicting the biological system at another associated time (Page 5, Paragraph 2: Holograms were captured by CCD camera, fluorescence images were captured using ANDOR Zyla 5.5 sCMOS camera. Complete quantitative phase image reconstruction and image processing were performed in Q-PHASE control software; Page 5, Paragraph 1: For each of three cell lines and each of three treatments, seven fields of view were observed with the frame rate 3 mins/frame (multiple images were taken over time)). Vicar et al. also teaches (Claim 6.ii) receiving further metadata corresponding to the further microscope image (Page 5, Paragraph 2: The value of the Phase was measured directly by the microscope and used for each frame to calculate mass). Vicar et al. also teaches (Claim 6.iii) extracting further features from the further microscope image having information on the state of the biological system (Page 7, Paragraph 1: For further analysis, we extracted several cell features; Page 8, paragraph 1: All these features were evaluated in all frames, where the result is a set of signals describing the cell behaviour in time). Vicar et al. also teaches (Claim 6.iv) using the features and metadata across time to predict the future state by detecting anomalies based on a temporal development between the features and the further features and between the metadata and further metadata (Page 9, Paragraph 2: Cell death time was identified (i.e., predicted) as a maximum in the network response with a value higher than the chosen threshold).
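The event-time readout described above (identifying the cell death time as the maximum of the network response when it exceeds a chosen threshold) can be sketched as follows; the threshold value and the synthetic response are assumptions.

```python
# Illustrative peak-above-threshold readout; the cutoff is an assumption.
import numpy as np

response = np.exp(-0.5 * ((np.arange(120) - 80) / 5.0) ** 2)  # stand-in network output
THRESHOLD = 0.5                                               # assumed cutoff
peak = int(response.argmax())
death_time = peak if response[peak] > THRESHOLD else None     # event frame, or none
```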
Regarding Claim 8, Vicar et al. teaches identifying a risk parameter of the metadata, the risk parameter having a positive correlation with a degradation of the state of the biological system with respect to the future state (Page 5, Figure 4: caspase 3,7 and propidium iodide signal values were shown to be positively correlated with the cell death time prediction).
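For exposition, identifying a risk parameter of the kind recited in claim 8 could amount to correlating each metadata signal with a degradation measure and keeping the positively correlated ones. The data and cutoff below are synthetic assumptions, not the analysis of Vicar et al.

```python
# Synthetic sketch: screen metadata signals for positive correlation with
# a degradation measure; data and the 0.5 cutoff are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)
degradation = rng.random(50)                    # per-sample degradation measure
metadata = {
    "propidium_iodide": degradation + 0.1 * rng.random(50),  # correlated by design
    "humidity": rng.random(50),                              # uncorrelated noise
}
correlations = {
    name: float(np.corrcoef(signal, degradation)[0, 1])
    for name, signal in metadata.items()
}
risk_parameters = [name for name, r in correlations.items() if r > 0.5]
```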
Regarding Claim 9, Vicar et al. teaches generating data for an external entity having an influence on the risk parameter, the data comprises a command for the external entity to adapt a configuration related to the risk parameter for mitigating the degradation (Page 5, Figure 4: The figure indicates caspase 3,7 and propidium iodide signal values were positively correlated with the cell death time prediction). The data presented by this figure could be used as an indicator to alter the conditions prior to degradation (cell death).
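A hedged sketch of the generated “command” data recited in claim 9 follows; the external entity name and message fields are invented for illustration and appear in neither reference.

```python
# Hypothetical "command for the external entity" (claim 9); the device name
# and message fields are invented for illustration only.
def make_command(risk_parameter: str, new_setpoint: float) -> dict:
    return {
        "target": "incubator_controller",   # assumed external entity
        "action": "adjust_configuration",
        "parameter": risk_parameter,
        "setpoint": new_setpoint,
    }

# e.g., lower the temperature before the predicted degradation occurs
command = make_command("temperature_c", new_setpoint=36.5)
```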
Regarding Claim 10, Vicar et al. teaches the risk parameter relates to an illumination property (Page 5: Cell dry mass values (correlated with cell death) were derived from the value of the Phase (an illumination property), which was measured directly by the microscope).
It would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date of the invention to combine the prior art teachings of Haan et al. and Vicar et al. Both Haan et al. and Vicar et al. utilize at least one image of cells generated from a microscope to make predictions with a machine learning framework. Vicar et al. utilizes images taken over time, which adds an additional dimension of data that can be integrated into the prediction and segmentation machine learning models. Haan et al. suggests a larger training dataset could improve their predictive modeling but notes that acquiring larger image datasets is difficult (Page 4, Column 1, Paragraph 1). Taking videos (i.e., images of the same system over time) would increase the amount of data available for training and developing the predictive model. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to combine the methods of the two references indicated above.
Furthermore, one of ordinary skill in the art would predict that the method taught by Haan et al. could be readily added to the methods of Vicar et al. with a reasonable expectation of success because using videos, instead of single static images, in a machine learning framework to identify sickle cell disease had already been demonstrated by O’Connor et al. (2020, Biomedical Optics Express, Vol. 11, No. 8: 4491-4508) before the effective filing date. Accordingly, claims 1-13 taken as a whole would have been prima facie obvious before the effective filing date and are rejected under 35 U.S.C. 103.
Double Patenting
No issues of double patenting were identified.
Conclusion
No Claims are allowed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BLAKE H ELKINS, whose telephone number is (571) 272-2649. The examiner can normally be reached Monday-Friday, 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Karlheinz Skowronek can be reached at (571) 272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.H.E./
Examiner, Art Unit 1687
/Karlheinz R. Skowronek/Supervisory Patent Examiner, Art Unit 1687