DETAILED ACTION
This Office action is in response to the Application filed on May 16, 2024, which claims the benefit of U.S. Provisional Application No. 63/467582, filed on May 18, 2023. Claims 1-20 have been cancelled and new claims 21-40 have been entered via preliminary amendment. An action on the merits follows. Claims 21-40 are pending in the application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.
Claims 21-36 and 39-40 are provisionally rejected under 35 U.S.C. 101 as claiming the same invention as that of claims 1-16 and 19-20 of copending Application No. 19/357697 (reference application) and claims 1-16 and 19-20 of copending Application No. 19/391805 (reference application), respectively. This is a provisional statutory double patenting rejection since the claims directed to the same invention have not in fact been patented.
Claim Objections
Claims 21-22 and 39-40 are objected to because of the following informalities:
Claim 21 recites the limitation “a set of time-series image data depicting one or more cells” in line 7 of the claim. However, it is not clear whether the claimed “one or more cells” recited in line 7 of the claim corresponds to the previously claimed “one or more cells” recited in line 1 of the claim, or not.
Therefore, based on the above, for examination purposes the claimed “a set of time-series image data depicting one or more cells” recited in line 7 of the claim will be interpreted as “a set of time-series image data depicting the one or more cells”.
Claim 21 further recites the limitation “the one or more cells of the subject” in line 13 of the claim. However, there is insufficient antecedent basis for the claimed “the subject” limitation recited in line 13 of the claim.
Therefore, based on the above, for examination purposes the claimed “determining a cell state of one or more cells” and “determining the cell state of the one or more cells of the subject” recited in lines 1 and 13 of the claim, respectively, will be interpreted as “determining a cell state of one or more cells of a subject” and “determining the cell state of the one or more cells of the subject”.
Claim 22 recites the limitation “a first embedding in the sequence of embeddings and a second embedding in the sequence of embeddings” in lines 5-6 of the claim. However, it is not clear whether the claimed “a first embedding in the sequence of embeddings and a second embedding in the sequence of embeddings” recited in lines 5-6 of the claim corresponds to the claimed “a first embedding in the sequence of embeddings and a second embedding in the sequence of embeddings” previously recited in lines 3-4 of the claim, or not.
Therefore, based on the above, for examination purposes the claimed “a first embedding in the sequence of embeddings and a second embedding in the sequence of embeddings” recited in lines 5-6 of the claim will be interpreted as “the first embedding in the sequence of embeddings and the second embedding in the sequence of embeddings”.
Claim 39 recites the limitation “a set of time-series image data depicting one or more cells” in line 2 of the claim. However, it is not clear whether the claimed “one or more cells” recited in line 2 of the claim corresponds to the previously claimed “one or more cells” recited in line 1 of the claim, or not.
Therefore, based on the above, for examination purposes the claimed “a set of time-series image data depicting one or more cells” recited in line 2 of the claim will be interpreted as “a set of time-series image data depicting the one or more cells”.
Claim 39 further recites the limitation “the one or more cells of the subject” in line 8 of the claim. However, there is insufficient antecedent basis for the claimed “the subject” limitation recited in line 8 of the claim.
Therefore, based on the above, for examination purposes the claimed “determining a cell state of one or more cells” and “determining the cell state of the one or more cells of the subject” recited in lines 1 and 8 of the claim, respectively, will be interpreted as “determining a cell state of one or more cells of a subject” and “determining the cell state of the one or more cells of the subject”.
Claim 40 recites the limitation “a set of time-series image data depicting one or more cells” in line 5 of the claim. However, it is not clear whether the claimed “one or more cells” recited in line 5 of the claim corresponds to the previously claimed “one or more cells” recited in line 2 of the claim, or not.
Therefore, based on the above, for examination purposes the claimed “a set of time-series image data depicting one or more cells” recited in line 5 of the claim will be interpreted as “a set of time-series image data depicting the one or more cells”.
Claim 40 further recites the limitation “the one or more cells of the subject” in line 11 of the claim. However, there is insufficient antecedent basis for the claimed “the subject” limitation recited in line 11 of the claim.
Therefore, based on the above, for examination purposes the claimed “determining a cell state of one or more cells” and “determining the cell state of the one or more cells of the subject” recited in lines 2 and 11 of the claim, respectively, will be interpreted as “determining a cell state of one or more cells of a subject” and “determining the cell state of the one or more cells of the subject”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 26 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 26 recites the limitation “the plurality of fluorescence images is captured using an imager with a frame rate of at least four frames per second” in lines 1-2 of the claim. However, the examiner cannot clearly ascertain whether the claimed “an imager with a frame rate of at least four frames per second” recited in lines 1-2 of claim 26 encompasses embodiments corresponding to the claimed “an imager with a frame rate of at least four frames per second” previously recited in lines 1-2 of claim 24, or whether it encompasses embodiments corresponding to another “imager with a frame rate of at least four frames per second” different from the one previously recited in lines 1-2 of claim 24, for example. Therefore, the metes and bounds of the claim are not clearly set forth and the examiner cannot clearly determine which elements are encompassed by the claim language, which renders the claim indefinite.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 21-23, 27-28, 33-36, and 39-40 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Middlestead et al. (US PG Publication No. 2025/0131749 A1), hereafter referred to as Middlestead.
Regarding claim 21, Middlestead discloses a system for determining a cell state of one or more cells [of a subject] (Par. [0008-11]: systems and methods that may combine high-throughput flow or static imaging technology and machine learning, such as convolutional neural networks… a digital filter can be a Convolutional neural network (ConvNet) that can analysis, to analyze cells… a biological sample from a subject may undergo acoustic separation followed by flow imaging microscopy with parameters that have been adjusted so as to obtain multiple images of a microparticle or feature of interest, and ending with machine learning analysis… The convolutional neural network first classifies images of each cell… Acoustic separation removes larger particles and subject-derived cells; Par. [0176]: systems and methods for identifying and optionally characterizing a cell, cells of interest as a target cell by analyzing a signature of the cell of interest… Flow cytometry may be used to measure a signature of a cell such as the presence, absence, or relative amount of the cell, or through differentiating physical or functional characteristics of the target cells of interest. Cells of interest identified using the systems and methods as described herein include cell types implicated in a disease, disorder, or a non-disease state), comprising:
one or more processors (Par. [0211-213]: computing systems described herein, whether controlled by end users at the site of the sample or by a remote entity controlling a machine learning model, can be implemented as software components executing on one or more general purpose processors or specially designed processors such as programmable logic devices (e.g., Field Programmable Gate Arrays (FPGAs)) and/or Application Specific Integrated Circuits (ASICs) designed to perform certain functions or a combination thereof… computer hardware, typically implemented as one or more processors (e.g., CPUs or ASICs);
a memory (Par. [0211-213]: code executed during operation of image acquisition systems and/or machine learning models (computational elements) can be embodied by a form of software elements which can be stored in a nonvolatile storage medium (such as optical disk, flash storage device, mobile hard disk, cloud-based systems etc.)… computer hardware, typically implemented as one or more processors… and associated memory); and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions (Par. [0008]: systems and methods that may combine high-throughput flow or static imaging technology and machine learning, such as convolutional neural networks, in variety of medical applications… the approaches described herein may use high-throughput flow imaging microscopy instrumentation and machine learning module application, such as a digital filter, which may include a computer executable program, hardware application or combination of the same, that can differentiate different microparticles by one or more characteristic… a Convolutional neural network (ConvNet) that can analysis, to analyze cells; Par. [0211-213]: embodiments disclosed herein may be implemented as a system for topographical computer vision through automatic imaging, analysis and classification of physical samples using machine learning techniques and/or stage-based scanning. 
Any of the computing systems described herein… can be implemented as software components executing on one or more general purpose processors or specially designed processors such as programmable logic devices (e.g., Field Programmable Gate Arrays (FPGAs)) and/or Application Specific Integrated Circuits (ASICs) designed to perform certain functions or a combination thereof… code executed during operation of image acquisition systems and/or machine learning models (computational elements) can be embodied by a form of software elements which can be stored in a nonvolatile storage medium (such as optical disk, flash storage device, mobile hard disk, cloud-based systems etc.), including a number of instructions for making a computer device (such as personal computers, servers, network equipment, etc.)… Each computational element may be implemented as an organized collection of computer data and instructions… an image acquisition algorithm and a machine learning model can each be viewed as a form of application software that interfaces with a user and with system software. System software typically interfaces with computer hardware, typically implemented as one or more processors (e.g., CPUs or ASICs as mentioned) and associated memory) for:
receiving a set of time-series image data depicting [the] one or more cells (Par. [0008-89]: systems and methods that may combine high-throughput flow or static imaging technology and machine learning, such as convolutional neural networks… a digital filter can be a Convolutional neural network (ConvNet) that can analysis, to analyze cells… a biological sample from a subject may undergo acoustic separation followed by flow imaging microscopy with parameters that have been adjusted so as to obtain multiple images of a microparticle or feature of interest, and ending with machine learning analysis… Acoustic separation removes larger particles and subject-derived cells… The convolutional neural network first classifies images of each cell as being images of either microbes or blood cells… Each individual image receives a classification likelihood, for each class… multiple, sequential images of are recorded during passage of cells through the flow imaging microscope. Using these sequentially recorded images, the accuracy of identifying a microbe within a given time series of images can be increased by taking into account (e.g., using a sliding window calculation) the likely identity of the images that appear in the time series before and after the image of interest…multiple images of each cell or microbe are recorded in a correlated time series as the sample flows through the flow microscopy instrument, and the likely identity (as determined by the machine learning module) of the image before and the image after an image of interest are also taken into account when determining the identity of a cell or microbe in the image of interest using a moving mean calculation or other weighted average technique… invention includes the analysis of transduction rates in cultured T-cells during the production of CAR-T cells used in cell therapy for the treatment of cancer using immunotherapy… transduction rate determinations may be made by establishing training data sets that are 
grouped by transduced samples, and non-transduced samples, at various time points. For unsupervised learning, transduced and/or non-transduced cells at various time points can be used to estimate low-dimensional representations of the images of interest… wherein said plurality of digital image signals comprises a plurality of digital image signals captured sequentially… imaging instrument captures multiple, sequential digital image signals of said microparticles… wherein said plurality of digital image signals comprises a plurality of digital image signals captured sequentially… imaging instrument is configured to capture multiple, sequential digital images of cell culture components… wherein said plurality of digital image signals comprises a plurality of digital image signals captured sequentially; Par. [0176-195]: systems and methods for identifying and optionally characterizing a cell, cells of interest as a target cell by analyzing a signature of the cell of interest… Flow cytometry may be used to measure a signature of a cell such as the presence, absence, or relative amount of the cell, or through differentiating physical or functional characteristics of the target cells of interest. Cells of interest identified using the systems and methods as described herein include cell types implicated in a disease, disorder, or a non-disease state… Systems and methods as described herein can involve analysis of one or more test samples from a subject compared against one or more reference samples/datasets. A sample may be any suitable type that allows for the analysis of different discrete populations of cells. A sample may be any suitable type that allows for analysis of a single cell population. Samples may be obtained once or multiple times from a subject. 
Multiple samples may be obtained from different locations in the individual (e.g., blood samples, bone marrow samples, and/or tissue samples), at different times from the individual (e.g., a series of samples taken to diagnose a disease or to monitor for return of a pathological condition), or any combination thereof. These and other possible sampling combinations based on sample type, location, and time of sampling allow for the detection of the presence of cells before and/or after infection and monitoring for disease… models take as inputs one or more features of interest, such as cellular artifacts extracted from an image of a sample pass through a high-throughput system, and, with little or no additional preprocessing, they classify individual feature of interest as particular cell types; receiving a set of time-series image data depicting the one or more cells (e.g. systems and methods for identifying and characterizing cells of interest include multiple images of each cell are recorded in a correlated time series as a sample flows through a flow microscopy instrument, including subject-derived cells obtained from test samples from a subject, as indicated above, for example);
determining a sequence of embeddings by inputting the set of time-series image data into a first trained machine learning model;
determining a summary embedding based on the sequence of embeddings, the summary embedding comprising a temporal dimension based on temporal information associated with the sequence of embeddings; and
determining the cell state of the one or more cells of the subject by inputting the summary embedding into a second trained machine learning model (Par. [0041-74]: digital filter comprises a convolutional neural network further comprising a machine learning-based automated classifier configured to determine if the microparticles are a microbe of interest, or a subject-derived cell, and/or wherein said digital filter comprises a convolutional neural network further comprising a machine learning-based embedding scheme configured to determine if the cell culture components comprising the microparticles are microbes of interest, or a subject-derived cells… digital filter comprises a machine learning-based embedding scheme configured to determine if the cell culture components comprise transduced CAR-T cells, or non-transduced T-cell; Par. [0128-149]: methods and systems described herein may further include the step of generating a reference distribution by embedding the previously extracted features of interest from the reference sample, in this case a reference biological sample containing a microbe or cell population. 
In what follows, “embedding” refers to generic dimension reduction (also sometimes referred to as an “encoding”); the “embedding” can be accomplished via supervised techniques such as neural network embeddings calibrated by triplet-loss or unsupervised techniques like Principal Components Analysis (PCA) or extracted from the latent space representations of obtained by other unsupervised methods such as Variational Auto-Encoders (VAE) or Generative Adversarial Network (GAN); optionally with further dimension reduction via UMAP or t-SNE… this embedding process may convert the extracted features of interest to a lower dimensional feature set which can be used for classification or prediction… one or more additional samples identified above may be utilized to generate additional reference distributions through the process of embedding the extracted features of interest from the images capture of the additional samples so as to again, convert the extracted features of interest to a lower dimensional feature set… the reference distributions of the reference's embedding, and optionally the additional embeddings of additional samples, may be defined by using a loss function to separate the embedded lower dimensional feature sets associated with each reference distribution. 
Further, the probability density of the individual extracted feature embeddings of the reference and optionally the additional samples may be estimated, and in a preferred embodiment, the probability density of one or more of the additional samples on the embedding space may be further estimated… the low dimensional embeddings are obtained by altering the machine leaning module (4) output… after obtaining a plurality of images from the image capture module (3), the images are process by a machine learning module (4) that is configured identify the presence of individual microparticles, such as cell and microbes, including the identification of microbial species… the machine learning module (4) may employ a multiple step classification process, and a sliding window sampler to make use of the image redundancy settings, resulting in highly accurate classification results. The machine learning module (4) may alternatively consist of an embedding obtained from a neural network trained in an unsupervised or supervised fashion… all of the parameters required to specify the function evaluations in the various modules may be assumed to have already been estimated using a large collection of labeled raw or processed image data (where “processed” implies that the modules upstream have produced the correct input) by minimizing a suitable “cost function”, where the cost function can aim at classification (e.g. a “cross entropy loss” function) as would be needed, for example, in pathogen analysis or the cost function can aim at developing a low dimensional representation through “image embeddings” for applications in fault detection (e.g. using a supervised triplet loss cost function or a least squares type reconstruction loss as used in unsupervised learning)… a Machine learning module (4) may include Fusion module that may be optionally used to leverage data and/or meta-information from other sources. 
The features from a ConvNet may be combined with other measurement or descriptive features through a variety of methods (e.g. a two input Artificial Neural Network, a Random Forest algorithm or Gradient Boosting algorithm for feature selection) producing a new set of feature of interest outputs or image embeddings… module can use another Artificial Neural Network (ANN) to produce a new set of features or embeddings (depending on the specific application)… a machine learning module (4) may include one or more classification or classifier modules that assign a predefined label and probability of a class based on the passed in features/images using another classifier. The subsequent class and class probability output can either be the final output, or the features/raw input features can be embedded via another pretrained ANN and passed to the other branch, in this instance an optional fault detection module. The fault detection module,” as an optional part of the Machine learning module (4) may take low-dimensional embedding representations of the raw images and runs statistical hypothesis tests to check if it is statistically probable that the collection of embeddings has been drawn from a precomputed reference distribution of interest. This step may incorporate a precomputed empirically determined probability distribution (where the distribution function estimation can be parametric or nonparametric) of a suitable goodness-of-fit test statistic characterizing a large collection of labeled ground truth data. The aforementioned distribution may then be used to compute a p-value for each image in the “test dataset” enabling a user to detect if the test statistic generated by the collection of embeddings of the unlabeled data are statistically similar to the embeddings of the labeled reference distribution; Par. 
[0168-195]: a deep learning model may have significant depth and can classify a large or heterogeneous array of features of interest, such as particles in a liquid suspension, or cellular artifacts, such as pathogens or gene expression… classify a large heterogeneous range of features of interest, such as cells, microorganisms, cells expressing one or more genes, or microorganisms that may have a phenotypic or genotypic traits, such as antibiotic resistance… systems and methods for identifying and optionally characterizing a feature of interest, by analyzing the feature of interest from a test sample and thereby generating a test dataset and comparing it to a training dataset generated from a reference sample, and optionally one or more additional samples. A feature of interest in this embodiment may include a feature of the cell, such as cell morphology among others… identifying and optionally characterizing a cell of interest as a target cell by analyzing a signature of the cell of interest, quantified by a “feature of interest” extracted from the image via a ConvNet, in a test sample and comparing it to a signature of the target cell from a reference sample. A signature of a cell, or “feature of interest” may also include a physical feature of the cell, such as cell morphology, as well as the presence, absence, or relative amount of gene expression within and/or associated with the cell, a phenotypic or genotypic traits, such as antibiotic resistance in a microorganism… Flow cytometry may be used to measure a signature of a cell such as the presence, absence, or relative amount of the cell, or through differentiating physical or functional characteristics of the target cells of interest. 
Cells of interest identified using the systems and methods as described herein include cell types implicated in a disease, disorder, or a non-disease state… goal of training a neural network is typically to have the ANN make an accurate prediction of a new sample, for example, a sample not used during training or validation. Accuracy of the prediction is often measured against the objective function, for example, classification accuracy may be enabled by providing the truth label for the new sample. However, in one embodiment of the present inventor's method, is the use of neural networks for embedding/dimension reduction, namely takes a set large number of pixels in a source HTI image, and summarize the information content with low (2-256) dimensional feature output embedding values from the ANN; the feature embedding can be reduced to 2-6 via post-processing techniques like t-SNE or UMAP; the statistical distribution of the 2-6 dimensional embedding point cloud is determined… models take as inputs one or more features of interest, such as cellular artifacts extracted from an image of a sample pass through a high-throughput system, and, with little or no additional preprocessing, they classify individual feature of interest as particular cell types;
determining a sequence of embeddings by inputting the set of time-series image data into a first trained machine learning model (e.g. systems and methods for identifying and characterizing cells of interest include a machine learning-based embedding scheme configured to determine if cell culture components comprising microparticles are subject-derived cells or microbes, including embeddings obtained from a neural network trained in an unsupervised or supervised fashion, for example, and generating a reference distribution by embedding previously extracted features of interest from a reference biological sample containing cell population or microbes, including multiple images of each subject-derived cells obtained from test samples from a subject recorded in a correlated time series, such as sequential images recorded during passage of cells through a flow imaging microscope, as indicated above, for example);
determining a summary embedding based on the sequence of embeddings, the summary embedding comprising a temporal dimension based on temporal information associated with the sequence of embeddings (e.g. systems and methods for identifying and characterizing cells of interest include a machine learning-based embedding scheme includes generating a reference distribution by embedding previously extracted features of interest from a reference biological sample containing cell population or microbes, including multiple images of each subject-derived cells obtained from test samples from a subject recorded in a correlated time (i.e. temporal) series, for example, and using networks for embedding/dimension reduction by taking a set large number of pixels in a source image, and summarize the information content with low dimensional feature output embedding values from an Artificial Neural Network (ANN), as indicated above, for example); and
determining the cell state of the one or more cells of the subject by inputting the summary embedding into a second trained machine learning model (e.g. systems and methods for identifying and characterizing cells of interest include a machine learning-based embedding scheme includes generating a reference distribution by embedding previously extracted features of interest from a reference biological sample containing cell population or microbes, including multiple images of each subject-derived cells obtained from test samples from a subject recorded in a correlated time series, for example, including cells of interest that are identified using the systems and methods as described above, including cell types implicated in a disease, disorder, or a non-disease state, for example, and using networks for embedding/dimension reduction by taking a set large number of pixels in a source image, and summarize the information content with low dimensional feature output embedding values from an ANN, as indicated above, in which a class probability output can either be the final output, or embedded via another (i.e. a second) pretrained ANN, for example).
Regarding claim 22, Middlestead discloses the system of claim 21, wherein the temporal information associated with the sequence of embeddings comprises at least one of:
a temporal relationship between a first embedding in the sequence of embeddings and a second embedding in the sequence of embeddings,
a sequential relationship between a [the] first embedding in the sequence of embeddings and a [the] second embedding in the sequence of embeddings, and
a time stamp associated with each embedding in the sequence of embeddings (Par. [0008-89]: systems and methods that may combine high-throughput flow or static imaging technology and machine learning, such as convolutional neural networks… a digital filter can be a Convolutional neural network (ConvNet) that can analysis, to analyze cells… a biological sample from a subject may undergo acoustic separation followed by flow imaging microscopy with parameters that have been adjusted so as to obtain multiple images of a microparticle or feature of interest, and ending with machine learning analysis… Acoustic separation removes larger particles and subject-derived cells… The convolutional neural network first classifies images of each cell as being images of either microbes or blood cells… Each individual image receives a classification likelihood, for each class… multiple, sequential images of are recorded during passage of cells through the flow imaging microscope. Using these sequentially recorded images, the accuracy of identifying a microbe within a given time series of images can be increased by taking into account (e.g., using a sliding window calculation) the likely identity of the images that appear in the time series before and after the image of interest…multiple images of each cell or microbe are recorded in a correlated time series as the sample flows through the flow microscopy instrument, and the likely identity (as determined by the machine learning module) of the image before and the image after an image of interest are also taken into account when determining the identity of a cell or microbe in the image of interest using a moving mean calculation or other weighted average technique… invention includes the analysis of transduction rates in cultured T-cells during the production of CAR-T cells used in cell therapy for the treatment of cancer using immunotherapy… transduction rate determinations may be made by establishing training data sets that are 
grouped by transduced samples, and non-transduced samples, at various time points. For unsupervised learning, transduced and/or non-transduced cells at various time points can be used to estimate low-dimensional representations of the images of interest… wherein said plurality of digital image signals comprises a plurality of digital image signals captured sequentially… imaging instrument captures multiple, sequential digital image signals of said microparticles… wherein said plurality of digital image signals comprises a plurality of digital image signals captured sequentially… imaging instrument is configured to capture multiple, sequential digital images of cell culture components… wherein said plurality of digital image signals comprises a plurality of digital image signals captured sequentially; Par. [0176-195]: systems and methods for identifying and optionally characterizing a cell, cells of interest as a target cell by analyzing a signature of the cell of interest… Flow cytometry may be used to measure a signature of a cell such as the presence, absence, or relative amount of the cell, or through differentiating physical or functional characteristics of the target cells of interest. Cells of interest identified using the systems and methods as described herein include cell types implicated in a disease, disorder, or a non-disease state… Systems and methods as described herein can involve analysis of one or more test samples from a subject compared against one or more reference samples/datasets. A sample may be any suitable type that allows for the analysis of different discrete populations of cells. A sample may be any suitable type that allows for analysis of a single cell population. Samples may be obtained once or multiple times from a subject. 
Multiple samples may be obtained from different locations in the individual (e.g., blood samples, bone marrow samples, and/or tissue samples), at different times from the individual (e.g., a series of samples taken to diagnose a disease or to monitor for return of a pathological condition), or any combination thereof. These and other possible sampling combinations based on sample type, location, and time of sampling allow for the detection of the presence of cells before and/or after infection and monitoring for disease… models take as inputs one or more features of interest, such as cellular artifacts extracted from an image of a sample pass through a high-throughput system, and, with little or no additional preprocessing, they classify individual feature of interest as particular cell types; wherein the temporal information associated with the sequence of embeddings comprises at least one of:
a temporal relationship between a first embedding in the sequence of embeddings and a second embedding in the sequence of embeddings,
a sequential relationship between a [the] first embedding in the sequence of embeddings and a [the] second embedding in the sequence of embeddings, and
a time stamp associated with each embedding in the sequence of embeddings (e.g. systems and methods for identifying and characterizing cells of interest include a machine learning-based embedding scheme configured to determine if cell culture components comprising microparticles are subject-derived cells or microbes, including embeddings obtained from a neural network trained in an unsupervised or supervised fashion, for example, and generating a reference distribution by embedding previously extracted features of interest from a reference biological sample containing a cell population or microbes, including multiple images of each subject-derived cell obtained from test samples from a subject recorded in a correlated time series (i.e. sequence), such as sequential images recorded during passage of cells through a flow imaging microscope, as indicated above, for example).
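The sliding-window / moving-mean technique quoted from the reference, in which the likely identity of the images before and after an image of interest refines each frame's classification, can be sketched as follows. This is a hypothetical illustration of the cited concept, not the reference's implementation; the centered window, its width, and the probability values are assumptions.

```python
def smoothed_probabilities(probs, window=3):
    """Smooth per-image class probabilities over a correlated time series
    with a centered moving mean, so neighboring frames inform each frame."""
    half = window // 2
    out = []
    for i in range(len(probs)):
        lo, hi = max(0, i - half), min(len(probs), i + half + 1)
        out.append(sum(probs[lo:hi]) / (hi - lo))  # mean over the window
    return out

# Hypothetical per-frame probability that the imaged particle is a microbe
p = [0.9, 0.2, 0.8, 0.85, 0.1]
print(smoothed_probabilities(p))
```

The isolated low value at frame 1 is pulled toward its high-probability neighbors, which is the accuracy gain the quoted passage attributes to using the time series context.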
Regarding claim 23, Middlestead discloses the system of claim 21, wherein the set of time-series image data comprises at least one of: a time series of images, a video segment, a plurality of fluorescence images, a plurality of phase images, and image data acquired at a frame rate of at least four frames per second (Par. [0008-89]: systems and methods that may combine high-throughput flow or static imaging technology and machine learning, such as convolutional neural networks… a digital filter can be a Convolutional neural network (ConvNet) that can analysis, to analyze cells… a biological sample from a subject may undergo acoustic separation followed by flow imaging microscopy with parameters that have been adjusted so as to obtain multiple images of a microparticle or feature of interest, and ending with machine learning analysis… Acoustic separation removes larger particles and subject-derived cells… The convolutional neural network first classifies images of each cell as being images of either microbes or blood cells… Each individual image receives a classification likelihood, for each class… multiple, sequential images of are recorded during passage of cells through the flow imaging microscope. 
Using these sequentially recorded images, the accuracy of identifying a microbe within a given time series of images can be increased by taking into account (e.g., using a sliding window calculation) the likely identity of the images that appear in the time series before and after the image of interest…multiple images of each cell or microbe are recorded in a correlated time series as the sample flows through the flow microscopy instrument, and the likely identity (as determined by the machine learning module) of the image before and the image after an image of interest are also taken into account when determining the identity of a cell or microbe in the image of interest using a moving mean calculation or other weighted average technique… invention includes the analysis of transduction rates in cultured T-cells during the production of CAR-T cells used in cell therapy for the treatment of cancer using immunotherapy… transduction rate determinations may be made by establishing training data sets that are grouped by transduced samples, and non-transduced samples, at various time points. For unsupervised learning, transduced and/or non-transduced cells at various time points can be used to estimate low-dimensional representations of the images of interest… wherein said plurality of digital image signals comprises a plurality of digital image signals captured sequentially… imaging instrument captures multiple, sequential digital image signals of said microparticles… wherein said plurality of digital image signals comprises a plurality of digital image signals captured sequentially… imaging instrument is configured to capture multiple, sequential digital images of cell culture components… wherein said plurality of digital image signals comprises a plurality of digital image signals captured sequentially; Par. 
[0176-195]: systems and methods for identifying and optionally characterizing a cell, cells of interest as a target cell by analyzing a signature of the cell of interest… Flow cytometry may be used to measure a signature of a cell such as the presence, absence, or relative amount of the cell, or through differentiating physical or functional characteristics of the target cells of interest. Cells of interest identified using the systems and methods as described herein include cell types implicated in a disease, disorder, or a non-disease state… Systems and methods as described herein can involve analysis of one or more test samples from a subject compared against one or more reference samples/datasets. A sample may be any suitable type that allows for the analysis of different discrete populations of cells. A sample may be any suitable type that allows for analysis of a single cell population. Samples may be obtained once or multiple times from a subject. Multiple samples may be obtained from different locations in the individual (e.g., blood samples, bone marrow samples, and/or tissue samples), at different times from the individual (e.g., a series of samples taken to diagnose a disease or to monitor for return of a pathological condition), or any combination thereof. 
These and other possible sampling combinations based on sample type, location, and time of sampling allow for the detection of the presence of cells before and/or after infection and monitoring for disease… models take as inputs one or more features of interest, such as cellular artifacts extracted from an image of a sample pass through a high-throughput system, and, with little or no additional preprocessing, they classify individual feature of interest as particular cell types; wherein the set of time-series image data comprises at least one of: a time series of images, a video segment, a plurality of fluorescence images, a plurality of phase images, and image data acquired at a frame rate of at least four frames per second (e.g. systems and methods for identifying and characterizing cells of interest include multiple images of each cell are recorded in a correlated time series as a sample flows through a flow microscopy instrument, including subject-derived cells obtained from test samples from a subject, as indicated above, for example).
Regarding claim 27, Middlestead discloses the system of claim 21, wherein the cell state is indicative of a diseased state, a healthy state, or a degree of the diseased state, and wherein the one or more programs include instructions for: determining, based on the cell state of the one or more cells and the set of time-series image data, a relationship between one or more time-variant morphological characteristics depicted in the set of time-series image data and the cell state of the one or more cells (Par. [0120-22]: FIG. 11: Illustration of Using Unsupervised Embedding Representations to Characterize CAR-T cell Morphologies Encoded in Images Captured… The morphology differences between the two distinct cell populations allow discrimination of two cell conditions via neural networks and also enables monitoring the morphology evolution over time… FIG. 12: Illustration of Using Unsupervised Embedding Representations to Characterize Protein Aggregate Morphologies Encoded in Images Obtained by Backgrounded Membrane Imaging (BMI)… The morphology differences between the two distinct particles obtained under different conditions allow discrimination of two conditions via neural networks and the approach also enables monitoring the morphology evolution of the particles over time… ConvNets can be trained using high-throughput microfluidic images, where each image is not provided a detailed class label, and the resulting network can be applied in order to extract and utilize the morphological information contained within the image; Par. 
[0168-195]: a deep learning model may have significant depth and can classify a large or heterogeneous array of features of interest, such as particles in a liquid suspension, or cellular artifacts, such as pathogens or gene expression… classify a large heterogeneous range of features of interest, such as cells, microorganisms, cells expressing one or more genes, or microorganisms that may have a phenotypic or genotypic traits, such as antibiotic resistance… systems and methods for identifying and optionally characterizing a feature of interest, by analyzing the feature of interest from a test sample and thereby generating a test dataset and comparing it to a training dataset generated from a reference sample, and optionally one or more additional samples. A feature of interest in this embodiment may include a feature of the cell, such as cell morphology among others… identifying and optionally characterizing a cell of interest as a target cell by analyzing a signature of the cell of interest, quantified by a “feature of interest” extracted from the image via a ConvNet, in a test sample and comparing it to a signature of the target cell from a reference sample. A signature of a cell, or “feature of interest” may also include a physical feature of the cell, such as cell morphology, as well as the presence, absence, or relative amount of gene expression within and/or associated with the cell, a phenotypic or genotypic traits, such as antibiotic resistance in a microorganism… An isolated cell may be present in an enriched fraction from the biological sample, and thus its use is not meant to be limited to a purified cell. In some embodiments, the morphology of an isolated cell is analyzed. 
For target cells indicative of infection, analysis of a cell signature is useful for a number of methods including diagnosing infection, determining the extent of infection, determining a type of infection, and monitoring progression of infection within a host or within a given treatment of the infection. Some of these methods may involve monitoring a change in the signature of the target cell, which includes an increase and/or decrease, and/or any change in morphology… Flow cytometry may be used to measure a signature of a cell such as the presence, absence, or relative amount of the cell, or through differentiating physical or functional characteristics of the target cells of interest. Cells of interest identified using the systems and methods as described herein include cell types implicated in a disease, disorder, or a non-disease state… goal of training a neural network is typically to have the ANN make an accurate prediction of a new sample, for example, a sample not used during training or validation. Accuracy of the prediction is often measured against the objective function, for example, classification accuracy may be enabled by providing the truth label for the new sample. 
However, in one embodiment of the present inventor's method, is the use of neural networks for embedding/dimension reduction, namely takes a set large number of pixels in a source HTI image, and summarize the information content with low (2-256) dimensional feature output embedding values from the ANN; the feature embedding can be reduced to 2-6 via post-processing techniques like t-SNE or UMAP; the statistical distribution of the 2-6 dimensional embedding point cloud is determined… models take as inputs one or more features of interest, such as cellular artifacts extracted from an image of a sample pass through a high-throughput system, and, with little or no additional preprocessing, they classify individual feature of interest as particular cell types; wherein the cell state is indicative of a diseased state, a healthy state, or a degree of the diseased state, and wherein the one or more programs include instructions for: determining, based on the cell state of the one or more cells and the set of time-series image data, a relationship between one or more time-variant morphological characteristics depicted in the set of time-series image data and the cell state of the one or more cells (e.g. 
systems and methods for identifying and characterizing cells of interest include a machine learning-based embedding scheme that includes generating a reference distribution by embedding previously extracted features of interest from a reference biological sample containing a cell population or microbes, including multiple images of each subject-derived cell obtained from test samples from a subject recorded in a correlated time series, for example, including cells of interest that are identified using the systems and methods as described above, including cell types implicated in a disease, disorder, or a non-disease state, for example, and including a morphology of isolated cells that is analyzed by monitoring the morphology evolution of particles over time, as indicated above, for example).
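The cited notion of "monitoring the morphology evolution over time" via embeddings can be illustrated by comparing per-time-point embedding centroids. This is a generic sketch of the concept, not the reference's method; the helper names and the use of centroid distance as a drift measure are assumptions.

```python
def centroid(embeddings):
    """Mean embedding of the cell images captured at one time point."""
    return [sum(d) / len(d) for d in zip(*embeddings)]

def morphology_drift(emb_t0, emb_t1):
    """Euclidean distance between centroids of two time points,
    a simple proxy for morphology evolution in embedding space."""
    c0, c1 = centroid(emb_t0), centroid(emb_t1)
    return sum((a - b) ** 2 for a, b in zip(c0, c1)) ** 0.5

t0 = [[0.0, 0.0], [2.0, 0.0]]   # embeddings of cells imaged at time 0
t1 = [[3.0, 4.0], [5.0, 4.0]]   # embeddings of the same culture later
print(morphology_drift(t0, t1))  # 5.0
```

A growing drift value over successive time points would correspond, under this reading, to the morphology evolution the quoted passages describe discriminating between two cell conditions.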
Regarding claim 28, Middlestead discloses the system of claim 27, wherein the one or more programs include instructions for: determining, based on the cell state of the one or more cells and the set of time-series image data, a relationship between one or more subcellular or cellular movements or processes depicted in the set of time-series image data and the cell state of the one or more cells (Par. [0008-11]: systems and methods that may combine high-throughput flow or static imaging technology and machine learning, such as convolutional neural networks… a digital filter can be a Convolutional neural network (ConvNet) that can analysis, to analyze cells… a biological sample from a subject may undergo acoustic separation followed by flow imaging microscopy with parameters that have been adjusted so as to obtain multiple images of a microparticle or feature of interest, and ending with machine learning analysis… The convolutional neural network first classifies images of each cell… Acoustic separation removes larger particles and subject-derived cells; Par. [0120-22]: FIG. 11: Illustration of Using Unsupervised Embedding Representations to Characterize CAR-T cell Morphologies Encoded in Images Captured… The morphology differences between the two distinct cell populations allow discrimination of two cell conditions via neural networks and also enables monitoring the morphology evolution over time… FIG. 
12: Illustration of Using Unsupervised Embedding Representations to Characterize Protein Aggregate Morphologies Encoded in Images Obtained by Backgrounded Membrane Imaging (BMI)… The morphology differences between the two distinct particles obtained under different conditions allow discrimination of two conditions via neural networks and the approach also enables monitoring the morphology evolution of the particles over time… ConvNets can be trained using high-throughput microfluidic images, where each image is not provided a detailed class label, and the resulting network can be applied in order to extract and utilize the morphological information contained within the image; Par. [0168-195]: a deep learning model may have significant depth and can classify a large or heterogeneous array of features of interest, such as particles in a liquid suspension, or cellular artifacts, such as pathogens or gene expression… classify a large heterogeneous range of features of interest, such as cells, microorganisms, cells expressing one or more genes, or microorganisms that may have a phenotypic or genotypic traits, such as antibiotic resistance… systems and methods for identifying and optionally characterizing a feature of interest, by analyzing the feature of interest from a test sample and thereby generating a test dataset and comparing it to a training dataset generated from a reference sample, and optionally one or more additional samples. A feature of interest in this embodiment may include a feature of the cell, such as cell morphology among others… identifying and optionally characterizing a cell of interest as a target cell by analyzing a signature of the cell of interest, quantified by a “feature of interest” extracted from the image via a ConvNet, in a test sample and comparing it to a signature of the target cell from a reference sample. 
A signature of a cell, or “feature of interest” may also include a physical feature of the cell, such as cell morphology, as well as the presence, absence, or relative amount of gene expression within and/or associated with the cell, a phenotypic or genotypic traits, such as antibiotic resistance in a microorganism… An isolated cell may be present in an enriched fraction from the biological sample, and thus its use is not meant to be limited to a purified cell. In some embodiments, the morphology of an isolated cell is analyzed. For target cells indicative of infection, analysis of a cell signature is useful for a number of methods including diagnosing infection, determining the extent of infection, determining a type of infection, and monitoring progression of infection within a host or within a given treatment of the infection. Some of these methods may involve monitoring a change in the signature of the target cell, which includes an increase and/or decrease, and/or any change in morphology… Flow cytometry may be used to measure a signature of a cell such as the presence, absence, or relative amount of the cell, or through differentiating physical or functional characteristics of the target cells of interest. Cells of interest identified using the systems and methods as described herein include cell types implicated in a disease, disorder, or a non-disease state… goal of training a neural network is typically to have the ANN make an accurate prediction of a new sample, for example, a sample not used during training or validation. Accuracy of the prediction is often measured against the objective function, for example, classification accuracy may be enabled by providing the truth label for the new sample. 
However, in one embodiment of the present inventor's method, is the use of neural networks for embedding/dimension reduction, namely takes a set large number of pixels in a source HTI image, and summarize the information content with low (2-256) dimensional feature output embedding values from the ANN; the feature embedding can be reduced to 2-6 via post-processing techniques like t-SNE or UMAP; the statistical distribution of the 2-6 dimensional embedding point cloud is determined… models take as inputs one or more features of interest, such as cellular artifacts extracted from an image of a sample pass through a high-throughput system, and, with little or no additional preprocessing, they classify individual feature of interest as particular cell types; determining, based on the cell state of the one or more cells and the set of time-series image data, a relationship between one or more subcellular or cellular movements or processes depicted in the set of time-series image data and the cell state of the one or more cells (e.g. systems and methods for identifying and characterizing cells of interest include a machine learning-based embedding scheme that includes generating a reference distribution by embedding previously extracted features of interest from a reference biological sample containing a cell population or microbes, including multiple images of each subject-derived cell obtained from test samples from a subject recorded in a correlated time series, for example, including cells of interest that are identified using the systems and methods as described above, including cell types implicated in a disease, disorder, or a non-disease state, for example, and including a morphology of isolated cells that is analyzed by monitoring the morphology evolution of particles over time by using flow (i.e. motion, movement, etc.)
cytometry to measure a signature of a cell such as the presence, absence, or relative amount of the cell, or through differentiating physical or functional characteristics of the target cells of interest, as indicated above, for example).
Regarding claim 33, Middlestead discloses the system of claim 21, wherein the first machine learning model is pre-trained using unlabeled images that do not depict biological samples and retrained using unlabeled images of biological samples (Par. [0128-149]: methods and systems described herein may further include the step of generating a reference distribution by embedding the previously extracted features of interest from the reference sample, in this case a reference biological sample containing a microbe or cell population. In what follows, “embedding” refers to generic dimension reduction (also sometimes referred to as an “encoding”); the “embedding” can be accomplished via supervised techniques such as neural network embeddings calibrated by triplet-loss or unsupervised techniques like Principal Components Analysis (PCA) or extracted from the latent space representations of obtained by other unsupervised methods such as Variational Auto-Encoders (VAE) or Generative Adversarial Network (GAN); optionally with further dimension reduction via UMAP or t-SNE… this embedding process may convert the extracted features of interest to a lower dimensional feature set which can be used for classification or prediction… one or more additional samples identified above may be utilized to generate additional reference distributions through the process of embedding the extracted features of interest from the images capture of the additional samples so as to again, convert the extracted features of interest to a lower dimensional feature set… the reference distributions of the reference's embedding, and optionally the additional embeddings of additional samples, may be defined by using a loss function to separate the embedded lower dimensional feature sets associated with each reference distribution. 
Further, the probability density of the individual extracted feature embeddings of the reference and optionally the additional samples may be estimated, and in a preferred embodiment, the probability density of one or more of the additional samples on the embedding space may be further estimated… the low dimensional embeddings are obtained by altering the machine leaning module (4) output… after obtaining a plurality of images from the image capture module (3), the images are process by a machine learning module (4) that is configured identify the presence of individual microparticles, such as cell and microbes, including the identification of microbial species… the machine learning module (4) may employ a multiple step classification process, and a sliding window sampler to make use of the image redundancy settings, resulting in highly accurate classification results. The machine learning module (4) may alternatively consist of an embedding obtained from a neural network trained in an unsupervised or supervised fashion… all of the parameters required to specify the function evaluations in the various modules may be assumed to have already been estimated using a large collection of labeled raw or processed image data (where “processed” implies that the modules upstream have produced the correct input) by minimizing a suitable “cost function”, where the cost function can aim at classification (e.g. a “cross entropy loss” function) as would be needed, for example, in pathogen analysis or the cost function can aim at developing a low dimensional representation through “image embeddings” for applications in fault detection (e.g. using a supervised triplet loss cost function or a least squares type reconstruction loss as used in unsupervised learning)… a Machine learning module (4) may include Fusion module that may be optionally used to leverage data and/or meta-information from other sources. 
The features from a ConvNet may be combined with other measurement or descriptive features through a variety of methods (e.g. a two input Artificial Neural Network, a Random Forest algorithm or Gradient Boosting algorithm for feature selection) producing a new set of feature of interest outputs or image embeddings… module can use another Artificial Neural Network (ANN) to produce a new set of features or embeddings (depending on the specific application)… a machine learning module (4) may include one or more classification or classifier modules that assign a predefined label and probability of a class based on the passed in features/images using another classifier. The subsequent class and class probability output can either be the final output, or the features/raw input features can be embedded via another pretrained ANN and passed to the other branch, in this instance an optional fault detection module. The fault detection module,” as an optional part of the Machine learning module (4) may take low-dimensional embedding representations of the raw images and runs statistical hypothesis tests to check if it is statistically probable that the collection of embeddings has been drawn from a precomputed reference distribution of interest. This step may incorporate a precomputed empirically determined probability distribution (where the distribution function estimation can be parametric or nonparametric) of a suitable goodness-of-fit test statistic characterizing a large collection of labeled ground truth data. 
The aforementioned distribution may then be used to compute a p-value for each image in the “test dataset” enabling a user to detect if the test statistic generated by the collection of embeddings of the unlabeled data are statistically similar to the embeddings of the labeled reference distribution; wherein the first machine learning model is pre-trained using unlabeled images that do not depict biological samples and retrained using unlabeled images of biological samples (e.g. systems and methods for identifying and characterizing cells of interest include a machine learning-based embedding scheme configured to determine if cell culture components comprising microparticles are subject-derived cells or microbes, including embeddings obtained from a neural network trained in an unsupervised or supervised fashion, for example, including parameters of the neural network that are required to specify function evaluations in various modules, in which a collection of embeddings of unlabeled data is statistically similar to embeddings of a labeled reference distribution, as indicated above, for example).
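The fault-detection step quoted above, which runs a statistical hypothesis test to check whether a collection of test embeddings was drawn from a precomputed reference distribution, can be illustrated with a two-sample Kolmogorov-Smirnov statistic on a single embedding coordinate. This is a generic sketch of the cited idea; the data values are invented, and the reference's actual goodness-of-fit statistic and distribution estimation are not specified here.

```python
def ks_statistic(sample, reference):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance between
    the empirical CDFs of a test embedding collection and a labeled
    reference collection (shown here on 1-D embedding coordinates)."""
    def ecdf(data, x):
        return sum(1 for v in data if v <= x) / len(data)
    xs = sorted(set(sample) | set(reference))  # evaluation points
    return max(abs(ecdf(sample, x) - ecdf(reference, x)) for x in xs)

reference = [0.1, 0.2, 0.3, 0.4, 0.5]      # embeddings of labeled reference images
test_set = [0.15, 0.25, 0.35, 0.45, 0.9]   # embeddings of unlabeled test images
print(ks_statistic(test_set, reference))   # 0.2
```

In practice the statistic would be compared against a precomputed null distribution to yield the per-image p-values the quoted passage describes; that calibration step is omitted here.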
Regarding claim 34, Middlestead discloses the system of claim 21, wherein determining the summary embedding based on the sequence of embeddings comprises: inputting the sequence of embeddings into a third trained machine learning model (Par. [0041-74]: digital filter comprises a convolutional neural network further comprising a machine learning-based automated classifier configured to determine if the microparticles are a microbe of interest, or a subject-derived cell, and/or wherein said digital filter comprises a convolutional neural network further comprising a machine learning-based embedding scheme configured to determine if the cell culture components comprising the microparticles are microbes of interest, or a subject-derived cells… digital filter comprises a machine learning-based embedding scheme configured to determine if the cell culture components comprise transduced CAR-T cells, or non-transduced T-cell; Par. [0128-149]: methods and systems described herein may further include the step of generating a reference distribution by embedding the previously extracted features of interest from the reference sample, in this case a reference biological sample containing a microbe or cell population. 
In what follows, “embedding” refers to generic dimension reduction (also sometimes referred to as an “encoding”); the “embedding” can be accomplished via supervised techniques such as neural network embeddings calibrated by triplet loss, via unsupervised techniques like Principal Components Analysis (PCA), or extracted from the latent space representations obtained by other unsupervised methods such as Variational Auto-Encoders (VAEs) or Generative Adversarial Networks (GANs), optionally with further dimension reduction via UMAP or t-SNE… this embedding process may convert the extracted features of interest to a lower dimensional feature set which can be used for classification or prediction… one or more additional samples identified above may be utilized to generate additional reference distributions through the process of embedding the extracted features of interest from the images captured of the additional samples so as to, again, convert the extracted features of interest to a lower dimensional feature set… the reference distributions of the reference's embedding, and optionally the additional embeddings of additional samples, may be defined by using a loss function to separate the embedded lower dimensional feature sets associated with each reference distribution. 
Further, the probability density of the individual extracted feature embeddings of the reference and, optionally, the additional samples may be estimated, and in a preferred embodiment, the probability density of one or more of the additional samples on the embedding space may be further estimated… the low dimensional embeddings are obtained by altering the machine learning module (4) output… after obtaining a plurality of images from the image capture module (3), the images are processed by a machine learning module (4) that is configured to identify the presence of individual microparticles, such as cells and microbes, including the identification of microbial species… the machine learning module (4) may employ a multiple-step classification process, and a sliding window sampler to make use of the image redundancy settings, resulting in highly accurate classification results. The machine learning module (4) may alternatively consist of an embedding obtained from a neural network trained in an unsupervised or supervised fashion… all of the parameters required to specify the function evaluations in the various modules may be assumed to have already been estimated using a large collection of labeled raw or processed image data (where “processed” implies that the modules upstream have produced the correct input) by minimizing a suitable “cost function”, where the cost function can aim at classification (e.g., a “cross entropy loss” function) as would be needed, for example, in pathogen analysis, or the cost function can aim at developing a low dimensional representation through “image embeddings” for applications in fault detection (e.g., using a supervised triplet loss cost function or a least squares type reconstruction loss as used in unsupervised learning)… a machine learning module (4) may include a Fusion module that may optionally be used to leverage data and/or meta-information from other sources. 
The features from a ConvNet may be combined with other measurement or descriptive features through a variety of methods (e.g., a two-input Artificial Neural Network, a Random Forest algorithm, or a Gradient Boosting algorithm for feature selection), producing a new set of feature-of-interest outputs or image embeddings… the module can use another Artificial Neural Network (ANN) to produce a new set of features or embeddings (depending on the specific application)… a machine learning module (4) may include one or more classification or classifier modules that assign a predefined label and probability of a class based on the passed-in features/images using another classifier. The subsequent class and class probability output can either be the final output, or the features/raw input features can be embedded via another pretrained ANN and passed to the other branch, in this instance an optional fault detection module. The “fault detection module,” as an optional part of the machine learning module (4), may take low-dimensional embedding representations of the raw images and run statistical hypothesis tests to check if it is statistically probable that the collection of embeddings has been drawn from a precomputed reference distribution of interest. This step may incorporate a precomputed, empirically determined probability distribution (where the distribution function estimation can be parametric or nonparametric) of a suitable goodness-of-fit test statistic characterizing a large collection of labeled ground truth data. The aforementioned distribution may then be used to compute a p-value for each image in the “test dataset,” enabling a user to detect if the test statistic generated by the collection of embeddings of the unlabeled data is statistically similar to the embeddings of the labeled reference distribution; Par. 
[0168-195]: a deep learning model may have significant depth and can classify a large or heterogeneous array of features of interest, such as particles in a liquid suspension, or cellular artifacts, such as pathogens or gene expression… classify a large heterogeneous range of features of interest, such as cells, microorganisms, cells expressing one or more genes, or microorganisms that may have phenotypic or genotypic traits, such as antibiotic resistance… systems and methods for identifying and optionally characterizing a feature of interest, by analyzing the feature of interest from a test sample and thereby generating a test dataset and comparing it to a training dataset generated from a reference sample, and optionally one or more additional samples. A feature of interest in this embodiment may include a feature of the cell, such as cell morphology, among others… identifying and optionally characterizing a cell of interest as a target cell by analyzing a signature of the cell of interest, quantified by a “feature of interest” extracted from the image via a ConvNet, in a test sample and comparing it to a signature of the target cell from a reference sample. A signature of a cell, or “feature of interest,” may also include a physical feature of the cell, such as cell morphology, as well as the presence, absence, or relative amount of gene expression within and/or associated with the cell, or phenotypic or genotypic traits, such as antibiotic resistance in a microorganism… Flow cytometry may be used to measure a signature of a cell, such as the presence, absence, or relative amount of the cell, or through differentiating physical or functional characteristics of the target cells of interest. 
Cells of interest identified using the systems and methods as described herein include cell types implicated in a disease, disorder, or a non-disease state… the goal of training a neural network is typically to have the ANN make an accurate prediction on a new sample, for example, a sample not used during training or validation. Accuracy of the prediction is often measured against the objective function; for example, classification accuracy may be enabled by providing the truth label for the new sample. However, one embodiment of the present inventor's method is the use of neural networks for embedding/dimension reduction, namely taking a large set of pixels in a source HTI image and summarizing the information content with low (2-256) dimensional feature output embedding values from the ANN; the feature embedding can be reduced to 2-6 dimensions via post-processing techniques like t-SNE or UMAP; the statistical distribution of the 2-6 dimensional embedding point cloud is determined… models take as inputs one or more features of interest, such as cellular artifacts extracted from an image of a sample passed through a high-throughput system, and, with little or no additional preprocessing, they classify individual features of interest as particular cell types; wherein determining the summary embedding based on the sequence of embeddings comprises: inputting the sequence of embeddings into a third trained machine learning model (e.g. 
systems and methods for identifying and characterizing cells of interest include a machine learning-based embedding scheme that includes generating a reference distribution by embedding previously extracted features of interest from a reference biological sample containing a cell population or microbes, including multiple images of each subject-derived cell obtained from test samples from a subject recorded in a correlated time series, for example, including cells of interest that are identified using the systems and methods as described above, including cell types implicated in a disease, disorder, or a non-disease state, for example, and using networks for embedding/dimension reduction by taking a large set of pixels in a source image and summarizing the information content with low dimensional feature output embedding values from an ANN, as indicated above, in which a class probability output can either be the final output, or be embedded via another (i.e., a second) pretrained ANN, for example).
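The unsupervised dimension-reduction options named in the quoted passages (e.g., PCA applied to high-dimensional ANN feature outputs, optionally ahead of t-SNE/UMAP) can be illustrated with a minimal PCA-via-SVD sketch. The 256-dimensional synthetic "embeddings" below stand in for ANN feature outputs and are assumptions for demonstration only, not data from the reference.

```python
import numpy as np

def pca_embed(features, n_components=2):
    """Project high-dimensional per-image feature vectors onto the top
    principal components (unsupervised dimension reduction via PCA).
    Rows of `features` are samples; columns are feature dimensions."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data matrix; rows of vt are the principal axes,
    # ordered by decreasing explained variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
# stand-in for 256-dimensional ANN embeddings of 300 images
high_dim = rng.normal(size=(300, 256))
low_dim = pca_embed(high_dim, n_components=2)
```

The resulting low-dimensional point cloud is the kind of object on which the quoted fault-detection step would estimate a probability density or run goodness-of-fit tests.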
Regarding claim 35, Middlestead discloses the system of claim 21, wherein the set of time-series image data depicts a single cell, wherein the single cell is identified using an image segmentation model (Par. [0011-12]: multiple images of each cell or microbe are recorded in a correlated time series as the sample flows through the flow microscopy instrument, and the likely identity (as determined by the machine learning module) of the image before and the image after an image of interest are also taken into account when determining the identity of a cell or microbe in the image of interest; Par. [0170-198]: a “feature,” “feature of interest” or “sample feature” is a feature of a sample that represents a quantifiable and/or observable feature of an object or particle passing through a high-throughput system, and preferably a feature of a prokaryotic organism in a biological sample… a “feature of interest” may potentially correlate to a clinically relevant condition. In certain embodiments, a feature of interest is a feature that appears in an image of a sample, such as a biological sample, and may be recognized, segmented, and/or classified by a machine learning module… systems and methods for identifying and optionally characterizing a cell of interest as a target cell by analyzing a signature of the cell of interest, quantified by a “feature of interest” extracted from the image via a ConvNet, in a test sample and comparing it to a signature of the target cell from a reference sample. 
A signature of a cell, or “feature of interest,” may also include a physical feature of the cell, such as cell morphology, as well as the presence, absence, or relative amount of gene expression within and/or associated with the cell, or phenotypic or genotypic traits, such as antibiotic resistance in a microorganism… A “feature of interest” of a cell of interest may be useful for diagnosing or otherwise characterizing a disease or a condition in a patient from which the potential target cell was isolated. As used herein, an “isolated cell” refers to a cell separated from other material in a biological sample using any separation method, and preferably a separation module (2) of the invention. An isolated cell may be present in an enriched fraction from the biological sample, and thus its use is not meant to be limited to a purified cell… the morphology of an isolated cell is analyzed. For target cells indicative of infection, analysis of a cell signature is useful for a number of methods including diagnosing infection, determining the extent of infection, determining a type of infection, and monitoring progression of infection within a host or within a given treatment of the infection. Some of these methods may involve monitoring a change in the signature of the target cell, which includes an increase and/or decrease, and/or any change in morphology… a “feature of interest” of a cell of interest is analyzed in a fraction of a biological sample of a subject, wherein the biological sample has been processed to enrich for a target cell… one or more processors are optionally configured to segment the one or more images of the biological sample to obtain a plurality of images of the individual components of the sample passing through, in this embodiment, a high-throughput HTI instrument).
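The segmentation step referenced in the quoted passages (isolating individual cells or components from an image before classification or embedding) can be caricatured with a threshold-plus-connected-components sketch. A real system would use a trained segmentation model, so the flood-fill labeler and the synthetic two-cell image below are assumptions for demonstration only.

```python
import numpy as np

def segment_cells(image, threshold):
    """Toy segmentation: threshold the image into a foreground mask, then
    label each 4-connected foreground component with an integer id via an
    iterative flood fill. Returns (label image, number of components)."""
    mask = image > threshold
    labels = np.zeros(image.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # pixel already assigned to a component
        current += 1
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = current
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return labels, current

img = np.zeros((20, 20))
img[2:6, 2:6] = 1.0      # first synthetic "cell"
img[12:16, 10:15] = 1.0  # second synthetic "cell"
labels, n = segment_cells(img, 0.5)
```

Each labeled component can then be cropped out and passed to downstream feature extraction, mirroring the per-cell processing described in the cited paragraphs.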
Regarding claim 36, Middlestead discloses the system of claim 21, wherein the one or more cells comprise one or more live biological cells (Par. [0010]: a biological sample, and preferably a blood sample is processed in a three-step sequence: acoustic separation by a separation module, high-resolution oil-immersion flow microscopy utilizing a digital image capture module, and classification by the convolutional neural network. Acoustic separation removes larger blood cells from blood samples, leaving smaller microbial cells; Par. [0055]: a biological sample containing a quantity of a cell culture further containing a quantity of engineered cells), and wherein the one or more live biological cells comprise at least one of: one or more mammalian cells, one or more neurons, healthy cells, diseased cells, one or more genetic mutations, or any combination thereof (Par. [0161-176]: biological sample may be taken from a multicellular organism or it may be of one or more single cellular organisms. In some cases, the biological sample is taken from a multicellular organism, such as a mammal, and includes both cells comprising the genome of the organism and cells from another organism such as a parasite or pathogen… Cells of interest identified using the systems and methods as described herein include cell types implicated in a disease, disorder, or a non-disease state).
Regarding claim 39, it is the corresponding method claim and is rejected as applied to apparatus claim 21 above.
Regarding claim 40, it is the corresponding computer-readable medium claim and is rejected as applied to apparatus claim 21 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 24-25 and 29-31 are rejected under 35 U.S.C. 103 as being unpatentable over Middlestead, as applied to claim 21 above, in view of WAGNER et al. (US PG Publication No. 2023/0065504 A1), hereafter referred to as WAGNER.
Regarding claim 24, claim 23 is incorporated and Middlestead discloses the system (Par. [0008-11]), but fails to teach the following as further recited in claim 24.
However, WAGNER teaches wherein the plurality of phase images is captured using an imager with a frame rate of at least four frames per second (Par. [0290]: imager 2002 may run at 180 frames per second at full resolution… a total of 21 rows in the imager may be used, and an imaging rate of ~8650 frames per second may be achieved, meaning that over 4 complete fields of view (2048×2048 pixels) can be captured per second).
Middlestead and WAGNER are considered to be analogous art because they pertain to image processing applications. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus for identifying and characterizing cells of interest (as disclosed by Middlestead) with wherein the plurality of phase images is captured using an imager with a frame rate of at least four frames per second (as taught by WAGNER, Abstract, Par. [0290]) to achieve an improvement of the conventional cell culture process, to improve the predicted yield, functionality, phenotype, or other properties of the output cell product, to quickly and accurately produce output cell products, and to be easily scalable to enable large-scale biological manufacturing (WAGNER, Abstract, Par. [0003-9, 190, 209, 290]).
Regarding claim 25, claim 24 is incorporated and the combination of Middlestead and WAGNER, as a whole, teaches the system (Middlestead, Par. [0008-11]), wherein the frame rate is about 40 frames per second (WAGNER, Par. [0290]: imager 2002 may run at 180 frames per second at full resolution… a total of 21 rows in the imager may be used, and an imaging rate of ~8650 frames per second may be achieved, meaning that over 4 complete fields of view (2048×2048 pixels) can be captured per second).
The same motivation to combine above-mentioned teachings applies, as previously indicated in claim 24.
Regarding claim 29, claim 21 is incorporated and Middlestead discloses the system (Middlestead, Par. [0008-11]), but fails to teach the following as further recited in claim 29.
However, WAGNER teaches wherein the cell state includes an indication of an accumulation of lipids (Par. [0192-200]: input cells 102 may be analyzed with one or more input cell assays 108 which serve to quantify the state of the input cells 102… sense the state of the cell culture 104… computing subsystem 110 may be configured to control the other components of the cell culture system 100 to perform the specified cell culture process on the cell culture 104 to produce output cell products 118… Output cell products 118 that may be produced by the computing subsystem 110 may include, but are not limited to… lipid particles).
The same motivation to combine above-mentioned teachings applies, as previously indicated in claim 24.
Regarding claim 30, claim 21 is incorporated and Middlestead discloses the system (Middlestead, Par. [0008-11]), but fails to teach the following as further recited in claim 30.
However, WAGNER teaches wherein the cell state is indicative of at least one of: a level of metabolic activity and a kinetic state (Par. [0198]: cell culture system 100 may also include a number of sensors and controls 116 which may measure or act upon the cell culture 104… sensors to measure cell culture media constituents (such as nutrients, waste products, vitamins, metabolites; Par. [0318]: several factors such as cell cycle stage, metabolic activity; Par. [0746-847]: what is measured is the conditions of the fluid media 9902 as it is affected by cell condition and metabolism… cell culture control system uses the cell images, as well as the differential images corresponding to the plasmonic film absorption, to calculate the state of the extracellular matrix, and in regions with cells, the level, shape, and distribution of cell adhesion foci. The image of adhesion foci may be used in conjunction with other images of the cells (for example, quantitative phase images produced by the same illumination and imaging system) to make predictions of cell function, health, phenotype, cell cycle, metabolism, etc.).
The same motivation to combine above-mentioned teachings applies, as previously indicated in claim 24.
Regarding claim 31, claim 21 is incorporated and Middlestead discloses the system (Par. [0008-11]), but fails to teach the following as further recited in claim 31.
However, WAGNER teaches wherein a rate of change in the cell state is indicative of a variation of a cellular process, wherein the cellular process includes any one or more of a cargo transport, an organelle assembly, and an organelle disassembly (Par. [0021-26]: system further comprises a transport mechanism configured to transport the cell culture container between locations within the server rack… film absorbs energy from the pulsed laser and forms microbubbles proximal to one or more cells in the cell culture… microbubbles dislodge the one or more cells from the film… the microbubbles porate the one or more cells, thereby allowing transport of cargo into and out of the one or more cells; Par. [0453-479]: imaging resolution used to identify cells or cell components (e.g., organelles)… transferring cargo into and out of cells).
The same motivation to combine above-mentioned teachings applies, as previously indicated in claim 24.
Claim 32 is rejected under 35 U.S.C. 103 as being unpatentable over Middlestead, as applied to claim 21 above, in view of Ryan et al. (US PG Publication No. 2022/0358646 A1), hereafter referred to as Ryan, and in further view of WAGNER.
Regarding claim 32, claim 21 is incorporated and Middlestead discloses the system (Middlestead, Par. [0008-11]), but fails to teach the following as further recited in claim 32.
However, Ryan teaches wherein the one or more programs include instructions for mapping a network of one or both of axons and neurites to a cell of the one or more cells based on the set of time-series image data and the cell state of the one or more cells (Par. [0017]: digital video data may be obtained from electrically active cells expressing optical reporters of cellular electrical activity. In certain embodiments, the cells are neurons and the digital video data shows action potentials propagating along axons of the neurons. Preferably, the compressed video can be retrieved and played to display the action potentials propagating along the axons of the neurons; Par. [0145]: action potential wavefront may then be identified using an algorithm based on sub-Nyquist action potential timing such as an algorithm based on the interpolation approach of Foust, 2010, Action potentials initiate in the axon initial segment and propagate through axon collaterals reliably).
Middlestead and Ryan are considered to be analogous art because they pertain to image processing applications. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus for identifying and characterizing cells of interest (as disclosed by Middlestead) with wherein the one or more programs include instructions for mapping a network of one or both of axons and neurites to a cell of the one or more cells based on the set of time-series image data and the cell state of the one or more cells (as taught by Ryan, Abstract, Par. [0017, 0145]) to facilitate accurate data analysis, to accurately ascertain cellular response to drug compounds, including at varied concentrations, to accurately distinguish phenotypes of cells in the presence of different biological conditions and/or the presence of different drug compounds, and to improve efficiency and uniformity (Ryan, Abstract, Par. [0002-11, 159, 218, 242]).
The combination of Middlestead and Ryan, as a whole, teaches the system, as indicated above, but fails to teach neurites.
However, WAGNER teaches neurites (Par. [0650]: characteristics of the cells that may be observed or measured from label-free images may include, but are not limited to, morphology, presence/count/size of subcellular components, density, refractive index, absorption or absorption spectrum, polarization-dependent absorption or refractive index, degree of attachment to substrate or surrounding cells, proliferation rate, velocity, projection of cell outgrowths such as neurites).
Middlestead, Ryan, and WAGNER are considered to be analogous art because they pertain to image processing applications. Therefore, the combined teachings of Middlestead, Ryan, and WAGNER, as a whole, would have rendered obvious the invention recited in claim 32, with a reasonable expectation of success, in order to modify the apparatus for identifying and characterizing cells of interest (as disclosed by Middlestead) with neurites (as taught by WAGNER, Abstract, Par. [0650]) to achieve an improvement of the conventional cell culture process, to improve the predicted yield, functionality, phenotype, or other properties of the output cell product, to quickly and accurately produce output cell products, and to be easily scalable to enable large-scale biological manufacturing (WAGNER, Abstract, Par. [0003-9, 190, 209, 290]).
Claims 37-38 are rejected under 35 U.S.C. 103 as being unpatentable over Middlestead, as applied to claim 21 above, in view of Ho et al. (US PG Publication No. 2023/0036156 A1), hereafter referred to as Ho.
Regarding claim 37, claim 36 is incorporated and Middlestead discloses the system (Par. [0008-11]), but fails to teach the following as further recited in claim 37.
However, Ho teaches wherein the one or more genetic mutations is selected from the group consisting of (i.e., at least one of) a deletion mutation, insertion mutation, substitution mutation, missense mutation, nonsense mutation, and frameshift mutation (Par. [0173]: characterization of organoids may include different modalities of data. When the modality is DNA, molecular information may be distinguished from a query such as a KRAS codon 12 missense mutation).
Middlestead and Ho are considered to be analogous art because they pertain to image processing applications. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus for identifying and characterizing cells of interest (as disclosed by Middlestead) with wherein the one or more genetic mutations is selected from the group consisting of (i.e., at least one of) a deletion mutation, insertion mutation, substitution mutation, missense mutation, nonsense mutation, and frameshift mutation (as taught by Ho, Abstract, Par. [0173]) to provide an improved in vitro model system to study the effect of therapeutic agents on a mixed population of cells (Ho, Abstract, Par. [0002-13, 80]).
Regarding claim 38, claim 37 is incorporated and the combination of Middlestead and Ho, as a whole, teaches the system (Middlestead, Par. [0008-11]), wherein the one or more live biological cells have a phenotypic difference compared to healthy cells that do not comprise the one or more genetic mutations (Ho, Par. [0164-185]: if the phenotype is quantifiable, systems and methods herein may be used to establish one or more thresholds that indicate which organoids are susceptible to an immune cell based therapy, such as a heterotypic cellular therapy with or without another immune-oncology therapy, and which are resistant. If a phenotype is categorizable, systems and methods herein may be able to establish one or more categories that indicate which organoids are susceptible… distinctions between glycans and lipids, cell surface molecules, aberrant gene product(s) directly or indirectly caused by mutation, or mutant enzyme not present in healthy cells… image-derived data is determined and provided to a target quantification pipeline at a process 818, which receives the image-derived data and determines, at process 820, organoid phenotype and/or organoid morphology changes in response to the immune cell based therapies. The process 820 characterizing cancer organoid morphology change and/or phenotype change may be executed by a machine learning algorithm, as described in various methods herein), wherein the phenotypic difference comprises a difference in metabolic activity, cellular kinetics, cellular morphology, or any combination thereof (Ho, Par. [0032]: the method further including: quantifying metabolic activity; Par. [0128]: measuring the kinetics of a specified phenotype/endpoint of interest and terminating the assay when a slowing down/inflection point of the rate of the phenotype has been observed or measured… analyzing cell proliferation and metabolic activity).
The same motivation to combine above-mentioned teachings applies, as previously indicated in claim 37.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GUILLERMO M RIVERA-MARTINEZ whose telephone number is (571) 272-4979. The examiner can normally be reached from 9 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached on 571-270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GUILLERMO M RIVERA-MARTINEZ/ Primary Examiner, Art Unit 2677