Prosecution Insights
Last updated: April 19, 2026
Application No. 18/701,601

SYSTEM AND METHODS FOR ANALYZING MULTICOMPONENT CELL AND MICROBE SOLUTIONS AND METHODS OF DIAGNOSING BACTEREMIA USING THE SAME

Non-Final OA: §102, §103, §112

Filed: Apr 15, 2024
Examiner: BROUGHTON, KATHLEEN M
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: The Regents of the University of Colorado
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 83% (219 granted / 263 resolved; +21.3% vs TC avg; above average)
Interview Lift: +8.3% (moderate lift among resolved cases with interview)
Typical Timeline: 2y 7m avg prosecution; 34 applications currently pending
Career History: 297 total applications across all art units

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 51.2% (+11.2% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 263 resolved cases
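As a sanity check on the figures above, the headline allow rate and the implied Tech Center baseline follow from the raw counts shown on this page. The derivation below is a sketch: only the inputs (219 granted / 263 resolved, the "+21.3% vs TC avg" delta) come from the page itself; how the tool actually aggregates is an assumption.

```python
# Inputs taken from the Examiner Intelligence panel above.
granted, resolved = 219, 263

# Career allow rate: granted out of resolved cases (displayed as 83%).
career_allow_rate = granted / resolved            # ≈ 0.8327

# The "+21.3% vs TC avg" delta implies the black-line Tech Center baseline.
lift_vs_tc = 0.213                                # assumption: simple subtraction
implied_tc_average = career_allow_rate - lift_vs_tc  # ≈ 0.62

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Implied TC average: {implied_tc_average:.1%}")
```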

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

A Preliminary Amendment was made 04/15/2024 to amend the specification, abstract and claims. Claims 1-4, 9, 11, 16, 21-22, 28, 31, 34, 35, 41-42, 49-50, 55, 58-59 are pending, with amendment to Claims 4, 9, 11, 16, 31, 55, 58. Claims 5-8, 10, 12-15, 17-20, 23-27, 29-30, 32-33, 36-40, 43-48, 51-54, 56-57, 60-79 were cancelled.

Claim Objections

Claims 9, 34, 59 are objected to because of the following informalities:

Claim 9 is objected to for reciting “wherein said wherein the biological sample”, which appears to be a typographical error and should be “wherein said biological sample”.

Claim 34 is objected to for missing the transitional phrase connecting the preamble to the body of the claim. See MPEP § 2111.03. For purposes of examination, the transitional phrase is interpreted as “comprising.”

Claim 59 is objected to for missing the transitional phrase connecting the preamble to the body of the claim. See MPEP § 2111.03. For purposes of examination, the transitional phrase is interpreted as “comprising.”

Claim 59 is also objected to for reciting “change in between” in the limitation “wherein said comparison identifies a characteristic change in between the samples”, which should be “change [[in]] between”.

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

Claims 1, 34, 59: “image capture module”. The “image capture module” is described as element 3 of Figure 1, with outputs shown in Figure 2, described as a “high-throughput imaging instrument”, described to output 10^4 to 10^7 or more images and image particles, and described broadly at pg 24 ln 8-pg 25 ln 5, pg 26 ln 22-pg 27 ln 3, pg 37 ln 22-pg 39 ln 23.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3, 4, 28, 34, 41-42, 49-50, 55, 58, 59 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites the limitation "collection outlet" in "said collection outlet of said biological sample". There is insufficient antecedent basis for this limitation in the claim. “collection outlet” of claim 3 is rejected for lack of antecedent basis as dependent on claim 2, whereas claim 2 recites “a collection outlet stream.” For purposes of examination claim 3 will be considered dependent on claim 2 and interpreted as “said collection outlet stream.”

Claim 4 recites the limitation "collection outlet" in "said collection outlet of said biological sample".
There is insufficient antecedent basis for this limitation in the claim. “collection outlet” of claim 4 is rejected for lack of antecedent basis as dependent on claim 1, whereas “a collection outlet stream” is introduced in claim 3, dependent on claim 2. For purposes of examination claim 4 will be considered dependent on claim 3, and the “collection outlet” of claim 4 is interpreted as “said collection outlet stream.”

Claims 28, 55 each recite the limitation "signal modalities" in "adapted to combine signal modalities". There is insufficient antecedent basis for this limitation in the claim. “Signal modalities” was not previously introduced in the claim from which each depends, but is interpreted as the signal from the image capture module and the signal from the machine learning module.

Claims 34, 59 are each independent claims and each claims a limitation “applied to the system above”, but it is unclear whether “the system above” means “the system” within the same claim or “the system” of claim 1 and/or its dependents. Therefore, the claim is ambiguous and not sufficiently definite. See MPEP § 2173. Thus, Applicant has failed to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Claims 35, 41-42, 49-50, 55, 58 are rejected as dependent on claim 34.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 9, 11, 16, 59 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Daniels et al (Machine learning and statistical analyses for extracting and characterizing “fingerprints” of antibody aggregation at container interfaces from flow microscopy images).

Regarding Claim 1, Daniels et al teach a system for analyzing a biological sample (system that combines flow imaging microscopy images with a convolutional neural network algorithm to quantitatively analyze images of particle morphologies; Introduction ¶ 6) comprising:

- a biological sample (applicant’s Example 12 is a protein suspension used as the biological sample, Fig 12 and pg 60 ln 26-pg 61 ln 2) (intravenous immunoglobulin (IVIg) protein aggregates suspended in PBS; Materials – Generation of Protein Aggregates) containing a quantity of microparticles (IVIg aggregates were generated in a stock solution (supernatant) and diluted in filtered PBS; Materials – Generation of Protein Aggregates);

- an image capture module (flow imaging microscopy (FIM) captures a high throughput of images (10^3-10^5 images per 200 microliter sample); Introduction ¶ 3, Materials – Flow Imaging Microscopy), configured to capture a plurality of digital image signals of said microparticles present in said biological sample (imaging with flow imaging microscopy (FIM) is performed to capture a set of digital images of individual particles in a liquid biological sample; Introduction ¶ 3, Materials – Flow Imaging Microscopy);

- a machine learning module (the machine learning algorithms (as described in Abstract, Discussion ¶ 1) include convolutional neural networks (ConvNets, also known as CNNs); Introduction ¶ 4, Materials – Algorithm Overview, Materials – ConvNets) configured to process the digital image signals from said image capture module (the machine learning algorithms are used to analyze the FIM images; Abstract, Algorithm Overview) further comprising:

- a digital filter (a first CNN is used as a classifier; Materials – Convolutional Neural Networks (ConvNets) ¶ 1) to differentiate the images of microparticles of interest from images of microparticles in said biological sample (a classifier is used to predict the class of (differentiate) the given sample image; Materials – Convolutional Neural Networks (ConvNets) ¶ 1); and

- a convolutional neural network configured to further identify said microparticles of interest (convolutional neural networks (ConvNets) are used to extract and analyze morphological information of the protein aggregate particles contained in the biological samples; Introduction ¶ 4, Materials – Algorithm Overview, Materials – ConvNets).

Regarding Claim 9, Daniels et al teach the system of claim 1 (as described above), wherein said biological sample comprises a biological sample selected from the group consisting of: sputum, oral fluid, amniotic fluid, blood, a blood fraction, bone marrow, a biopsy sample, urine, semen, stool, vaginal fluid, peritoneal fluid, pleural fluid, tissue explant, mucous, lymph fluid, organ culture, cell culture, a fraction or derivative thereof or isolated therefrom, or a static or flowing liquid suspension (IVIg protein aggregates suspended in PBS; Materials – Generation of Protein Aggregates).

Regarding Claim 11, Daniels et al teach the system of claim 1 (as described above), wherein said microparticles in said biological sample are selected from: microbial microparticles, non-microbial microparticles, cells, or pathogenic microbes (intravenous immunoglobulin (IVIg) proteins are non-microbial microparticles; Materials – Generation of Protein Aggregates).

Regarding Claim 16, Daniels et al teach the system of claim 1 (as described above), wherein said image capture module (flow imaging microscopy (FIM); Introduction ¶ 3, Materials – Flow Imaging Microscopy) comprises a high-throughput imaging instrument capable of imaging flowing or static suspensions of microparticles, or a high-throughput microfluidic imaging instrument capable of imaging flowing or static liquid suspensions (flow imaging microscopy (FIM) combines light microscopy with microfluidics to capture a high throughput of images with a small liquid sample (10^3-10^5 images per 200 microliter sample); Introduction ¶ 3, Materials – Flow Imaging Microscopy).
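The claim 1 mapping above describes a two-stage machine learning module: a first-stage "digital filter" classifier that screens particle images, followed by a convolutional neural network that further analyzes the surviving images. The sketch below illustrates only that pipeline structure; every name (Pipeline, analyze) and the toy stand-in functions are assumptions, taken from neither Daniels et al nor the application.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

Image = List[List[float]]  # stand-in for a grayscale particle image

@dataclass
class Pipeline:
    digital_filter: Callable[[Image], bool]      # True -> particle of interest
    feature_extractor: Callable[[Image], float]  # stand-in for a CNN embedding

    def analyze(self, images: Sequence[Image]) -> List[float]:
        # Stage 1: the "digital filter" screens out particles not of interest.
        kept = [img for img in images if self.digital_filter(img)]
        # Stage 2: the "CNN" runs only on the surviving images.
        return [self.feature_extractor(img) for img in kept]

# Toy stand-ins: the "filter" keeps bright images; the "CNN" returns mean intensity.
def mean_intensity(img: Image) -> float:
    return sum(sum(row) for row in img) / (len(img) * len(img[0]))

pipe = Pipeline(digital_filter=lambda img: mean_intensity(img) > 0.5,
                feature_extractor=mean_intensity)

sample = [[[0.9, 0.8], [0.7, 0.9]],   # bright particle: passes the filter
          [[0.1, 0.0], [0.2, 0.1]]]   # dim particle: screened out
features = pipe.analyze(sample)       # only the first image survives
```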
Regarding Claim 59, Daniels et al teach a system for characterizing changes in pharmaceutical sample populations (system that combines flow imaging microscopy with a convolutional neural network algorithm to quantitatively analyze images of particle morphologies; Introduction ¶ 6):

- a pharmaceutical sample (defined by applicant to include protein biologic formulations in applicant’s preferred embodiment 33, pg 10, and Example 12, a protein suspension, Fig 12 and pg 60 ln 26-pg 61 ln 2) (intravenous immunoglobulin (IVIg) protein aggregates (a therapeutic (pharmaceutical) product) suspended in PBS; Materials – Generation of Protein Aggregates) containing a quantity of microparticles (IVIg aggregates were generated in a stock solution (supernatant) and diluted in filtered PBS; Materials – Generation of Protein Aggregates);

- an image capture module (flow imaging microscopy (FIM) captures a high throughput of images (10^3-10^5 images per 200 microliter sample); Introduction ¶ 3, Materials – Flow Imaging Microscopy), configured to capture a plurality of digital image signals of the microparticles present in said pharmaceutical sample (imaging with flow imaging microscopy (FIM) is performed to capture a set of digital images of individual particles in a liquid biological sample; Introduction ¶ 3, Materials – Flow Imaging Microscopy);

- a machine learning module (the machine learning algorithms (as described in Abstract, Discussion ¶ 1) include convolutional neural networks (ConvNets, also known as CNNs); Introduction ¶ 4, Materials – Algorithm Overview, Materials – ConvNets) configured to process the digital image signals from said image capture module (the machine learning algorithms are used to analyze the FIM images; Abstract, Algorithm Overview) further comprising a convolutional neural network configured to extract a feature of interest from said images (convolutional neural networks (ConvNets) are used to extract and analyze morphological information of the protein aggregate particles contained in the biological samples; Introduction ¶ 4, Materials – Algorithm Overview, Materials – ConvNets); and

- one or more additional pharmaceutical samples containing a quantity of microparticles applied to the system above (the IVIg protein aggregates (a therapeutic (pharmaceutical) product) suspended in PBS are tested with the baseline compared to a stress condition (experimental), representing the one or more additional samples (freeze-thaw samples or mechanical shaking); Fig 2 and Materials – Generation of Protein Aggregates) and compared to said preceding pharmaceutical sample, wherein said comparison identifies a characteristic change in between the samples (the baseline and experimental test samples are compared to determine whether the particles are similar or different after the stress condition; Fig 2, 4, 5 and Results – CNNs, Particle comparisons).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4, 21-22, 28, 31 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al (Machine learning and statistical analyses for extracting and characterizing “fingerprints” of antibody aggregation at container interfaces from flow microscopy images) in view of Ashcroft et al (US 2017/0242234).
Regarding Claim 2, Daniels et al teach the system of claim 1 (as described above). Daniels et al does not explicitly teach a separation module configured to separate the microparticles in said biological sample into a collection outlet stream containing predominantly microparticles of interest, and a waste stream containing predominately other particles found in said biological sample.

Ashcroft et al is analogous art pertinent to the technological problem addressed in the current application and teaches a separation module (mesh filter 118; Fig 1A and ¶ [0042]) configured to separate the microparticles in said biological sample into a collection outlet stream containing predominantly microparticles of interest, and a waste stream containing predominately other particles found in said biological sample (samples are filtered by passing through a mesh filter 118 using vacuum pressure to an outlet 122 and pass by an objective for microscopy imaging 140 and spectroscopy scanning where protein aggregates and polymeric particles are collected 130, allowing particles to pass to an outlet 122 for collection and additional analysis, while the fluid sample and particles smaller than the filter are filtered through to waste; Fig 1A, 16A-17B and ¶ [0041]-[0045], [0104]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al with Ashcroft et al, including a separation module configured to separate the microparticles in said biological sample into a collection outlet stream containing predominantly microparticles of interest, and a waste stream containing predominately other particles found in said biological sample. By sorting and filtering the microparticles of interest from particles not of interest, the microparticles of interest can be characterized without contamination, thereby improving high throughput image analysis and improving protein therapeutic sampling analysis in quantitative terms, thereby enhancing research to regulate treatments and enhancing data used for practical applications, such as FDA regulations, as recognized by Ashcroft et al (¶ [0002]-[0003]).

Regarding Claim 3, Daniels et al in view of Ashcroft et al teach the system of claim 2 (as described above), wherein said image capture module is further configured to capture a plurality of digital image signals of said microparticles present in said collection outlet of said biological sample (Ashcroft et al, particle imaging by imaging device 140 occurs after particles 130 have landed on the filter 118 of the outlet 122; Fig 1A and ¶ [0041]-[0042], [0047]-[0048]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al with Ashcroft et al, including wherein said image capture module is further configured to capture a plurality of digital image signals of said microparticles present in said collection outlet of said biological sample. By sorting and filtering the microparticles of interest from particles not of interest, the microparticles of interest can be characterized without contamination, thereby improving high throughput image analysis and leading to enhanced protein therapeutic sampling analysis in quantitative terms, resulting in practical applications, such as high quantity, low cost data demonstrating efficacy for FDA regulations, as recognized by Ashcroft et al (¶ [0002]-[0003]).

Regarding Claim 4, Daniels et al teach the system of claim 1.
Daniels et al does not explicitly teach wherein said digital filter is further configured to differentiate images of microparticles of interest from images of the microparticles in said collection outlet of said biological sample.

Ashcroft et al is analogous art pertinent to the technological problem addressed in the current application and teaches wherein said digital filter is further configured to differentiate images of microparticles of interest from images of the microparticles in said collection outlet of said biological sample (the image is used to characterize and identify trapped particles 130, which is based on linking the particle visual image to its spectra (¶ [0042]-[0043], [0055]); and the particles are classified (digital filter equivalent as claimed) to identify particle type using machine learning algorithms (including use of a CNN); ¶ [0059], [0062]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al with Ashcroft et al, including wherein said digital filter is further configured to differentiate images of microparticles of interest from images of the microparticles in said collection outlet of said biological sample. By improving data used for high throughput image analysis, machine learning models may more accurately identify and classify data, thus improving quantitative sampling and leading to practical applications, such as high quantity, low cost data demonstrating efficacy for FDA regulations, as recognized by Ashcroft et al (¶ [0002]-[0003]).

Regarding Claim 21, Daniels et al teach the system of claim 1 (as described above). Daniels et al does not teach wherein said digital filter comprises a convolutional neural network further comprising a machine learning-based automated classifier configured to determine if the microparticles are a microbe of interest, or a subject-derived cell, and/or wherein said digital filter comprises a convolutional neural network further comprising a machine learning-based embedding scheme configured to determine if the cell culture components comprising the microparticles are microbes of interest, or subject-derived cells.

Ashcroft et al is analogous art pertinent to the technological problem addressed in the current application and teaches wherein said digital filter comprises a convolutional neural network further comprising a machine learning-based automated classifier configured to determine if the microparticles are a microbe of interest, or a subject-derived cell (the particles are classified (digital filter equivalent as claimed) to identify particle type using machine learning algorithms (including use of a CNN), and the images collected of known particles are used to classify additional particles (microbe of interest); Fig 20A, 20B and ¶ [0059], [0062], [0113]), and/or wherein said digital filter comprises a convolutional neural network further comprising a machine learning-based embedding scheme configured to determine if the cell culture components comprising the microparticles are microbes of interest, or subject-derived cells (the first wherein clause is taught).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al with Ashcroft et al, including wherein said digital filter comprises a convolutional neural network further comprising a machine learning-based automated classifier configured to determine if the microparticles are a microbe of interest, or a subject-derived cell, and/or wherein said digital filter comprises a convolutional neural network further comprising a machine learning-based embedding scheme configured to determine if the cell culture components comprising the microparticles are microbes of interest, or subject-derived cells. By improving data used for high throughput image analysis, machine learning models may more accurately identify and classify data, thus improving quantitative sampling and leading to practical applications, such as high quantity, low cost data demonstrating efficacy for FDA regulations, as recognized by Ashcroft et al (¶ [0002]-[0003]).

Regarding Claim 22, Daniels et al in view of Ashcroft et al teach the system of claim 21 (as described above), wherein said convolutional neural network comprises a machine learning-based automated classifier configured to identify the microbe of interest by genus, species, phenotypic characteristic, genotypic characteristic, or one or more antibiotic resistance characteristics (Ashcroft et al, the CNN used to classify the particle includes classification of a particle type (genus) or sub-type (species); ¶ [0059]).

Regarding Claim 28, Daniels et al teach the system of claim 1 (as described above). Daniels et al does not teach wherein said machine learning module further comprises a fusion module adapted to combine signal modalities.

Ashcroft et al is analogous art pertinent to the technological problem addressed in the current application and teaches wherein said machine learning module further comprises a fusion module adapted to combine signal modalities (the machine learning algorithms can include multiple data sources to generate a predictive classification model, such as image processing, signal processing from images, spectroscopic or fluorescent signals; ¶ [0059]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al with Ashcroft et al, including wherein said machine learning module further comprises a fusion module adapted to combine signal modalities. By combining different signal modality data using machine learning models, data may be more accurately identified and classified, thus improving quantitative sampling and leading to practical applications, such as high quantity, low cost data demonstrating efficacy for FDA regulations, as recognized by Ashcroft et al (¶ [0002]-[0003]).

Regarding Claim 31, Daniels et al teach the system of claim 1 (as described above). Daniels et al does not teach wherein said machine learning module comprises a machine learning module configured to extract one or more features of said microparticles of interest by machine learning including supervised learning, or unsupervised learning.

Ashcroft et al is analogous art pertinent to the technological problem addressed in the current application and teaches wherein said machine learning module comprises a machine learning module configured to extract one or more features of said microparticles of interest by machine learning including supervised learning, or unsupervised learning (training of the machine learning algorithm may be performed with supervised (known identity to build the model) or unsupervised learning; ¶ [0059]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al with Ashcroft et al including wherein said machine learning module comprises a machine learning module configured to extract one or more features of said microparticles of interest by machine learning including supervised learning, or unsupervised learning. By training machine learning models using data with the microparticles intended for classification experiments, models may more accurately and efficiently be trained to identify and characterize data during inference, as recognized by Ashcroft et al (¶ [0006]-[0007]). Claims 34, 41, 58 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al (Machine learning and statistical analyses for extracting and characterizing “fingerprints” of antibody aggregation at container interfaces from flow microscopy images) in view of Stamatoyannopoulos et al (US 2020/0167914). 
Regarding Claim 34, Daniels et al teach a system for characterizing changes in cell populations (system that combines flow imaging microscopy images with a convolutional neural network algorithm to quantitatively analyze images of particle morphologies; Introduction ¶ 6): - a biological sample (applicant’s Example 12 is a protein suspension used as the biological sample, Fig 12 and pg 60 ln 26-pg 61 ln 2) containing a quantity of a cell culture further containing a quantity of engineered cells (Intraveneous immunoglobulin (IVIg) aggregates in a stock solution (supernatant) and resuspended in filtered PBS; Materials – Generation of Protein Aggregates); - an image capture module (flow imaging microscopy (FIM) capture a high throughput of images (10^3 – 10^5 images per 200 microliter sample); Introduction ¶ 3, Materials – Flow Imaging Microscopy), configured to capture a plurality of digital image signals of the engineered cells present in said biological sample (imaging with the flow imaging microscopy (FIM) is performed to capture a set of digital images of individual particles in a liquid biological sample; Introduction ¶ 3, Materials – Flow Imaging Microscopy); - a machine learning module (machine learning algorithms (as described in Abstract, Discussion ¶ 1) include Convolutional neural networks (ConvNets, also known as a CNN); Introduction ¶ 4, Materials – Algorithm Overview, Materials – ConvNets) configured to process the digital image signals from said image capture module (the machine learning algorithms are used to analyze the FIM images; Abstract, Algorithm Overview) further comprising a convolutional neural network configured to extract a feature of interest from said images (Convolutional neural networks (ConvNets) are used to extract and analyze morphological information of the protein aggregate particles contained in the biological samples; Introduction ¶ 4, Materials – Algorithm Overview, Materials – ConvNets). 
Daniels et al do not teach the biological sample containing a quantity of a cell culture further containing a quantity of engineered cells; and one or more additional biological samples containing a quantity of a cell culture containing a quantity of engineered cells applied to the system above and compared to said preceding biological sample, wherein said comparison identifies a characteristic change in said engineered cells between the samples. Stamatoyannopoulos et al is analogous art pertinent to the technological problem addressed in the current application and teaches the biological sample containing a quantity of a cell culture further containing a quantity of engineered cells (genetically-engineered cells may be generated and used for high throughput imaging and machine learning-based analysis; Fig 9-26 and ¶ [0204]-[0206]); and one or more additional biological samples containing a quantity of a cell culture containing a quantity of engineered cells applied to the system above and compared to said preceding biological sample (K562 cells are genetically-engineered with different gene knock-outs and are compared to the wild-type, with 12 different KO sample types compared to identify phenotype differences based on high-throughput image processing techniques as well as testing bromodomain inhibitors; Fig 9-26 and ¶ [0205]-[0207]), wherein said comparison identifies a characteristic change in said engineered cells between the samples (the phenotypic comparison is to detect and discriminate differences in chromatin structure through phenotypic traits such as image texture, intensity and variation; ¶ [0206]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al with Stamatoyannopoulos et al including the biological sample containing a quantity of a cell culture further containing a quantity of engineered cells; and one or more additional biological samples containing a quantity of a cell culture containing a quantity of engineered cells applied to the system above and compared to said preceding biological sample, wherein said comparison identifies a characteristic change in said engineered cells between the samples. By using populations of engineered cells for high throughput imaging analysis, characterization may be performed to predict changes, such as phenotypic, genotypic, epigenotypic or genomic changes, thereby improving screening methods for use of the cells as or with drug candidates and detecting changes quickly and effectively, enhancing research screening, as recognized by Stamatoyannopoulos et al (¶ [0007]-[0010]).

Regarding Claim 41, Daniels et al in view of Stamatoyannopoulos et al teach the system of claim 34 (as described above), wherein said feature of interest comprises a feature of interest associated with transduced cells, or non-transduced cells (Stamatoyannopoulos et al, the genetically engineered cells are transduced with a bromodomain knockout gene and compared to wildtype (non-transduced) K562 cells; ¶ [0204]-[0205]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al with Stamatoyannopoulos et al including wherein said feature of interest comprises a feature of interest associated with transduced cells, or non-transduced cells.
By transducing an intracellular signal into an abnormal phenotype, chromatin structural reorganization occurs, which may then be studied using high throughput image analysis combined with machine learning to determine phenotype differences and predict additional changes, thereby improving screening methods for use of the cells as or with drug candidates and detecting changes quickly and effectively, enhancing research screening, as recognized by Stamatoyannopoulos et al (¶ [0007]-[0010]).

Regarding Claim 58, Daniels et al in view of Stamatoyannopoulos et al teach the system of claim 34 (as described above), wherein said machine learning module comprises a machine learning module configured to extract one or more features of interest by supervised learning or unsupervised learning (Stamatoyannopoulos et al, training data may be either supervised or unsupervised based on the classification approach (¶ [0129]-[0130]), including for machine learning techniques for classification based on cell characterizations (¶ [0146]-[0147])). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al with Stamatoyannopoulos et al including wherein said machine learning module comprises a machine learning module configured to extract one or more features of interest by supervised learning or unsupervised learning. By using machine learning algorithms for analyzing the high throughput image data, cell characterization may be performed to detect multiple changes under a given environmental condition in a quick and reliable manner, thereby improving screening methods for use of the cells as or with drug candidates and detecting changes quickly and effectively, enhancing research screening, as recognized by Stamatoyannopoulos et al (¶ [0007]-[0010]).

Claims 35, 55 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al (Machine learning and statistical analyses for extracting and characterizing “fingerprints” of antibody aggregation at container interfaces from flow microscopy images) in view of Stamatoyannopoulos et al (US 2020/0167914) and Ashcroft et al (US 2017/0242234).

Regarding Claim 35, Daniels et al in view of Stamatoyannopoulos et al teach the system of claim 34 (as described above), including said engineered cells in said biological sample (Stamatoyannopoulos et al, genetically-engineered cells may be generated and used for high throughput imaging and machine learning-based analysis; Fig 9-26 and ¶ [0204]-[0206]). Daniels et al in view of Stamatoyannopoulos et al do not teach further comprising a separation module configured to separate said engineered cells in said biological sample into a collection outlet of said biological sample. Ashcroft et al is analogous art pertinent to the technological problem addressed in the current application and teaches a separation module (mesh filter 118; Fig 1A and ¶ [0042]) configured to separate said engineered cells in said biological sample into a collection outlet of said biological sample (samples are filtered by passing through mesh filter 118 using vacuum pressure to an outlet 122 and pass by an objective for microscopy imaging 140 and spectroscopy scanning where protein aggregates and polymeric particles are collected 130, allowing particles to pass to an outlet 122 for collection and additional analysis, while the fluid sample and particles smaller than the filter are filtered through to waste; Fig 1A, 16A-17B and ¶ [0041]-[0045], [0104]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al in view of Stamatoyannopoulos et al with Ashcroft et al including further comprising a separation module configured to separate said engineered cells in said biological sample into a collection outlet of said biological sample. By sorting and filtering the microparticles of interest from particles not of interest, the microparticles of interest can be characterized without contamination, thereby improving high throughput image analysis and quantitative protein therapeutic sampling analysis, and enhancing research to regulate treatments and the data used for practical applications, such as FDA regulations, as recognized by Ashcroft et al (¶ [0002]-[0003]).

Regarding Claim 55, Daniels et al in view of Stamatoyannopoulos et al teach the system of claim 34 (as described above). Daniels et al in view of Stamatoyannopoulos et al do not teach wherein said machine learning module further comprises a fusion module adapted to combine signal modalities, or to fuse embeddings in signals from two or more modalities. Ashcroft et al is analogous art pertinent to the technological problem addressed in the current application and teaches wherein said machine learning module further comprises a fusion module adapted to combine signal modalities, or to fuse embeddings in signals from two or more modalities (the machine learning algorithms can include multiple data to generate a predictive classification model, such as image processing, signal processing from images, spectroscopic or fluorescent signals; ¶ [0059]).
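As context for the fusion limitation, the teaching cited from Ashcroft et al (combining image-derived and spectroscopic signals in one predictive model) corresponds to what the machine learning literature calls late fusion of per-modality embeddings. A minimal sketch, with invented toy embedding vectors (neither reference's actual features or dimensions are used):

```python
import numpy as np

def l2_normalize(v):
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

def fuse(image_embedding, spectral_embedding):
    """Late fusion by concatenating per-modality embeddings,
    normalizing each first so neither modality dominates by scale."""
    return np.concatenate([l2_normalize(image_embedding),
                           l2_normalize(spectral_embedding)])

img = np.array([3.0, 4.0])     # toy embedding, e.g. from a CNN over FIM images
spec = np.array([0.06, 0.08])  # toy embedding, e.g. from a spectroscopic signal
fused = fuse(img, spec)
print(fused)  # approximately [0.6, 0.8, 0.6, 0.8]
```

A downstream classifier would then operate on `fused` rather than on either modality alone; per-modality normalization is one common design choice when the raw signal scales differ by orders of magnitude, as in the toy vectors here.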
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al in view of Stamatoyannopoulos et al with Ashcroft et al including wherein said machine learning module further comprises a fusion module adapted to combine signal modalities, or to fuse embeddings in signals from two or more modalities. By combining different signal modality data using machine learning models, data may be more accurately identified and classified, thus improving quantitative sampling and leading to practical applications, such as high quantity, low cost data demonstrating efficacy for FDA regulations, as recognized by Ashcroft et al (¶ [0002]-[0003]).

Claims 42, 49, 50 are rejected under 35 U.S.C. 103 as being unpatentable over Daniels et al (Machine learning and statistical analyses for extracting and characterizing “fingerprints” of antibody aggregation at container interfaces from flow microscopy images) in view of Stamatoyannopoulos et al (US 2020/0167914) and Aifuwa (WO 2021/041994, disclosed in applicant IDS 04/15/2024).

Regarding Claim 42, Daniels et al in view of Stamatoyannopoulos et al teach the system of claim 41 (as described above). Daniels et al in view of Stamatoyannopoulos et al do not teach wherein said transduced cells comprise T cells transduced to form CAR-T cells. Aifuwa is analogous art pertinent to the technological problem addressed in the current application and teaches wherein said transduced cells comprise T cells transduced to form CAR-T cells (T-cells are transduced to CAR-T cells; ¶ [0041], [0053]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al in view of Stamatoyannopoulos et al with Aifuwa including wherein said transduced cells comprise T cells transduced to form CAR-T cells.
By transducing T cells to CAR-T cells, a therapeutic cell product is generated, which allows for potential use as a cell therapy, as recognized by Aifuwa (¶ [0058]).

Regarding Claim 49, Daniels et al in view of Stamatoyannopoulos et al teach the system of claim 34 (as described above). Daniels et al in view of Stamatoyannopoulos et al do not teach wherein said digital filter comprises a machine learning-based automated classifier configured to determine if the cell culture components comprise T cells transduced to form CAR-T cells, or non-transduced T-cells. Aifuwa is analogous art pertinent to the technological problem addressed in the current application and teaches wherein said digital filter comprises a machine learning-based automated classifier configured to determine if the cell culture components comprise T cells transduced to form CAR-T cells, or non-transduced T-cells (T-cells are transduced to CAR-T cells and the machine learning model is trained to classify the cell regarding expression of CAR, thereby identifying if the T cell is positive (CAR-T) or negative (non-transduced T cell) for the recombinant receptor and identifying particular attributes associated with the given cell culture population; ¶ [0040]-[0042], [0053]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al in view of Stamatoyannopoulos et al with Aifuwa including wherein said digital filter comprises a machine learning-based automated classifier configured to determine if the cell culture components comprise T cells transduced to form CAR-T cells, or non-transduced T-cells.
By performing different classification analyses of cell populations, multi-dimensional cellular analysis is performed, thereby improving protocols for generating therapeutic cell populations used for cell therapy, while optimizing manufacturing efficacy and consistency, as recognized by Aifuwa (¶ [0058]).

Regarding Claim 50, Daniels et al in view of Stamatoyannopoulos et al teach the system of claim 34 (as described above). Daniels et al in view of Stamatoyannopoulos et al do not teach wherein said digital filter comprises a machine learning-based embedding scheme configured to determine if the cell culture components comprise transduced CAR-T cells, or non-transduced T-cells. Aifuwa is analogous art pertinent to the technological problem addressed in the current application and teaches wherein said digital filter comprises a machine learning-based embedding scheme configured to determine if the cell culture components comprise transduced CAR-T cells, or non-transduced T-cells (the machine learning model is trained to classify the cell regarding cellular attributes within the given cell culture population including expression of CAR, thereby identifying if the T cell is positive (CAR-T) or negative (non-transduced T cell) for the recombinant receptor and identifying particular attributes associated; ¶ [0040]-[0042], [0053]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Daniels et al in view of Stamatoyannopoulos et al with Aifuwa including wherein said digital filter comprises a machine learning-based embedding scheme configured to determine if the cell culture components comprise transduced CAR-T cells, or non-transduced T-cells.
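To ground the claim language, a "machine learning-based automated classifier" distinguishing two cell populations (e.g. CAR-positive vs. non-transduced) can be illustrated with a toy nearest-centroid model over synthetic two-feature measurements, alongside an unsupervised variant of the kind recited in claim 58. All data, feature dimensions, and the choice of nearest-centroid/k-means are invented for illustration; Aifuwa's actual model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D "cell features" for two well-separated populations
pop_a = rng.normal(loc=[0.0, 0.0], scale=0.2, size=(50, 2))  # e.g. non-transduced
pop_b = rng.normal(loc=[3.0, 3.0], scale=0.2, size=(50, 2))  # e.g. CAR-positive
X = np.vstack([pop_a, pop_b])
y = np.array([0] * 50 + [1] * 50)

# Supervised: nearest-centroid classifier fit on labeled data
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Assign a new cell's features to the nearest class centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Unsupervised: 2-means clustering, using no labels at all
means = X[[0, -1]].copy()  # seed from two samples
for _ in range(10):
    assign = np.argmin(((X[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
    means = np.stack([X[assign == c].mean(axis=0) for c in (0, 1)])

print(predict(np.array([2.9, 3.1])))  # 1 -> classified with the CAR-positive population
```

The supervised path mirrors a trained automated classifier (claim 49); the clustering path shows how the same populations separate without labels, the distinction drawn in claim 58 between supervised and unsupervised learning.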
By performing different classification analyses of cell populations, multi-dimensional cellular analysis is performed, thereby improving protocols for generating therapeutic cell populations used for cell therapy, while optimizing manufacturing efficacy and consistency, as recognized by Aifuwa (¶ [0058]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Randolph et al (US 2021/0303818, application 17/264,690), from the same applicant and co-inventors, teach a method and system for applying machine learning to microscope images in high-throughput systems, including methods for isolating synthesized pharmaceutical compounds, imaging such compounds, and analyzing them with a trained neural network to identify statistical differences in samples, from which the current invention is distinct in claiming systems for analyzing microparticles. Calderon et al (Using Deep Convolutional Neural Networks to Circumvent Morphologic Feature Specification when Classifying Subvisible Protein Aggregates from Micro-Flow Images) teach flow-imaging microscopy applied to particle analysis and morphological feature analysis of protein therapeutics using convolutional neural networks, with samples subjected to various stress states. Constantinou et al (Self-Learning Microfluidic Platform for Single-Cell Imaging and Classification in Flow) teach high-throughput single-cell imaging of cell populations combined with variational autoencoders for analysis and classification of cellular attributes. Irimia et al (WO 2021/041873) teach the use of CAR-T cells as a cellular therapy and the use of machine learning for classifying the efficiency of CAR-T cells against tumor cells under various experimental conditions.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M BROUGHTON whose telephone number is (571)270-7380.
The examiner can normally be reached Monday-Friday 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KATHLEEN M BROUGHTON/Primary Examiner, Art Unit 2661

Prosecution Timeline

Apr 15, 2024
Application Filed
Mar 16, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602915
FEATURE FUSION FOR NEAR FIELD AND FAR FIELD IMAGES FOR VEHICLE APPLICATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597233
SYSTEM AND METHOD FOR TRAINING A MACHINE LEARNING MODEL
2y 5m to grant Granted Apr 07, 2026
Patent 12586203
IMAGE CUTTING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12567227
METHOD AND SYSTEM FOR UNSUPERVISED DEEP REPRESENTATION LEARNING BASED ON IMAGE TRANSLATION
2y 5m to grant Granted Mar 03, 2026
Patent 12565240
METHOD AND SYSTEM FOR GRAPH NEURAL NETWORK BASED PEDESTRIAN ACTION PREDICTION IN AUTONOMOUS DRIVING SYSTEMS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
92%
With Interview (+8.3%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 263 resolved cases by this examiner. Grant probability derived from career allow rate.
