DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/03/2022 has been considered by the examiner.
Specification Objections
The specification is objected to because of the following informalities:
In paragraph [0043], line 1, “various embodiment” should read “various embodiments.”
In paragraph [0051], line 5, “were the contextual mask labels biological structures” should read “where the contextual mask labels biological structures.”
Appropriate correction is required.
Double Patenting
The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a non-statutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 1 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of GAO (US 20220036124 A1).
Claim 8 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 18 of US Patent No.: US 12067712 B2 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1).
Claim 15 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1).
Although claims 1-20 of this Application No. 17/826,392 and the claims at issue are not identical, they are not patentably distinct from each other because the instant application and the conflicting patent claim common subject matter, as follows:
This Application No. 17/826,392
US Patent No.: US 12067712 B2
Claim 1: A method comprising:
obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure;
determining a context spectrum selection from context spectrum including a range of selectable values by:
applying the specific QID to an input layer of a context-spectrum neural network,
wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss,
based on previous QID and constructed context spectrum data associated with the previous QID;
mapping the context spectrum selection to the image to generate a context spectrum mask for the image;
and determining a condition of the biostructure based on the context spectrum mask.
Claim 8: An apparatus, comprising:
a memory storing instructions;
and a processor in communication with the memory,
wherein, when the processor executes the instructions, the processor is configured to cause the apparatus to perform:
obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure;
determining a context spectrum selection from context spectrum including a range of selectable values by:
applying the specific QID to an input layer of a context-spectrum neural network,
wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss,
based on previous QID and constructed context spectrum data associated with the previous QID;
mapping the context spectrum selection to the image to generate a context spectrum mask for the image;
and determining a condition of the biostructure based on the context spectrum mask.
Claim 15: A non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform:
obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure;
determining a context spectrum selection from context spectrum including a range of selectable values by:
applying the specific QID to an input layer of a context-spectrum neural network,
wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss,
based on previous QID and constructed context spectrum data associated with the previous QID;
mapping the context spectrum selection to the image to generate a context spectrum mask for the image;
and determining a condition of the biostructure based on the context spectrum mask.
Claim 1: A method including:
obtaining specific quantitative image data captured via a quantitative imaging technique of a sample (wherein a sample is a biostructure),
determining a specific context mask for the specific quantitative image data by
comparing the specific quantitative image data to previous quantitative image data for a previous sample via application of the specific quantitative image data to an input of a neural network
trained using constructed context masks (wherein constructed context masks are constructed context spectrum data) generated based on the previous sample and the previous quantitative image data;
applying the specific context mask to the specific quantitative image data to determine a context value for the pixel, wherein the context value includes an expected dye concentration level at the pixel;
and based on the pixel and the quantitative parameter value, determining a quantitative characterization for the context value; and referencing the quantitative characterization against a structure integrity index to determine a condition of the sample.
Claim 18: A biological imaging device including:
memory configured to store:
raw pixel data from the pixel capture array; and quantitative parameter values for pixels of the raw pixel data; a neural network trained using constructed structure masks generated based on previous quantitative parameter values and previous pixel data; a computed structure mask for the pixels; a structure integrity index;
a processor in data communication with memory,
the processor configured to: determine the quantitative parameter values for the pixels based on the raw pixel data and the comparative effect, wherein the quantitative parameter values are derived from raw pixel values; via execution of the neural network, determine the computed structure mask by assigning a subset of the pixels that represent portions of a selected biological structure identical mask values within the computed structure mask;
Claim 1: obtaining specific quantitative image data captured via a quantitative imaging technique of a sample (wherein a sample is a biostructure),
determining a specific context mask for the specific quantitative image data by
comparing the specific quantitative image data to previous quantitative image data for a previous sample via application of the specific quantitative image data to an input of a neural network
trained using constructed context masks (wherein constructed context masks are constructed context spectrum data) generated based on the previous sample and the previous quantitative image data;
applying the specific context mask to the specific quantitative image data to determine a context value for the pixel, wherein the context value includes an expected dye concentration level at the pixel;
and based on the pixel and the quantitative parameter value, determining a quantitative characterization for the context value; and referencing the quantitative characterization against a structure integrity index to determine a condition of the sample.
Although US Patent No. US 12067712 B2, claim 1 teaches a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, based on previous QID and constructed context spectrum data associated with the previous QID; mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, US Patent No. US 12067712 B2, claim 1, as stated in the table above with respect to claim 1, fails to clearly disclose wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss.
However, GAO (US 20220036124 A1) explicitly teaches wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss (Fig. 5, Paragraph [0076] – GAO discloses in the formula (4), L.sub.focal is a focal loss function, which is used when the primary image segmentation model is trained. L.sub.dice is a generalized dice loss function used for training the primary image segmentation model.).
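For reference, a combined loss of the kind GAO's formula (4) describes is conventionally a weighted sum of a focal term and a generalized dice term; the weighting λ and the exact forms below are illustrative conventions, not quotations from GAO:

```latex
% Illustrative combined segmentation loss; \lambda is an assumed weighting.
% The focal term addresses class imbalance; the dice term measures region overlap.
\mathcal{L} = \lambda \, \mathcal{L}_{\mathrm{focal}} + (1 - \lambda) \, \mathcal{L}_{\mathrm{dice}},
\qquad
\mathcal{L}_{\mathrm{focal}} = -\alpha \, (1 - p_t)^{\gamma} \log(p_t),
\qquad
\mathcal{L}_{\mathrm{dice}} = 1 - \frac{2 \sum_i p_i \, g_i}{\sum_i p_i + \sum_i g_i}
```

Here \(p_t\) is the predicted probability of the true class, and \(p_i\), \(g_i\) are the predicted and ground-truth values at pixel \(i\).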
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1, of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, based on previous QID and constructed context spectrum data associated with the previous QID; mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, with the teachings of GAO (US 20220036124 A1) having wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss.
The combination yields US Patent No. US 12067712 B2, claim 1, wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Although US Patent No. US 12067712 B2, claim 18 teaches an apparatus, comprising: a memory storing instructions; and a processor in communication with the memory, wherein, when the processor executes the instructions, the processor is configured to cause the apparatus to perform the recited functions, US Patent No. US 12067712 B2, claim 18, as stated in the table above with respect to claim 8, fails to clearly disclose obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network; mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss.
However, BHARTI (US 20250095390 A1) explicitly teaches obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure (Fig. 2B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 [wherein 210 called new input array is quantitative imaging data] (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products));
determining a context spectrum selection from context spectrum including a range of selectable values (Fig. 2B, Paragraph [0075] – BHARTI discloses the deep neural network model 212 is capable of consistently and autonomously analyzing images, identifying features within images, performing high-throughput segmentation of given images, and correlating the images to identity, safety, physiological, biochemical, or molecular outcomes.) by:
applying the specific QID to an input layer of a context-spectrum neural network (Fig. 6, Paragraph [0095] – BHARTI discloses FIG. 6 illustrates an exemplary fully-connected deep neural network (DNN) 600 that can be implemented by the deep neural network model 212 in accordance with embodiments of the present disclosure. Paragraph [0098] – BHARTI further discloses the images to be analyzed 603 can be inputted into the nodes 602 of the input layer 604.),
mapping the context spectrum selection to the image to generate a context spectrum mask for the image (Fig. 6, Paragraph [0102] – BHARTI discloses the fully connected layer processes the output of the previous layer (which represents the activation maps of high level features) and determines which features most correlate to a particular class.);
and determining a condition of the biostructure based on the context spectrum mask (Fig. 6, Paragraph [0102] – BHARTI discloses a particular output feature from a previous convolution layer may indicate whether a specific feature in the image is indicative of an RPE cell, and such feature can be used to classify a target image as ‘RPE cell’ or ‘non-RPE cell’).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 18, of having an apparatus, comprising: a memory storing instructions; and a processor in communication with the memory, wherein, when the processor executes the instructions, the processor is configured to cause the apparatus to perform the recited functions, with the teachings of BHARTI (US 20250095390 A1) of having obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network; mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask.
The combination yields US Patent No. US 12067712 B2, claim 18, wherein the processor is configured to cause the apparatus to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network; mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
US Patent No. US 12067712 B2, claim 18, in view of BHARTI (US 20250095390 A1) fail to explicitly teach wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
However, GAO (US 20220036124 A1) explicitly teaches wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss (Fig. 5, Paragraph [0076] – GAO discloses in the formula (4), L.sub.focal is a focal loss function, which is used when the primary image segmentation model is trained. L.sub.dice is a generalized dice loss function used for training the primary image segmentation model.),
based on previous QID and constructed context spectrum data associated with the previous QID (Fig. 5, Paragraph [0085] – GAO discloses the target image segmentation model is able to input shape code into a potential space and make the shape predicted by a network accord with the prior knowledge [wherein prior knowledge is previous QID] by minimizing a distance between the shape predicted by the network and a ground truth shape [wherein ground truth shape is constructed context spectrum data].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 18, of having an apparatus, comprising: a memory storing instructions; and a processor in communication with the memory, wherein, when the processor executes the instructions, the processor is configured to cause the apparatus to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network; mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, with the teachings of GAO (US 20220036124 A1) having wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
The combination yields US Patent No. US 12067712 B2, claim 18's apparatus, wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Although US Patent No. US 12067712 B2, claim 1 teaches obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, based on previous QID and constructed context spectrum data associated with the previous QID; mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, US Patent No. US 12067712 B2, claim 1, as stated in the table above with respect to claim 15, fails to clearly disclose a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform the recited steps.
However, BHARTI (US 20250095390 A1) explicitly teaches a non-transitory computer readable storage medium storing computer readable instructions (Fig. 1, Paragraph [0058] – BHARTI discloses the information processing system 102 can further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 114 can be provided for reading from and writing to a non-removable or removable, non-volatile media such as one or more solid state disks and/or magnetic media (typically called a “hard drive”). A magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided.)
BHARTI (US 20250095390 A1) further teaches wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform (Fig. 1, Paragraph [0057] – BHARTI discloses the system memory 106, in one embodiment, includes a machine learning module 109 configured to perform one or more embodiments discussed below. It should be noted that even though FIG. 1 shows the machine learning module 109 residing in the main memory, the machine learning module 109 can reside within the processor 104, be a separate hardware component capable of and/or be distributed across a plurality of information processing systems and/or processors.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1, of having obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, based on previous QID and constructed context spectrum data associated with the previous QID; mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, with the teachings of BHARTI (US 20250095390 A1) having wherein a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform the recited steps.
The combination yields US Patent No. US 12067712 B2, claim 1, embodied as a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform the recited steps.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
US Patent No. US 12067712 B2 claim 1 in view of BHARTI (US 20250095390 A1) fail to explicitly teach wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss.
However, GAO (US 20220036124 A1) explicitly teaches wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss (Fig. 5, Paragraph [0076] – GAO discloses in the formula (4), L.sub.focal is a focal loss function, which is used when the primary image segmentation model is trained. L.sub.dice is a generalized dice loss function used for training the primary image segmentation model.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of BHARTI (US 20250095390 A1), of having a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, based on previous QID and constructed context spectrum data associated with the previous QID; mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, with the teachings of GAO having wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss.
The combination yields US Patent No. US 12067712 B2, claim 1, wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
The further limitations of dependent claims 2-7, 9-14, and 16-20 are addressed similarly, as indicated below:
Claims 2 and 6 are rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1).
Regarding claim 2, although US Patent No.: US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) teaches the method according to claim 1,
US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) fails to explicitly teach wherein: the previous QID are obtained corresponding to an image of a second biostructure; and the constructed context spectrum data comprises a ground truth condition of the second biostructure.
However, BHARTI explicitly teaches wherein: the previous QID are obtained corresponding to an image of a second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses the approach was to 1) train a DNN (DNN-Z) to segment cell borders in ZO-1 fluorescence images using corresponding images, where the cell borders had been drawn in by expert technicians, 2) collect QBAM images and fluorescent images of RPE that had been fluorescently stained for ZO-1, 3) use the DNN-Z to segment cell borders using ZO-1 fluorescence images and 4) use the ZO-1 segmentations to train a new DNN to segment cells in QBAM images (DNN-S));
and the constructed context spectrum data comprises a ground truth condition of the second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses a deep convolutional neural network was designed to segment RPE fluorescently labeled for a tight junction protein (ZO-1), which highlights the cell borders and enables accurate cell segmentation. The purpose of this was to have a highly accurate segmentation method to generate ground truth cell border labels for QBAM.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of GAO (US 20220036124 A1), of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of BHARTI (US 20250095390 A1) having wherein: the previous QID are obtained corresponding to an image of a second biostructure; and the constructed context spectrum data comprises a ground truth condition of the second biostructure.
The combination yields US Patent No. US 12067712 B2, claim 1's method, wherein: the previous QID are obtained corresponding to an image of a second biostructure; and the constructed context spectrum data comprises a ground truth condition of the second biostructure.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Regarding claim 6, US Patent No.: US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) teaches the method according to claim 1,
US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) fails to explicitly teach wherein: the context spectrum comprises a continuum or near continuum of selectable states.
However, BHARTI (US 20250095390 A1) explicitly teaches wherein: the context spectrum comprises a continuum or near continuum of selectable states (Fig. 7, Paragraph [0100] – BHARTI discloses CNN takes the image 702, and passes it through a series of convolutional, nonlinear, pooling (downsampling), and fully connected layers to get an output. The output [wherein output is selectable states] can be a single class or a probability of classes that best describes the image.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of GAO (US 20220036124 A1), of having a method comprising: determining a context spectrum selection from context spectrum including a range of selectable values, with the teachings of BHARTI (US 20250095390 A1) having wherein: the context spectrum comprises a continuum or near continuum of selectable states.
The combination yields US Patent No. US 12067712 B2, claim 1's method, wherein: the context spectrum comprises a continuum or near continuum of selectable states.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Claim 3 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of GAO (US 20220036124 A1).
Regarding claim 3, US Patent No.: US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) teaches the method according to claim 1,
US Patent No.: US 12067712 B2 claim 1 fails to explicitly teach wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
However, GAO (US 20220036124 A1) explicitly teaches wherein: the context-spectrum neural network comprises an EfficientNet Unet (Fig. 11, #110 called primary image segmentation model, Paragraph [0032] – GAO discloses the primary image segmentation model may be a modified 3D U-Net fully convolutional neural network that is based on an encoder-decoder architecture.)
comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet (Fig. 11, Paragraph [0034] – GAO discloses in order to improve an accuracy in segmentation performed by the primary image segmentation model, the image processing device may use a residue block including a convolution layer, an ReLU and a batch normalization layer as a backbone network of the primary image segmentation model.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of GAO (US 20220036124 A1) of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of GAO (US 20220036124 A1) having wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
Wherein having US Patent No. US 12067712 B2, claim 1’s method wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Claim 4 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1), and further in view of TSIORIS (US 20240254431 A1).
Regarding claim 4, US Patent No.: US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) teaches the method according to claim 1,
Although GAO explicitly teaches wherein the biostructure comprises at least one of the following: an organ (Fig. 11, Paragraph [0114] – GAO discloses the image processing device firstly performs feature extraction on an original CT image through a primary image segmentation model 110 to obtain a feature map and directly obtains a segmentation result of a large organ.),
US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) fail to explicitly teach wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part.
However, BHARTI (US 20250095390 A1) explicitly teaches wherein: the biostructure comprises at least one of the following: a cell (Fig. 2A-B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210),
a tissue (Figs. 2A-B, Paragraph [0072] – BHARTI discloses the input data 202 may include an input array of measurements representative of at least one physiological, molecular, cellular, and/or biochemical parameter of a plurality of primary cell types derived from human or any animal tissue.),
a cell part (Figs. 2A-B, Paragraph [0082] – BHARTI discloses based on an understanding of cell borders and visual parameters (i.e., shape, intensity and texture metrics) within the microscopic images, the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products). It should be noted that texture metrics may include a plurality of sub-cellular features [wherein sub-cellular features are a cell part].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of GAO (US 20220036124 A1) of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of BHARTI (US 20250095390 A1) having wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part.
Wherein having US Patent No. US 12067712 B2, claim 1’s method wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part, an organ.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
US 12067712 B2, claim 1, in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) fail to explicitly teach wherein the biostructure comprises at least one of the following: a HeLa cell.
However, TSIORIS (US 20240254431 A1) explicitly teaches wherein the biostructure comprises at least one of the following: a HeLa cell (Fig. 2, Paragraph [0165] – TSIORIS discloses a method of selecting a target cell, wherein the target cell may be a certain type of cell. In certain embodiments, the target cell is a T cell, a B cell, a plasma cell, antibody secreting cells (ASCs), an antigen presenting cell, a hybridoma, an immune cell, a stem cell, an induced pluripotent stem cell (IPSC), or an engineered cell. In certain embodiments, the engineered cell is… a HELA cell.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of GAO (US 20220036124 A1) of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of TSIORIS (US 20240254431 A1) having wherein: the biostructure comprises at least one of the following: a HeLa cell.
Wherein having US Patent No. US 12067712 B2, claim 1’s method wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part, an organ, or a HeLa cell.
The motivation behind the modification would have been to obtain a method of obtaining quantitative imaging data (QID) and determining a condition of a biostructure using a neural network, allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients.
Claims 5 and 7 are rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1), and further in view of MASAELI (US 20240153289 A1).
Regarding claim 5, US Patent No.: US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) teaches the method according to claim 1,
US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) fail to explicitly teach wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, health, or cell cycle.
However, BHARTI (US 20250095390 A1) explicitly teaches wherein: the condition of the biostructure comprises at least one of the following: health (Figs. 2A-B, Paragraph [0069] – BHARTI discloses FIGS. 2A and 2B illustrate a selected machine learning framework and a deep neural network framework that might be used individually or together for predicting functions, identity, disease state and health of cells and their derivatives.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of GAO (US 20220036124 A1) of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of BHARTI (US 20250095390 A1) having wherein: the condition of the biostructure comprises at least one of the following: health.
Wherein having US Patent No. US 12067712 B2, claim 1’s method wherein: the condition of the biostructure comprises at least one of the following: health.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
US 12067712 B2, claim 1, in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) fail to explicitly teach wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
However, MASAELI (US 20240153289 A1) explicitly teaches wherein: the condition of the biostructure comprises at least one of the following: viability (Fig. 1, Paragraph [0046] – MASAELI discloses examples of the feature of the cell(s) can comprise a size, shape, volume, electromagnetic radiation absorbance and/or transmittance (e.g., fluorescence intensity, luminescence intensity, etc.), or viability (e.g., when live cells are used).),
cell membrane integrity (Fig. 1, Paragraph [0053] – MASAELI discloses non-limiting examples of one or more morphological properties of a cell, as disclosed herein, that can be extracted from one or more images of the cell can include, but are not limited to (i) shape, curvature, size (e.g., diameter, length, width, circumference), area, volume, texture, thickness, roundness, etc. of the cell or one or more components of the cell (e.g., cell membrane, nucleus, mitochondria, etc.)),
or cell cycle (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. See also Paragraph [0048].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of GAO (US 20220036124 A1) of having a method comprising: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI (US 20240153289 A1) having wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
Wherein having US Patent No. US 12067712 B2, claim 1’s method wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, health, or cell cycle.
The motivation behind the modification would have been to obtain a method of obtaining quantitative imaging data (QID) and determining a condition of a biostructure using a neural network, allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients.
Regarding claim 7, US Patent No.: US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) teaches the method according to claim 1,
US 12067712 B2 claim 1 in view of GAO (US 20220036124 A1) fail to explicitly teach wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state;
However, BHARTI (US 20250095390 A1) explicitly teaches wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state (Figs. 2A-B, Paragraph [0068] – BHARTI discloses the disclosed approaches may be used to tell the difference between: 1) cells of different sub-types, thus allowing the possibility of making or optimizing the generation of specific cells and tissue types using stem cells; 2) healthy and diseased cells, allowing the possibility of discovering drugs or underlying mechanisms behind a disease that can improve the health of diseased cells).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of GAO (US 20220036124 A1) of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of BHARTI (US 20250095390 A1) having wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state.
Wherein having US Patent No. US 12067712 B2, claim 1’s method wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
US 12067712 B2, claim 1, in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) fail to explicitly teach or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
However, MASAELI (US 20240153289 A1) explicitly teaches or the condition of the biostructure (Fig. 1, Paragraph [0041] – MASAELI discloses the classifier can be configured to classify (e.g., automatically classify) a cellular image sample [wherein cellular image sample is the biostructure] based on its proximity, correlation, or commonality with one or more of the morphologically-distinct clusters.)
comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase) (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. Paragraph [0048] – MASAELI further discloses “cell cycle” as used herein generally refers to the physiological and/or morphological progression of changes that cells undergo when dividing (e.g., proliferating). Examples of different phases of the cell cycle can include “interphase,” “prophase,” “metaphase,” “anaphase,” and “telophase”. Additionally, parts of the cell cycle can be “M (mitosis),” “S (synthesis),” “G0,” “G1 (gap 1)” and “G2 (gap 2)”. Furthermore, the cell cycle can include periods of progression that are intermediate to the above named phases.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of GAO (US 20220036124 A1) of having a method comprising: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI (US 20240153289 A1) having wherein: the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
Wherein having US Patent No. US 12067712 B2, claim 1’s method wherein: the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
The motivation behind the modification would have been to obtain a method of obtaining quantitative imaging data (QID) and determining a condition of a biostructure using a neural network, allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients.
Claims 9 and 13 are rejected on the ground of non-statutory double patenting as being unpatentable over claim 18 of US Patent No.: US 12067712 B2 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1).
Regarding claim 9, US Patent No.: US 12067712 B2 claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) teaches the apparatus according to claim 8,
BHARTI (US 20250095390 A1) further teaches wherein: the previous QID are obtained corresponding to an image of a second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses the approach was to 1) train a DNN (DNN-Z) to segment cell borders in ZO-1 fluorescence images using corresponding images, where the cell borders had been drawn in by expert technicians, 2) collect QBAM images and fluorescent images of RPE that had been fluorescently stained for ZO-1, 3) use the DNN-Z to segment cell borders using ZO-1 fluorescence images and 4) use the ZO-1 segmentations to train a new DNN to segment cells in QBAM images (DNN-S));
and the constructed context spectrum data comprises a ground truth condition of the second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses a deep convolutional neural network was designed to segment RPE fluorescently labeled for a tight junction protein (ZO-1), which highlights the cell borders and enables accurate cell segmentation. The purpose of this was to have a highly accurate segmentation method to generate ground truth cell border labels for QBAM.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) having an apparatus, comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of BHARTI (US 20250095390 A1) having wherein: the previous QID are obtained corresponding to an image of a second biostructure; and the constructed context spectrum data comprises a ground truth condition of the second biostructure.
Wherein having US Patent No. US 12067712 B2, claim 18’s apparatus wherein: the previous QID are obtained corresponding to an image of a second biostructure; and the constructed context spectrum data comprises a ground truth condition of the second biostructure.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Regarding claim 13, US Patent No.: US 12067712 B2 claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) teaches the apparatus according to claim 8,
BHARTI (US 20250095390 A1) further teaches wherein: the context spectrum comprises a continuum or near continuum of selectable states (Fig. 7, Paragraph [0100] – BHARTI discloses CNN takes the image 702, and passes it through a series of convolutional, nonlinear, pooling (downsampling), and fully connected layers to get an output. The output [wherein output is selectable states] can be a single class or a probability of classes that best describes the image.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) having an apparatus, comprising: determining a context spectrum selection from a context spectrum including a range of selectable values, with the teachings of BHARTI (US 20250095390 A1) having wherein: the context spectrum comprises a continuum or near continuum of selectable states.
Wherein having US Patent No. US 12067712 B2, claim 18’s apparatus wherein: the context spectrum comprises a continuum or near continuum of selectable states.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Claim 10 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 18 of US Patent No.: US 12067712 B2 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1).
Regarding claim 10, US Patent No.: US 12067712 B2 claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) teaches the apparatus according to claim 8,
GAO (US 20220036124 A1) further teaches wherein: the context-spectrum neural network comprises an EfficientNet Unet (Fig. 11, #110 called primary image segmentation model, Paragraph [0032] – GAO discloses the primary image segmentation model may be a modified 3D U-Net fully convolutional neural network that is based on an encoder-decoder architecture.)
comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet (Fig. 11, Paragraph [0034] – GAO discloses in order to improve an accuracy in segmentation performed by the primary image segmentation model, the image processing device may use a residue block including a convolution layer, an ReLU and a batch normalization layer as a backbone network of the primary image segmentation model.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) having an apparatus, comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of GAO (US 20220036124 A1) having wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
Wherein having US Patent No. US 12067712 B2, claim 18’s apparatus wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Claim 11 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 18 of US Patent No.: US 12067712 B2 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1), and further in view of TSIORIS (US 20240254431 A1).
Regarding claim 11, US Patent No.: US 12067712 B2 claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) teaches the apparatus according to claim 8,
GAO (US 20220036124 A1) further teaches wherein the biostructure comprises at least one of the following: an organ (Fig. 11, Paragraph [0114] – GAO discloses the image processing device firstly performs feature extraction on an original CT image through a primary image segmentation model 110 to obtain a feature map and directly obtains a segmentation result of a large organ.).
US 12067712 B2 claim 18 in view of GAO (US 20220036124 A1) fail to explicitly teach wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part.
However, BHARTI (US 20250095390 A1) explicitly teaches wherein: the biostructure comprises at least one of the following: a cell (Fig. 2A-B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210),
a tissue (Figs. 2A-B, Paragraph [0072] – BHARTI discloses the input data 202 may include an input array of measurements representative of at least one physiological, molecular, cellular, and/or biochemical parameter of a plurality of primary cell types derived from human or any animal tissue.),
a cell part (Figs. 2A-B, Paragraph [0082] – BHARTI discloses based on an understanding of cell borders and visual parameters (i.e., shape, intensity and texture metrics) within the microscopic images, the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products). It should be noted that texture metrics may include a plurality of sub-cellular features [wherein sub-cellular features are a cell part].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) having an apparatus, comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of BHARTI (US 20250095390 A1) having wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part.
Wherein having US Patent No. US 12067712 B2, claim 18’s apparatus wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part, an organ.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
US Patent No. US 12067712 B2, claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) fail to explicitly teach wherein: the biostructure comprises at least one of the following: a HeLa cell.
However, TSIORIS (US 20240254431 A1) explicitly teaches wherein the biostructure comprises at least one of the following: a HeLa cell (Fig. 2, Paragraph [0165] – TSIORIS discloses a method of selecting a target cell, wherein the target cell may be a certain type of cell. In certain embodiments, the target cell is a T cell, a B cell, a plasma cell, antibody secreting cells (ASCs), an antigen presenting cell, a hybridoma, an immune cell, a stem cell, an induced pluripotent stem cell (IPSC), or an engineered cell. In certain embodiments, the engineered cell is… a HELA cell.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) having an apparatus, comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of TSIORIS (US 20240254431 A1) having wherein: the biostructure comprises at least one of the following: a HeLa cell.
Wherein having US Patent No. US 12067712 B2, claim 18’s apparatus wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part, an organ, or a HeLa cell.
The motivation behind the modification would have been to obtain a method of obtaining quantitative imaging data (QID) and determining a condition of a biostructure using a neural network, allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients.
Claims 12 and 14 are rejected on the ground of non-statutory double patenting as being unpatentable over claim 18 of US Patent No.: US 12067712 B2 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1), and further in view of MASAELI (US 20240153289 A1).
Regarding claim 12, US Patent No.: US 12067712 B2 claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) teaches the apparatus according to claim 8,
Although BHARTI (US 20250095390 A1) further teaches wherein: the condition of the biostructure comprises at least one of the following: health (Figs. 2A-B, Paragraph [0069] – BHARTI discloses FIGS. 2A and 2B illustrate a selected machine learning framework and a deep neural network framework that might be used individually or together for predicting functions, identity, disease state and health of cells and their derivatives.),
US Patent No.: US 12067712 B2 claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) fail to explicitly teach wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
However, MASAELI (US 20240153289 A1) explicitly teaches wherein: the condition of the biostructure comprises at least one of the following: viability (Fig. 1, Paragraph [0046] – MASAELI discloses examples of the feature of the cell(s) can comprise a size, shape, volume, electromagnetic radiation absorbance and/or transmittance (e.g., fluorescence intensity, luminescence intensity, etc.), or viability (e.g., when live cells are used).),
cell membrane integrity (Fig. 1, Paragraph [0053] – MASAELI discloses non-limiting examples of one or more morphological properties of a cell, as disclosed herein, that can be extracted from one or more images of the cell can include, but are not limited to (i) shape, curvature, size (e.g., diameter, length, width, circumference), area, volume, texture, thickness, roundness, etc. of the cell or one or more components of the cell (e.g., cell membrane, nucleus, mitochondria, etc.)),
or cell cycle (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. See also Paragraph [0048].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) having an apparatus, comprising: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI (US 20240153289 A1) having wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
Wherein having US Patent No. US 12067712 B2, claim 18’s apparatus wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, health, or cell cycle.
The motivation behind the modification would have been to obtain a method of determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients.
Regarding claim 14, US Patent No.: US 12067712 B2 claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) teaches the apparatus according to claim 8,
Although BHARTI (US 20250095390 A1) further teaches wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state (Figs. 2A-B, Paragraph [0068] – BHARTI discloses the disclosed approaches may be used to tell the difference between: 1) cells of different sub-types, thus allowing the possibility of making or optimizing the generation of specific cells and tissue types using stem cells; 2) healthy and diseased cells, allowing the possibility of discovering drugs or underlying mechanisms behind a disease that can improve the health of diseased cells),
US Patent No.: US 12067712 B2 claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) fails to explicitly teach or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
However, MASAELI (US 20240153289 A1) explicitly teaches or the condition of the biostructure (Fig. 1, Paragraph [0041] – MASAELI discloses the classifier can be configured to classify (e.g., automatically classify) a cellular image sample [wherein cellular image sample is the biostructure] based on its proximity, correlation, or commonality with one or more of the morphologically-distinct clusters.)
comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase) (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. Paragraph [0048] – MASAELI further discloses “cell cycle” as used herein generally refers to the physiological and/or morphological progression of changes that cells undergo when dividing (e.g., proliferating). Examples of different phases of the cell cycle can include “interphase,” “prophase,” “metaphase,” “anaphase,” and “telophase”. Additionally, parts of the cell cycle can be “M (mitosis),” “S (synthesis),” “G0,” “G1 (gap 1)” and “G2 (gap 2)”. Furthermore, the cell cycle can include periods of progression that are intermediate to the above named phases.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 18 in view of GAO (US 20220036124 A1), further in view of BHARTI (US 20250095390 A1) having an apparatus, comprising: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI (US 20240153289 A1) having wherein: the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
Wherein having US Patent No. US 12067712 B2, claim 18’s apparatus wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state; or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
The motivation behind the modification would have been to obtain a method of determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients.
Claim 16 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1).
Regarding claim 16, US Patent No.: US 12067712 B2 claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) teaches the non-transitory computer readable storage medium according to claim 15,
BHARTI (US 20250095390 A1) further teaches wherein: the previous QID are obtained corresponding to an image of a second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses the approach was to 1) train a DNN (DNN-Z) to segment cell borders in ZO-1 fluorescence images using corresponding images, where the cell borders had been drawn in by expert technicians, 2) collect QBAM images and fluorescent images of RPE that had been fluorescently stained for ZO-1, 3) use the DNN-Z to segment cell borders using ZO-1 fluorescence images and 4) use the ZO-1 segmentations to train a new DNN to segment cells in QBAM images (DNN-S));
and the constructed context spectrum data comprises a ground truth condition of the second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses a deep convolutional neural network was designed to segment RPE fluorescently labeled for a tight junction protein (ZO-1), which highlights the cell borders and enables accurate cell segmentation. The purpose of this was to have a highly accurate segmentation method to generate ground truth cell border labels for QBAM.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) of having a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of BHARTI (US 20250095390 A1) having wherein: the previous QID are obtained corresponding to an image of a second biostructure; and the constructed context spectrum data comprises a ground truth condition of the second biostructure.
Wherein having US Patent No. US 12067712 B2, claim 1’s non-transitory computer readable storage medium storing computer readable instructions wherein: the previous QID are obtained corresponding to an image of a second biostructure; and the constructed context spectrum data comprises a ground truth condition of the second biostructure.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Claim 17 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1).
Regarding claim 17, US Patent No.: US 12067712 B2 claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) teaches the non-transitory computer readable storage medium according to claim 15,
GAO (US 20220036124 A1) further teaches wherein: the context-spectrum neural network comprises an EfficientNet Unet (Fig. 11, #110 called primary image segmentation model, Paragraph [0032] – GAO discloses the primary image segmentation model may be a modified 3D U-Net fully convolutional neural network that is based on an encoder-decoder architecture.)
comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet (Fig. 11, Paragraph [0034] – GAO discloses in order to improve an accuracy in segmentation performed by the primary image segmentation model, the image processing device may use a residue block including a convolution layer, an ReLU and a batch normalization layer as a backbone network of the primary image segmentation model.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) of having a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of GAO (US 20220036124 A1) having wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
Wherein having US Patent No. US 12067712 B2, claim 1’s non-transitory computer readable storage medium storing computer readable instructions wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
Claim 18 is rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1), and further in view of TSIORIS (US 20240254431 A1).
Regarding claim 18, US Patent No.: US 12067712 B2 claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) teaches the non-transitory computer readable storage medium according to claim 15,
BHARTI (US 20250095390 A1) further teaches wherein: the biostructure comprises at least one of the following: a cell (Fig. 2A-B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210),
a tissue (Figs. 2A-B, Paragraph [0072] – BHARTI discloses the input data 202 may include an input array of measurements representative of at least one physiological, molecular, cellular, and/or biochemical parameter of a plurality of primary cell types derived from human or any animal tissue.),
a cell part (Figs. 2A-B, Paragraph [0082] – BHARTI discloses based on an understanding of cell borders and visual parameters (i.e., shape, intensity and texture metrics) within the microscopic images, the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products). It should be noted that texture metrics may include a plurality of sub-cellular features [wherein sub-cellular features are a cell part].).
US Patent No.: US 12067712 B2 claim 1 in view of BHARTI (US 20250095390 A1) fails to explicitly teach wherein: the biostructure comprises at least one of the following: an organ.
However, GAO (US 20220036124 A1) explicitly teaches wherein: the biostructure comprises at least one of the following: an organ (Fig. 11, Paragraph [0114] – GAO discloses the image processing device firstly performs feature extraction on an original CT image through a primary image segmentation model 110 to obtain a feature map and directly obtains a segmentation result of a large organ.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) of having a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of GAO (US 20220036124 A1) having wherein: the biostructure comprises at least one of the following: an organ.
Wherein having US Patent No. US 12067712 B2, claim 1’s non-transitory computer readable storage medium storing computer readable instructions wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part, an organ.
The motivation behind the modification would have been to obtain a more precise and accurate method of obtaining quantitative imaging data to determine a condition of a biostructure using a neural network.
US Patent No. US 12067712 B2, claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) fails to explicitly teach wherein: the biostructure comprises at least one of the following: a HeLa cell.
However, TSIORIS (US 20240254431 A1) explicitly teaches wherein: the biostructure comprises at least one of the following: a HeLa cell (Fig. 2, Paragraph [0165] – TSIORIS discloses a method of selecting a target cell, wherein the target cell may be a certain type of cell. In certain embodiments, the target cell is a T cell, a B cell, a plasma cell, antibody secreting cells (ASCs), an antigen presenting cell, a hybridoma, an immune cell, a stem cell, an induced pluripotent stem cell (IPSC), or an engineered cell. In certain embodiments, the engineered cell is… a HELA cell.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) of having a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of TSIORIS (US 20240254431 A1) having wherein: the biostructure comprises at least one of the following: a HeLa cell.
Wherein having US Patent No. US 12067712 B2, claim 1’s non-transitory computer readable storage medium storing computer readable instructions wherein: the biostructure comprises at least one of the following: a cell, a tissue, a cell part, an organ, or a HeLa cell.
The motivation behind the modification would have been to obtain a method of determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients.
Claims 19 and 20 are rejected on the ground of non-statutory double patenting as being unpatentable over claim 1 of US Patent No.: US 12067712 B2 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1), and further in view of MASAELI (US 20240153289 A1).
Regarding claim 19, US Patent No.: US 12067712 B2 claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) teaches the non-transitory computer readable storage medium according to claim 15,
Although BHARTI (US 20250095390 A1) further teaches wherein: the condition of the biostructure comprises at least one of the following: health (Figs. 2A-B, Paragraph [0069] – BHARTI discloses FIGS. 2A and 2B illustrate a selected machine learning framework and a deep neural network framework that might be used individually or together for predicting functions, identity, disease state and health of cells and their derivatives.),
US Patent No.: US 12067712 B2 claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) fails to explicitly teach wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
However, MASAELI (US 20240153289 A1) explicitly teaches wherein: the condition of the biostructure comprises at least one of the following: viability (Fig. 1, Paragraph [0046] – MASAELI discloses examples of the feature of the cell(s) can comprise a size, shape, volume, electromagnetic radiation absorbance and/or transmittance (e.g., fluorescence intensity, luminescence intensity, etc.), or viability (e.g., when live cells are used).),
cell membrane integrity (Fig. 1, Paragraph [0053] – MASAELI discloses non-limiting examples of one or more morphological properties of a cell, as disclosed herein, that can be extracted from one or more images of the cell can include, but are not limited to (i) shape, curvature, size (e.g., diameter, length, width, circumference), area, volume, texture, thickness, roundness, etc. of the cell or one or more components of the cell (e.g., cell membrane, nucleus, mitochondria, etc.)),
or cell cycle (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. See also Paragraph [0048].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) of having a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI (US 20240153289 A1) having wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
Wherein having US Patent No. US 12067712 B2, claim 1’s non-transitory computer readable storage medium storing computer readable instructions wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, health, or cell cycle.
The motivation behind the modification would have been to obtain a method of determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients.
Regarding claim 20, US Patent No.: US 12067712 B2 claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) teaches the non-transitory computer readable storage medium according to claim 15,
Although BHARTI (US 20250095390 A1) further teaches wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state (Figs. 2A-B, Paragraph [0068] – BHARTI discloses the disclosed approaches may be used to tell the difference between: 1) cells of different sub-types, thus allowing the possibility of making or optimizing the generation of specific cells and tissue types using stem cells; 2) healthy and diseased cells, allowing the possibility of discovering drugs or underlying mechanisms behind a disease that can improve the health of diseased cells),
US Patent No.: US 12067712 B2 claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) fails to explicitly teach or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
However, MASAELI explicitly teaches or the condition of the biostructure (Fig. 1, Paragraph [0041] – MASAELI discloses the classifier can be configured to classify (e.g., automatically classify) a cellular image sample [wherein cellular image sample is the biostructure] based on its proximity, correlation, or commonality with one or more of the morphologically-distinct clusters.)
comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase) (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. Paragraph [0048] – MASAELI further discloses “cell cycle” as used herein generally refers to the physiological and/or morphological progression of changes that cells undergo when dividing (e.g., proliferating). Examples of different phases of the cell cycle can include “interphase,” “prophase,” “metaphase,” “anaphase,” and “telophase”. Additionally, parts of the cell cycle can be “M (mitosis),” “S (synthesis),” “G0,” “G1 (gap 1)” and “G2 (gap 2)”. Furthermore, the cell cycle can include periods of progression that are intermediate to the above named phases.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of US Patent No. US 12067712 B2, claim 1 in view of BHARTI (US 20250095390 A1), further in view of GAO (US 20220036124 A1) of having a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI (US 20240153289 A1) having wherein: the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
Wherein having US Patent No. US 12067712 B2, claim 1’s non-transitory computer readable storage medium storing computer readable instructions wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state; or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
The motivation behind the modification would have been to obtain a method of determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6, 8-10, 13, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over BHARTI (US 20250095390 A1), hereinafter referenced as BHARTI, in view of GAO (US 20220036124 A1), hereinafter referenced as GAO.
Regarding claim 1, BHARTI teaches a method (Figs. 2A-B, Fig. 6, Paragraph [0079] – BHARTI discloses the method disclosed below should be considered as one non-limiting example of a method employing deep, convolutional neural networks for cell functionality characterization) comprising:
obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure (Fig. 2B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 [wherein 210 called new input array is quantitative imaging data] (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products));
determining a context spectrum selection from context spectrum including a range of selectable values (Fig. 2B, Paragraph [0075] – BHARTI discloses the deep neural network model 212 is capable of consistently and autonomously analyzing images, identifying features within images, performing high-throughput segmentation of given images, and correlating the images to identity, safety, physiological, biochemical, or molecular outcomes.) by:
applying the specific QID to an input layer of a context-spectrum neural network (Fig. 6, Paragraph [0095] – BHARTI discloses FIG. 6 illustrates an exemplary fully-connected deep neural network (DNN) 600 that can be implemented by the deep neural network model 212 in accordance with embodiments of the present disclosure. Paragraph [0098] – BHARTI further discloses the images to be analyzed 603 can be inputted into the nodes 602 of the input layer 604.),
BHARTI further teaches mapping the context spectrum selection to the image to generate a context spectrum mask for the image (Fig. 6, Paragraph [0102] – BHARTI discloses the fully connected layer processes the output of the previous layer (which represents the activation maps of high level features) and determines which features most correlate to a particular class.);
and determining a condition of the biostructure based on the context spectrum mask (Fig. 6, Paragraph [0102] – BHARTI discloses a particular output feature from a previous convolution layer may indicate whether a specific feature in the image is indicative of an RPE cell, and such feature can be used to classify a target image as ‘RPE cell’ or ‘non-RPE cell’).
BHARTI fails to explicitly teach wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
However, GAO explicitly teaches wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss (Fig. 5, Paragraph [0076] – GAO discloses in the formula (4), L.sub.focal is a focal loss function, which is used when the primary image segmentation model is trained. L.sub.dice is a generalized dice loss function used for training the primary image segmentation model.),
based on previous QID and constructed context spectrum data associated with the previous QID (Fig. 5, Paragraph [0085] – GAO discloses the target image segmentation model is able to input shape code into a potential space and make the shape predicted by a network accord with the prior knowledge [wherein prior knowledge is previous QID] by minimizing a distance between the shape predicted by the network and a ground truth shape [wherein ground truth shape is constructed context spectrum data].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, with the teachings of GAO of having wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
Wherein having BHARTI’s method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
The motivation behind the modification would have been to obtain a method of determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients. Both BHARTI and GAO relate to image processing methods utilizing neural networks. BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error. GAO discloses an image processing method and device that is able to improve an accuracy of image segmentation, such that the image processing device may locate a smaller segmentation target first and finely segment the parts cropped out from the feature map, thereby solving the problem that the samples of the small segmentation targets are imbalanced and making the image segmentation easier and more accurate. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and GAO (US 20220036124 A1), Paragraph [0105].
Regarding claim 2, BHARTI in view of GAO teach the method according to claim 1,
BHARTI further teaches wherein: the previous QID are obtained corresponding to an image of a second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses the approach was to 1) train a DNN (DNN-Z) to segment cell borders in ZO-1 fluorescence images using corresponding images, where the cell borders had been drawn in by expert technicians, 2) collect QBAM images and fluorescent images of RPE that had been fluorescently stained for ZO-1, 3) use the DNN-Z to segment cell borders using ZO-1 fluorescence images and 4) use the ZO-1 segmentations to train a new DNN to segment cells in QBAM images (DNN-S));
and the constructed context spectrum data comprises a ground truth condition of the second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses a deep convolutional neural network was designed to segment RPE fluorescently labeled for a tight junction protein (ZO-1), which highlights the cell borders and enables accurate cell segmentation. The purpose of this was to have a highly accurate segmentation method to generate ground truth cell border labels for QBAM.).
Regarding claim 3, BHARTI in view of GAO teach the method according to claim 1,
Although BHARTI further teaches the context-spectrum neural network (Fig. 6, #600 called deep neural network, Paragraph [0095]),
BHARTI fails to explicitly teach wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
However, GAO explicitly teaches wherein: the context-spectrum neural network comprises an EfficientNet Unet (Fig. 11, #110 called primary image segmentation model, Paragraph [0032] – GAO discloses the primary image segmentation model may be a modified 3D U-Net fully convolutional neural network that is based on an encoder-decoder architecture.)
comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet (Fig. 11, Paragraph [0034] – GAO discloses in order to improve an accuracy in segmentation performed by the primary image segmentation model, the image processing device may use a residue block including a convolution layer, an ReLU and a batch normalization layer as a backbone network of the primary image segmentation model.).
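As a non-limiting technical illustration of the claimed "first layers for adapting a vector size to operational size" (this sketch is the editor's, not drawn from BHARTI or GAO; all names and values are hypothetical), such an adapter is commonly realized as a pointwise (1x1) convolution that projects a feature map's channel dimension to the width the next layer of the network expects:

```python
import numpy as np

def pointwise_adapter(feature_map, weights):
    """1x1 convolution: a per-pixel linear map over the channel axis.

    feature_map: (H, W, C_in) array
    weights:     (C_in, C_out) array
    returns:     (H, W, C_out) array whose channel count matches the
                 operational size expected by the next layer.
    """
    h, w, c_in = feature_map.shape
    out = feature_map.reshape(h * w, c_in) @ weights
    return out.reshape(h, w, weights.shape[1])

# Adapt a 5-channel encoder output to the 8 channels a decoder block expects.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 5))
w = rng.standard_normal((5, 8))
y = pointwise_adapter(x, w)
print(y.shape)  # (4, 4, 8)
```

Because the projection is applied independently at each spatial location, it changes only the vector (channel) size, not the spatial layout of the feature map.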
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI of having a method comprising: determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, with the teachings of GAO of having wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
The combination would result in BHARTI's method wherein the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
The motivation behind the modification would have been to obtain a method of determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients. Both BHARTI and GAO relate to image processing methods utilizing neural networks: BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and GAO discloses an image processing method and device that improves the accuracy of image segmentation, such that the image processing device may locate a smaller segmentation target first and finely segment the parts cropped out from the feature map, thereby solving the problem that the samples of the small segmentation targets are imbalanced and making the image segmentation easier and more accurate. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and GAO (US 20220036124 A1), Paragraph [0105].
Regarding claim 6, BHARTI in view of GAO teach the method according to claim 1, BHARTI further teaches wherein: the context spectrum comprises a continuum or near continuum of selectable states (Fig. 7, Paragraph [0100] – BHARTI discloses CNN takes the image 702, and passes it through a series of convolutional, nonlinear, pooling (downsampling), and fully connected layers to get an output. The output [wherein output is selectable states] can be a single class or a probability of classes that best describes the image.).
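For context on the cited "single class or a probability of classes" output (an editor's illustrative sketch only, not taken from BHARTI; the class count and scores are hypothetical), a classification head typically produces such a probability distribution via a softmax over raw class scores:

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores into a probability distribution over classes."""
    z = logits - np.max(logits)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical 3-class output head: the probabilities form a near-continuum
# of selectable states, while argmax yields a single hard class selection.
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs.round(3))        # [0.659 0.242 0.099]
print(int(np.argmax(probs))) # 0
```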
Regarding claim 8, BHARTI teaches an apparatus (Fig. 1, #102 called information processing system, Paragraph [0054] – BHARTI discloses information processing system 102 of FIG. 1 is capable of implementing and/or performing any of the functionality set forth.), comprising:
a memory storing instructions; and a processor in communication with the memory (Fig. 1, Paragraph [0055] – BHARTI discloses the components of the information processing system 102 can include, but are not limited to, one or more processors or processing units 104, a system memory 106, and a bus 108 that couples various system components including the system memory 106 to the processor 104.),
wherein, when the processor executes the instructions, the processor is configured to cause the apparatus to perform (Fig. 1, Paragraph [0057] – BHARTI discloses the system memory 106, in one embodiment, includes a machine learning module 109 configured to perform one or more embodiments discussed below. It should be noted that even though FIG. 1 shows the machine learning module 109 residing in the main memory, the machine learning module 109 can reside within the processor 104, be a separate hardware component capable of and/or be distributed across a plurality of information processing systems and/or processors.):
obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure (Fig. 2B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 [wherein 210 called new input array is quantitative imaging data] (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products));
determining a context spectrum selection from context spectrum including a range of selectable values (Fig. 2B, Paragraph [0075] – BHARTI discloses the deep neural network model 212 is capable of consistently and autonomously analyzing images, identifying features within images, performing high-throughput segmentation of given images, and correlating the images to identity, safety, physiological, biochemical, or molecular outcomes.) by:
applying the specific QID to an input layer of a context-spectrum neural network (Fig. 6, Paragraph [0095] – BHARTI discloses FIG. 6 illustrates an exemplary fully-connected deep neural network (DNN) 600 that can be implemented by the deep neural network model 212 in accordance with embodiments of the present disclosure. Paragraph [0098] – BHARTI further discloses the images to be analyzed 603 can be inputted into the nodes 602 of the input layer 604.),
Although BHARTI further teaches mapping the context spectrum selection to the image to generate a context spectrum mask for the image (Fig. 6, Paragraph [0102] – BHARTI discloses the fully connected layer processes the output of the previous layer (which represents the activation maps of high level features) and determines which features most correlate to a particular class.);
and determining a condition of the biostructure based on the context spectrum mask (Fig. 6, Paragraph [0102] – BHARTI discloses a particular output feature from a previous convolution layer may indicate whether a specific feature in the image is indicative of an RPE cell, and such feature can be used to classify a target image as ‘RPE cell’ or ‘non-RPE cell’).
BHARTI fails to explicitly teach wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
However, GAO explicitly teaches wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss (Fig. 5, Paragraph [0076] – GAO discloses in the formula (4), L.sub.focal is a focal loss function, which is used when the primary image segmentation model is trained. L.sub.dice is a generalized dice loss function used for training the primary image segmentation model.),
based on previous QID and constructed context spectrum data associated with the previous QID (Fig. 5, Paragraph [0085] – GAO discloses the target image segmentation model is able to input shape code into a potential space and make the shape predicted by a network accord with the prior knowledge [wherein prior knowledge is previous QID] by minimizing a distance between the shape predicted by the network and a ground truth shape [wherein ground truth shape is constructed context spectrum data].).
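To illustrate the cited combination of focal loss and dice loss (an editor's generic sketch; the specific weighting and formulation of GAO's formula (4) are not reproduced here, and the mask values below are hypothetical):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss: the (1 - p_t)**gamma factor down-weights easy examples."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)  # probability assigned to the true label
    return float(np.mean(-((1 - p_t) ** gamma) * np.log(p_t)))

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 - 2|P∩Y| / (|P| + |Y|), robust to class imbalance."""
    inter = np.sum(p * y)
    return float(1 - (2 * inter + eps) / (np.sum(p) + np.sum(y) + eps))

def combined_loss(p, y, w_focal=1.0, w_dice=1.0):
    """Weighted sum of the two losses (the weights here are assumptions)."""
    return w_focal * focal_loss(p, y) + w_dice * dice_loss(p, y)

# Hypothetical predicted mask probabilities vs. ground-truth binary mask.
p = np.array([0.9, 0.8, 0.2, 0.1])
y = np.array([1.0, 1.0, 0.0, 0.0])
print(round(combined_loss(p, y), 3))  # 0.155
```

The focal term concentrates training on hard pixels, while the dice term directly optimizes region overlap, which is why the two are commonly summed when segmentation targets are small or imbalanced.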
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI of having an apparatus, comprising: a memory storing instructions; and a processor in communication with the memory, wherein, when the processor executes the instructions, the processor is configured to cause the apparatus to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, with the teachings of GAO of having wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
The combination would result in BHARTI's apparatus performing: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
The motivation behind the modification would have been to obtain an apparatus for determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients. Both BHARTI and GAO relate to image processing methods utilizing neural networks: BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and GAO discloses an image processing method and device that improves the accuracy of image segmentation, such that the image processing device may locate a smaller segmentation target first and finely segment the parts cropped out from the feature map, thereby solving the problem that the samples of the small segmentation targets are imbalanced and making the image segmentation easier and more accurate. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and GAO (US 20220036124 A1), Paragraph [0105].
Regarding claim 9, BHARTI in view of GAO teach the apparatus according to claim 8,
BHARTI further teaches wherein: the previous QID are obtained corresponding to an image of a second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses the approach was to 1) train a DNN (DNN-Z) to segment cell borders in ZO-1 fluorescence images using corresponding images, where the cell borders had been drawn in by expert technicians, 2) collect QBAM images and fluorescent images of RPE that had been fluorescently stained for ZO-1, 3) use the DNN-Z to segment cell borders using ZO-1 fluorescence images and 4) use the ZO-1 segmentations to train a new DNN to segment cells in QBAM images (DNN-S));
and the constructed context spectrum data comprises a ground truth condition of the second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses a deep convolutional neural network was designed to segment RPE fluorescently labeled for a tight junction protein (ZO-1), which highlights the cell borders and enables accurate cell segmentation. The purpose of this was to have a highly accurate segmentation method to generate ground truth cell border labels for QBAM.).
Regarding claim 10, BHARTI in view of GAO teach the apparatus according to claim 8,
Although BHARTI further teaches the context-spectrum neural network (Fig. 6, #600 called deep neural network, Paragraph [0095]),
BHARTI fails to explicitly teach wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
However, GAO explicitly teaches wherein: the context-spectrum neural network comprises an EfficientNet Unet (Fig. 11, #110 called primary image segmentation model, Paragraph [0032] – GAO discloses the primary image segmentation model may be a modified 3D U-Net fully convolutional neural network that is based on an encoder-decoder architecture.)
comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet (Fig. 11, Paragraph [0034] – GAO discloses in order to improve an accuracy in segmentation performed by the primary image segmentation model, the image processing device may use a residue block including a convolution layer, an ReLU and a batch normalization layer as a backbone network of the primary image segmentation model.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI of having an apparatus, comprising: determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, with the teachings of GAO of having wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
The combination would result in BHARTI's apparatus wherein the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
The motivation behind the modification would have been to obtain an apparatus for determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients. Both BHARTI and GAO relate to image processing methods utilizing neural networks: BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and GAO discloses an image processing method and device that improves the accuracy of image segmentation, such that the image processing device may locate a smaller segmentation target first and finely segment the parts cropped out from the feature map, thereby solving the problem that the samples of the small segmentation targets are imbalanced and making the image segmentation easier and more accurate. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and GAO (US 20220036124 A1), Paragraph [0105].
Regarding claim 13, BHARTI in view of GAO teach the apparatus according to claim 8, BHARTI further teaches wherein: the context spectrum comprises a continuum or near continuum of selectable states (Fig. 7, Paragraph [0100] – BHARTI discloses CNN takes the image 702, and passes it through a series of convolutional, nonlinear, pooling (downsampling), and fully connected layers to get an output. The output [wherein output is selectable states] can be a single class or a probability of classes that best describes the image.).
Regarding claim 15, BHARTI teaches a non-transitory computer readable storage medium storing computer readable instructions (Fig. 1, Paragraph [0058] – BHARTI discloses the information processing system 102 can further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 114 can be provided for reading from and writing to a non-removable or removable, non-volatile media such as one or more solid state disks and/or magnetic media (typically called a “hard drive”). A magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided.),
wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform (Fig. 1, Paragraph [0057] – BHARTI discloses the system memory 106, in one embodiment, includes a machine learning module 109 configured to perform one or more embodiments discussed below. It should be noted that even though FIG. 1 shows the machine learning module 109 residing in the main memory, the machine learning module 109 can reside within the processor 104, be a separate hardware component capable of and/or be distributed across a plurality of information processing systems and/or processors.):
obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure (Fig. 2B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 [wherein 210 called new input array is quantitative imaging data] (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products));
determining a context spectrum selection from context spectrum including a range of selectable values (Fig. 2B, Paragraph [0075] – BHARTI discloses the deep neural network model 212 is capable of consistently and autonomously analyzing images, identifying features within images, performing high-throughput segmentation of given images, and correlating the images to identity, safety, physiological, biochemical, or molecular outcomes.) by:
applying the specific QID to an input layer of a context-spectrum neural network (Fig. 6, Paragraph [0095] – BHARTI discloses FIG. 6 illustrates an exemplary fully-connected deep neural network (DNN) 600 that can be implemented by the deep neural network model 212 in accordance with embodiments of the present disclosure. Paragraph [0098] – BHARTI further discloses the images to be analyzed 603 can be inputted into the nodes 602 of the input layer 604.),
Although BHARTI further teaches mapping the context spectrum selection to the image to generate a context spectrum mask for the image (Fig. 6, Paragraph [0102] – BHARTI discloses the fully connected layer processes the output of the previous layer (which represents the activation maps of high level features) and determines which features most correlate to a particular class.);
and determining a condition of the biostructure based on the context spectrum mask (Fig. 6, Paragraph [0102] – BHARTI discloses a particular output feature from a previous convolution layer may indicate whether a specific feature in the image is indicative of an RPE cell, and such feature can be used to classify a target image as ‘RPE cell’ or ‘non-RPE cell’).
BHARTI fails to explicitly teach wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
However, GAO explicitly teaches wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss (Fig. 5, Paragraph [0076] – GAO discloses in the formula (4), L.sub.focal is a focal loss function, which is used when the primary image segmentation model is trained. L.sub.dice is a generalized dice loss function used for training the primary image segmentation model.),
based on previous QID and constructed context spectrum data associated with the previous QID (Fig. 5, Paragraph [0085] – GAO discloses the target image segmentation model is able to input shape code into a potential space and make the shape predicted by a network accord with the prior knowledge [wherein prior knowledge is previous QID] by minimizing a distance between the shape predicted by the network and a ground truth shape [wherein ground truth shape is constructed context spectrum data].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI of having a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure; determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, mapping the context spectrum selection to the image to generate a context spectrum mask for the image; and determining a condition of the biostructure based on the context spectrum mask, with the teachings of GAO of having wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
The combination would result in BHARTI's non-transitory computer readable storage medium wherein the computer readable instructions, when executed by a processor, cause the processor to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, wherein the context-spectrum neural network is trained, according to a combination of focal loss and dice loss, based on previous QID and constructed context spectrum data associated with the previous QID.
The motivation behind the modification would have been to obtain a non-transitory computer readable medium storing computer readable instructions for determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients. Both BHARTI and GAO relate to image processing methods utilizing neural networks: BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and GAO discloses an image processing method and device that improves the accuracy of image segmentation, such that the image processing device may locate a smaller segmentation target first and finely segment the parts cropped out from the feature map, thereby solving the problem that the samples of the small segmentation targets are imbalanced and making the image segmentation easier and more accurate. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and GAO (US 20220036124 A1), Paragraph [0105].
Regarding claim 16, BHARTI in view of GAO teach the non-transitory computer readable storage medium according to claim 15,
BHARTI further teaches wherein: the previous QID are obtained corresponding to an image of a second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses the approach was to 1) train a DNN (DNN-Z) to segment cell borders in ZO-1 fluorescence images using corresponding images, where the cell borders had been drawn in by expert technicians, 2) collect QBAM images and fluorescent images of RPE that had been fluorescently stained for ZO-1, 3) use the DNN-Z to segment cell borders using ZO-1 fluorescence images and 4) use the ZO-1 segmentations to train a new DNN to segment cells in QBAM images (DNN-S));
and the constructed context spectrum data comprises a ground truth condition of the second biostructure (Figs. 2A-B, Paragraph [0193] – BHARTI discloses a deep convolutional neural network was designed to segment RPE fluorescently labeled for a tight junction protein (ZO-1), which highlights the cell borders and enables accurate cell segmentation. The purpose of this was to have a highly accurate segmentation method to generate ground truth cell border labels for QBAM.).
Regarding claim 17, BHARTI in view of GAO teach the non-transitory computer readable storage medium according to claim 15,
Although BHARTI further teaches the context-spectrum neural network (Fig. 6, #600 called deep neural network, Paragraph [0095]),
BHARTI fails to explicitly teach wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
However, GAO explicitly teaches wherein: the context-spectrum neural network comprises an EfficientNet Unet (Fig. 11, #110 called primary image segmentation model, Paragraph [0032] – GAO discloses the primary image segmentation model may be a modified 3D U-Net fully convolutional neural network that is based on an encoder-decoder architecture.)
comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet (Fig. 11, Paragraph [0034] – GAO discloses in order to improve an accuracy in segmentation performed by the primary image segmentation model, the image processing device may use a residue block including a convolution layer, an ReLU and a batch normalization layer as a backbone network of the primary image segmentation model.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI of having a non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: determining a context spectrum selection from context spectrum including a range of selectable values by: applying the specific QID to an input layer of a context-spectrum neural network, with the teachings of GAO of having wherein: the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
The combination would result in BHARTI's non-transitory computer readable storage medium wherein the context-spectrum neural network comprises an EfficientNet Unet comprising one or more first layers for adapting a vector size to operational size for another layer of the EfficientNet Unet.
The motivation behind the modification would have been to obtain a non-transitory computer readable medium storing computer readable instructions for determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients. Both BHARTI and GAO relate to image processing methods utilizing neural networks: BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and GAO discloses an image processing method and device that improves the accuracy of image segmentation, such that the image processing device may locate a smaller segmentation target first and finely segment the parts cropped out from the feature map, thereby solving the problem that the samples of the small segmentation targets are imbalanced and making the image segmentation easier and more accurate. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and GAO (US 20220036124 A1), Paragraph [0105].
Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over BHARTI (US 20250095390 A1), hereinafter referenced as BHARTI in view of GAO (US 20220036124 A1), hereinafter referenced as GAO, further in view of TSIORIS (US 20240254431 A1), hereinafter referenced as TSIORIS.
Regarding claim 4, BHARTI in view of GAO teach the method according to claim 1,
Although BHARTI further teaches wherein: the biostructure comprises at least one of the following: a cell (Fig. 2A-B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210),
a tissue (Figs. 2A-B, Paragraph [0072] – BHARTI discloses the input data 202 may include an input array of measurements representative of at least one physiological, molecular, cellular, and/or biochemical parameter of a plurality of primary cell types derived from human or any animal tissue.),
a cell part (Figs. 2A-B, Paragraph [0082] – BHARTI discloses based on an understanding of cell borders and visual parameters (i.e., shape, intensity and texture metrics) within the microscopic images, the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products). It should be noted that texture metrics may include a plurality of sub-cellular features [wherein sub-cellular features are a cell part].),
BHARTI fails to explicitly teach wherein the biostructure comprises at least one of the following: an organ.
However, GAO explicitly teaches wherein the biostructure comprises an organ (Fig. 11, Paragraph [0114] – GAO discloses the image processing device firstly performs feature extraction on an original CT image through a primary image segmentation model 110 to obtain a feature map and directly obtains a segmentation result of a large organ.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of GAO of having wherein the biostructure comprises at least one of the following: an organ.
The resulting combination teaches BHARTI's method wherein the biostructure comprises at least one of the following: a cell, a tissue, a cell part, or an organ.
The motivation behind the modification would have been to obtain a method of obtaining quantitative imaging data and determining a condition of a biostructure by a neural network based on the quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients, since both BHARTI and GAO relate to image processing methods utilizing neural networks, wherein BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and GAO discloses an image processing method and device that is able to improve an accuracy of image segmentation such that the image processing device may locate a smaller segmentation target first and finely segment the parts cropped out from the feature map, thereby solving the problem that the samples of the small segmentation targets are imbalanced, and making the image segmentation easier and more accurate. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and GAO (US 20220036124 A1), Paragraph [0105].
BHARTI in view of GAO fail to explicitly teach wherein the biostructure comprises at least one of the following: a HeLa cell.
However, TSIORIS explicitly teaches wherein the biostructure comprises at least one of the following: a HeLa cell (Fig. 2, Paragraph [0165] – TSIORIS discloses a method of selecting a target cell, wherein the target cell may be a certain type of cell. In certain embodiments, the target cell is a T cell, a B cell, a plasma cell, antibody secreting cells (ASCs), an antigen presenting cell, a hybridoma, an immune cell, a stem cell, an induced pluripotent stem cell (IPSC), or an engineered cell. In certain embodiments, the engineered cell is… a HELA cell.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI in view of GAO of having a method comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of TSIORIS of having wherein the biostructure comprises at least one of the following: a HeLa cell.
The resulting combination teaches BHARTI's method wherein the biostructure comprises at least one of the following: a cell, a tissue, a cell part, an organ, or a HeLa cell.
The motivation behind the modification would have been to obtain a method of obtaining quantitative imaging data and determining a condition of a biostructure by a neural network based on the quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients, since both BHARTI and TSIORIS relate to cell characterization methods utilizing neural networks, wherein BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and TSIORIS discloses methods and systems for high-throughput cell line development, providing for rapid identification and characterization of compositions produced by cells, providing a rapid and cost-effective technology for developing new cell lines for therapeutic uses. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and TSIORIS (US 20240254431 A1), Paragraphs [0052, 0058].
Regarding claim 11, BHARTI in view of GAO teach the apparatus according to claim 8,
Although BHARTI further teaches wherein: the biostructure comprises at least one of the following: a cell (Figs. 2A-B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210),
a tissue (Figs. 2A-B, Paragraph [0072] – BHARTI discloses the input data 202 may include an input array of measurements representative of at least one physiological, molecular, cellular, and/or biochemical parameter of a plurality of primary cell types derived from human or any animal tissue.),
a cell part (Figs. 2A-B, Paragraph [0082] – BHARTI discloses based on an understanding of cell borders and visual parameters (i.e., shape, intensity and texture metrics) within the microscopic images, the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products). It should be noted that texture metrics may include a plurality of sub-cellular features [wherein sub-cellular features are a cell part].),
BHARTI fails to explicitly teach wherein the biostructure comprises at least one of the following: an organ.
However, GAO explicitly teaches wherein the biostructure comprises at least one of the following: an organ (Fig. 11, Paragraph [0114] – GAO discloses the image processing device firstly performs feature extraction on an original CT image through a primary image segmentation model 110 to obtain a feature map and directly obtains a segmentation result of a large organ.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI of having the apparatus comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of GAO of having wherein the biostructure comprises at least one of the following: an organ.
The resulting combination teaches BHARTI's apparatus wherein the biostructure comprises at least one of the following: a cell, a tissue, a cell part, or an organ.
The motivation behind the modification would have been to obtain an apparatus for obtaining quantitative imaging data and determining a condition of a biostructure by a neural network based on the quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients, since both BHARTI and GAO relate to image processing methods utilizing neural networks, wherein BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and GAO discloses an image processing method and device that is able to improve an accuracy of image segmentation such that the image processing device may locate a smaller segmentation target first and finely segment the parts cropped out from the feature map, thereby solving the problem that the samples of the small segmentation targets are imbalanced, and making the image segmentation easier and more accurate. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and GAO (US 20220036124 A1), Paragraph [0105].
BHARTI in view of GAO fail to explicitly teach wherein the biostructure comprises at least one of the following: a HeLa cell.
However, TSIORIS explicitly teaches wherein the biostructure comprises at least one of the following: a HeLa cell (Fig. 2, Paragraph [0165] – TSIORIS discloses a method of selecting a target cell, wherein the target cell may be a certain type of cell. In certain embodiments, the target cell is a T cell, a B cell, a plasma cell, antibody secreting cells (ASCs), an antigen presenting cell, a hybridoma, an immune cell, a stem cell, an induced pluripotent stem cell (IPSC), or an engineered cell. In certain embodiments, the engineered cell is… a HELA cell.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI in view of GAO of having an apparatus comprising: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of TSIORIS of having wherein the biostructure comprises at least one of the following: a HeLa cell.
The resulting combination teaches BHARTI's apparatus wherein the biostructure comprises at least one of the following: a cell, a tissue, a cell part, an organ, or a HeLa cell.
The motivation behind the modification would have been to obtain an apparatus for obtaining quantitative imaging data and determining a condition of a biostructure by a neural network based on the quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients, since both BHARTI and TSIORIS relate to cell characterization methods utilizing neural networks, wherein BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and TSIORIS discloses methods and systems for high-throughput cell line development, providing for rapid identification and characterization of compositions produced by cells, providing a rapid and cost-effective technology for developing new cell lines for therapeutic uses. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and TSIORIS (US 20240254431 A1), Paragraphs [0052, 0058].
Regarding claim 18, BHARTI in view of GAO teach the non-transitory computer readable storage medium according to claim 15,
Although BHARTI further teaches wherein: the biostructure comprises at least one of the following: a cell (Figs. 2A-B, Paragraph [0082] – BHARTI discloses the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210),
a tissue (Figs. 2A-B, Paragraph [0072] – BHARTI discloses the input data 202 may include an input array of measurements representative of at least one physiological, molecular, cellular, and/or biochemical parameter of a plurality of primary cell types derived from human or any animal tissue.),
a cell part (Figs. 2A-B, Paragraph [0082] – BHARTI discloses based on an understanding of cell borders and visual parameters (i.e., shape, intensity and texture metrics) within the microscopic images, the deep neural network model 212 is capable of detecting cell borders and correlation of visual parameters within such images of the new input array 210 (e.g., the live fluorescence microscopic images, multispectral absorption bright-field images, chemiluminescent images, radioactive images or hyperspectral fluorescent images of similar cells or cell derived products). It should be noted that texture metrics may include a plurality of sub-cellular features [wherein sub-cellular features are a cell part].),
BHARTI fails to explicitly teach wherein the biostructure comprises at least one of the following: an organ.
However, GAO explicitly teaches wherein the biostructure comprises at least one of the following: an organ (Fig. 11, Paragraph [0114] – GAO discloses the image processing device firstly performs feature extraction on an original CT image through a primary image segmentation model 110 to obtain a feature map and directly obtains a segmentation result of a large organ.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI of having the non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of GAO of having wherein the biostructure comprises at least one of the following: an organ.
The resulting combination teaches BHARTI's non-transitory computer readable storage medium wherein the biostructure comprises at least one of the following: a cell, a tissue, a cell part, or an organ.
The motivation behind the modification would have been to obtain a non-transitory computer readable storage medium for obtaining quantitative imaging data and determining a condition of a biostructure by a neural network based on the quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients, since both BHARTI and GAO relate to image processing methods utilizing neural networks, wherein BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and GAO discloses an image processing method and device that is able to improve an accuracy of image segmentation such that the image processing device may locate a smaller segmentation target first and finely segment the parts cropped out from the feature map, thereby solving the problem that the samples of the small segmentation targets are imbalanced, and making the image segmentation easier and more accurate. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and GAO (US 20220036124 A1), Paragraph [0105].
BHARTI in view of GAO fail to explicitly teach wherein the biostructure comprises at least one of the following: a HeLa cell.
However, TSIORIS explicitly teaches wherein the biostructure comprises at least one of the following: a HeLa cell (Fig. 2, Paragraph [0165] – TSIORIS discloses a method of selecting a target cell, wherein the target cell may be a certain type of cell. In certain embodiments, the target cell is a T cell, a B cell, a plasma cell, antibody secreting cells (ASCs), an antigen presenting cell, a hybridoma, an immune cell, a stem cell, an induced pluripotent stem cell (IPSC), or an engineered cell. In certain embodiments, the engineered cell is… a HELA cell.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI in view of GAO of having the non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: obtaining specific quantitative imaging data (QID) corresponding to an image of a biostructure, with the teachings of TSIORIS of having wherein the biostructure comprises at least one of the following: a HeLa cell.
The resulting combination teaches BHARTI's non-transitory computer readable storage medium wherein the biostructure comprises at least one of the following: a cell, a tissue, a cell part, an organ, or a HeLa cell.
The motivation behind the modification would have been to obtain a non-transitory computer readable storage medium for obtaining quantitative imaging data and determining a condition of a biostructure by a neural network based on the quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients, since both BHARTI and TSIORIS relate to cell characterization methods utilizing neural networks, wherein BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and TSIORIS discloses methods and systems for high-throughput cell line development, providing for rapid identification and characterization of compositions produced by cells, providing a rapid and cost-effective technology for developing new cell lines for therapeutic uses. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and TSIORIS (US 20240254431 A1), Paragraphs [0052, 0058].
Claims 5, 7, 12, 14, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over BHARTI (US 20250095390 A1), hereinafter referenced as BHARTI, in view of GAO (US 20220036124 A1), hereinafter referenced as GAO, further in view of MASAELI (US 20240153289 A1), hereinafter referenced as MASAELI.
Regarding claim 5, BHARTI in view of GAO teach the method according to claim 1,
Although BHARTI further teaches wherein: the condition of the biostructure comprises at least one of the following: health (Figs. 2A-B, Paragraph [0069] – BHARTI discloses FIGS. 2A and 2B illustrate a selected machine learning framework and a deep neural network framework that might be used individually or together for predicting functions, identity, disease state and health of cells and their derivatives.),
BHARTI in view of GAO fail to explicitly teach wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
However, MASAELI explicitly teaches wherein: the condition of the biostructure comprises at least one of the following:
viability (Fig. 1, Paragraph [0046] – MASAELI discloses examples of the feature of the cell(s) can comprise a size, shape, volume, electromagnetic radiation absorbance and/or transmittance (e.g., fluorescence intensity, luminescence intensity, etc.), or viability (e.g., when live cells are used).),
cell membrane integrity (Fig. 1, Paragraph [0053] – MASAELI discloses non-limiting examples of one or more morphological properties of a cell, as disclosed herein, that can be extracted from one or more images of the cell can include, but are not limited to (i) shape, curvature, size (e.g., diameter, length, width, circumference), area, volume, texture, thickness, roundness, etc. of the cell or one or more components of the cell (e.g., cell membrane, nucleus, mitochondria, etc.)),
or cell cycle (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. See also Paragraph [0048].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI in view of GAO of having a method comprising: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI of having wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
The resulting combination teaches BHARTI's method wherein the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, health, or cell cycle.
The motivation behind the modification would have been to obtain a method of obtaining quantitative imaging data and determining a condition of a biostructure by a neural network based on the quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients, since both BHARTI and MASAELI relate to image processing methods and systems for cell classification and sorting, wherein BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and MASAELI discloses methods and systems for analyzing (e.g., automatically classifying) cells based on one or more morphological features of the cells without the need to rely on other utilized methods of analyzing cells, enhancing speed and/or scalability of cell analysis systems and methods while maintaining or even enhancing accuracy of the analysis. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and MASAELI (US 20240153289 A1), Paragraph [0039].
Regarding claim 7, BHARTI in view of GAO teach the method according to claim 1,
Although BHARTI further teaches wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state (Figs. 2A-B, Paragraph [0068] – BHARTI discloses the disclosed approaches may be used to tell the difference between: 1) cells of different sub-types, thus allowing the possibility of making or optimizing the generation of specific cells and tissue types using stem cells; 2) healthy and diseased cells, allowing the possibility of discovering drugs or underlying mechanisms behind a disease that can improve the health of diseased cells);
BHARTI fails to explicitly teach or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
However, MASAELI explicitly teaches or the condition of the biostructure (Fig. 1, Paragraph [0041] – MASAELI discloses the classifier can be configured to classify (e.g., automatically classify) a cellular image sample [wherein cellular image sample is the biostructure] based on its proximity, correlation, or commonality with one or more of the morphologically-distinct clusters.)
comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase) (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. Paragraph [0048] – MASAELI further discloses “cell cycle” as used herein generally refers to the physiological and/or morphological progression of changes that cells undergo when dividing (e.g., proliferating). Examples of different phases of the cell cycle can include “interphase,” “prophase,” “metaphase,” “anaphase,” and “telophase”. Additionally, parts of the cell cycle can be “M (mitosis),” “S (synthesis),” “G0,” “G1 (gap 1)” and “G2 (gap 2)”. Furthermore, the cell cycle can include periods of progression that are intermediate to the above named phases.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI in view of GAO of having a method comprising: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI of having or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
The resulting combination teaches BHARTI's method wherein the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
The motivation behind the modification would have been to obtain a method of obtaining quantitative imaging data and determining a condition of a biostructure by a neural network based on the quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients, since both BHARTI and MASAELI relate to image processing methods and systems for cell classification and sorting, wherein BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and MASAELI discloses methods and systems for analyzing (e.g., automatically classifying) cells based on one or more morphological features of the cells without the need to rely on other utilized methods of analyzing cells, enhancing speed and/or scalability of cell analysis systems and methods while maintaining or even enhancing accuracy of the analysis. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and MASAELI (US 20240153289 A1), Paragraph [0039].
Regarding claim 12, BHARTI in view of GAO teach the apparatus according to claim 8,
Although BHARTI further teaches wherein: the condition of the biostructure comprises at least one of the following: health (Figs. 2A-B, Paragraph [0069] – BHARTI discloses FIGS. 2A and 2B illustrate a selected machine learning framework and a deep neural network framework that might be used individually or together for predicting functions, identity, disease state and health of cells and their derivatives.),
BHARTI in view of GAO fail to explicitly teach wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
However, MASAELI explicitly teaches wherein: the condition of the biostructure comprises at least one of the following:
viability (Fig. 1, Paragraph [0046] – MASAELI discloses examples of the feature of the cell(s) can comprise a size, shape, volume, electromagnetic radiation absorbance and/or transmittance (e.g., fluorescence intensity, luminescence intensity, etc.), or viability (e.g., when live cells are used).),
cell membrane integrity (Fig. 1, Paragraph [0053] – MASAELI discloses non-limiting examples of one or more morphological properties of a cell, as disclosed herein, that can be extracted from one or more images of the cell can include, but are not limited to (i) shape, curvature, size (e.g., diameter, length, width, circumference), area, volume, texture, thickness, roundness, etc. of the cell or one or more components of the cell (e.g., cell membrane, nucleus, mitochondria, etc.)),
or cell cycle (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. See also Paragraph [0048].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI in view of GAO of having an apparatus comprising: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI of having wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
The resulting combination teaches BHARTI's apparatus wherein the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, health, or cell cycle.
The motivation behind the modification would have been to obtain an apparatus for obtaining quantitative imaging data and determining a condition of a biostructure by a neural network based on the quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients, since both BHARTI and MASAELI relate to image processing methods and systems for cell classification and sorting, wherein BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error, and MASAELI discloses methods and systems for analyzing (e.g., automatically classifying) cells based on one or more morphological features of the cells without the need to rely on other utilized methods of analyzing cells, enhancing speed and/or scalability of cell analysis systems and methods while maintaining or even enhancing accuracy of the analysis. Please see BHARTI (US 20250095390 A1), Paragraphs [0115-0116], and MASAELI (US 20240153289 A1), Paragraph [0039].
Regarding claim 14, BHARTI in view of GAO teach the apparatus according to claim 8,
Although BHARTI further teaches wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state (Figs. 2A-B, Paragraph [0068] – BHARTI discloses the disclosed approaches may be used to tell the difference between: 1) cells of different sub-types, thus allowing the possibility of making or optimizing the generation of specific cells and tissue types using stem cells; 2) healthy and diseased cells, allowing the possibility of discovering drugs or underlying mechanisms behind a disease that can improve the health of diseased cells);
BHARTI fails to explicitly teach or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
However, MASAELI explicitly teaches or the condition of the biostructure (Fig. 1, Paragraph [0041] – MASAELI discloses the classifier can be configured to classify (e.g., automatically classify) a cellular image sample [wherein cellular image sample is the biostructure] based on its proximity, correlation, or commonality with one or more of the morphologically-distinct clusters.)
comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase) (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. Paragraph [0048] – MASAELI further discloses “cell cycle” as used herein generally refers to the physiological and/or morphological progression of changes that cells undergo when dividing (e.g., proliferating). Examples of different phases of the cell cycle can include “interphase,” “prophase,” “metaphase,” “anaphase,” and “telophase”. Additionally, parts of the cell cycle can be “M (mitosis),” “S (synthesis),” “G0,” “G1 (gap 1)” and “G2 (gap 2)”. Furthermore, the cell cycle can include periods of progression that are intermediate to the above named phases.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI in view of GAO of having an apparatus comprising: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI of having or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
The combination results in BHARTI’s apparatus wherein the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
The motivation behind the modification would have been to obtain an apparatus for determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients. Both BHARTI and MASAELI relate to image processing methods and systems for cell classification and sorting. BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error. MASAELI discloses methods and systems for analyzing (e.g., automatically classifying) cells based on one or more morphological features of the cells without the need to rely on other utilized methods of analyzing cells, enhancing speed and/or scalability of cell analysis systems and methods while maintaining or even enhancing accuracy of the analysis. See BHARTI (US 20250095390 A1), Paragraphs [0115]-[0116], and MASAELI (US 20240153289 A1), Paragraph [0039].
Regarding claim 19, BHARTI in view of GAO teaches the non-transitory computer readable storage medium according to claim 15.
Although BHARTI further teaches wherein: the condition of the biostructure comprises at least one of the following: health (Figs. 2A-B, Paragraph [0069] – BHARTI discloses FIGS. 2A and 2B illustrate a selected machine learning framework and a deep neural network framework that might be used individually or together for predicting functions, identity, disease state and health of cells and their derivatives.),
BHARTI in view of GAO fail to explicitly teach wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
However, MASAELI explicitly teaches wherein: the condition of the biostructure comprises at least one of the following:
viability (Fig. 1, Paragraph [0046] – MASAELI discloses examples of the feature of the cell(s) can comprise a size, shape, volume, electromagnetic radiation absorbance and/or transmittance (e.g., fluorescence intensity, luminescence intensity, etc.), or viability (e.g., when live cells are used).),
cell membrane integrity (Fig. 1, Paragraph [0053] – MASAELI discloses non-limiting examples of one or more morphological properties of a cell, as disclosed herein, that can be extracted from one or more images of the cell can include, but are not limited to (i) shape, curvature, size (e.g., diameter, length, width, circumference), area, volume, texture, thickness, roundness, etc. of the cell or one or more components of the cell (e.g., cell membrane, nucleus, mitochondria, etc.)),
or cell cycle (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. See also Paragraph [0048].).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI in view of GAO of having the non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI of having wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, cell cycle.
The combination results in BHARTI’s non-transitory computer readable storage medium wherein: the condition of the biostructure comprises at least one of the following: viability, cell membrane integrity, health, or cell cycle.
The motivation behind the modification would have been to obtain a non-transitory computer readable storage medium for determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients. Both BHARTI and MASAELI relate to image processing methods and systems for cell classification and sorting. BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error. MASAELI discloses methods and systems for analyzing (e.g., automatically classifying) cells based on one or more morphological features of the cells without the need to rely on other utilized methods of analyzing cells, enhancing speed and/or scalability of cell analysis systems and methods while maintaining or even enhancing accuracy of the analysis. See BHARTI (US 20250095390 A1), Paragraphs [0115]-[0116], and MASAELI (US 20240153289 A1), Paragraph [0039].
Regarding claim 20, BHARTI in view of GAO teaches the non-transitory computer readable storage medium according to claim 15.
Although BHARTI further teaches wherein: the condition of the biostructure comprises one of a viable state, an injured state, or a dead state (Figs. 2A-B, Paragraph [0068] – BHARTI discloses the disclosed approaches may be used to tell the difference between: 1) cells of different sub-types, thus allowing the possibility of making or optimizing the generation of specific cells and tissue types using stem cells; 2) healthy and diseased cells, allowing the possibility of discovering drugs or underlying mechanisms behind a disease that can improve the health of diseased cells);
BHARTI fails to explicitly teach or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
However, MASAELI explicitly teaches or the condition of the biostructure (Fig. 1, Paragraph [0041] – MASAELI discloses the classifier can be configured to classify (e.g., automatically classify) a cellular image sample [wherein cellular image sample is the biostructure] based on its proximity, correlation, or commonality with one or more of the morphologically-distinct clusters.)
comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase) (Fig. 1, Paragraph [0038] – MASAELI discloses one or more morphological properties of a cell can be used to, for example, study cell type and cell state, or to diagnose diseases. In some cases, cell shape can be one of the markers of cell cycle. Paragraph [0048] – MASAELI further discloses “cell cycle” as used herein generally refers to the physiological and/or morphological progression of changes that cells undergo when dividing (e.g., proliferating). Examples of different phases of the cell cycle can include “interphase,” “prophase,” “metaphase,” “anaphase,” and “telophase”. Additionally, parts of the cell cycle can be “M (mitosis),” “S (synthesis),” “G0,” “G1 (gap 1)” and “G2 (gap 2)”. Furthermore, the cell cycle can include periods of progression that are intermediate to the above named phases.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of BHARTI in view of GAO of having the non-transitory computer readable storage medium storing computer readable instructions, wherein, the computer readable instructions, when executed by a processor, are configured to cause the processor to perform: determining a condition of the biostructure based on the context spectrum mask, with the teachings of MASAELI of having or the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
The combination results in BHARTI’s non-transitory computer readable storage medium wherein the condition of the biostructure comprises one of a cell growth stage (G1 phase), a deoxyribonucleic acid (DNA) synthesis stage (S phase), or a cell growth/mitotic stage (G2/M phase).
The motivation behind the modification would have been to obtain a non-transitory computer readable storage medium for determining a condition of a biostructure by a neural network based on quantitative imaging data (QID), allowing for simple and efficient gathering of a wide spectrum of information, from screening new drugs, to studying the expression of novel genes, to creating new diagnostic products, and even to monitoring cancer patients. Both BHARTI and MASAELI relate to image processing methods and systems for cell classification and sorting. BHARTI discloses a novel computational framework for generating lot and batch release criteria for a clinical preparation of individual stem cell lines to determine the degree of similarity to previous lots or batches, such that the automated analysis performed by selected machine learning methods and modern deep neural networks substantially eliminates human bias and error. MASAELI discloses methods and systems for analyzing (e.g., automatically classifying) cells based on one or more morphological features of the cells without the need to rely on other utilized methods of analyzing cells, enhancing speed and/or scalability of cell analysis systems and methods while maintaining or even enhancing accuracy of the analysis. See BHARTI (US 20250095390 A1), Paragraphs [0115]-[0116], and MASAELI (US 20240153289 A1), Paragraph [0039].
Conclusion
The prior art made of record and not relied upon, listed below, is considered pertinent to applicant's disclosure.
WOOLF et al. (US 20240281977 A1) - Disclosed is a method for analyzing a set of images of a coronary artery tissue. The method comprises segmenting the images for the presence of normal artery features and those associated with OCT, correcting artifacts, and optimizing the images. The method further comprises segmenting the diseased tissue into distinct tissue types, and measuring features of interest of the segmented tissue types. The method further comprises compiling a first set of measurements for each identified feature of interest at a first time, and a second set of measurements at a second time subsequent to the first time. The method further comprises determining changes in the coronary artery tissue, indicative of progression or regression of a diseased state, or prediction of major adverse cardiovascular events (MACE) such as cardiac death or myocardial infarction. See Figs. 1, 3, 4; Abstract.
BAUER et al. (US 20220223230 A1) - The present disclosure relates to automated systems and methods for quantitatively determining an unmasking status of a biological specimen subjected to an unmasking process (e.g. an antigen retrieval process and/or a target retrieval process) using a trained unmasking status estimation engine. In some embodiments, the trained unmasking status estimation engine comprises a machine learning algorithm based on a projection onto latent structure regression model. In some embodiments, the trained unmasking status estimation engine includes a neural network. See Figs. 3, 5; Abstract.
SHI et al. (US 20220108430 A1) - This invention relates to a hyperspectral imaging system for denoising and/or color unmixing multiple overlapping spectra in a low signal-to-noise regime with a fast analysis time. This system may carry out Hyper-Spectral Phasors (HySP) calculations to effectively analyze hyper-spectral time-lapse data, for example five-dimensional (5D) hyper-spectral time-lapse data. Advantages of this imaging system may include: (a) fast computational speed, (b) the ease of phasor analysis, and (c) a denoising algorithm to obtain the minimally-acceptable signal-to-noise ratio (SNR). An unmixed color image of a target may be generated. These images may be used in diagnosis of a health condition, which may enhance a patient's clinical outcome and evolution of the patient's health. See Fig. 1; Abstract.
ZHU et al. (US 20210374518 A1) - Apparatuses, systems, and techniques are described herein to speed up inferencing in a neural network by copying output from one layer of the neural network to another computing resource based on dependencies among layers in the network. In at least one embodiment, a processor comprising one or more circuits causes two or more subsequent layers of one or more neural networks to be performed on separate computing resources from a previous layer of the one or more neural networks. See Figs. 4, 5; Abstract.
OZCAN et al. (US 20210264214 A1) - A deep learning-based digital staining method and system are disclosed that provide a label-free approach to create virtually-stained microscopic images from quantitative phase images (QPI) of label-free samples. The methods bypass the standard histochemical staining process, saving time and cost. This method is based on deep learning, and uses a convolutional neural network trained using a generative adversarial network model to transform QPI images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample. This label-free digital staining method eliminates cumbersome and costly histochemical staining procedures, and would significantly simplify tissue preparation in the pathology and histology fields. See Figs. 1, 7; Abstract.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEZAWIT N SHIMELES whose telephone number is (571)272-7663. The examiner can normally be reached M-F 7:30am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BEZAWIT NOLAWI SHIMELES/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673