Prosecution Insights
Last updated: April 19, 2026
Application No. 17/954,145

CELL CULTURE EVALUATION DEVICE, METHOD, AND PROGRAM FOR OPERATING CELL CULTURE EVALUATION DEVICE TO OUTPUT EXPRESSION LEVELS OF A PLURALITY OF TYPES OF RIBONUCLEIC ACIDS OF A CELL

Final Rejection §103

Filed: Sep 27, 2022
Examiner: DICKERSON, CHAD S
Art Unit: 2683
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 3 (Final)

Grant Probability: 63% (Moderate)
OA Rounds: 4-5
To Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 63% of resolved cases (376 granted / 600 resolved; +0.7% vs TC avg)
Interview Lift: +23.0% (strong), comparing allowance with vs. without interview among resolved cases with interview
Typical Timeline: 2y 9m avg prosecution; 35 applications currently pending
Career History: 635 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 55.5% (+15.5% vs TC avg)
§102: 14.9% (-25.1% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Tech Center average shown as an estimate for comparison. Based on career data from 600 resolved cases.
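These dashboard figures are simple ratios and percentage-point differences. Below is a minimal Python sketch of the arithmetic, not part of the Office Action or the analytics tool. The without-interview baseline and the single 40.0% Tech Center average (which every statute row above back-solves to exactly) are assumptions for illustration, not values reported by the tool.

```python
# Minimal sketch showing how the dashboard figures above can be derived.
# Inputs marked "assumed" are back-solved from the displayed deltas.

def allowance_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift associated with an examiner interview."""
    return rate_with - rate_without

def delta_vs_tc(rate: float, tc_average: float) -> float:
    """Signed percentage-point gap between a rate and the Tech Center average."""
    return rate - tc_average

career = allowance_rate(376, 600)   # 62.7%, displayed as 63%
lift = interview_lift(86.0, 63.0)   # +23.0 points, assuming the without-interview
                                    # rate tracks the 63% career rate
print(f"career: {career:.1f}%  lift: {lift:+.1f}")

# Each statute row implies the same 40.0% TC-average line (assumed estimate):
for statute, rate in {"101": 8.8, "103": 55.5, "102": 14.9, "112": 18.1}.items():
    print(f"§{statute}: {rate:.1f}%  ({delta_vs_tc(rate, 40.0):+.1f} vs TC avg)")
```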

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 11/17/2026 have been fully considered but they are not persuasive. The arguments state that the applied references do not disclose the feature of “output an image feature amount set … and input the image feature amount set to a data machine learning model to output an expression level set”. The Examiner respectfully disagrees with this assertion and briefly explains why below. The primary reference, Mizukami, discloses inputting a cell image into a machine learning model and outputting an intermediate image Im. The intermediate image Im is considered the image feature amount set, comprising a plurality of types of image feature amounts, that is input into a second machine learning model. For example, the lighter and darker portions each represent separate image feature amounts, as taught in ¶ [50]-[55]. The intermediate image is input into another machine learning model in order to acquire another output, as taught in ¶ [57]-[61]. Although this differs from the Applicant’s interpretation outlined in the arguments, the above interpretation is considered to disclose this aspect of the claims. Moreover, the element cited in the arguments is not detailed within the claims, and the specification is not read into the claims. Output of expression levels related to different RNAs is not taught by Mizukami, but this is disclosed by the references of Kapur and Khammanivong. Regarding the reference of Kapur, similar to the primary reference, a stained slide represented by a digital image is evaluated by a machine learning model to determine an over-expression of genes within the slide and predict an expression level, as taught in ¶ [40], [52], [53] and [66]. Like Kapur, Khammanivong discloses using a process to determine the expression level, with the expression level described as related to RNA, as taught in ¶ [71]-[77]. Combined with the primary reference, these references perform the feature of inputting an image feature amount set composed of a plurality of types of image feature amounts into another machine learning model to output or predict expression levels of RNA. Thus, based on the combination and the broadest reasonable interpretation of the claims, the rejection of the claims is maintained. The mapping of the claimed features to the references is set forth below.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: compression unit, restoration unit, extraction unit, discriminator and output unit in claims 9-12 and 14. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4, 9-12 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Mizukami (US Pub 2021/0224992) in view of Kapur (US Pub 2021/0073986 (Prov. Date: 9/9/2019)) and Khammanivong (US Pub 2020/0115762).

Re claim 1: Mizukami discloses a cell culture evaluation device comprising at least one processor configured to:

acquire a cell image obtained by imaging a cell that is being cultured (e.g. the invention discloses imaging an image comprised of cells, which is taught in ¶ [38] and [39]. Cultured cells can also be acquired, which is taught in ¶ [80].),

[0038] FIG. 1 is a diagram showing a concept of an image processing as one embodiment of the invention. FIG. 2 is a flow chart showing this image processing. The purpose of this image processing is to detect the positions of specific detected parts (e.g. cell nuclei) from a test image It captured to include cells and finally measure the number of the cells satisfying a predetermined condition. More specifically, the test image It as a target is, for example, a bright field image or phase difference image obtained by optical microscope imaging. Further, a detection result of the detected parts is output as a result image Io showing the positions of representative points of the detected parts.

[0039] A summary of a specific processing is as follows. First, the test image It as a target is obtained (Step S101). The test image It may be obtained by newly performing imaging. Further, the test image It may be obtained by reading image data captured in advance and stored in an appropriate storage means.

[0080] The imaging apparatus 120 is, for example, a microscope device provided with an imaging function. The imaging apparatus 120 generates image data by imaging a specimen such as cells cultured in a well plate and transmits the image data to the interface 114 of the image processing apparatus 110. If the imaging apparatus 120 is provided for the purpose of capturing the test image It, the imaging apparatus 120 may have a bright field imaging function. On the other hand, if the imaging apparatus 120 is provided for the purpose of collecting the first and second teacher images I1, I2, the imaging apparatus 120 may have a fluorescence imaging function in addition to the bright field imaging function. It is desirable that bright field imaging and fluorescence imaging can be performed in the same field of view for one specimen.

input the cell image to an image machine learning model to output an image feature amount set composed of a plurality of types of image feature amounts related to the cell image from the image machine learning model (e.g. the cell image, or test image, is input into a first learning model. The output of the first learning model includes an intermediate image, which is taught in ¶ [40]-[43] and [55]. The intermediate image includes data that reflects first and second teacher data, which is taught in ¶ [44] and [45]. The teacher images contain features, such as markers or colors, that express the details of the living and dead cells, which is taught in ¶ [48]-[53]. The intermediate image shows multiple densities that reflect the different features of the different types of markers, which is taught in ¶ [55].), and

[0040] The test image It is input to a first learning model 10 (Step S102). The first learning model 10 outputs an intermediate image Im generated based on the test image It (Step S103). The intermediate image Im is an image which would be obtained if the same sample as the test image It were imaged under a condition that a marker is expressed.

[0041] That is, the first learning model 10 has a function of generating the intermediate image Im, to which a pseudo marker is administered, from the test image It without a marker. In this sense, the first learning model 10 can be, for example, called a “marker generation model”. If an image with a pseudo marker can be generated with high accuracy (i.e. to have a high degree of similarity to an image actually captured with a marker introduced) from an image without a marker, it is not necessary to introduce a marker into a specimen. This obviates the need for invasive imaging to cells as observation objects and contributes to a reduction of cost associated with marker introduction.

[0042] In imaging without introducing a marker, a processing for introducing a marker into a specimen needs not be performed. Thus, non-invasive imaging to cells is possible and this imaging can be applied also for the purpose of observing changes of cells with time. Further, since a reagent serving as a marker and a processing for introducing the reagent are not necessary, experiment cost including the cost of imaging can also be suppressed. Further, depending on the type of the marker, it takes time, for example, several days if it is long, until imaging becomes possible. Such a time can also be eliminated. Thus, this imaging is suitable also for the observation of a specimen in which states of cells largely change in a short time, and suitably applied, for example, to a field of regenerative medicine.
[0043] Since the shapes of cells, the shading of internal structures and the like clearly appear in an image not accompanied by a marker and captured in this way, e.g. a bright field image of cells, such an image is suitable for visual observation. On the other hand, for the purpose of quantitatively and automatically detecting the positions and number of specified parts to be noticed, e.g. cell nuclei, in an image, an image with a marker remains to be advantageous in terms of measurement accuracy. Therefore, if an image with a marker can be generated from an image without a marker, an accurate and quantitative measurement is possible even from an image without a marker. [0044] For this purpose, the first learning model 10 is constructed by deep learning using teacher data collected in advance. The teacher data is a collection of a multitude of sets of a first teacher image I1 and a second teacher image I2 obtained by imaging the same position of a specimen prepared in advance to include cells into which a marker is introduced. Here, the first teacher image I1 is obtained by imaging the specimen in which the marker is expressed, and the second teacher image I2 is obtained by imaging the specimen in a state where the marker is not expressed. [0045] A substance that selectively emits fluorescence in a specific part of a cell can be, for example, used as a marker. In this case, a fluorescence image of a specimen imaged under excitation light illumination can be the first teacher image I1, and a bright field image of the same specimen imaged under visible light illumination can be the second teacher image I2. Between such first and second teacher images I1, I2, how an object appears in the image can be related when the same object in the specimen is imaged with a marker and imaged without a marker by comparing corresponding positions in the images. [0048] FIG. 3 is a flow chart showing a learning process for constructing the first learning model. As described above, the first learning model 10 can be constructed by collecting a multitude of sets of a bright field image and a fluorescence image obtained by imaging the same position of the same specimen (Step S201) and performing deep learning using these as teacher data (Step S202). The teacher images (first teacher image I1 and second teacher image I2) are desirably images obtained by imaging the same type of cells as cells including detected parts to be noticed in the test image It. [0049] FIG. 4 is a diagram showing the operation of a pix2pix algorithm, which is a learning model employed in this embodiment. As shown in FIG. 4, the first learning model 10 includes an image generator 11, a discriminator 12 and an optimizer 13. The image generator 11 includes an encoder 11a for grasping an image characteristic by convolutional layers in a plurality of stages and a decoder 11b for generating an image by performing an inverse operation from the characteristic by inverse convolutional layers in the same number of stages. The second teacher image I2, which is a bright field image, is given as an input image to the image generator 11. The discriminator 12 discriminates from the image characteristic whether or not the input image is a generated image. The first teacher image I1, which is a fluorescence image, is input to the discriminator 12 to perform real learning. On the other hand, a generated image Ig output from the image generator 11 is input to the discriminator 12 to perform fake learning. 
The optimizer 13 adjusts internal parameters of the learning model so that the generated image Ig output by the image generator 11 approximates a real image (first teacher image I1). Learning progresses by repeating this. If learning sufficiently progresses, a function of generating a corresponding pseudo fluorescence image (intermediate image Im) from an unknown bright field image (test image It) is possessed. [0050] Note that although fluorescent labeling is used as a method for administering a marker to a cell here, a mode of the marker is not limited to this. For example, a specific part may be selectively dyed with an appropriate dye. In this case, deep learning is performed using sets of an image of an undyed specimen and an image of a dyed specimen as teacher data. By doing so, it is possible to analogize and generate an image obtained when the specimen is dyed from a newly given undyed image. [0051] Further, a plurality of types of markers may be introduced into one specimen. For example, calcein is known as a marker which is expressed in green in a cytoplasm of a living cell. Further, propidium iodide (PI) is known as a marker which is expressed in red in a nucleus of a dead cell. It is widely performed to introduce these into the same specimen. In such an example, the respective markers are expressed in different colors in a fluorescence image. Thus, living cells and dead cells can be distinguished by the color separation of an image, for example, in the case of a color image. [0052] This technique can be introduced also into the image processing of this embodiment. Specifically, deep learning is performed using sets of a fluorescence image and a bright field image of the specimen having these two types of markers introduced thereinto as the teacher data. By doing so, an intermediate image is obtained in which, out of objects in the bright field image as the test image, those corresponding to living cells are shown in green corresponding to calcein and those corresponding to nuclei of dead cells are shown in red corresponding to PI. [0053] Note that, in the case of introducing a plurality of types of markers having different luminous colors in this way, at least an image with a marker needs to be an image which can be handled with colors distinguished. For example, the data obtained by full-color imaging and color-separating into RGB colors can be used. Further, image data monochromatically captured via a band-pass filter corresponding to a luminous color may be handled as pseudo separation image data corresponding to this luminous color. If monochromatic imaging is performed, for example, using a highly sensitive cooled CCD camera, image data capable of color reproduction at a high resolution can be obtained. [0054] As just described, by performing deep learning using the sets of the image with the marker and the image without the marker as the teacher data, the first learning model 10 is constructed. The thus constructed first learning model 10 analogizes a fluorescence image obtained when the marker is introduced into the same specimen based on the test image It, which is a new bright field image without a marker, and generates a pseudo image with a marker. This image is the intermediate image Im. [0055] The intermediate image Im shown in FIG. 1 is a pseudo image with a marker corresponding to the test image It. Here, out of the objects included in the test image It, those in which two types of markers are expressed, are shown in two types of densities. 
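To make the architecture described in ¶ [0049] concrete, here is a minimal PyTorch sketch of a pix2pix-style first learning model: an encoder/decoder generator that maps a bright field image to a pseudo-fluorescence intermediate image, and a discriminator trained on real and fake examples. The layer sizes, the unconditional (rather than paired-input) discriminator, and the single loss computation are illustrative assumptions, not taken from Mizukami or the cited Isola reference.

```python
# Illustrative sketch only; not the reference's implementation.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder (convolutions) plus decoder (transposed convolutions), per ¶ [0049]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # grasps the image characteristic
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # inverse operation back to an image
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))     # intermediate image Im

class Discriminator(nn.Module):
    """Judges whether a fluorescence-style image is real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),   # patch-level logits
        )

    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
bright_field = torch.rand(1, 1, 64, 64)   # second teacher image I2 (no marker)
fluorescence = torch.rand(1, 1, 64, 64)   # first teacher image I1 (marker expressed)

fake = gen(bright_field)                  # generated image Ig
bce = nn.BCEWithLogitsLoss()
real_logits, fake_logits = disc(fluorescence), disc(fake.detach())
d_loss = bce(real_logits, torch.ones_like(real_logits)) \
       + bce(fake_logits, torch.zeros_like(fake_logits))  # real vs. fake learning
g_logits = disc(fake)
g_loss = bce(g_logits, torch.ones_like(g_logits)) \
       + nn.functional.l1_loss(fake, fluorescence)        # pull Ig toward I1
```

In a full pix2pix setup the discriminator would see the (input, output) pair rather than the output alone; the single-image variant above is kept only to mirror the real/fake learning described in ¶ [0049].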
input the image feature amount set to a data machine learning model (e.g. the second learning model accepts the intermediate image with a marker and the position information image in order to identify the location of a particular cell, which is taught in ¶ [57]-[60]. The count of the number of cells within an image can be displayed and shown to a user, which is taught in ¶ [61]. The image of the position of a particular cell or the count of living cells occurs after the intermediate image is output from a second machine learning model, which is taught in ¶ [58]-[60].).

[0057] In an image with a marker, particularly a fluorescence image in which a marker becomes luminous in response to excitation light, information on the shapes of original objects are, in principle, lost in many cases. From this, information on the precise shapes and extents in the image of cells and the like may not be obtained also in a pseudo image with a marker output by the first learning model 10. For example, if a plurality of cells are in contact with or proximate to each other in a specimen, those cells possibly appear as an integral assembly in an image in some cases. This can become an error factor in quantitatively detecting the positions and number of the cells.

[0058] Accordingly, in this embodiment, the second learning model 20 is used to specify the positions of the detected parts from the intermediate image Im generated by the first learning model 10. As shown in FIG. 1, the second learning model 20 is constructed by performing deep learning using sets of a third teacher image I3 and a position information image Ip prepared in advance as teacher data. The position information image Ip is an image indicating the positions of representative points representing the detected parts in the corresponding third teacher image I3.

[0059] As described later, the second learning model 20 thus constructed has a function of detecting the positions of representative points representing the detected parts in an image from the image with a marker. In this sense, the second learning model 20 can also be called a “position determination model”. In a specific processing, the intermediate image Im, which is a pseudo image with a marker corresponding to the test image It, is generated by the first learning model 10 as described above, and the intermediate image Im is input to the second learning model 20 (Step S104).

[0060] The second learning model 20 generates an image showing the positions of the representative points of the detected parts from the input intermediate image Im, and outputs this image as a result image Io (Step S105). For example, if the detected parts are “living cells” and the centroid positions of the nuclei of the living cells are representative points, the result image Io is an image showing the “centroid positions of the nuclei of the living cells”, out of the objects included in the test image It. A result needs not necessarily be output as an image in this way and may be, for example, output as data representing coordinates of the detected representative points. The definitions of the detected parts and the representative points are also arbitrary without being limited to the above.

[0061] If data for specifying the positions of the representative points of the detected parts in the image is obtained in this way, the number of the detected parts is easily automatically counted (Step S106) from that data. If a count result is output and presented to a user (Step S107), the user can obtain the count result of the detected parts included in the image only by preparing the test image It.

However, Mizukami fails to specifically teach the features of input the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model. However, this is well known in the art as evidenced by Kapur. Similar to the primary reference, Kapur discloses using AI models to identify regions in an image to output an expression level (same field of endeavor or reasonably pertinent to the problem). Kapur discloses input the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of the cell from the data machine learning model (e.g. a salient region detection tool is used to detect features within an image that are considered as salient regions that can contain biomarkers within the image. The digital image of the stained slide can be considered as the image feature amount set of data. The salient region detection tool can be considered as a machine learning model, which is taught in ¶ [52] and [61]-[63]. The biomarkers predicted can be determined from the salient regions sent to the identifier module, which is considered as a machine learning model, that is used to output an expression level of genes in the image. The expression level output, or the biomarker, is explained in ¶ [40], [52], [53], [65] and [66].).

[0037] Histopathology refers to the study of a specimen that has been placed onto a slide. For example, a digital pathology image may be comprised of a digitized image of a microscope slide containing the specimen (e.g., a smear). One method a pathologist may use to analyze an image on a slide is to identify nuclei and classify whether a nucleus is normal (e.g., benign) or abnormal (e.g., malignant). To assist pathologists in identifying and classifying nuclei, histological stains may be used to make cells visible. Dye-based staining systems have been developed, including periodic acid-Schiff reaction, Masson's trichrome, nissl and methylene blue, and Haemotoxylin and Eosin (H&E). For medical diagnosis, H&E is a widely used dye-based method, with hematoxylin staining cell nuclei blue, eosin staining cytoplasm and extracellular matrix pink, and other tissue regions taking on variations of these colors. IHC and immunofluorescence involve, for example, using antibodies that bind to specific antigens in tissues enabling the visual detection of cells expressing specific proteins of interest, which may reveal biomarkers that are not reliably identifiable to trained pathologists based on the analysis of H&E stained slides. ISH and FISH may be employed to assess the number of copies of genes or the abundance of specific RNA molecules, depending on the type of probes employed (e.g., DNA probes for gene copy number and RNA probes for the assessment of RNA expression).

[0038] A digitized image may be prepared to show a stained microscope slide, which may allow a pathologist to manually view the image on a slide and estimate a number of stained abnormal cells in the image. However, this process may be time consuming and may lead to errors in identifying abnormalities because some abnormalities are difficult to detect.
Computational processes and devices may be used to assist pathologists in detecting abnormalities that may otherwise be difficult to detect. [0039] The detected biomarkers and/or the image alone may be used to recommend specific cancer drugs and/or drug combination therapies to be used to treat a patient, and the AI may identify which drugs and/or drug combinations are unlikely to be successful by correlating the detected biomarkers with a database of treatment options. This may be used to facilitate the automatic recommendation of immunotherapy drugs to target a patient's specific cancer. Further, this may be used for enabling personalized cancer treatment for specific subsets of patients and/or rarer cancer types. [0040] As described above, the present disclosure may use AI to predict biomarkers (e.g., the over-expression of a protein and/or gene product, amplification, or mutations of specific genes) from salient regions within digital images of tissues stained using H&E and other dye-based methods. The images of the tissues may be whole slide images (WSI), images of tissue cores within microarrays and/or selected areas of interest within a tissue section. Using staining methods like H&E, biomarkers may be difficult to visually detect or quantify without additional testing. Using AI to infer these biomarkers from digital images of tissues may improve patient care, while being faster and less expensive. [0052] The salient region detection tool 103 may identify salient regions of one or more digital images to be analyzed. This detection may be performed manually by a human or automatically using AI. An entire image or specific image regions may be considered salient. The image region salient to biomarker detection, e.g., region with a tumor, may take a fraction of an entire image. Regions of interest may be specified by a human expert using an image segmentation mask, a bounding box, or a polygon. Alternatively, or in addition, AI may provide a complete end-to-end solution in identifying locations. Salient region identification may enable the downstream AI system to learn how to detect biomarkers from less annotated data and to make more accurate predictions. Exemplary embodiments may include: (1) strongly supervised methods that identify precisely where the biomarker may be found; and/or (2) weakly supervised methods that may not provide a precise location. During AI training, the strongly supervised system may receive as input, the image and the location of the salient regions that may potentially express the biomarker. These locations may be specified with pixel-level labeling, bounding box-based labeling, polygon-based labeling, and/or using a corresponding image where the saliency has been identified (e.g., using IHC). The weakly supervised system may receive as input, the image or images and the presence/absence of the salient regions. The exact location of the salient location in one or more images may be unspecified when training the weakly supervised system. [0053] The biomarker prediction tool 104 may predict and/or infer biomarker presence using machine learning and/or computer vision. The prediction may be output to an electronic storage device. A notification or visual indicator may be sent/displayed to a user, alerting the user to the presence or absence of one or more of the biomarkers. 
[0061] The salient region identifier module 133 may train a machine learning algorithm that takes, as input, a digital image of a pathology specimen and predicts whether the salient region is present or not. Many methods may be used to learn which regions are salient, including but not limited to: (1) weak supervision: training a machine learning system (e.g., multi-layer perceptron (MLP), convolutional neural network (CNN), graph neural network, support vector machine (SVM), random forest, etc.) using multiple instance learning (MIL) using weak labeling of the digital image or a collection of images; the label may correspond to the presence or absence of a salient region that may express the relevant biomarker; (2) bounding box or polygon-based supervision: training a machine learning system (e.g., region-based CNN (R-CNN), Faster R-CNN, Selective Search) using bounding boxes or polygons that specify the sub-regions of the digital image that are salient for the detection of the presence or absence of the biomarker; (3) pixel-level labeling (e.g., a semantic or instance segmentation): training a machine learning system (e.g., Mask R-CNN, U-Net, Fully Convolutional Neural Network) using a pixel-level labeling, where individual pixels are identified as being salient for the detection of the biomarker; and/or (4) using a corresponding, but different digital image that identifies salient tissue regions—a digital image of tissue that highlights the salient region (e.g., cancer identified using IHC) may be registered with the input digital image. For example, a digital image of an H&E image may be registered/aligned with an IHC image identifying salient tissue (e.g., cancerous tissue where the biomarker should be found), where the IHC may be used to determine the salient pixels based on image color characteristics.

[0062] The target image intake module 134 may receive one or more digital images of a pathology specimen (e.g., histology, cytology, etc.) into a digital storage device (e.g., hard drive, network drive, cloud storage, RAM, etc.). One or more digital images may be divided into sub-regions, and a saliency of one or more sub-regions may be determined (e.g., cancerous tissue for which the biomarker(s) should be identified). Regions may be specified in a variety of methods, including creating tiles of the image, segmentations based on edge/contrast, segmentations via color differences, supervised determination by the machine learning system, and/or EdgeBoxes, etc.

[0063] The salient region prediction module 135 may apply a trained machine learning algorithm to the image/sub-region to predict which regions of the image are salient and may potentially exhibit the biomarker(s) of interest (e.g., cancerous tissue). If a salient region is present, identify and flag the location of the salient region. The salient regions may be detected using a variety of methods, including but not limited to: (1) running the machine learning system on image sub-regions to generate the prediction for one or more sub-regions; and/or (2) using machine learning visualization tools to create a detailed heatmap, e.g., by using class activation maps, GradCAM, etc., and then extracting the relevant regions.

[0065] The training image intake module 136 may receive one or more digital images of a pathology specimen (e.g., histology, cytology, etc.) into a digital storage device (e.g., hard drive, network drive, cloud storage, RAM, etc.), and may receive, for one or more images, the level of a biomarker present (e.g., binary or ordinal value). For example, one or more digital images may be broken into sub-regions. One or more sub-regions may have their saliency determined. Regions may be specified in a variety of methods, including creating tiles of the image, segmentations based on edge/contrast, segmentations via color differences, supervised determination by the machine learning system, and/or EdgeBoxes, etc.

[0066] The salient region identifier module 137 may identify salient regions that may be relevant to biomarker(s) of interest using an AI-based system and/or using manual annotations from an expert. A machine learning algorithm may be trained to predict the expression level of one or more biomarkers from the (salient) image regions. Expression levels may be represented as binary numbers, ordinal numbers, real numbers, etc. Techniques presented herein may be implemented in multiple ways, including but not limited to: CNN, CNN trained with MIL, recurrent neural network (RNN), long-short term memory RNN (LSTM), gated recurrent unit RNN (GRU), graph convolutional network, support vector machine, and/or random forest.

[0067] The target image intake module 138 may receive one or more digital images of a pathology specimen (e.g., histology, cytology, etc.) into a digital storage device (e.g., hard drive, network drive, cloud storage, RAM, etc.), and receive the location of salient region, which may be automatically identified using AI and/or manually specified by an expert.

[0068] The expression level prediction module 139 may apply a machine learning algorithm to provide a prediction of whether the biomarker is present.

Therefore, in view of Kapur, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of input the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of the cell from the data machine learning model, incorporated in the device of Mizukami, in order to output an expression level from image feature regions input into a machine learning model, which can assist in detecting abnormalities that may be difficult to detect (as stated in Kapur ¶ [38]).

However, the combination above fails to specifically teach the features of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model. However, this is well known in the art as evidenced by Khammanivong. Similar to the primary reference, Khammanivong discloses outputting an expression level from a machine learning model (same field of endeavor or reasonably pertinent to the problem). Khammanivong discloses expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model (e.g. the invention discloses processing samples from an RNA sequence to determine particular genes that have a particular expression level. The samples are input into machine learning models in order to isolate the RNA molecules to determine and output the expression levels of the genes, which is taught in ¶ [75]-[81].).

[0075] For each of the plurality of samples of bodily fluid, the processing circuitry of the computing device determines, for each corresponding RNA sequence, whether the RNA sequence is associated with exactly one corresponding gene sequence of a gene signature comprising a plurality of gene sequences and determines, for each sample of the plurality of samples, an approximate number of times that each RNA sequence associated with exactly one corresponding gene of the gene signature occurs in the sample of bodily fluid. Determining the approximate number of times that each such RNA sequence occurs in the sample may provide an indication of an expression level exhibited by the organism of RNAs corresponding to the gene of the gene signature.

[0076] Processing circuitry (e.g., the processing circuitry described above or other processing circuitry) then determines, for each sample of bodily fluid and using one or more machine learning models, a pattern of expression of the plurality of gene sequences of the gene signature associated with the sample of bodily fluid based on the approximate number of times that each RNA sequence associated with exactly one corresponding gene of the gene signature occurs in the sample of bodily fluid. Next, using the one or more machine learning models and for each organism (e.g., subject) of the plurality of organisms, the processing circuitry associates the biological status of the organism with the corresponding pattern of expression of the plurality of gene sequences of the gene signature associated with the sample of bodily fluid from the organism. In this manner, the technique of FIG. 1 may train the one or more machine learning models to determine an unknown biological status of a sample obtained from a test organism, as described below with respect to FIG. 2.

[0077] FIG. 2 is a flow diagram illustrating an example technique for determining a biological status (e.g., a previously unknown biological status) of an organism in accordance with the examples of this disclosure. In some examples, determining an unknown biological state of an organism may include determining a likelihood that the organism has or may develop a particular disease (e.g., osteosarcoma or other cancer) or other physiological condition, a progression or likelihood of progression of an existing disease, other behavior of an existing disease, or other statuses that may be associated with the disease or condition. Expression levels of genes (either absolute or relative to other genes) associated with a gene signature that corresponds to the disease or other condition may enable techniques for blood-based testing to determine such biological states. In order to determine expression levels of genes of a gene signature from a blood sample (or sample of other bodily fluid), RNA molecules may be isolated from exosomes isolated from the sample. Machine learning models may be used to analyze abundances of the RNA molecules having sequences associated with sequences of each of the genes of the gene signature (e.g., by qRT-PCR), which may enable determination of expression levels of the genes for a particular organism, as discussed below with respect to the example technique of FIG. 2.

[0078] In one example, the gene signature (e.g., an exosomal gene signature) is a plurality of genes associated with osteosarcoma: SKA2, NEU1, PAF1, PSMG2, and NOB1. These five genes may be a selected subset of a larger plurality of genes associated with osteosarcoma, such that other genes from the larger plurality of genes may be used to diagnose osteosarcoma in other examples. Determination of expression levels of each of these five genes (e.g., absolute expression levels or expression levels relative to other genes of the gene signature) may enable determination of whether an organism (e.g., a dog, other non-human animal, or human) is at risk for developing osteosarcoma, has osteosarcoma, and/or, in the case of existing osteosarcoma, a likelihood that the disease may progress relatively more or less aggressively. Determining a biological status corresponding to osteosarcoma in an organism by analysis of SKA2, NEU1, PAF1, PSMG2, and NOB1 expression levels may help enable earlier and/or more accurate diagnosis or determinations of prognosis, and in some examples may help inform decisions regarding treatments.

[0079] According to the example of FIG. 2, a plurality of exosomes from a sample of bodily fluid derived from an organism is obtained, such as by using any suitable ones of the laboratory techniques described herein or any other suitable laboratory techniques. In some examples, the plurality of exosomes comprises a plurality of molecules of RNA. Next, the plurality of molecules of RNA are isolated from the exosomes and amplified using any suitable technique, such as a PCR technique.

[0080] For substantially each molecule of the plurality of molecules of RNA, a corresponding RNA sequence is determined. In some examples, the RNA sequences may be determined by processing circuitry of a computing device, such as one or more of the computing devices described below with respect to FIGS. 4 and 5. In some such examples, the computing device may be part of any suitable nucleotide-sequencing system. Next, the processing circuitry of the computing device determines, for each corresponding RNA sequence, whether the RNA sequence is associated with exactly one corresponding gene sequence of a gene signature comprising a plurality of gene sequences and determines an approximate number of times that each RNA sequence associated with exactly one corresponding gene of the gene signature occurs in the sample of bodily fluid. Determining the approximate number of times that each such RNA sequence occurs in the sample may provide an indication of an expression level exhibited by the organism of RNAs corresponding to the gene of the gene signature.

[0081] Processing circuitry (e.g., the processing circuitry described above or other processing circuitry) then analyzes, using one or more machine learning models, the approximate number of times each RNA sequence associated with the exactly one corresponding gene of the gene signature occurs in the sample of bodily fluid and determines a pattern of expression exhibited by the organism of the plurality of gene sequences of the gene signature associated with the sample of bodily fluid based on the approximate number of times that each RNA sequence associated with exactly one corresponding gene of the gene signature occurs in the sample of bodily fluid.
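A minimal sketch of the counting step described in ¶ [0075]-[0081] may help: tally the RNA sequences that map to exactly one gene of the signature, producing the per-gene counts that the machine learning models consume. The toy reads and the sequence-to-gene mapping are invented placeholders; only the five-gene signature comes from ¶ [0078]. This is an illustration, not Khammanivong's implementation.

```python
# Build a pattern-of-expression vector from RNA read counts, per ¶ [0075]-[0081].
from collections import Counter

GENE_SIGNATURE = ("SKA2", "NEU1", "PAF1", "PSMG2", "NOB1")  # from ¶ [0078]

def expression_counts(reads: list[tuple[str, list[str]]]) -> dict[str, int]:
    """Count reads whose RNA sequence maps to exactly one signature gene."""
    counts: Counter = Counter()
    for rna_seq, associated_genes in reads:
        matches = [g for g in associated_genes if g in GENE_SIGNATURE]
        if len(matches) == 1:          # "exactly one corresponding gene sequence"
            counts[matches[0]] += 1
    return {gene: counts[gene] for gene in GENE_SIGNATURE}

# Toy reads: (RNA sequence, genes the sequence is associated with). A read
# mapping to two signature genes is discarded by the exactly-one rule.
reads = [
    ("AUGGCU", ["SKA2"]),
    ("AUGGCU", ["SKA2"]),
    ("CCGUAA", ["NEU1", "PAF1"]),   # ambiguous: dropped
    ("GAUUCG", ["NOB1"]),
]
pattern = expression_counts(reads)   # {'SKA2': 2, 'NEU1': 0, ..., 'NOB1': 1}
print(pattern)                       # input to the downstream ML model
```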
Therefore, in view of Khammanivong, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model, incorporated in the device of Mizukami, as modified by Kapur, in order to output expression levels of RNA types from the machine learning model, which can help reduce the number of samples needed to identify biomarkers of disease (as stated in Khammanivong ¶ [67]). Re claim 2: Mizukami discloses the cell culture evaluation device according to claim 1, wherein the processor is configured to perform control to display the expression level set (e.g. a display is used to display a count of cells, which is taught in ¶ [103]). [0103] The above image processing is performed for the thus obtained bright field image. Specifically, the above image processing process is performed using the captured bright field image as a test image It and living cells as detected parts (Step S404), and the number of the living cells is counted (Step S405). The effects of the drug are determined, such as by obtaining the LC50 value described above from a count result (Step S406), and a determination result is, for example, output to the display unit 115 (Step S407). Further, in the case of continuing observation and regularly evaluating the specimen (Step S408), return is made to Step S403 and the specimen is repeatedly imaged and evaluated. Since the cells can be evaluated only by the bright field image without being processed in this embodiment, such a continuous evaluation is possible. Re claim 9: Mizukami discloses the cell culture evaluation device according to claim 1, wherein, the image machine learning model comprises a compression unit of an autoencoder, the autoencoder including the compression unit (interpretation: The cell image12 is input to the compression unit76. The compression unit76 converts the cell image12 into the image feature amount set55. The compression unit76 transmits the image feature amount set55 to the restoration unit77, which is taught in ¶ [46]. This interpretation and its equivalents are utilized for this claim term hereinafter in the Office Action.) that converts the cell image into the image feature amount set (e.g. the encoder creates characteristic information from the image that is considered as an image feature amount set, which is taught in ¶ [49].), and [0049] FIG. 4 is a diagram showing the operation of a pix2pix algorithm, which is a learning model employed in this embodiment. As shown in FIG. 4, the first learning model 10 includes an image generator 11, a discriminator 12 and an optimizer 13. The image generator 11 includes an encoder 11a for grasping an image characteristic by convolutional layers in a plurality of stages and a decoder 11b for generating an image by performing an inverse operation from the characteristic by inverse convolutional layers in the same number of stages. The second teacher image I2, which is a bright field image, is given as an input image to the image generator 11. The discriminator 12 discriminates from the image characteristic whether or not the input image is a generated image. The first teacher image I1, which is a fluorescence image, is input to the discriminator 12 to perform real learning. On the other hand, a generated image Ig output from the image generator 11 is input to the discriminator 12 to perform fake learning. 
The optimizer 13 adjusts internal parameters of the learning model so that the generated image Ig output by the image generator 11 approximates a real image (first teacher image I1). Learning progresses by repeating this. If learning sufficiently progresses, a function of generating a corresponding pseudo fluorescence image (intermediate image Im) from an unknown bright field image (test image It) is possessed. a restoration unit (interpretation: The restoration unit77 generates a restored image78 of the cell image12 from the image feature amount set55, which is taught in ¶ [47]. This interpretation and its equivalents are utilized for this claim term hereinafter in the Office Action.) that generates a restored image of the cell image from the image feature amount set (e.g. the decoder is used to restore the image from the image characteristic information created from the encoder, which is taught in ¶ [49] above.). Re claim 11: Mizukami discloses the cell culture evaluation device according to claim 9, wherein the autoencoder is trained using a generative adversarial network including a discriminator (interpretation: Then, a discriminator121 determines whether or not the training cell image12L input to the AE75 is the same as the training restored image78L output from the AE75. The discriminator121 outputs a determination result122, which is taught in ¶ [96]. This interpretation and its equivalents are utilized for this claim term hereinafter in the Office Action.) that determines whether or not the cell image is the same as the restored image (e.g. a discriminator is used to determine whether or not the cell image restored is the same as the cell image, which is taught in ¶ [49] above.). Re claim 12: Mizukami discloses the cell culture evaluation device according to claim 9, wherein the autoencoder is trained by inputting morphology-related information of the cell to the restoration unit, in addition to the image feature amount set from the compression unit (e.g. information containing marker information is input into the discriminator along with data that can be considered as characteristic information. The marker information is within the image I2, which is considered as morphology related information of the cell. This is taught in ¶ [45]-[48].). [0045] A substance that selectively emits fluorescence in a specific part of a cell can be, for example, used as a marker. In this case, a fluorescence image of a specimen imaged under excitation light illumination can be the first teacher image I1, and a bright field image of the same specimen imaged under visible light illumination can be the second teacher image I2. Between such first and second teacher images I1, I2, how an object appears in the image can be related when the same object in the specimen is imaged with a marker and imaged without a marker by comparing corresponding positions in the images. [0046] A multitude of such instances are collected and used as teacher data to perform machine learning. Then, it becomes possible to analogize, for example, how an object appearing in an image without a marker would appear if the object is imaged in a state where a marker is expressed. Utilizing this, it is possible to generate an image with a pseudo marker from an image without a marker. Particularly, when deep learning is used as a learning algorithm, it is not necessary to artificially give a feature amount for the analysis of an image. Thus, expert knowledge for appropriately selecting a feature amount according to use is unnecessary. 
In addition, it is possible to construct an optimal learning model excluding a possibility of erroneous determination due to inappropriate selection of the feature amount. [0047] Since various publicly known information materials on the principle of deep learning and the learning algorithm already exist, these are not described in detail here. A deep learning technique usable in this embodiment is not limited to a specific algorithm. Note that a method known as “pix2pix” based on Conditional GAN is, for example, a deep learning technique particularly suitably usable in this embodiment for learning a correlation between paired data using the paired data such as images as an input and an output (reference: Phillip Isola et al., Image-to-image Translation with Conditional Adversarial Networks, CVPR, 21 Nov. 2016, URL: https://arxiv.org/pdf/1611.07004v1.pdf). [0048] FIG. 3 is a flow chart showing a learning process for constructing the first learning model. As described above, the first learning model 10 can be constructed by collecting a multitude of sets of a bright field image and a fluorescence image obtained by imaging the same position of the same specimen (Step S201) and performing deep learning using these as teacher data (Step S202). The teacher images (first teacher image I1 and second teacher image I2) are desirably images obtained by imaging the same type of cells as cells including detected parts to be noticed in the test image It. Re claim 14: The cell culture evaluation device according to claim 1, wherein the image machine learning model comprises a compression unit of a convolutional neural network, the convolutional neural network including the compression unit that converts the cell image into the image feature amount set (e.g. the first learning model is constructed using deep learning using teacher data, which is taught in ¶ [11]. The pix2pix Gan uses Convolution within the encoder and decoder operations, which is inherent to its operation. The pix2pix algorithm is explained in ¶ [47] and [49].), and [0011] In the invention, the first learning model is a learning model constructed by performing deep learning using the teacher data associating the first image in which the marker corresponding to the detected part is expressed and the second image in which the marker is not expressed. Thus, in the already learned first learning model, an image with a marker and an image without a marker are associated with each other. [0047] Since various publicly known information materials on the principle of deep learning and the learning algorithm already exist, these are not described in detail here. A deep learning technique usable in this embodiment is not limited to a specific algorithm. Note that a method known as “pix2pix” based on Conditional GAN is, for example, a deep learning technique particularly suitably usable in this embodiment for learning a correlation between paired data using the paired data such as images as an input and an output (reference: Phillip Isola et al., Image-to-image Translation with Conditional Adversarial Networks, CVPR, 21 Nov. 2016, URL: https://arxiv.org/pdf/1611.07004v1.pdf). [0049] FIG. 4 is a diagram showing the operation of a pix2pix algorithm, which is a learning model employed in this embodiment. As shown in FIG. 4, the first learning model 10 includes an image generator 11, a discriminator 12 and an optimizer 13. 
The image generator 11 includes an encoder 11a for grasping an image characteristic by convolutional layers in a plurality of stages and a decoder 11b for generating an image by performing an inverse operation from the characteristic by inverse convolutional layers in the same number of stages. The second teacher image I2, which is a bright field image, is given as an input image to the image generator 11. The discriminator 12 discriminates from the image characteristic whether or not the input image is a generated image. The first teacher image I1, which is a fluorescence image, is input to the discriminator 12 to perform real learning. On the other hand, a generated image Ig output from the image generator 11 is input to the discriminator 12 to perform fake learning. The optimizer 13 adjusts internal parameters of the learning model so that the generated image Ig output by the image generator 11 approximates a real image (first teacher image I1). Learning progresses by repeating this. If learning sufficiently progresses, a function of generating a corresponding pseudo fluorescence image (intermediate image Im) from an unknown bright field image (test image It) is possessed. an output unit (interpretation: The compression unit151 transmits the image feature amount set153 to the output unit152. The output unit152 outputs an evaluation label154 for the cell13 on the basis of the image feature amount set153. The evaluation label154 is, for example, the quality level of the cell13 or the type of the cell13, which is taught in ¶ [113]. This interpretation and its equivalents are utilized for this claim term hereinafter in the Office Action.) that outputs an evaluation label for the cell on the basis of the image feature amount set (e.g. labeling occurs based on the image output from the image characteristic determined from the encoder. The fluorescent labeling of cells can occur as the method for administering a marker, which is taught in ¶ [49], [50] and [122]-[124].). [0049] FIG. 4 is a diagram showing the operation of a pix2pix algorithm, which is a learning model employed in this embodiment. As shown in FIG. 4, the first learning model 10 includes an image generator 11, a discriminator 12 and an optimizer 13. The image generator 11 includes an encoder 11a for grasping an image characteristic by convolutional layers in a plurality of stages and a decoder 11b for generating an image by performing an inverse operation from the characteristic by inverse convolutional layers in the same number of stages. The second teacher image I2, which is a bright field image, is given as an input image to the image generator 11. The discriminator 12 discriminates from the image characteristic whether or not the input image is a generated image. The first teacher image I1, which is a fluorescence image, is input to the discriminator 12 to perform real learning. On the other hand, a generated image Ig output from the image generator 11 is input to the discriminator 12 to perform fake learning. The optimizer 13 adjusts internal parameters of the learning model so that the generated image Ig output by the image generator 11 approximates a real image (first teacher image I1). Learning progresses by repeating this. If learning sufficiently progresses, a function of generating a corresponding pseudo fluorescence image (intermediate image Im) from an unknown bright field image (test image It) is possessed. 
[0050] Note that although fluorescent labeling is used as a method for administering a marker to a cell here, a mode of the marker is not limited to this. For example, a specific part may be selectively dyed with an appropriate dye. In this case, deep learning is performed using sets of an image of an undyed specimen and an image of a dyed specimen as teacher data. By doing so, it is possible to analogize and generate an image obtained when the specimen is dyed from a newly given undyed image.

[0122] Further, for example, the test image and the second image may be bright field images or phase difference images. Images in which a marker is not expressed can be used as the test image and the second image. Thus, the test image and the second image may be obtained by imaging a specimen into which a marker is not introduced. If bright field images or phase difference images are used as such images, a high affinity for visual observation can be obtained.

[0123] Further, the first image may be, for example, a fluorescence image obtained by imaging fluorescent-labeled cells under excitation light illumination. In a fluorescent labeling technology, many techniques for selectively expressing a marker in specific parts of cells according to a purpose have been developed. By applying such an established technique, aimed detected parts can be reliably detected.

[0124] Here, fluorescent labeling showing different expression modes between living cells and dead cells may be performed. By such a configuration, living cells and dead cells can be discriminated from an image without introducing a marker.

Re claim 15: Mizukami discloses a method for operating a cell culture evaluation device, the method being executed by a processor, the method comprising: acquiring a cell image obtained by imaging a cell that is being cultured (e.g. the invention discloses capturing an image comprised of cells, which is taught in ¶ [38] and [39]. Cultured cells can also be acquired, which is taught in ¶ [80] above.); inputting the cell image to an image machine learning model to output an image feature amount set composed of a plurality of types of image feature amounts related to the cell image from the image machine learning model (e.g. the cell image, or test image, is input into a first learning model. The output of the first learning model includes an intermediate image, which is taught in ¶ [40]-[43] above. The intermediate image includes first and second teacher data, which is taught in ¶ [44] and [45] above. The teacher images contain features, such as markers or colors, that express the details of the living and dead cells, which is taught in ¶ [48]-[53] above.); and inputting the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of the cell from the data machine learning model (e.g. the second learning model accepts the intermediate image and position method image in order to identify the location of a particular cell, which is taught in ¶ [57]-[60] above. The count of the number of cells within an image can be displayed and shown to a user, which is taught in ¶ [61] above. The image of the position of a particular cell or the count of living cells occurs after the intermediate image is output from a second machine learning model, which is taught in ¶ [58]-[60] above.).
However, Mizukami fails to specifically teach the features of inputting the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model. However, this is well known in the art as evidenced by Kapur. Similar to the primary reference, Kapur discloses using AI models to identify regions in an image to output an expression level (same field of endeavor or reasonably pertinent to the problem).

Kapur discloses inputting the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of the cell from the data machine learning model (e.g. a salient region detection tool is used to detect features within an image that are considered as salient regions that can contain biomarkers within the image. The digital stained image can be considered as an image feature amount of data. The salient region detection tool can be considered as a machine learning model, which is taught in ¶ [52] and [61]-[63] above. The biomarkers predicted can be determined from the salient regions sent to the identifier module, which is considered as a machine learning model, that is used to output an expression level of genes in the image. The expression level output, or the biomarker, is explained in ¶ [40], [52], [53], [65] and [66] above.).

Therefore, in view of Kapur, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of inputting the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of the cell from the data machine learning model, incorporated in the device of Mizukami, in order to output an expression level from image feature regions input into a machine learning model, which can assist in detecting abnormalities that may be difficult to detect (as stated in Kapur ¶ [38]).

However, the combination above fails to specifically teach the features of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model. However, this is well known in the art as evidenced by Khammanivong. Similar to the primary reference, Khammanivong discloses outputting an expression level from a machine learning model (same field of endeavor or reasonably pertinent to the problem).

Khammanivong discloses expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model (e.g. the invention discloses processing samples from an RNA sequence to determine particular genes that have a particular expression level. The samples are input into machine learning models in order to isolate the RNA molecules to determine and output the expression levels of the genes, which is taught in ¶ [75]-[81] above.).

Therefore, in view of Khammanivong, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model, incorporated in the device of Mizukami, as modified by Kapur, in order to output expression levels of RNA types from the machine learning model, which can help reduce the number of samples needed to identify biomarkers of disease (as stated in Khammanivong ¶ [67]).
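To visualize the mapped two-stage flow (cell image into an image machine learning model, the resulting image feature amount set into a data machine learning model, RNA expression levels out), a minimal sketch follows. It assumes PyTorch, and both model bodies and all dimensions, including the number of RNA types, are hypothetical placeholders rather than the claimed or prior-art architectures.

```python
import torch
import torch.nn as nn

class ImageModel(nn.Module):
    """Image machine learning model: cell image -> image feature amount set."""
    def __init__(self, n_features=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_features))

    def forward(self, cell_image):
        return self.backbone(cell_image)    # image feature amount set

class DataModel(nn.Module):
    """Data machine learning model: feature set -> expression level set."""
    def __init__(self, n_features=128, n_rna_types=10):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                  nn.Linear(64, n_rna_types))

    def forward(self, feature_set):
        return self.head(feature_set)       # one expression level per RNA type

cell_image = torch.randn(1, 1, 256, 256)    # acquired image of a cultured cell
features = ImageModel()(cell_image)         # first stage
expression_levels = DataModel()(features)   # second stage, shape (1, n_rna_types)
```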
Re claim 16: Mizukami discloses a non-transitory storage medium storing a program that causes a computer to perform a cell culture evaluation processing, the cell culture evaluation processing comprising: acquiring a cell image obtained by imaging a cell that is being cultured (e.g. the invention discloses capturing an image comprised of cells, which is taught in ¶ [38] and [39]. Cultured cells can also be acquired, which is taught in ¶ [80] above.); inputting the cell image to an image machine learning model to output an image feature amount set composed of a plurality of types of image feature amounts related to the cell image from the image machine learning model (e.g. the cell image, or test image, is input into a first learning model. The output of the first learning model includes an intermediate image, which is taught in ¶ [40]-[43] above. The intermediate image includes first and second teacher data, which is taught in ¶ [44] and [45] above. The teacher images contain features, such as markers or colors, that express the details of the living and dead cells, which is taught in ¶ [48]-[53] above.); and inputting the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model (e.g. the second learning model accepts the intermediate image and position method image in order to identify the location of a particular cell, which is taught in ¶ [57]-[60] above. The count of the number of cells within an image can be displayed and shown to a user, which is taught in ¶ [61] above. The image of the position of a particular cell or the count of living cells occurs after the intermediate image is output from a second machine learning model, which is taught in ¶ [58]-[60] above.).

However, Mizukami fails to specifically teach the features of inputting the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model. However, this is well known in the art as evidenced by Kapur. Similar to the primary reference, Kapur discloses using AI models to identify regions in an image to output an expression level (same field of endeavor or reasonably pertinent to the problem).

Kapur discloses inputting the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of the cell from the data machine learning model (e.g. a salient region detection tool is used to detect features within an image that are considered as salient regions that can contain biomarkers within the image. The digital stained image can be considered as an image feature amount of data. The salient region detection tool can be considered as a machine learning model, which is taught in ¶ [52] and [61]-[63] above. The biomarkers predicted can be determined from the salient regions sent to the identifier module, which is considered as a machine learning model, that is used to output an expression level of genes in the image. The expression level output, or the biomarker, is explained in ¶ [40], [52], [53], [65] and [66] above.).
Therefore, in view of Kapur, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of inputting the image feature amount set to a data machine learning model to output an expression level set composed of expression levels of a plurality of types of the cell from the data machine learning model, incorporated in the device of Mizukami, in order to output an expression level from image feature regions input into a machine learning model, which can assist in detecting abnormalities that may be difficult to detect (as stated in Kapur ¶ [38]).

However, the combination above fails to specifically teach the features of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model. However, this is well known in the art as evidenced by Khammanivong. Similar to the primary reference, Khammanivong discloses outputting an expression level from a machine learning model (same field of endeavor or reasonably pertinent to the problem).

Khammanivong discloses expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model (e.g. the invention discloses processing samples from an RNA sequence to determine particular genes that have a particular expression level. The samples are input into machine learning models in order to isolate the RNA molecules to determine and output the expression levels of the genes, which is taught in ¶ [75]-[81] above.).

Therefore, in view of Khammanivong, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of expression levels of a plurality of types of ribonucleic acids of the cell from the data machine learning model, incorporated in the device of Mizukami, as modified by Kapur, in order to output expression levels of RNA types from the machine learning model, which can help reduce the number of samples needed to identify biomarkers of disease (as stated in Khammanivong ¶ [67]).

Claim(s) 3-8 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mizukami, as modified by Kapur and Khammanivong, as applied to claim 1 above, and further in view of Ebisawa (JP Pub 2009-044974 (Pub Date: 3/5/2009)).

Re claim 3: Mizukami discloses the cell culture evaluation device according to claim 1, wherein the processor is configured to: input each of the plurality of cell images to the image machine learning model to output the image feature amount set for each of the plurality of cell images from the image machine learning model (e.g. the cell image, or test image, is input into a first learning model. The output of the first learning model includes an intermediate image, which is taught in ¶ [40]-[43] above. The intermediate image includes first and second teacher data, which is taught in ¶ [44] and [45] above. The teacher images contain features, such as markers or colors, that express the details of the living and dead cells, which is taught in ¶ [48]-[53] above.).

However, Mizukami fails to specifically teach the features of the cell culture evaluation device according to claim 1, wherein the processor is configured to: acquire a plurality of cell images obtained by imaging one culture container, in which the cell is cultured, a plurality of times. However, this is well known in the art as evidenced by Ebisawa. Similar to the primary reference, Ebisawa discloses inputting cell images into a neural network (same field of endeavor or reasonably pertinent to the problem).
Ebisawa discloses wherein the processor is configured to: acquire a plurality of cell images obtained by imaging one culture container, in which the cell is cultured, a plurality of times (e.g. multiple capturing of cultured cells occurs in order to be input into software for creating measured data. This is taught in ¶ [21].), and

[0021] (2) Image Acquisition (a in FIG. 1): Next, the cells of each sample are photographed at two or more different time points in the culture time to obtain images. In the present invention, this time point is referred to as "prediction time". The prediction time can be arbitrarily set, but may be set between 1 hour and 20 days after the start of culture. Specific examples of the prediction time include day 1, day 2, day 3, day 4, day 5, day 6, day 7, day 8, day 9, day 10, day 11, day 12, day 13, day 14, day 15, day 16, day 17, day 18, day 19, and day 20 after the start of culture. In this way, the prediction time may be set in units of seconds, minutes, hours, or weeks instead of days. It is preferable that the prediction time is in the initial stage of culture. This is because, when a prediction model for predicting the quality of cells in the future is constructed, prediction at an early stage is possible. The period from the start of culture to day 7 is referred to as "initial culture".

Therefore, in view of Ebisawa, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of wherein the processor is configured to: acquire a plurality of cell images obtained by imaging one culture container, in which the cell is cultured, a plurality of times, incorporated in the device of Mizukami, in order to acquire a plurality of cultured cell images several times, which allows for higher prediction accuracy of the system using the model and more information using a plurality of images (as stated in Ebisawa ¶ [06], [07] and [77]).

Re claim 4: However, Mizukami fails to specifically teach the features of the cell culture evaluation device according to claim 3, wherein the processor is configured to: aggregate a plurality of the image feature amount sets output for each of the plurality of cell images into a predetermined number of image feature amount sets that are capable of being handled by the data machine learning model, and input the aggregated image feature amount sets to the data machine learning model to output the expression level set for each of the aggregated image feature amount sets from the data machine learning model. However, this is well known in the art as evidenced by Khammanivong. Similar to the primary reference, Khammanivong discloses outputting an expression level from a machine learning model (same field of endeavor or reasonably pertinent to the problem).

Khammanivong discloses wherein the processor is configured to: aggregate a plurality of the image feature amount sets output for each of the plurality of cell images into a predetermined number of image feature amount sets that are capable of being handled by the data machine learning model (e.g. from the sample of the body fluids taken, multiple genes can be acquired from the sample. Genes found can be considered as an image feature amount set.
The invention discloses gathering a plurality of different genes in order to input these genes into a machine learning model to output expression levels of the genes, which is taught in ¶ [75]-[81] above.), and input the aggregated image feature amount sets to the data machine learning model to output the expression level set for each of the aggregated image feature amount sets from the data machine learning model (e.g. the machine learning model receives the genes from the sample and outputs expression levels of the genes in order to detect the prevalence of these genes to determine a biological status corresponding to a disease. This is taught in ¶ [75]-[81] above.).

Therefore, in view of Khammanivong, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of wherein the processor is configured to: aggregate a plurality of the image feature amount sets output for each of the plurality of cell images into a predetermined number of image feature amount sets that are capable of being handled by the data machine learning model, and input the aggregated image feature amount sets to the data machine learning model to output the expression level set for each of the aggregated image feature amount sets from the data machine learning model, incorporated in the device of Mizukami, as modified by Kapur, in order to output expression levels of RNA types from the machine learning model, which can help reduce the number of samples needed to identify biomarkers of disease (as stated in Khammanivong ¶ [67]).
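The claim 4 aggregation idea just mapped (many per-image feature amount sets reduced to a predetermined number that the data machine learning model can accept) can be illustrated with a minimal sketch. Chunked mean pooling is used here purely for illustration; it is an assumed reduction, not the applicant's or Khammanivong's actual method.

```python
import torch

def aggregate(feature_sets: torch.Tensor, target_count: int) -> torch.Tensor:
    """Reduce (n_images, n_features) to (target_count, n_features) by
    mean-pooling roughly equal groups; assumes n_images >= target_count."""
    chunks = torch.chunk(feature_sets, target_count, dim=0)
    return torch.stack([chunk.mean(dim=0) for chunk in chunks])

per_image_features = torch.randn(12, 128)      # 12 images of one culture container
aggregated = aggregate(per_image_features, 4)  # data model accepts 4 sets
assert aggregated.shape == (4, 128)
```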
Re claim 5: Mizukami discloses the cell culture evaluation device according to claim 3, wherein the plurality of cell images include at least one of cell images captured by different imaging methods, or cell images obtained by imaging the cells stained with different dyes (e.g. images of cells can be captured using dyed specimens, which is taught in ¶ [50].).

[0050] Note that although fluorescent labeling is used as a method for administering a marker to a cell here, a mode of the marker is not limited to this. For example, a specific part may be selectively dyed with an appropriate dye. In this case, deep learning is performed using sets of an image of an undyed specimen and an image of a dyed specimen as teacher data. By doing so, it is possible to analogize and generate an image obtained when the specimen is dyed from a newly given undyed image.

Re claim 6: However, Mizukami fails to specifically teach the features of the cell culture evaluation device according to claim 1, wherein the processor is configured to input reference information, which is a reference for the output of the expression level set, to the data machine learning model, in addition to the image feature amount set. However, this is well known in the art as evidenced by Ebisawa. Similar to the primary reference, Ebisawa discloses inputting cell images into a neural network (same field of endeavor or reasonably pertinent to the problem).

Ebisawa discloses wherein the processor is configured to input reference information, which is a reference for the output of the expression level set, to the data machine learning model, in addition to the image feature amount set (e.g. the measured data prepared in step (4) is considered as teacher values, or a reference. The closer the output values are to the measured values, the lower the error in the neural network, which is taught in ¶ [36] and [37] above.).

Therefore, in view of Ebisawa, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of wherein the processor is configured to input reference information, which is a reference for the output of the expression level set, to the data machine learning model, in addition to the image feature amount set, incorporated in the device of Mizukami, in order to provide an information processing model for predicting quality in cell culture through various output indices, which allows for high accuracy prediction of the cultured cells' quality (as stated in Ebisawa ¶ [06], [07] and [77]).

Re claim 7: However, Mizukami fails to specifically teach the features of the cell culture evaluation device according to claim 6, wherein the reference information includes morphology-related information of the cell and culture supernatant component information of the cell. However, this is well known in the art as evidenced by Ebisawa. Similar to the primary reference, Ebisawa discloses inputting cell images into a neural network (same field of endeavor or reasonably pertinent to the problem).

Ebisawa discloses wherein the reference information includes morphology-related information of the cell and culture supernatant component information of the cell (e.g. the breeding ratio represents the morphology information that is calculated as measured data that is sent to the neural network, which is taught in ¶ [36] and [37] above and ¶ [31]-[33]. The morphology of the cell includes the degree of a specific factor or protein, which is taught in ¶ [09]-[14].).

[0009] "The quality of a cell," which serves as a prediction target in the present invention, refers to indices such as the breeding ratio, the remaining division time, the number of remaining divisions (the number of remaining doublings), the degree of differentiation, activity, the ability to produce a specific factor or protein, the degree of abnormality, tumorigenic transformation, the degree of canceration, the risk posed by the contamination ratio of a different-species cell, etc. As described above, the application range of the present invention is wide, and its versatility is high.

[0012] The "degree of differentiation" is an index indicating the differentiation stage of a cell that is expected to differentiate during culture (for example, differentiation from a precursor cell to a mature cell, differentiation from a pluripotent stem cell to a specific cell lineage, or the like). In order to determine the degree of differentiation, the morphology of the cell, the presence or absence of expression or the expression amount of a cell surface marker (protein), and the presence or absence of production or the production amount of a specific factor are used.

[0013] "Activity" is an index indicating the degree of activation of a cell that can be changed to an activated state by a specific factor or the like. In order to determine the activity, the morphology of a cell, the presence or absence of expression or the amount of expression of a cell surface marker, the presence or absence of production or the amount of production of a specific factor, and the like are used.

[0014] The "ability to produce a specific factor, protein, or the like" is an index used for identifying a cell characterized by the presence or absence of production of a specific substance or the amount of production. Examples of the "specific factor" include various cytokines and various hormones. Examples of the "protein" include enzymes and antibodies.
[0031] (4) Preparation of Measured Data (a in FIG. 1): In this step, measured data serving as a prediction target is prepared for each sample. The measured data is used as a teacher value (target output value) in the subsequent fuzzy neural network analysis. The "actual measurement data" is numerical data representing the quality of cells, which is acquired by actually measuring and analyzing each sample according to the prediction target. For example, actual measurement data can be obtained by image analysis as in step (3). However, the measurement and analysis methods are not particularly limited as long as the necessary actual measurement data can be obtained. For example, if the prediction target is "a breeding ratio from the start of culture until a specified time has elapsed," the number of cells in the sample after the specified time has elapsed and the number of cells at the start of culture can be measured, and the measured data can be obtained with the following formula.

(Equation 1) Breeding ratio = (number of cells in the sample after the specified time has elapsed) / (number of cells at the start of culture)

[0032] Instead of using the start time of culture as a reference, a certain time point after the start of culture may be used as a reference, and the proliferation rate from that time point until a predetermined time has elapsed may be used as the "prediction target". In this case, "actual measurement data" can be obtained by the following expression.

(Equation 2) Breeding ratio = (number of cells in the sample after the specified time has elapsed) / (number of cells at a certain time after the start of culture)

[0033] As another example, when the prediction target is the "remaining division time", the culture is continued until the cells do not divide, and the time required from the start of the culture is measured. "Measured data" is obtained by the following equation.

(Equation 3) Remaining division time = required time − (elapsed time from start of culture to prediction time)
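As a quick sanity check on these measured-data formulas, here is a minimal worked example; the cell counts and hours are invented for illustration and do not come from Ebisawa.

```python
def breeding_ratio(cells_after: int, cells_at_reference: int) -> float:
    """Equations 1 and 2: proliferation relative to a reference time point
    (the start of culture, or a later reference point)."""
    return cells_after / cells_at_reference

def remaining_division_time(required_time_h: float, elapsed_to_prediction_h: float) -> float:
    """Equation 3: how long the cells keep dividing past the prediction time."""
    return required_time_h - elapsed_to_prediction_h

assert breeding_ratio(40_000, 10_000) == 4.0            # cells quadrupled
assert remaining_division_time(240.0, 48.0) == 192.0    # hours of division left
```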
Therefore, in view of Ebisawa, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of wherein the reference information includes morphology-related information of the cell and culture supernatant component information of the cell, incorporated in the device of Mizukami, in order to provide an information processing model for predicting quality in cell culture through various output indices that includes morphology information, which allows for high accuracy prediction of the cultured cells' quality (as stated in Ebisawa ¶ [06], [07] and [77]).

Re claim 8: However, Mizukami fails to specifically teach the features of the cell culture evaluation device according to claim 7, wherein the morphology-related information includes at least one of a type, a donor, a confluency, a quality, or an initialization method of the cell. However, this is well known in the art as evidenced by Ebisawa. Similar to the primary reference, Ebisawa discloses inputting cell images into a neural network (same field of endeavor or reasonably pertinent to the problem).

Ebisawa discloses wherein the morphology-related information includes at least one of a type, a donor, a confluency, a quality, or an initialization method of the cell (e.g. the quality of the cell is considered as the breeding ratio that is sent to the neural network, which is taught in ¶ [09] and [31]-[33] above.).

Therefore, in view of Ebisawa, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of wherein the morphology-related information includes at least one of a type, a donor, a confluency, a quality, or an initialization method of the cell, incorporated in the device of Mizukami, in order to provide an information processing model for predicting quality in cell culture through various output indices that includes morphology information, which allows for high accuracy prediction of the cultured cells' quality (as stated in Ebisawa ¶ [06], [07] and [77]).

Re claim 13: However, Mizukami fails to specifically teach the features of the cell culture evaluation device according to claim 12, wherein the morphology-related information includes at least one of a type, a donor, a confluency, a quality, or an initialization method of the cell. However, this is well known in the art as evidenced by Ebisawa. Similar to the primary reference, Ebisawa discloses inputting cell images into a neural network (same field of endeavor or reasonably pertinent to the problem).

Ebisawa discloses wherein the morphology-related information includes at least one of a type, a donor, a confluency, a quality, or an initialization method of the cell (e.g. the quality of the cell is considered as the breeding ratio that is sent to the neural network, which is taught in ¶ [09] and [31]-[33] above.).

Therefore, in view of Ebisawa, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of wherein the morphology-related information includes at least one of a type, a donor, a confluency, a quality, or an initialization method of the cell, incorporated in the device of Mizukami, in order to provide an information processing model for predicting quality in cell culture through various output indices that includes morphology information, which allows for high accuracy prediction of the cultured cells' quality (as stated in Ebisawa ¶ [06], [07] and [77]).

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mizukami, as modified by Kapur and Khammanivong, as applied to claim 9 above, and further in view of Sashida (US Pub 2018/0286040).

Re claim 10: However, Mizukami fails to specifically teach the features of the cell culture evaluation device according to claim 9, wherein the compression unit includes: a plurality of extraction units that are prepared according to a size of an extraction target group in the cell image, each of the plurality of extraction units extracting, using a convolution layer, a target group feature amount set composed of a plurality of types of target group feature amounts for the extraction target group corresponding to the each of the plurality of extraction unit, and a fully connected unit that converts, using a fully connected layer, a plurality of the target group feature amount sets output from the plurality of extraction units into the image feature amount set. However, this is well known in the art as evidenced by Sashida. Similar to the primary reference, Sashida discloses extracting features from cell images (same field of endeavor or reasonably pertinent to the problem).
Sashida discloses wherein the compression unit includes: a plurality of extraction units that are prepared according to a size of an extraction target group in the cell image, each of the plurality of extraction units (interpretation: The plurality of extraction units 85 are prepared according to the size of an extraction target group in the cell image 12. The extraction units 85 extract the target group feature amount maps 87 composed of a plurality of types of target group feature amounts C, D, E, and F for the extraction target groups that each of the extraction units 85 is in charge of, using the convolution layers 90 and 91, which is taught in ¶ [93]. This interpretation and its equivalents are utilized for this claim term hereinafter in the Office Action.) extracting, using a convolution layer, a target group feature amount set composed of a plurality of types of target group feature amounts for the extraction target group corresponding to the each of the plurality of extraction unit (e.g. cells are recognized in a system where scanned images are of a predetermined size. A convolution operation is performed on the scanned images using feature extraction filters, which is taught in ¶ [83]. A plurality of extraction filters is taught in ¶ [98], [99], [107]-[109] and [113]-[115]. The sizes of the feature maps are different, and the extraction filters are set to generate feature amount sets for each size, which is taught in ¶ [128]-[130].), and

[0083] The convolution layer performs convolution on each pixel value of the scanned images of predetermined sizes by using feature extraction filters with preset weights (and biases). The convolution layer sequentially performs convolution for every scan and maps the results. The convolution layer performs convolution for each image that is input from the preceding layer by using the feature extraction filter, and adds the results in the corresponding mapping positions, thereby generating a feature map.

[0098] In the convolutional neural network of the embodiment, bright-field image processing section 30, fluorescence image processing section 40, concatenation section 50, and integrated processing section 60 constitute feature extraction section Na, whereas classification section 70 constitutes classification section Nb.

[0099] Network parameters (weights, biases) of bright-field image processing section 30, fluorescence image processing section 40, integrated processing section 60, and classification section 70, for example, of the convolutional neural network undergo training processing in advance by training section 1c such that classification information, such as the state of cells, is output on the basis of the bright-field images and the fluorescence images.

[0107] [Bright-Field Image Processing Section]

[0108] Bright-field image processing section 30 generates a plurality of feature map data D3 (hereinafter, also referred to as “first feature map group D3”) in which image features of bright-field images D1 have been extracted by hierarchically connected feature extraction layers (hereinafter, also referred to as a “first series of feature extraction layers”), and then outputs first feature map group D3 to concatenation section 50.

[0109] Bright-field image processing section 30 has a similar configuration to a common CNN described with reference to FIG. 5, and is composed of a plurality of hierarchically connected feature extraction layers 30a . . . 30n (n represents an optional number of layers).
Bright-field image processing section 30 performs feature extraction processing, such as convolution, of input data from the preceding layer by each feature extraction layer 30a . . . 30n, and outputs feature maps to the following layer.

[0113] [Fluorescence Image Processing Section]

[0114] Fluorescence image processing section 40 generates a plurality of feature map data D4 (hereinafter, also referred to as “second feature map group D4”) in which image features of fluorescence images D2 are extracted by hierarchically connected feature extraction layers 40a . . . 40k (hereinafter, also referred to as a “second series of feature extraction layers”), and outputs the second feature map group D4 to concatenation section 50.

[0115] Fluorescence image processing section 40 has a similar configuration to a common CNN described with reference to FIG. 5, and is composed of a plurality of hierarchically connected feature extraction layers 40a . . . 40k (representing an optional number of layers). Fluorescence image processing section 40 performs feature extraction processing, such as convolution, of input data from the preceding layer in respective feature extraction layers 40a . . . 40k, and then outputs feature maps to the following layer.

[0128] Concatenation section 50 concatenates, for example, first feature map group D3 of the bright-field images and second feature map group D4 of the fluorescence images as different channels. For example, suppose that the number of the feature maps of first feature map group D3 is 100 and the number of the feature maps of second feature map group D4 is 150, concatenation section 50 concatenates these feature maps to create 250 feature maps. Through processing of concatenation section 50, first feature map group D3 and second feature map group D4 are correlated with each other in every pixel region.

[0129] More preferably, concatenation section 50 matches image sizes of first feature map group D3 and second feature map group D4, and then outputs the matched results to integrated processing section 60 in a later stage. As described above, bright-field image processing section 30 and fluorescence image processing section 40 are different in the number of feature extraction layers, and consequently the image size of first feature map group D3 and the image size of second feature map group are different. If the sizes are not matched, there is a risk of failure in which integrated processing section 60 in a later stage cannot correlate first feature map group D3 to second feature map group D4 and vice versa in every pixel region.

[0130] In view of the above, concatenation section 50 matches image sizes of first feature map group D3 and second feature map group D4 by upscaling the image sizes of first feature map group D3 to the image sizes of second feature map group D4 using deconvolution or bilinear interpolation, for example. Through such matching, first feature map group D3 and second feature map group D4 are correlated in every pixel region, thereby enhancing accuracy in processing in a later stage.

a fully connected unit (interpretation: The fully connected unit 86 converts a plurality of target group feature amount maps 87 output from the plurality of extraction units 85 into the image feature amount set 55, using the fully connected layer 95, which is taught in ¶ [93]. This interpretation and its equivalents are utilized for this claim term hereinafter in the Office Action.) that converts, using a fully connected layer, a plurality of the target group feature amount sets output from the plurality of extraction units into the image feature amount set (e.g. the concatenation section converts the various outputs from the previous extraction units comprised of feature map groups into a feature amount set, which is taught in ¶ [128]-[130] above.).
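Sashida's concatenation and size-matching steps ([0128]-[0130]), followed by a fully connected conversion into a single feature set, can be pictured with a minimal sketch. It assumes PyTorch; the 100 and 150 channel counts follow the reference's own example, while the spatial sizes and the output width are invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

bright_maps = torch.randn(1, 100, 16, 16)   # first feature map group D3
fluor_maps = torch.randn(1, 150, 32, 32)    # second feature map group D4

# [0130]: upscale D3 to D4's spatial size (bilinear interpolation here) so
# the two groups can be correlated in every pixel region.
bright_up = F.interpolate(bright_maps, size=fluor_maps.shape[-2:],
                          mode="bilinear", align_corners=False)

# [0128]: concatenate the groups as different channels -> 250 feature maps.
concatenated = torch.cat([bright_up, fluor_maps], dim=1)
assert concatenated.shape[1] == 250

# Fully connected unit of the claim: flatten the concatenated maps and
# convert them into one image feature amount set (output width assumed).
flattened = concatenated.flatten(1)
fc = nn.Linear(flattened.shape[1], 128)
image_feature_amount_set = fc(flattened)    # shape (1, 128)
```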
Therefore, in view of Sashida, it would have been obvious to one of ordinary skill at the time the invention was made to have the feature of wherein the compression unit includes: a plurality of extraction units that are prepared according to a size of an extraction target group in the cell image, each of the plurality of extraction units extracting, using a convolution layer, a target group feature amount set composed of a plurality of types of target group feature amounts for the extraction target group corresponding to the each of the plurality of extraction unit, and a fully connected unit that converts, using a fully connected layer, a plurality of the target group feature amount sets output from the plurality of extraction units into the image feature amount set, incorporated in the device of Mizukami, as modified by Kapur and Khammanivong, in order to use extraction units to extract feature amounts and an output unit to convert the received data into an image feature amount set, which allows for accurate identification of the type and morphology of cells (as stated in Sashida ¶ [13]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hattori discloses extraction units used to extract cell images.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAD S DICKERSON whose telephone number is (571)270-1351. The examiner can normally be reached Monday-Friday 10AM-6PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abderrahim Merouan, can be reached on 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHAD DICKERSON/
Primary Examiner, Art Unit 2682

Prosecution Timeline

Sep 27, 2022: Application Filed
Jan 25, 2025: Non-Final Rejection — §103
Apr 09, 2025: Response Filed
Jul 12, 2025: Non-Final Rejection — §103
Nov 17, 2025: Response Filed
Feb 24, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602908
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12603960
IMAGE ANALYSIS APPARATUS, IMAGE ANALYSIS SYSTEM, IMAGE ANALYSIS METHOD, PROGRAM, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM COMPRISING READING A PRINTED MATTER, ANALYZING CONTENT RELATED TO READING OF THE PRINTED MATTER AND ACQUIRING SUPPORT INFORMATION BASED ON AN ANALYSIS RESULT OF THE CONTENT FOR DISPLAY TO ASSIST A USER IN FURTHER READING OPERATIONS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12579817
Vehicle Control Device and Control Method Thereof for Camera View Control Based on Surrounding Environment Information
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12522110
APPARATUS AND METHOD OF CONTROLLING THE SAME COMPRISING A CAMERA AND RADAR DETECTION OF A VEHICLE INTERIOR TO REDUCE A MISSED OR FALSE DETECTION REGARDING REAR SEAT OCCUPATION
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12519896
IMAGE READING DEVICE COMPRISING A LENS ARRAY INCLUDING FIRST LENS BODIES AND SECOND LENS BODIES, A LIGHT RECEIVER AND LIGHT BLOCKING PLATES THAT ARE BETWEEN THE LIGHT RECEIVER AND SECOND LENS BODIES, THE THICKNESS OF THE LIGHT BLOCKING PLATES EQUAL TO OR GREATER THAN THE SECOND LENS BODIES THICKNESS
Granted Jan 06, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 63%
With Interview: 86% (+23.0%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 600 resolved cases by this examiner. Grant probability derived from career allow rate.
