Prosecution Insights
Last updated: April 19, 2026
Application No. 18/065,162

Microscopy Virtual Staining Systems and Methods

Non-Final OA (§103, §112)

Filed: Dec 13, 2022
Examiner: RODRIGUEZ, ANTHONY JASON
Art Unit: 2672
Tech Center: 2600 (Communications)
Assignee: Georgia Tech Research Corporation
OA Round: 3 (Non-Final)

Grant Probability: 17% (At Risk)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: -5%

Examiner Intelligence

Career Allow Rate: 17% (3 granted / 18 resolved; -45.3% vs TC avg)
Interview Lift: -21.4% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 2m (47 currently pending)
Total Applications: 65 (across all art units)

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)

Tech Center averages are estimates; based on career data from 18 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered. Election/Restrictions Applicant’s election of claims 1-5, 7-9, and 11-20, through the cancellation of claim 6, in the reply filed on 12/10/2025 is acknowledged. Because applicant did not distinctly and specifically point out the supposed errors in the restriction requirement, the election has been treated as an election without traverse (MPEP § 818.01(a)). Response to Arguments Applicant’s arguments, see Remarks pages 1-5, filed 12/10/2025, with respect to the rejections of claims 1-5, 7-9, 14-18, and 20 under 35 U.S.C. 101 have been fully considered and are persuasive. The rejections of claims 1-5, 7-9, 14-18, and 20 have been withdrawn. Applicant’s arguments, see Remarks page 5, filed 12/10/2025, with respect to the rejections of claims 1-5, 7-9, 14-18, and 20 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejections of claims 1-5, 7-9, 14-18, and 20 have been withdrawn. Applicant's arguments, see Remarks pages 5-11, filed 12/10/2025, with respect to the rejection of amended claim 1 under 35 U.S.C. 103 have been fully considered but they are not persuasive. On Pages 6-7, Applicant argues: [argument reproduced as an image in the Remarks; not transcribed] Examiner respectfully disagrees. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., the acquiring and usage of images acquired at only one wavelength) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Page 14780: Col 1: Para 3 of Ojaghi discloses “Our multispectral deep-UV microscope enables fast imaging of live, unstained cells at different discrete UV wavelengths...As depicted in Fig. 1A, the system uses a broad-band, laser-driven plasma source and a set of band-pass filters to tune the imaging wavelength to the absorption peaks of major biochemical components in human cells (more details in Materials and Methods). Here we use 260 nm and 280 nm, which correspond to absorption peaks from nucleic acids and proteins, respectively (29, 30, 34). We also acquire images with a center wavelength of 300 nm,” wherein, as shown in Figure 1, 3 grayscale images are acquired from the deep-UV microscope, wherein each image is acquired according to a respective wavelength.
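For orientation, the pseudocolorization scheme quoted from Ojaghi reduces to stacking the three single-wavelength grayscale frames into the channels of one RGB image. A minimal NumPy sketch of that step, assuming registered frames already normalized to [0, 1] (the function and argument names are illustrative, not drawn from the record):

import numpy as np

def pseudocolorize(img_260nm: np.ndarray,
                   img_280nm: np.ndarray,
                   img_300nm: np.ndarray) -> np.ndarray:
    # Stack three (H, W) single-channel UV frames into an (H, W, 3) RGB
    # image, following the channel assignment quoted from Ojaghi:
    # 260 nm -> red, 280 nm -> green, 300 nm -> blue.
    return np.stack([img_260nm, img_280nm, img_300nm], axis=-1)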
Since the language of the claim limitation requires “using a deep UV microscope to obtain one or more single-channel UV images acquired at a single wavelength of a biological sample,” the claim limitation does not limit the obtaining or usage of the single-channel UV images to only images acquired at a single wavelength, but rather simply requires the obtaining and usage of at least one “single-channel UV images acquired at a single wavelength of a biological sample.” Thus, Ojaghi discloses the limitation “using a deep UV microscope to obtain one or more single-channel UV images acquired at a single wavelength of a biological sample, wherein the biological sample comprises cells from blood or bone marrow.” On Pages 7-9 of Remarks, Applicant argues: [argument reproduced as an image in the Remarks; not transcribed] Applicant’s arguments have been fully considered and are moot in view of the new grounds of rejection of Claim 1 (detailed in the rejections below) necessitated by Applicant’s amendments to the claim(s). On Page 9 of Remarks, Applicant argues: [argument reproduced as an image in the Remarks; not transcribed] Applicant’s arguments have been fully considered and are moot in view of the new grounds of rejection of Claim 1 (detailed in the rejections below) necessitated by Applicant’s amendments to the claim(s). On Pages 10-11 of Remarks, Applicant argues: [argument reproduced as an image in the Remarks; not transcribed] Examiner respectfully disagrees. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “A pseudo-RGB image is generated for each cell, composed of the UV image of the masked cell (placed in the green channel), the nucleus mask (placed in the red channel), and the cell mask (placed in the blue channel). This channel composition directs the pretrained ResNet-18 to focus on nuclear and cytoplasmic features while suppressing background”) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Paragraph 0006 of Song discloses “In one embodiment, a plurality of training WSIs, e.g., labeled hematoxylin and eosin (H&E)-stained whole slide images each corresponding to a patient, is obtained…The classifier model is trained using the fixed-size feature maps corresponding to the plurality of training WSIs, and a classification engine is configured to use the trained classifier model to determine a WSI-level tissue or cell morphology classification or regression for a test WSI,” wherein stained slide images are processed by a classifier model, which is trained for the classification of cells within the images. In addition, Paragraph 0048 of Song discloses that the classifier model is a deep learning neural network model. Thus, Song discloses the limitation “classifying or characterizing cells in the biological sample using a second deep learning neural network separate from the first deep learning neural network.” Claim Objections Claim 2 is objected to because of the following informalities: The green-red and blue-yellow color value ranges should each be corrected from “between 127 and +127” to “between -127 and +127.” Appropriate correction is required. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers. The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph: Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers. Claims 3 & 15 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 3 discloses the limitations “the lightness values in the first data set are between 0 and 100; the green-red values in the second data set are between -127 and +127; and the blue-yellow values in the third data set are between -127 and +127,” which fail to further limit the claim 2 limitations pertaining to the lightness, green-red, and blue-yellow value ranges. Claim 15 discloses the limitation “the first deep learning neural network is a generative adversarial network,” which fails to further limit the claim 1 limitation “a first deep learning neural network comprising a conditional generative adversarial network (cGAN) having a generator with a U-net architecture comprising encoding and decoding paths with skip connections.” Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 4-5, 8-9, 11, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Ojaghi et al. (Label-free hematology analysis using deep-ultraviolet microscopy) hereinafter referenced as Ojaghi, in view of Li et al. (Deep Learning for Virtual Histological Staining of Bright-Field Microscopic Images of Unlabeled Carotid Artery Tissue) hereinafter referenced as Li, Roy et al. (Novel Color Normalization Method for Hematoxylin & Eosin Stained Histopathology Images) hereinafter referenced as Roy, and Song et al. (US 20220180626 A1) hereinafter referenced as Song.
Regarding claim 1, Ojaghi discloses: A method comprising: using a deep UV microscope to obtain one or more single-channel UV images acquired at a single wavelength of a biological sample (Ojaghi: Page 14780: Col 1: Para 3: “Our multispectral deep-UV microscope enables fast imaging of live, unstained cells at different discrete UV wavelengths... As depicted in Fig. 1A, the system uses a broad-band, laser-driven plasma source and a set of band-pass filters to tune the imaging wavelength to the absorption peaks of major biochemical components in human cells (more details in Materials and Methods). Here we use 260 nm and 280 nm, which correspond to absorption peaks from nucleic acids and proteins, respectively (29, 30, 34). We also acquire images with a center wavelength of 300 nm”; Page 14780: Col 2: Para 3: “In this approach, we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig. 1B).”; Wherein each UV image captured at a specific wavelength is a single channel image.), wherein the biological sample comprises cells from blood or bone marrow (Ojaghi: Abstract: “we introduce a pseudocolorization scheme that accurately recapitulates the appearance of cells under conventional staining protocols for microscopic analysis of blood smears and bone marrow aspirates.”); generating a virtually stained image of the biological sample comprising: generating a first data set for one or more of the single-channel UV images, the first data set comprising at least one data value for pixels of one or more of the single channel UV images (Ojaghi: Figure 1 B; Page 14780: Col 1: Para 3: “Here we use 260 nm and 280 nm, which correspond to absorption peaks from nucleic acids and proteins, respectively (29, 30, 34). We also acquire images with a center wavelength of 300 nm,”; Wherein the UV images taken constitute the first dataset); processing the first data set to generate one or more additional data sets, each additional data set comprising at least one data value corresponding to a value in a color model for the pixels in one or more of the single-channel UV images (Ojaghi: Figure 1B; Page 14780: Col 2: Para 3: “we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig. 1B).”; Wherein the processing of each UV single channel image to a single RGB channel image constitutes an additional dataset); and creating virtually stained image of the biological sample using at least one or more of the additional data sets (Ojaghi: Page 14788: Col 1: Para 4: “Stitching of pseudocolorized images was performed using the Grid/Collection stitching plugin (49) of the Fiji (50) software, which calculates the overlap between each tile and linearly blends them into a single wide-field image.”); and classifying or characterizing cells in the biological sample using a machine learning algorithm (Ojaghi: Page 14782: Col 1: Para 2: “To complete the five-part cell differential analysis, we employed a machine learning algorithm, using support-vector machine (SVM) learning, trained using the extracted features from granulocytes.
We evaluated the trained SVM model according to a fivefold cross-validation scheme which yields an accuracy of 98.3%, sensitivity of 95%, and specificity of 100% for classification of granulocyte subtypes (i.e., neutrophils, eosinophils, and basophils) using all features from the three UV wavelengths.”). Ojaghi does not disclose expressly: inputting the first data set into a first deep learning neural network comprising a conditional generative adversarial network (cGAN) having a generator with a U-net architecture comprising encoding and decoding paths with skip connections to generate one or more additional data sets. Li discloses: a method for generating a virtually stained image of a biological sample, wherein the method comprises: inputting a data set into a deep learning neural network comprising a conditional generative adversarial network (cGAN) (Li: Section: Conclusions: “we have developed a deep learning-based virtual staining method that transformed bright-field microscopic images of unlabeled tissue sections into their corresponding images of histological staining of the same samples using a conditional generative adversarial network model.”) having a generator with a U-net architecture comprising encoding and decoding paths with skip connections to generate one or more additional data sets (Li: Fig. 2: “Architecture of virtual staining cGAN. The generator consists of eight convolution layers of stride two that are each followed by a batch-norm module to avoid overfitting of the network. The eight upsampled sections are followed by the deconvolutional layers to increase the number of channels. Each upsampling section contains a deconvolution layer upsampled by stride two. Skip connections are used to share data between layers of the same level.”; Section: Conditional Generative Adversarial Network Architecture: “In this work, the generator D and the discriminator G of the cGAN comprised a U-net architecture [25] and of PatchGAN [26], respectively, as shown in Fig. 2.”; Wherein the processing of each image to a stained image constitutes an additional dataset). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique of utilizing a cGAN for the generation of virtually stained images as taught by Li into Ojaghi by generating each channel image through the use of a cGAN. The suggestion/motivation for doing so would have been “The results of a blind evaluation by board-certified pathologists illustrate that the virtual staining and standard histological staining images of rat carotid artery tissue sections and those involving different types of stains showed no major differences…This virtual staining method significantly mitigates the typically laborious and time consuming histological staining procedures and could be augmented with other label-free microscopic imaging modalities.” (Li: Abstract; Wherein the cGAN network allows for quick and accurate results while allowing for different imaging modalities). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Ojaghi in view of Li does not disclose expressly: post-processing the virtually stained image with a histogram operation to alter a background hue in the virtually stained image. 
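As a concrete picture of the generator topology Li is cited for (stride-2 encoding, deconvolutional decoding, and skip connections between same-level layers), the following PyTorch sketch is illustrative only; the depth, channel counts, and activations are assumptions for brevity rather than Li's exact eight-layer network:

import torch
import torch.nn as nn

class TinyUNetGenerator(nn.Module):
    # Illustrative U-net generator: stride-2 convolutions encode, transposed
    # convolutions decode, and a skip connection concatenates encoder features
    # into the decoder at the same spatial level (cf. Li's Fig. 2 description).
    def __init__(self, in_ch=1, out_ch=3, base=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1),
                                  nn.BatchNorm2d(base), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                  nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1),
                                  nn.BatchNorm2d(base), nn.ReLU())
        # Decoder input channels are doubled by the concatenated skip connection.
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1),
                                  nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)                # (B, base, H/2, W/2)
        e2 = self.enc2(e1)               # (B, 2*base, H/4, W/4)
        d1 = self.dec1(e2)               # (B, base, H/2, W/2)
        d1 = torch.cat([d1, e1], dim=1)  # skip connection across the same level
        return self.dec2(d1)             # (B, out_ch, H, W)

In a cGAN of the kind Li describes, a generator of this shape would be trained against a patch-based discriminator on paired unstained/stained images.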
Roy discloses: the processing of a stained image with a histogram operation to alter a background hue in the stained image (Roy: Abstract: “In this paper, a novel color normalization method is proposed for Hematoxylin and Eosin stained histopathology images. Conventional Reinhard algorithm is modified in our proposed method by incorporating fuzzy logic. Moreover, mathematically, it is proved that our proposed method satisfies all three hypotheses of color normalization.”; Section: A. Correlation Co-Efficient: “Color normalization method must preserve all the information of the source image, according to our pathologists’ group. This information preservation can be measured by estimating correlation co-efficient between source image and processed image… The first hypothesis of color normalization method is given in equation (4)… Equation (7) reveals that the shape of the normalized histogram (pdf) of source image will remain unchanged in the processed image, since only the magnitude of pdf is scaled by a real constant c…we believe that correlation coefficient is more realistic metric than discrete entropy for evaluating the preservation of source information.”; Section: B. Global Mean Color in Color Space: “The second hypothesis of color normalization method is that in any color normalization method, global mean color (background color) of processed image should be equal to global mean color of target image. In other words, in color space (αβ space), μtar ≈ μproc (8), where μtar is the mean color of target image, μproc is the mean color of processed image.”; Wherein the proposed color normalization method satisfying the first and second hypotheses of color normalization constitutes a histogram operation to alter a background hue). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the method for color normalization taught by Roy on the stitched together RGB images disclosed by Ojaghi in view of Li. The suggestion/motivation for doing so would have been "color variation in the CAD system is inevitable due to the variability of stain concentration and manual tissue sectioning. The small variation in color may lead to the misclassification of cancer cells. Therefore, color normalization is a very much essential step prior to segmentation and classification in order to reduce the inter-variability of background color among a set of source images" (Roy: Abstract; Wherein color normalization improves the accuracy of subsequent processing methods). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Ojaghi in view of Li and Roy does not disclose expressly: classifying or characterizing cells in the biological sample using a second deep learning neural network separate from the first deep learning neural network. Song discloses: the classifying of cells in a biological sample using a deep learning model (Song: 0048: “FIG. 5 illustrates a block diagram of example operations for determining tissue or cell morphology classifications or regressions based on whole slide images in accordance with an embodiment. In system 500, a deep learning neural network model trained using fixed-size feature maps that allows for the analysis of WSI characteristics,”).
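For background on the histogram operation attributed to Roy above: Roy modifies the Reinhard algorithm, whose baseline step matches the global per-channel mean and standard deviation of a source image to a target image, which is what shifts the global (background) hue. A generic sketch of that baseline step only, not Roy's fuzzy-logic variant (array names are illustrative):

import numpy as np

def reinhard_transfer(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    # src, tgt: (H, W, 3) float arrays, ideally expressed in a decorrelated
    # color space (e.g., l-alpha-beta or LAB) rather than raw RGB.
    out = src.astype(np.float64).copy()
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        # Match source channel statistics to the target; shifting the channel
        # means moves the global mean color, i.e., the background hue.
        out[..., c] = (out[..., c] - s_mu) * (t_sd / (s_sd + 1e-8)) + t_mu
    return out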
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to substitute the SVM machine learning algorithm disclosed in Ojaghi in view of Li and Roy with the deep learning algorithm disclosed in Song. The suggestion/motivation for doing so would have been “The various embodiments provide for a classifier model to be trained to determine a WSI-level tissue and/or cell morphology classification or regression using deep learning methods based on a limited set of training pathology slide images.” (Song: 0043: Wherein the model is able to be trained based on limited training data). Further, one skilled in the art could have substituted the elements as described above by known methods with no change in their respective functions, and the substitution would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ojaghi in view of Li and Roy with Song to obtain the invention as specified in claim 1. Regarding claim 4, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 1, wherein the additional data sets comprise: a second data set representing pixels in one or more of the single-channel UV images with a red value in a RGB color model; a third data set representing pixels in one or more of the single-channel UV images with a blue value in the RGB color model; and a fourth data set representing pixels in one or more of the single-channel UV images with a green value in the RGB color model (Ojaghi: Figure 1B; Page 14780: Col 2: Para 3: “In this approach, we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig. 1B).”; Page 14788: Col 1: Para 4: “Stitching of pseudocolorized images was performed using the Grid/Collection stitching plugin (49) of the Fiji (50) software”; Wherein each single RGB channel image from a single channel UV image constitutes an additional dataset). Regarding claim 5, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 1, further comprising converting at least one of the data values in one or more of the additional data sets from a first color model to a second color model (Ojaghi: Page 14788: Col 1: Para 3: “Finally, the colorized images were transformed to the HSV (hue, saturation, and value) color space, a constant hue offset of +0.05 was applied, and they were converted back to RGB color space.”; Wherein each RGB channel image was converted to the HSV color model and back to the RGB color space.). Regarding claim 8, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 1 further comprising: virtually staining one or more of the single-channel UV images using the first deep learning neural network (Ojaghi: Figure 1B; Page 14780: Col 2: Para 3: “we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig. 1B).”; Wherein the UV single channel images are converted to single RGB channel images using a cGAN, as taught by Li.); and displaying the virtually stained image (Ojaghi: Figure 6A; Page 14786: Col 1: “Common smear artifacts in the transition region from the monolayer to the feathered region where RBCs lose their biconcave shape (SI Appendix, Fig.
S5) are observed in both the label-free UV and stained images…the presence of halo-like edge effects in UV images did not strongly affect visual inspection and was only noted by the hematologists at a rate of 1.83% (2 evaluations out of 109).”; Page 14788: Col 1: Para 4: “Stitching of pseudocolorized images was performed using the Grid/Collection stitching plugin (49) of the Fiji (50) software, which calculates the overlap between each tile and linearly blends them into a single wide-field image.”; Wherein the single RGB channel images are stitched together to form a virtually stained image, which is then viewed and compared to the stained images.); wherein: the one or more single-channel UV images are single-channel UV grayscale images (Ojaghi: Page 14780: Col 1: Para 3: “Our multispectral deep-UV microscope enables fast imaging of live, unstained cells at different discrete UV wavelengths... As depicted in Fig. 1A, the system uses a broad-band, laser-driven plasma source and a set of band-pass filters to tune the imaging wavelength to the absorption peaks of major biochemical components in human cells (more details in Materials and Methods). Here we use 260 nm and 280 nm, which correspond to absorption peaks from nucleic acids and proteins, respectively (29, 30, 34). We also acquire images with a center wavelength of 300 nm”; Wherein each UV image captured at a specific wavelength is a single channel image.); and the classifying or characterizing comprises: generating, from one or more of the single-channel UV grayscale images, a first mask representative of cells in the biological sample; generating, from one or more of the single-channel UV grayscale images, a second mask representative of nuclei in the biological sample (Ojaghi: Figure 2; Page 14780: Col 2: Para 4: “As shown in Fig. 2, our approach enables us to produce colorized images that faithfully recapitulate features of significant importance for blood cell phenotyping and differentiation using traditional staining protocols with bright-field microscopy… absorption of nucleic acids in leukocyte nuclei gives rise to the well-known distinctive violet color observed in Giemsa-stained images. In addition to nuclear contrast, our UV images exhibit key cytoplasmic color differences which mainly stem from the different levels of protein (SI Appendix, Fig. S1).”; Wherein the virtually stained UV images constitute the masks); generating, based on one or more of the single-channel UV grayscale images and the first and second masks, a feature vector; and classifying or characterizing, using the first and second masks and the feature vector, cells in the biological sample (Song: 0006: “a plurality of training WSIs, e.g., labeled hematoxylin and eosin (H&E)-stained whole slide images each corresponding to a patient, is obtained…A varied-size feature map is generated for each of the plurality of training WSIs by generating a grid of patches for the training WSI, segmenting the training WSI into tissue and non-tissue areas, and converting patches comprising the tissue areas into tensors, e.g., multidimensional descriptive vectors comprising RGB components…A fixed-size feature map is generated based on at least a subset of the feature map patches, which may be randomly selected and/or arranged randomly within the fixed-size feature map.
The fixed-size feature map may comprise one of a (256, 256, 512) or (224, 224, 512) feature map…The classifier model is trained using the fixed-size feature maps corresponding to the plurality of training WSIs, and a classification engine is configured to use the trained classifier model to determine a WSI-level tissue or cell morphology classification or regression for a test WSI.”). Regarding claim 9, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 1, wherein the one or more single-channel UV images are acquired at a center wavelength of 250-265 nm (Ojaghi: Figure 1; Page 14780: Col 2: Para 3: “In this approach, we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig. 1B).”; Wherein the captured images comprise 260nm images.). Regarding claim 11, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 1, wherein classifying or characterizing the cells using the second deep learning neural network comprises: generating, from one or more of the single-channel UV images, a first mask representative of cells in the biological sample; generating, from one or more of the single-channel UV images, a second mask representative of nuclei in the biological sample (Ojaghi: Figure 2; Page 14780: Col 2: Para 4: “As shown in Fig. 2, our approach enables us to produce colorized images that faithfully recapitulate features of significant importance for blood cell phenotyping and differentiation using traditional staining protocols with bright-field microscopy… absorption of nucleic acids in leukocyte nuclei gives rise to the well-known distinctive violet color observed in Giemsa-stained images. In addition to nuclear contrast, our UV images exhibit key cytoplasmic color differences which mainly stem from the different levels of protein (SI Appendix, Fig. S1).”; Wherein the virtually stained UV images constitute the masks); generating, based on one or more of the single-channel UV images and the first and second masks, a feature vector; and classifying or characterizing, using the first and second masks and the feature vector, cells in the biological sample (Song: 0006: “a plurality of training WSIs, e.g., labeled hematoxylin and eosin (H&E)-stained whole slide images each corresponding to a patient, is obtained…A varied-size feature map is generated for each of the plurality of training WSIs by generating a grid of patches for the training WSI, segmenting the training WSI into tissue and non-tissue areas, and converting patches comprising the tissue areas into tensors, e.g., multidimensional descriptive vectors comprising RGB components…A fixed-size feature map is generated based on at least a subset of the feature map patches, which may be randomly selected and/or arranged randomly within the fixed-size feature map. The fixed-size feature map may comprise one of a (256, 256, 512) or (224, 224, 512) feature map …The classifier model is trained using the fixed-size feature maps corresponding to the plurality of training WSIs, and a classification engine is configured to use the trained classifier model to determine a WSI-level tissue or cell morphology classification or regression for a test WSI.”). 
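As a schematic of the patch-to-feature-vector mapping in Song's paragraph 0006, and of the 512-feature vector at issue in claim 13 below, the following sketch uses torchvision's pretrained resnet34, the backbone Song names as an example; the patch size, preprocessing, and weights selection here are assumptions:

import torch
import torchvision.models as models

# Illustrative only: obtain one 512-dimensional feature vector per RGB tissue
# patch from a pretrained ResNet-34 backbone, per Song's resnet34 example.
backbone = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head; output is 512-d
backbone.eval()

with torch.no_grad():
    patch = torch.rand(1, 3, 224, 224)  # stand-in for one preprocessed patch
    feature_vector = backbone(patch)    # shape: (1, 512)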
Regarding claim 13, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 11, wherein the feature vector comprises 512 features (Song: 0006: “An RGB component of the image patch may be converted into a feature vector, e.g., a 512-feature vector for a resnet34 deep-learning neural network. However, the feature vector is not limited to a 512-feature vector or a particular deep learning model. At least one bounding box is generated based on the patches comprising the tissue areas. The at least one bounding box is segmented into feature map patches. A fixed-size feature map is generated based on at least a subset of the feature map patches, which may be randomly selected and/or arranged randomly within the fixed-size feature map. The fixed-size feature map may comprise one of a (256, 256, 512) or (224, 224, 512) feature map. A classifier model is configured to process fixed-size feature maps corresponding to the training WSIs such that, for each fixed-size feature map, the classifier model is operable to assign a WSI-level tissue or cell morphology classification or regression based on the tensors.”). Regarding claim 14, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 1 further comprising training the first deep learning neural network using pairs of single-channel UV grayscale images and pseudocolorized images (Ojaghi: Figure 1B; Page 14780: Col 2: Para 3: “we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig. 1B).”) (Li: Section: Preparing Dataset for Training: “We acquired 60 images for the unstained, H&E-, PSR-, and orcein-stained groups. In total, 240 whole-slide images (WSIs) were obtained. Each WSI (1079×1079 pixels) was randomly cropped into 25 smaller overlapping patches (500 × 500 pixels). After eliminating the patches without intima and media, we obtained training image pairs (1800, 1500, and 1500) and testing image pairs (200, 150, and 150) for virtual H&E, PSR, and orcein staining, respectively.”; Wherein the training dataset for the cGANs would comprise pairs of single wavelength UV images and color images disclosed by Ojaghi). Regarding claim 15, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 1, wherein the first deep learning neural network is a generative adversarial network (Li: Section: Conclusions: “In conclusion, we have developed a deep learning-based virtual staining method that transformed bright-field microscopic images of unlabeled tissue sections into their corresponding images of histological staining of the same samples using a conditional generative adversarial network.”). Claim(s) 2 and 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ojaghi in view of Li, Roy, and Song, and further in view of Wang et al. (CN109557093A) hereinafter referenced as Wang. Regarding claim 2, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 1, wherein the first data set comprises the one or more UV images, wherein the UV images constitute gray-scale images; and one or more of the additional data sets comprises a second, third, and fourth data set representing pixels in one or more of the single-channel UV images with a red, green, and blue RGB color model value, respectively (Ojaghi: Figure 1B; Page 14780: Col 2: Para 3: “we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig.
1B).”). Ojaghi in view of Li, Roy, and Song does not disclose expressly: wherein: the first data set comprises a lightness value between 0 and 100 in a LAB color model for pixels of one or more of the single-channel UV images; and one or more of the additional data sets comprises a second data set representing pixels in one or more of the single-channel UV images with a green-red value between -127 and +127 in the LAB color model and a third data set representing pixels in one or more of the single channel UV images with a blue-yellow value between -127 and +127 in the LAB color model. Wang discloses: a method for converting RGB color space images into LAB color space images (Wang: 0021: “The urine test strip color measurement algorithm of the present invention converts the RGB color space to the LAB color space, and performs corresponding conversions on the L, A, and B values of the LAB color space through a corresponding algorithm.”), wherein for each of the images, the color components present in the images are converted from RGB to LAB color components through the use of an intermediate XYZ color space, wherein the ranges for the LAB color space are defined as a lightness value between 0 and 100, a green-red value between -127 and +127, and a blue-yellow value between -127 and +127 (Wang: 0009-0014: “the technical solution of the present invention is: a urine test strip color measurement algorithm, comprising the following steps: Step 1: converting the RGB color components into LAB color components by establishing a channel XYZ color space…Step 2: Define the range of values for variables X, Y, Z, and t as [0,1]; Step 3: Define the value range of the L component as [0, 100], and the values of the A and B components as [-127, 127]”; Wherein the a values constitute green-red values and b values constitute blue-yellow values). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the algorithms for converting RGB color space images into LAB color space images taught by Wang into Ojaghi in view of Li, Roy, and Song by converting the first dataset and RGB additional datasets into luminance and green-red and blue-yellow datasets. The suggestion/motivation for doing so would have been “Unlike the RGB color space, LAB colors are designed to approximate human vision. It is designed to perceive uniformity, and its L component closely matches human brightness perception. Therefore, it can be used to achieve precise color balance by modifying the input levels of the a and b components, or to adjust brightness contrast using the L component.” (Wang: 0023). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ojaghi in view of Li, Roy, and Song with Wang to obtain the invention as specified in claim 2.
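For reference, the RGB-to-LAB pipeline that Wang's steps outline is the standard sRGB to XYZ to CIELAB conversion, which yields exactly the recited ranges: L in [0, 100] and a, b roughly within [-127, +127]. A sketch of that textbook conversion under a D65 white point (this is the conventional formula, not Wang's code):

import numpy as np

def rgb_to_lab(rgb: np.ndarray) -> np.ndarray:
    # rgb: (..., 3) array of 8-bit sRGB values.
    rgb = rgb.astype(np.float64) / 255.0
    # Undo the sRGB gamma to get linear RGB.
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear RGB -> XYZ (sRGB primaries, D65 white point).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])  # normalize by reference white
    # CIELAB nonlinearity.
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16           # lightness, 0..100
    a = 500 * (f[..., 0] - f[..., 1])  # green-red axis
    b = 200 * (f[..., 1] - f[..., 2])  # blue-yellow axis
    return np.stack([L, a, b], axis=-1)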
Regarding claim 3, Ojaghi in view of Li, Roy, Song, and Wang discloses: The method of claim 2, wherein: the lightness values in the first data set are between 0 and 100; the green-red values in the second data set are between -127 and +127; and the blue-yellow values in the third data set are between -127 and +127 (Wang: 0009-0014: “the technical solution of the present invention is: a urine test strip color measurement algorithm, comprising the following steps: Step 1: converting the RGB color components into LAB color components by establishing a channel XYZ color space…Step 2: Define the range of values for variables X, Y, Z, and t as [0,1]; Step 3: Define the value range of the L component as [0, 100], and the values of the A and B components as [-127, 127]”; Wherein the a values constitute green-red values and b values constitute blue-yellow values). Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ojaghi in view of Li, Roy, and Song, and further in view of Nagase et al. (US 20230127415 A1) hereinafter referenced as Nagase. Regarding claim 12, Ojaghi in view of Li, Roy, and Song discloses: The method of claim 11. Ojaghi in view of Li, Roy, and Song does not disclose expressly: wherein classifying or characterizing the cells further comprises determining whether the cells are dead or alive. Nagase discloses: the determining of whether cells are dead or alive (Nagase: 0102: “In addition, the classification unit 52 classifies the non-cell sample image SP2 given the sub-label indicating the “twisted dead cell”, the “air bubble”, or the “scratch on the well”, which is clearly different from a cell, as the cell-unlike sample image SP2B.”). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the unit for classifying samples as cells or non-cells disclosed in Nagase prior to the cell classification model disclosed in Ojaghi in view of Li, Roy, and Song. The suggestion/motivation for doing so would have been “in a case in which the detection accuracy of the cell is high, even though a plurality of cells are seeded in the well, some of the plurality of seeded cells may be determined to be non-cells. As a result, the accuracy of guaranteeing the unity of the cells is reduced…Preferably, the non-cell sample image is given a sub-label, and the processor classifies the non-cell sample image having similar appearance features to the cell as the cell-like sample image on the basis of the sub-label in the classification process” (Nagase: 0008 & 0013). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ojaghi in view of Li, Roy, and Song with Nagase to obtain the invention as specified in claim 12. Claim(s) 16, 18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ojaghi et al. (Label-free hematology analysis using deep-ultraviolet microscopy) hereinafter referenced as Ojaghi, in view of Ojaghi et al. (Deep-Ultraviolet Microscopy for Label-free Hematological Analysis) hereinafter referenced as Ojaghi(2), Li et al. (Deep Learning for Virtual Histological Staining of Bright-Field Microscopic Images of Unlabeled Carotid Artery Tissue) hereinafter referenced as Li, Roy et al.
(Novel Color Normalization Method for Hematoxylin & Eosin Stained Histopathology Images) hereinafter referenced as Roy, and Song et al. (US 20220180626 A1) hereinafter referenced as Song. Regarding claim 16, Ojaghi discloses: A system comprising: a deep UV microscope with a detector configured to take one or more single channel UV images of a biological sample (Ojaghi: Page 14780: Col 1: Para 3: “Our multispectral deep-UV microscope enables fast imaging of live, unstained cells at different discrete UV wavelengths…Here we use 260 nm and 280 nm, which correspond to absorption peaks from nucleic acids and proteins, respectively (29, 30, 34). We also acquire images with a center wavelength of 300 nm,”; Wherein each UV image captured at a specific wavelength constitutes a single channel UV image), wherein the biological sample comprises cells from blood or bone marrow (Ojaghi: Abstract: “we introduce a pseudocolorization scheme that accurately recapitulates the appearance of cells under conventional staining protocols for microscopic analysis of blood smears and bone marrow aspirates.”); obtaining a first data set for the one or more single-channel UV images, the first data set comprising at least one data value for each pixel of the one or more single-channel UV images (Ojaghi: Figure 1 B; Page 14780: Col 1: Para 3: “Here we use 260 nm and 280 nm, which correspond to absorption peaks from nucleic acids and proteins, respectively (29, 30, 34). We also acquire images with a center wavelength of 300 nm,”; Wherein the single channel UV images taken constitute the first dataset); processing the first data set to generate one or more additional data sets, the one or more additional data sets comprising at least one data value corresponding to a value in a color model for each pixel in the one or more single-channel UV images (Ojaghi: Figure 1B; Page 14780: Col 2: Para 3: “we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig. 1B).”; Wherein the processing of each UV single channel image to a single RGB channel image constitutes an additional dataset); creating virtually stained image of the biological sample using at least the one or more additional data sets (Ojaghi: Page 14788: Col 1: Para 4: “Stitching of pseudocolorized images was performed using the Grid/Collection stitching plugin (49) of the Fiji (50) software, which calculates the overlap between each tile and linearly blends them into a single wide-field image.”); and classifying or characterizing the cells in the biological sample (Ojaghi: Page 14782: Col 1: Para 2: “To complete the five-part cell differential analysis, we employed a machine learning algorithm, using support-vector machine (SVM) learning, trained using the extracted features from granulocytes. We evaluated the trained SVM model according to a fivefold cross-validation scheme which yields an accuracy of 98.3%, sensitivity of 95%, and specificity of 100% for classification of granulocyte subtypes (i.e., neutrophils, eosinophils, and basophils) using all features from the three UV wavelengths.”); and a display configured to display the virtually stained image of the biological sample (Ojaghi: Figure 6A; Page 14786: Col 1: “Common smear artifacts in the transition region from the monolayer to the feathered region where RBCs lose their biconcave shape (SI Appendix, Fig. 
S5) are observed in both the label-free UV and stained images…the presence of halo-like edge effects in UV images did not strongly affect visual inspection and was only noted by the hematologists at a rate of 1.83% (2 evaluations out of 109).”; Wherein the virtually stained images are viewed and compared to the stained images implying the use of a display). Ojaghi does not disclose expressly: a deep UV microscope with a detector configured to take one or more single-channel UV images acquired at a single wavelength with a bandwidth of less than 50 nm of a biological sample. Ojaghi(2) discloses: a deep UV microscope with a detector configured to take one or more single-channel UV images acquired at a single wavelength (Ojaghi(2): Section: 3.2 Deep-UV multispectral microscopy and pseudo-colorization of whole blood smears: “This approach enables us to produce pseudo-RGB images that mimic the colors produced by standard Giemsa staining. We form an RGB image by simply taking the 255 nm image as the red channel, the 280 nm image as the green channel, and the 300 nm image as the blue channel.”) with a bandwidth of less than 50 nm of a biological sample (Ojaghi(2): Section: 2.1 Experimental setup: “Multi-spectral imaging was done using UV band-pass filters (~ 10nm bandwidth) installed on a filter wheel, allowing acquisition of images at three wavelength regions at 255, 280, and 300nm”). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to substitute the UV band-pass filters disclosed by Ojaghi with the ~10nm bandwidth band-pass filters taught by Ojaghi(2). The suggestion/motivation for doing so would have been “The contrast from absorption of different biochemicals within the cells give rise to color contrast between the nucleus and cytoplasm of the white blood cells (WBCs) (shown in the RGB image in Fig 3 inset) as well as enucleated RBCs. As depicted in Fig 3, many unique features of blood cells such as size, population, and morphology of red and white blood cells as well as platelet size and population can be evaluated from the UV images.” (Ojaghi(2): Section: 3.2 Deep-UV multispectral microscopy and pseudo-colorization of whole blood smears; Wherein the narrow bandwidth allows for a more accurate contrast). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Ojaghi in view of Ojaghi(2) does not disclose expressly: one or more deep learning neural networks configured to: generate a virtually stained image of the biological sample; by: inputting the first data set into a first deep learning neural network comprising a conditional generative adversarial network (cGAN) having a generator with a U-net architecture comprising encoding and decoding paths with skip connections to generate one or more additional data sets.
Li discloses: a method for generating a virtually stained image of a biological sample, wherein the method comprises: inputting a data set into a deep learning neural network comprising a conditional generative adversarial network (cGAN) (Li: Section: Conclusions: “we have developed a deep learning-based virtual staining method that transformed bright-field microscopic images of unlabeled tissue sections into their corresponding images of histological staining of the same samples using a conditional generative adversarial network model.”) having a generator with a U-net architecture comprising encoding and decoding paths with skip connections to generate one or more additional data sets (Li: Fig. 2: “Architecture of virtual staining cGAN. The generator consists of eight convolution layers of stride two that are each followed by a batch-norm module to avoid overfitting of the network. The eight upsampled sections are followed by the deconvolutional layers to increase the number of channels. Each upsampling section contains a deconvolution layer upsampled by stride two. Skip connections are used to share data between layers of the same level.”; Section: Conditional Generative Adversarial Network Architecture: “In this work, the generator D and the discriminator G of the cGAN comprised a U-net architecture [25] and of PatchGAN [26], respectively, as shown in Fig. 2.”; Wherein the processing of each image to a stained image constitutes an additional dataset). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique of utilizing a cGAN for the generation of virtually stained images as taught by Li into Ojaghi in view of Ojaghi(2) by generating each channel image through the use of a cGAN. The suggestion/motivation for doing so would have been “The results of a blind evaluation by board-certified pathologists illustrate that the virtual staining and standard histological staining images of rat carotid artery tissue sections and those involving different types of stains showed no major differences…This virtual staining method significantly mitigates the typically laborious and time consuming histological staining procedures and could be augmented with other label-free microscopic imaging modalities.” (Li: Abstract; Wherein the cGAN network allows for quick and accurate results while allowing for different imaging modalities). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Ojaghi in view of Ojaghi(2) and Li does not disclose expressly: post-processing the virtually stained image with a histogram operation to alter a background hue in the virtually stained image. Roy discloses: the processing of a stained image with a histogram operation to alter a background hue in the stained image (Roy: Abstract: “In this paper, a novel color normalization method is proposed for Hematoxylin and Eosin stained histopathology images. Conventional Reinhard algorithm is modified in our proposed method by incorporating fuzzy logic. Moreover, mathematically, it is proved that our proposed method satisfies all three hypotheses of color normalization.”; Section: A. Correlation Co-Efficient: “Color normalization method must preserve all the information of the source image, according to our pathologists’ group.
This information preservation can be measured by estimating correlation co-efficient between source image and processed image… The first hypothesis of color normalization method is given in equation (4)… Equation (7) reveals that the shape of the normalized histogram (pdf) of source image will remain unchanged in the processed image, since only the magnitude of pdf is scaled by a real constant c…we believe that correlation coefficient is more realistic metric than discrete entropy for evaluating the preservation of source information.”; Section: B. Global Mean Color in Color Space: “The second hypothesis of color normalization method is that in any color normalization method, global mean color (background color) of processed image should be equal to global mean color of target image. In other words, in color space (αβ space), μtar ≈ μproc (8), where μtar is the mean color of target image, μproc is the mean color of processed image.”; Wherein the proposed color normalization method satisfying the first and second hypotheses of color normalization constitutes a histogram operation to alter a background hue). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the method for color normalization taught by Roy on the stitched together RGB images disclosed by Ojaghi in view of Ojaghi(2) and Li. The suggestion/motivation for doing so would have been "color variation in the CAD system is inevitable due to the variability of stain concentration and manual tissue sectioning. The small variation in color may lead to the misclassification of cancer cells. Therefore, color normalization is a very much essential step prior to segmentation and classification in order to reduce the inter-variability of background color among a set of source images" (Roy: Abstract; Wherein color normalization improves the accuracy of subsequent processing methods). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Ojaghi in view of Ojaghi(2), Li, and Roy does not disclose expressly: one or more deep learning neural networks configured to: classify or characterize cells in the biological sample; by: classifying or characterizing the cells in the biological sample using a second deep learning neural network separate from the first deep learning neural network. Song discloses: the classifying of cells in a biological sample using a deep learning model (Song: 0048: “FIG. 5 illustrates a block diagram of example operations for determining tissue or cell morphology classifications or regressions based on whole slide images in accordance with an embodiment. In system 500, a deep learning neural network model trained using fixed-size feature maps that allows for the analysis of WSI characteristics,”). Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to substitute the SVM machine learning algorithm disclosed in Ojaghi in view of Ojaghi(2), Li, and Roy with the deep learning algorithm disclosed in Song.
The suggestion/motivation for doing so would have been “The various embodiments provide for a classifier model to be trained to determine a WSI-level tissue and/or cell morphology classification or regression using deep learning methods based on a limited set of training pathology slide images.” (Song: 0043: Wherein the model is able to be trained based on limited training data). Further, one skilled in the art could have substituted the elements as described above by known methods with no change in their respective functions, and the substitution would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ojaghi in view of Ojaghi(2), Li, and Roy with Song to obtain the invention as specified in claim 16.

Regarding claim 18, Ojaghi in view of Ojaghi(2), Li, Roy, and Song discloses: The system of claim 16, wherein the one or more additional data sets comprises: a second data set representing each pixel in the one or more single-channel UV images with a red value in a RGB color model; a third data set representing each pixel in the one or more single-channel UV images with a blue value in the RGB color model; and a fourth data set representing each pixel in the one or more single-channel UV images with a green value in the RGB color model. (Ojaghi: Figure 1B; Page 14780: Col 2: Para 3: “In this approach, we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig. 1B).”; Page 14788: Col 1: Para 4: “Stitching of pseudocolorized images was performed using the Grid/Collection stitching plugin (49) of the Fiji (50) software”; Wherein each single RGB channel image from a single-channel UV image constitutes an additional dataset).

Regarding claim 20, Ojaghi in view of Ojaghi(2), Li, Roy, and Song discloses: The system of claim 16, wherein the second deep learning neural network is configured to classify or characterize the cells by generating, from one or more of the single-channel UV images, a first mask representative of cells in the biological sample, generating, from one or more of the single channel UV images, a second mask representative of nuclei in the biological sample (Ojaghi: Figure 2; Page 14780: Col 2: Para 4: “As shown in Fig. 2, our approach enables us to produce colorized images that faithfully recapitulate features of significant importance for blood cell phenotyping and differentiation using traditional staining protocols with bright-field microscopy… absorption of nucleic acids in leukocyte nuclei gives rise to the well-known distinctive violet color observed in Giemsa-stained images. In addition to nuclear contrast, our UV images exhibit key cytoplasmic color differences which mainly stem from the different levels of protein (SI Appendix, Fig. S1).”; Wherein the virtually stained UV images constitute the masks), generating, based on one or more of the single-channel UV images and the first and second masks, a feature vector, and classifying or characterizing, using the first and second masks and the feature vector, cells in the biological sample (Song: 0006: “a plurality of training WSIs, e.g., labeled hematoxylin and eosin (H&E)-stained whole slide images each corresponding to a patient, is obtained…A varied-size feature map is generated for each of the plurality of training WSIs by generating a grid of patches for the training WSI, segmenting the training WSI into tissue and non-tissue areas, and converting patches comprising the tissue areas into tensors, e.g., multidimensional descriptive vectors comprising RGB components…A fixed-size feature map is generated based on at least a subset of the feature map patches, which may be randomly selected and/or arranged randomly within the fixed-size feature map. The fixed-size feature map may comprise one of a (256, 256, 512) or (224, 224, 512) feature map…The classifier model is trained using the fixed-size feature maps corresponding to the plurality of training WSIs, and a classification engine is configured to use the trained classifier model to determine a WSI-level tissue or cell morphology classification or regression for a test WSI.”).

Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ojaghi in view of Ojaghi(2), Li, Roy, and Song, and further in view of Wang.

Regarding claim 17, Ojaghi in view of Ojaghi(2), Li, Roy, and Song discloses: The system of claim 16, wherein the one or more additional data sets comprises: a second data set representing each pixel in the one or more single-channel UV images with a red value in a RGB color model; a third data set representing each pixel in the one or more single-channel UV images with a blue value in the RGB color model; and a fourth data set representing each pixel in the one or more single-channel UV images with a green value in the RGB color model. (Ojaghi: Figure 1B; Page 14780: Col 2: Para 3: “In this approach, we construct a color image in RGB color space by assigning the 260-, 280-, and 300-nm images to the red, green, and blue channels, respectively (as demonstrated in Fig. 1B).”; Page 14788: Col 1: Para 4: “Stitching of pseudocolorized images was performed using the Grid/Collection stitching plugin (49) of the Fiji (50) software”; Wherein each single RGB channel image from a single-channel UV image constitutes an additional dataset).

Ojaghi in view of Ojaghi(2), Li, Roy, and Song does not disclose expressly: wherein: the first data set comprises a lightness value in a LAB color model for each pixel of the one or more single-channel UV images; and the one or more additional data sets comprises a second data set representing each pixel in the one or more single-channel UV images with a green-red value in the LAB color model and a third data set representing each pixel in the one or more single-channel UV images with a blue-yellow value in the LAB color model.
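As a rough illustration of how the recited data sets relate to the cited pseudocolorization and to a LAB representation, here is a minimal Python sketch. The synthetic arrays, the variable names, and the use of scikit-image are assumptions made for illustration only; this is not the pipeline of Ojaghi, Song, or Wang.

```python
import numpy as np
from skimage import color  # rgb2lab converts RGB to LAB via the XYZ space

# Three single-channel UV images (260 nm, 280 nm, 300 nm); synthetic data
# stands in for images that would normally be loaded from the microscope.
uv_260 = np.random.rand(512, 512)
uv_280 = np.random.rand(512, 512)
uv_300 = np.random.rand(512, 512)

# Pseudocolorization in the spirit of the quoted passage: assign the 260-,
# 280-, and 300-nm images to the red, green, and blue channels, respectively.
pseudo_rgb = np.stack([uv_260, uv_280, uv_300], axis=-1)

# The red, green, and blue planes can then be treated as three per-pixel
# data sets (the second, fourth, and third data sets of claim 18).
red_set, green_set, blue_set = (pseudo_rgb[..., i] for i in range(3))

# RGB -> LAB through the intermediate XYZ space. skimage returns L in
# [0, 100] and a/b values within roughly +/-128 for in-gamut inputs,
# consistent with the LAB ranges discussed for claim 17.
lab = color.rgb2lab(pseudo_rgb)
lightness_set = lab[..., 0]    # L: lightness data set
green_red_set = lab[..., 1]    # a: green-red data set
blue_yellow_set = lab[..., 2]  # b: blue-yellow data set
```

Note that `rgb2lab` routes through an XYZ representation internally, which parallels the intermediate XYZ step described in the Wang passage quoted next.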
Wang discloses: a method for converting RGB color space images into LAB color space images (Wang: 0021: “The urine test strip color measurement algorithm of the present invention converts the RGB color space to the LAB color space, and performs corresponding conversions on the L, A, and B values of the LAB color space through a corresponding algorithm.”), wherein for each of the images, the color components present in the images are converted from RGB to LAB color components through the use of an intermediate XYZ color space, wherein the ranges for the LAB color space are defined as a lightness value between 0 and 100, a green-red value between -127 and +127, and a blue-yellow value between -127 and +127 (Wang: 0009-0014: “the technical solution of the present invention is: a urine test strip color measurement algorithm, comprising the following steps: Step 1: converting the RGB color components into LAB color components by establishing a channel XYZ color space…Step 2: Define the range of values for variables X, Y, Z, and t as [0,1]; Step 3: Define the value range of the L component as [0, 100], and the values of the A and B components as [-127, 127]”; Wherein the a values constitute green-red values and the b values constitute blue-yellow values).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the algorithms for converting RGB color space images into LAB color space images taught by Wang into Ojaghi in view of Ojaghi(2), Li, Roy, and Song by converting the first dataset and RGB additional datasets into luminance, green-red, and blue-yellow datasets. The suggestion/motivation for doing so would have been “Unlike the RGB color space, LAB colors are designed to approximate human vision. It is designed to perceive uniformity, and its L component closely matches human brightness perception. Therefore, it can be used to achieve precise color balance by modifying the input levels of the a and b components, or to adjust brightness contrast using the L component.” (Wang: 0023). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Ojaghi in view of Ojaghi(2), Li, Roy, and Song with Wang to obtain the invention as specified in claim 17.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J RODRIGUEZ whose telephone number is (703) 756-5821. The examiner can normally be reached Monday-Friday, 10am-7pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANTHONY J RODRIGUEZ/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672

Prosecution Timeline

Dec 13, 2022
Application Filed
Apr 01, 2025
Non-Final Rejection — §103, §112
Jul 06, 2025
Response Filed
Sep 06, 2025
Final Rejection — §103, §112
Dec 10, 2025
Response after Non-Final Action
Dec 29, 2025
Request for Continued Examination
Jan 17, 2026
Response after Non-Final Action
Mar 06, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499701
DOCUMENT CLASSIFICATION METHOD AND DOCUMENT CLASSIFICATION DEVICE
2y 5m to grant • Granted Dec 16, 2025
Patent 12488563
Hub Image Retrieval Method and Device
2y 5m to grant • Granted Dec 02, 2025
Patent 12444019
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND MEDIUM
2y 5m to grant • Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
17%
Grant Probability
-5%
With Interview (-21.4%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
