Prosecution Insights
Last updated: April 19, 2026
Application No. 18/286,896

ANALYZING MICROSCOPE IMAGES OF MICROALGAE CULTURE SAMPLES

Status: Non-Final Office Action (§102, §103, §112)
Filed: Oct 13, 2023
Examiner: NASHER, AHMED ABDULLALIM-M
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: TotalEnergies OneTech SAS
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (80 granted / 99 resolved); +18.8% vs Tech Center average (above average)
Interview Lift: +34.4% higher allowance rate in resolved cases with an interview (a strong lift)
Typical Timeline: 2y 9m average prosecution; 17 applications currently pending
Career History: 116 total applications across all art units

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 63.1% (+23.1% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 99 resolved cases.
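The career metrics above can be reproduced from per-case records. A minimal sketch, assuming a simple resolved-case record with grant and interview flags; the totals match the 80 granted / 99 resolved figures above, but the with/without-interview split below is invented for illustration and does not reproduce the +34.4% lift exactly:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allowance-rate difference between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Illustrative data only: 99 resolved cases, 80 granted (~81% allow rate).
cases = ([ResolvedCase(True, True)] * 30
         + [ResolvedCase(False, True)] * 2
         + [ResolvedCase(True, False)] * 50
         + [ResolvedCase(False, False)] * 17)

print(f"allow rate: {allow_rate(cases):.1%}")        # 80/99, about 80.8%
print(f"interview lift: {interview_lift(cases):+.1%}")
```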

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

The examiner acknowledges PCT/IB2021/000279, filed on 04/13/2021.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/08/2024 and 01/03/2025 are being considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informality: line 13 states "an artificial neural network," but it should state "the artificial neural network." Appropriate correction is required.

Claim 12 is objected to because of the following informality: line 2 states "region of interest algorithm" but should state "region of interest detection algorithm" to match dependent claim 11. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6 and 7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term "deterministically" in claim 6 is a relative term which renders the claim indefinite.
The term "deterministically" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The examiner suggests that the applicant define how a process is done deterministically or remove the word entirely.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 14-17 and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Qian et al. ("Multi-Target Deep Learning for Algal Detection and Classification").
Regarding claim 14, Qian discloses providing an artificial neural network function trained to process an input microscope image of a microalgae culture sample (page 2, col 1: "Inspired by MT-DNNs, the proposed framework is designed based on the architecture of Faster R-CNN [2], as depicted in Fig. 2. We extend Faster R-CNN by adding an extra classification branch for multi-task learning. The framework consists of three branches that will be trained with different objectives for robust algal analysis.") and to compute, for each respective localization among a plurality of localizations in the image, each containing at least one respective micro-organism, a respective output representing a value of the one or more biological attributes for the at least one respective micro-organism (page 2, col 1, as quoted above; Fig. 2: "The 3 branches simultaneously outputs the genus, bounding box, and biological class of the alga."; page 3, col 1: "To evaluate the performance on algal detection, mAP@IoU=50 (mean average precision of predictions with IoU >= 50%, abbreviated as mAP) is calculated [2], in which, IoU refers to the intersection of the predicted bounding box and the ground truth over their union."); and inputting to the artificial neural network function a microscope image of a microalgae culture sample to compute such an output for each localization (Fig. 2; Tables 1 and 2; page 2, col 2: "Cross-entropy is used as the loss function for algal classification at genus level and class level, termed as Lgenus and Lcls, respectively. Combined with the bounding box regression loss in algal detection, termed as Lbox, the total loss function can be defined as Ltotal=Lbox+Lgenus+λ∗Lcls.").

Regarding claim 15, Qian discloses an artificial neural network function to process an input microscope image of a microalgae culture sample and to compute, for each respective localization among a plurality of localizations in the image, each containing at least one respective micro-organism, a respective output representing a value of the one or more biological attributes for the at least one respective micro-organism (Fig. 2; Tables 1 and 2; page 2, col 2, loss function as quoted above), the method comprising: providing microscope images, each of a microalgae culture sample (page 2, col 2: "The dataset consists of 1859 high-resolution microscopic images of 37 genera of algae in 6 biological classes and annotations of genus and class. Some samples of algae in the dataset are shown in Fig. 1. The images were taken under microscopes with warm lighting. The color information is stored for more information compared to commonly used grayscale images."); and, for each microscope image, determining a plurality of annotations, each annotation comprising a localization in the image containing at least one given micro-organism (page 2, col 1: "Branch-2 is used for algal detection and localization."; page 2, col 2: "Algae were located and identified with bounding boxes. Genera and classes of algae were annotated by professionals with expert knowledge."), each annotation further comprising a value of the one or more biological attributes (page 3, col 1: "Average classification accuracy (ACA) is calculated for classification tasks."; Tables 1 and 2 on page 3, col 2).

Regarding claim 16, Qian discloses capturing the microscope image (page 2, col 2: "Some samples of algae in the dataset are shown in Fig. 1. The images were taken under microscopes with warm lighting."); and pre-processing the captured microscope image by one or both of a color balancing of the image and a contrast enhancement (page 2, col 2: "The color information is stored for more information compared to commonly used grayscale images [6]. 99.2% of the images have much higher resolution of 2752 × 2208 or 3072 × 2048 than images with resolution of 150 × 150 used in [12].").
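Qian's detection metric quoted above, mAP@IoU=50, counts a predicted bounding box as a detection when its intersection-over-union with the ground-truth box is at least 50%. A minimal sketch of the IoU computation, assuming boxes given as (x1, y1, x2, y2) tuples; this is not code from the cited paper:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when the boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

# Half-overlapping boxes: intersection 50, union 150, IoU = 1/3 (< 0.5, no match).
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```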
Regarding claim 17, Qian discloses wherein the providing of the dataset comprises, for each training pattern, determining the localizations of the annotations deterministically using a region of interest algorithm (page 3, col 1: "To evaluate the performance on algal detection, mAP@IoU=50 (mean average precision of predictions with IoU >= 50%, abbreviated as mAP) is calculated [2], in which, IoU refers to the intersection of the predicted bounding box and the ground truth over their union. Average classification accuracy (ACA) is calculated for classification tasks. The performance of the framework is affected by the value of λ in the loss function. Therefore, we first carried out experiments with various λ values from 0 to 0.5 to observe changes in the detection performance. The results are plotted in Fig. 3. It can be found that the performance is maximized when λ = 0.2."; Fig. 4).

Regarding claim 19, Qian discloses a neural network function trained to process an input microscope image of a microalgae culture sample (page 2, col 1: "Inspired by MT-DNNs, the proposed framework is designed based on the architecture of Faster R-CNN [2], as depicted in Fig. 2. We extend Faster R-CNN by adding an extra classification branch for multi-task learning. The framework consists of three branches that will be trained with different objectives for robust algal analysis.") and to compute, for each respective localization among a plurality of localizations in the image, each containing at least one respective micro-organism, a respective output representing a value of the one or more biological attributes for the at least one respective micro-organism (page 2, col 1, as quoted above; Fig. 2: "The 3 branches simultaneously outputs the genus, bounding box, and biological class of the alga."; page 3, col 1, IoU passage as quoted above).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-6, 9-11, 13, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. ("Multi-Target Deep Learning for Algal Detection and Classification") in view of Ohya (US 20150302237 A1).

Regarding claim 1, Qian discloses a computer-implemented method of machine-learning an artificial neural network (ANN) function configured for analyzing microscope images of microalgae culture samples with respect to one or more biological attributes (abstract: "In this paper, we propose a novel multi-target deep learning framework for algal detection and classification. Extensive experiments were carried out on a large-scale colored microscopic algal dataset. Experimental results demonstrate that the proposed method leads to the promising performance on algal detection, class identification and genus identification."; page 2, col 1, Faster R-CNN passage as quoted above; note that a CNN is a specialized ANN that uses convolutional and pooling layers to efficiently extract patterns from spatial data such as images and video), the one or more biological attributes comprising a category among a predetermined set of categories that includes a plurality of microalgae species and/or genera (page 2, col 1: "Branch-1 is used to predict the genus of algae. Branch-2 is used for algal detection and localization. Branch-3 is used to predict the class of algae."; page 4, col 1: "First, undetected algae. Most of the undetected algae are almost transparent and blends into the background. Second, algal occlusion. Some algae overlap with others or with non-algae objects."); providing a dataset comprising training patterns, each training pattern comprising a microscope image of a microalgae culture sample and a plurality of annotations (page 2, col 2, dataset passage as quoted above), each annotation comprising a localization in the image containing at least one given micro-organism (page 2, col 1: "Branch-2 is used for algal detection and localization."; page 2, col 2: "Algae were located and identified with bounding boxes. Genera and classes of algae were annotated by professionals with expert knowledge."), each annotation further comprising a value of the one or more biological attributes for the at least one given micro-organism (page 3, col 1: "Average classification accuracy (ACA) is calculated for classification tasks."; Table 1 on page 3, col 2); and training an artificial neural network function based on the provided dataset (page 2, col 1, as quoted above), the artificial neural network function being configured for processing an input microscope image of a microalgae culture sample (Fig. 2) and computing, for each respective localization among a plurality of localizations in the image, each containing at least one respective micro-organism, a respective output representing a value of the one or more biological attributes for the at least one respective micro-organism (Fig. 2; Tables 1 and 2; page 2, col 2, loss function passage as quoted above).

Qian implicitly discloses at least one non-algae micro-organism category (page 4, col 1: "Some algae overlap with others or with non-algae objects.") and the one or more biological attributes further comprising a physiological state among a predetermined set of microalgae physiological states (page 4, col 1: "In our future work, 3D CNN [18] can be implemented with other biological features to improve the performance on algal detection and classification."). Qian does not, however, explicitly disclose at least one non-algae micro-organism category, or the one or more biological attributes further comprising a physiological state among a predetermined set of microalgae physiological states.
However, in a similar field of endeavor of a cell monitoring device, Ohya teaches the one or more biological attributes comprising a category among a predetermined set of categories that includes a plurality of microalgae species and/or genera and at least one non-algae micro-organism category ("[0026] In the data use method according to the present invention, the sample is any one of an animal cell, a plant cell, an yeast cell, an eumycetes cell, a microalgae cell, a bacterium, an archaeon, a virus, and a phage and any one of a spore, a sporule, and a membrane vesicle produced by the cells and the organisms. [0033] The cell monitoring device according to the aspect may further include a cell morphology detecting section that classifies, among a plurality of the cell image region in the merged image, an image region in which the pigmented region is present as a target cell image, and classifies an image region in which the pigmented region is not present as a non-target cell image, and obtains a proportion of the non-target cell image in all of the cell images in the merged image."), and the one or more biological attributes further comprising a physiological state among a predetermined set of microalgae physiological states ("[0039] As a result, according to the present invention, indicators of the physiological cell states, the contamination status of other organisms, and the accumulation status of colored pigments are obtained as quantitative values. Thus, detection of the physiological state of the cells and the production status of useful components becomes easy.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Qian's disclosure of microalgae detection with Ohya's teaching of non-algae detection and algae physiological state determination, in order to support culture-condition development and to breed strains that produce large amounts of useful substances, thereby improving the production of useful substances by microalgae cells or the like ([0031] of Ohya).

Regarding claim 3, Qian does not disclose, but Ohya teaches, wherein the predetermined set of microalgae physiological states includes an agglomeration state and/or a duplication state ("[0187] The image merging section 16 overlays the segment boundary line onto the pixel aggregation region image in the captured image, and calculates the proportion of the above-described pixel aggregation region for each segment. Furthermore, the image merging section 16 classifies segments in which the proportion of the pixel aggregation region is greater than the proportion determined in advance, as cell segments, which is the regions of the cell image in the captured image."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Qian's disclosure of microalgae detection with Ohya's teaching of non-algae detection and algae physiological state determination, in order to support culture-condition development and to breed strains that produce large amounts of useful substances, thereby improving the production of useful substances by microalgae cells or the like ([0031] of Ohya).
Regarding claim 4, Qian discloses wherein the plurality of microalgae species and/or genera includes one or more species and/or genera from the following families: Chlorophyceae, Xanthophyceae, Chrysophyceae, Bacillariophyceae, Cryptophyceae, Dinophyceae, Chloromonadineae, Euglenineae, Phaeophyceae, Rhodophyceae, and/or Cyanophyceae (Table 2: Cryptophyceae, Cyanophyceae).

Regarding claims 5 and 10, Qian discloses capturing the microscope image (page 2, col 2: "Some samples of algae in the dataset are shown in Fig. 1. The images were taken under microscopes with warm lighting."); and pre-processing the captured microscope image by one or both of a color balancing of the image and a contrast enhancement (page 2, col 2: "The color information is stored for more information compared to commonly used grayscale images [6]. 99.2% of the images have much higher resolution of 2752 × 2208 or 3072 × 2048 than images with resolution of 150 × 150 used in [12].").

Regarding claims 6 and 11, Qian discloses wherein the providing of the dataset comprises, for each training pattern, determining the localizations of the annotations deterministically using a region of interest algorithm (page 3, col 1: "To evaluate the performance on algal detection, mAP@IoU=50 (mean average precision of predictions with IoU >= 50%, abbreviated as mAP) is calculated [2], in which, IoU refers to the intersection of the predicted bounding box and the ground truth over their union. Average classification accuracy (ACA) is calculated for classification tasks. The performance of the framework is affected by the value of λ in the loss function. Therefore, we first carried out experiments with various λ values from 0 to 0.5 to observe changes in the detection performance. The results are plotted in Fig. 3. It can be found that the performance is maximized when λ = 0.2."; Fig. 4).
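The total loss quoted from Qian, Ltotal = Lbox + Lgenus + λ∗Lcls with λ = 0.2, simply sums a box-regression term with two cross-entropy classification terms. A schematic NumPy sketch of that combination for a single sample; the smooth-L1 form of the box loss (standard in Faster R-CNN) and all shapes here are assumptions for illustration, not details taken from Qian:

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy of one sample: -log softmax(logits)[label]."""
    z = logits - logits.max()                    # shift for numerical stability
    log_softmax = z - np.log(np.exp(z).sum())
    return -log_softmax[label]

def smooth_l1(pred_box, true_box):
    """Smooth-L1 box regression loss (the form used by Faster R-CNN)."""
    d = np.abs(pred_box - true_box)
    return np.where(d < 1.0, 0.5 * d**2, d - 0.5).sum()

def total_loss(box_pred, box_true, genus_logits, genus, cls_logits, cls, lam=0.2):
    """L_total = L_box + L_genus + lambda * L_cls (lambda = 0.2 per Qian's Fig. 3)."""
    return (smooth_l1(box_pred, box_true)
            + cross_entropy(genus_logits, genus)
            + lam * cross_entropy(cls_logits, cls))

# One hypothetical detection: predicted vs. true box, genus logits, class logits.
loss = total_loss(np.array([0.1, 0.2, 0.9, 1.1]), np.array([0.0, 0.0, 1.0, 1.0]),
                  np.array([2.0, 0.1, -1.0]), 0,
                  np.array([0.5, 0.5]), 1)
print(round(float(loss), 4))
```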
Regarding claim 9, Qian discloses wherein the artificial neural network function comprises a multi-class classifier configured, for each respective localization containing a microalgae micro-organism, to determine a respective class from a predetermined set of classes comprising combinations of both a microalgae species or genus (page 2, col 1: "Inspired by MT-DNNs, the proposed framework is designed based on the architecture of Faster R-CNN [2], as depicted in Fig. 2. We extend Faster R-CNN by adding an extra classification branch for multi-task learning. The framework consists of three branches that will be trained with different objectives for robust algal analysis: Branch-1 is used to predict the genus of algae. Branch-2 is used for algal detection and localization. Branch-3 is used to predict the class of algae.") and a physiological state. Qian implicitly discloses the physiological state (page 4, col 1: "In our future work, 3D CNN [18] can be implemented with other biological features to improve the performance on algal detection and classification."), and Ohya explicitly teaches it ("[0039] As a result, according to the present invention, indicators of the physiological cell states, the contamination status of other organisms, and the accumulation status of colored pigments are obtained as quantitative values. Thus, detection of the physiological state of the cells and the production status of useful components becomes easy."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Qian's disclosure of microalgae detection with Ohya's teaching of non-algae detection and algae physiological state determination, in order to support culture-condition development and to breed strains that produce large amounts of useful substances, thereby improving the production of useful substances by microalgae cells or the like ([0031] of Ohya).
Regarding claim 13, Qian discloses an object detection neural network configured for determination of the plurality of localizations (page 2, col 1: "Inspired by MT-DNNs, the proposed framework is designed based on the architecture of Faster R-CNN [2], as depicted in Fig. 2. We extend Faster R-CNN by adding an extra classification branch for multi-task learning. The framework consists of three branches that will be trained with different objectives for robust algal analysis: Branch-1 is used to predict the genus of algae. Branch-2 is used for algal detection and localization.").

Regarding claim 21, Qian does not explicitly disclose, but Ohya teaches, wherein the device further comprises a processor coupled to the computer-readable medium ("The controller 11 obtains those image data from the image capturing device 100 and save them in the image storage section 21 for storage."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Qian's disclosure of microalgae detection with Ohya's teaching of non-algae detection and algae physiological state determination, in order to support culture-condition development and to breed strains that produce large amounts of useful substances, thereby improving the production of useful substances by microalgae cells or the like ([0031] of Ohya).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. ("Multi-Target Deep Learning for Algal Detection and Classification") in view of Ohya (US 20150302237 A1), and further in view of Oh (US 20200279370 A1).
Regarding claim 2, Qian and Ohya do not disclose or teach, but in a similar field of endeavor of microalgae image analysis, Oh teaches wherein the predetermined set of microalgae physiological states includes one or more microalgae health states ("[0063] Upon analyzing this, when the microalgae 1 in a healthy state is exposed to stress, the SMD value becomes small, but there is no change in the CMV value. However, when cells die by exposure to more stress, the SMD value becomes smaller and the CMV value becomes larger. When the microalgae 1 in a dead state and the microalgae 1 in a cyst state are compared, the SMD value does not change, and the CMV value increases."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Qian and Ohya's disclosure of non-algae detection and algae physiological state determination with Oh's teaching of microalgae health states, in order to classify and analyze the microalgae into a healthy state, a cyst state, and a dead state ([0012] of Oh).

Claims 7, 8, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. ("Multi-Target Deep Learning for Algal Detection and Classification") in view of Ohya (US 20150302237 A1), and further in view of Sekiguchi (US 20190156481 A1).

Regarding claim 7, Qian implicitly discloses, for each connected component, determining a bounding box (Fig. 5: "Examples of algal detection results. The classification results are printed on the top left corner of the bounding boxes. The first 2 rows denote correct classification and detection results.").
Qian and Ohya do not explicitly disclose, but in a similar field of endeavor of cell nucleus detection, Sekiguchi teaches applying a low pass filter that outputs a binary image, wherein pixels of the binary image having value 0 correspond to pixels of the background and pixels of the binary image having value 1 correspond to a pixel of each microalgae of the culture ("[0162] Therefore, in the binary image 83, it is possible to discriminate the region of the cell nucleus by detecting the position of the pixel whose estimate value of the pixel changes from 1 to 0 or the pixel changing from 0 to 1. As another embodiment, it also is possible to detect the boundary between the region of the cell nucleus and the other region, that is, detect the region of the cell nucleus."); detecting connected components in the binary image ("[0160] When all the pixels in the input image have not been processed, in step S27 the processing unit 20A moves the center position of the window W2 by one pixel unit within the color density encoded diagrams 79r, 79g, and 79b shown in FIG. 3 similarly to step S19 in the deep learning process."); and, for each connected component, determining a bounding box (Fig. 11, refs S26 and S27). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Qian and Ohya's disclosure of non-algae detection and algae physiological state determination with Sekiguchi's teaching of low-pass filtering, in order to enable a device to support pathological tissue diagnosis, which will greatly contribute to the elimination of the shortage of pathologists and the improvement of the labor conditions of pathologists ([0007] of Sekiguchi).
Regarding claim 8, Qian and Ohya do not explicitly disclose, but in a similar field of endeavor of cell nucleus detection, Sekiguchi teaches wherein the artificial neural network function comprises a binary classifier configured, for each respective localization, to determine whether the at least one respective micro-organism is a microalgae or a non-algae micro-organism ("[0080] The value 74b shown in the lower part of FIG. 2B is binary data of the true image 73. The binary data 74b of the true image 73 is also called a label value. For example, the label value 1 indicates the region of the cell nucleus, and the label value 0 indicates the other region. That is, in the true value image 73 shown in FIG. 1, the position of the label value changing from 1 to 0 or the position of the pixel changing from 0 to 1 corresponds to the boundary between the region of the cell nucleus and the other region."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Qian and Ohya's disclosure of non-algae detection and algae physiological state determination with Sekiguchi's teaching of low-pass filtering, in order to enable a device to support pathological tissue diagnosis, which will greatly contribute to the elimination of the shortage of pathologists and the improvement of the labor conditions of pathologists ([0007] of Sekiguchi).

Regarding claim 12, Qian implicitly discloses, for each connected component, determining a bounding box (Fig. 5, as cited for claim 7).
Qian and Ohya do not explicitly disclose but in a similar field of endeavor of cell nucleus detection, Sekiguchi teaches applying a low pass filter that outputs a binary image, wherein pixels of the binary image having value 0 correspond to pixels of the background and pixels of the binary image having value 1 correspond to pixels of microalgae and non-algae micro-organisms of the culture ([0162] Therefore, in the binary image 83, it is possible to discriminate the region of the cell nucleus by detecting the position of the pixel whose estimate value of the pixel changes from 1 to 0 or the pixel changing from 0 to 1. As another embodiment, it also is possible to detect the boundary between the region of the cell nucleus and the other region, that is, detect the region of the cell nucleus.); detecting connected components in the binary image ([0160] When all the pixels in the input image have not been processed, in step S27 the processing unit 20A moves the center position of the window W2 by one pixel unit within the color density encoded diagrams 79r, 79g, and 79b shown in FIG. 3 similarly to step S19 in the deep learning process.); and for each connected component, determining a bounding box (fig. 11, ref s26 and s27). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Qian and Ohya’s disclosure of non-algae detection and algae physiological state determination, with Sekiguchi’s teaching of low-pass filters, in order to enable a device to support pathological tissue diagnosis, which will greatly contribute to the elimination of the shortage of pathologists and the improvement of the labor conditions of pathologists ([0007]).

Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over Qian et al., ("Multi-Target Deep Learning for Algal Detection and Classification"), and further in view of Sekiguchi (US 20190156481 A1).
Regarding claim 18, Qian implicitly discloses for each connected component, determining a bounding box (Fig. 5. Examples of algal detection results. The classification results are printed on the top left corner of the bounding boxes. The first 2 rows denote correct classification and detection results.). Qian does not explicitly disclose but in a similar field of endeavor of cell nucleus detection, Sekiguchi teaches applying a low pass filter that outputs a binary image, wherein pixels of the binary image having value 0 correspond to pixels of the background and pixels of the binary image having value 1 correspond to a pixel of each microalgae of the culture ([0162] Therefore, in the binary image 83, it is possible to discriminate the region of the cell nucleus by detecting the position of the pixel whose estimate value of the pixel changes from 1 to 0 or the pixel changing from 0 to 1. As another embodiment, it also is possible to detect the boundary between the region of the cell nucleus and the other region, that is, detect the region of the cell nucleus.); detecting connected components in the binary image ([0160] When all the pixels in the input image have not been processed, in step S27 the processing unit 20A moves the center position of the window W2 by one pixel unit within the color density encoded diagrams 79r, 79g, and 79b shown in FIG. 3 similarly to step S19 in the deep learning process.); and for each connected component, determining a bounding box (fig. 11, ref s26 and s27). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine Qian’s disclosure of microalgae detection, with Sekiguchi’s teaching of low-pass filters, in order to enable a device to support pathological tissue diagnosis, which will greatly contribute to the elimination of the shortage of pathologists and the improvement of the labor conditions of pathologists (0007). 
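The pipeline mapped across claims 7, 12, and 18 above reduces to three steps: threshold a (low-pass-filtered) grayscale image into a binary image with background pixels at 0 and organism pixels at 1, detect connected components in that binary image, and compute a bounding box for each component. A pure-Python sketch under those assumptions (a real implementation would typically use OpenCV or scipy.ndimage; the toy image and threshold are illustrative only):

```python
# Sketch of the claimed pipeline: binarize -> 4-connected components -> boxes.
from collections import deque

def binarize(image, threshold):
    """Threshold a smoothed grayscale image: 1 = organism pixel, 0 = background."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def connected_components(binary):
    """Label 4-connected components of 1-pixels via BFS flood fill."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    components = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and labels[y][x] == 0:
                label = len(components) + 1
                pixels = []
                queue = deque([(y, x)])
                labels[y][x] = label
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = label
                            queue.append((ny, nx))
                components.append(pixels)
    return components

def bounding_box(pixels):
    """(min_x, min_y, max_x, max_y) box around one component."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    return (min(xs), min(ys), max(xs), max(ys))

if __name__ == "__main__":
    image = [  # toy 4x5 grayscale frame with two bright blobs
        [0, 9, 9, 0, 0],
        [0, 9, 9, 0, 0],
        [0, 0, 0, 0, 8],
        [0, 0, 0, 8, 8],
    ]
    binary = binarize(image, threshold=5)
    boxes = [bounding_box(c) for c in connected_components(binary)]
    print(boxes)  # -> [(1, 0, 2, 1), (3, 2, 4, 3)]
```

Note the division of labor the rejection relies on: the thresholding/binary-image step is read onto Sekiguchi, while the per-component bounding boxes are read as implicit in Qian's Fig. 5 detection results.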
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US 20220317049 A1 to claim 3: [0153] In addition, the image processing device may also analyze the microscopic image of the algae to obtain an analysis result. For example, the analysis results may include at least one of the following parameters: a count of the algae, an average value of minor axis, an average value of major axis, or an aggregation rate.

US 20210403854 A1 to claim 2: [0115] The central unit may output a result indicating the identified physiological state of the algae. Output 124 may indicate that the sample is healthy 126, or various kinds of stress, for example contamination 128, low nitrogen 130, or low phosphate 132.

US 20050202523 A1 to claims 7, 12 and 18: [0052] The algorithm described here is an example used for detecting and enumerating bacteria on sol-gel images. The purpose of this algorithm is to detect bright spots with the expected size: Low-pass filtering was used for high frequency noise reduction. Image segmentation was used leading to a binary image of white blobs of bacteria on black background. The segmentation in this implementation was done using a simple threshold application, with a constant pre-defined threshold.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED A NASHER whose telephone number is (571)272-1885. The examiner can normally be reached Mon - Fri 0800 - 1700.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer can be reached at (571) 272-9523.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AHMED A NASHER/
Examiner, Art Unit 2675

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675

Prosecution Timeline

Oct 13, 2023
Application Filed
Jan 10, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601840
TUNING PARAMETER DETERMINATION METHOD FOR TRACKING AN OBJECT, A GROUP DENSITY-BASED CLUSTERING METHOD, AN OBJECT TRACKING METHOD, AND AN OBJECT TRACKING APPARATUS USING A LIDAR SENSOR
2y 5m to grant Granted Apr 14, 2026
Patent 12586329
MODELING METHOD, DEVICE, AND SYSTEM FOR THREE-DIMENSIONAL HEAD MODEL, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12582373
GENERATING SYNTHETIC ELECTRON DENSITY IMAGES FROM MAGNETIC RESONANCE IMAGES
2y 5m to grant Granted Mar 24, 2026
Patent 12567255
FEW-SHOT VIDEO CLASSIFICATION
2y 5m to grant Granted Mar 03, 2026
Patent 12561965
NEURAL NETWORK CACHING FOR VIDEO
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+34.4%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 99 resolved cases by this examiner. Grant probability derived from career allow rate.
