DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 04/16/2024, 06/05/2025, and 07/28/2025 have been entered and considered. Initialed copies of the PTO-1449 forms by the Examiner are attached.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Claims 2-3, 7 and 9-10 recite limitations that use generic placeholders (terms similar to “means” or “step”) coupled with functional language and thus invoke 35 U.S.C. 112(f):
Claims 2 and 9 recite the limitation “a detection network… calling the detection network to perform the position detection on the physiological image…” [Lines 2 and 3-4].
Claims 2 and 9 recite the limitation “a decomposition network… calling the decomposition network to perform the color channel decomposition…” [Lines 2 and 7-8].
Claims 3 and 10 recite the limitation “a position detection subnetwork… calling the position detection subnetwork to perform the position detection on the physiological image to obtain the position information…” [Lines 1-4].
Claims 3 and 10 recite the limitation “a region segmentation subnetwork… calling the region segmentation subnetwork to perform image segmentation on the physiological image…” [Lines 2 and 6-7].
Claim 7 recites the limitation “calling a detection network in the physiological image processing model to perform position detection…” [Lines 2 and 5-6].
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
(i) “a detection network” (Fig. 2, #322; Paragraphs [0049], [0185]: the detection network 322 includes a position detection subnetwork 322a and a region segmentation subnetwork 322b implemented by processor 2301 connected to memory 2302; the specification thus discloses sufficient corresponding structure or material, the detection network being implemented in an algorithm and executed by a processor).
(ii) “a decomposition network” (Fig. 2, #324; Paragraphs [0050], [0104]: the decomposition network 324 is a subnetwork in one of a convolutional neural network (CNN), a long short-term memory (LSTM) network, a recurrent neural network (RNN), a fully convolutional network (FCN), a U-Net, a SegNet, and a LinkNet, implemented by processor 2301 connected to memory 2302; the specification thus discloses sufficient corresponding structure or material, the decomposition network being implemented in an algorithm and executed by a processor).
(iii) “a position detection subnetwork” (Fig. 2, #322a; Paragraphs [0053]-[0054], [0104]: the position detection subnetwork 322a is a subnetwork in one of a CNN, an LSTM network, an RNN, an FCN, a U-Net, a SegNet, and a LinkNet, implemented by processor 2301 connected to memory 2302; the specification thus discloses sufficient corresponding structure or material, the position detection subnetwork being implemented in an algorithm and executed by a processor).
(iv) “a region segmentation subnetwork” (Fig. 2, #322b; Paragraphs [0051], [0104]: the region segmentation subnetwork 322b is a subnetwork in one of a CNN, an LSTM network, an RNN, an FCN, a U-Net, a SegNet, and a LinkNet, implemented by processor 2301 connected to memory 2302; the specification thus discloses sufficient corresponding structure or material, the region segmentation subnetwork being implemented in an algorithm and executed by a processor).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are: physiological image processing model in claims 1, 8 and 15.
Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that perform the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-13 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Minghao et al. (English Translation of CN 113763370) in view of Chukka et al. (US20210285056A1).
Regarding claim 1, Minghao teaches a physiological image processing method (method for processing a digital pathological image – see page 3, [p][003]) performed by a computer device (computer – see page 8, 3rd full para), the method comprising: obtaining a physiological image (step a1, an initial digital pathology image to be detected is acquired – see page 9, [p][005]); determining position information of at least one mutated object in the physiological image (the feature information of the initial digital pathological image is extracted, and the image type of the initial digital pathological image is determined according to the feature information – see page 9, [p][005]) based on a physiological image processing model (a classification module for inputting the target digital pathology image – see page 4, [p][002]); performing color channel decomposition on the physiological image to obtain staining information corresponding to the physiological image (performing convolution calculation on the characteristic information to predict at least two staining color matrixes corresponding to the characteristic matrix – see page 10, 3rd full para); and making statistics according to the position information and the staining information (determining an optical density vector corresponding to an initial staining channel in a bright field type digital pathological image and predicting a dyeing color matrix and probability distribution corresponding to each pixel according to the second convolution result, wherein the dyeing color matrix to which each pixel belongs can be visually expressed through the probability distribution, so as to achieve the purpose of splitting the dyeing channel – see page 10, 3rd and 4th full para) to obtain a staining result of the mutated objects in the physiological image (converting the staining color matrixes into corresponding staining channels to finally obtain a target digital pathological image – see page 10, 4th full para).
Minghao does not explicitly teach counting.
However, Chukka explicitly teaches counting (counting all detected dots within each tumor nucleus in each mapped tissue region – see [p][0015]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Minghao's physiological image processing method, in which a physiological image is obtained and processed, with Chukka's teaching of counting, such that Minghao's making of statistics includes counting.
The motivation for the modification would have been to determine a detection result of a target biological tissue according to the cell information, thereby enhancing analysis and improving patient treatment and outcomes, because Minghao and Chukka are both methods/systems for analyzing tissue samples: Minghao determines a detection result of a target biological tissue according to the cell information by acquiring a dyeing-channel processing strategy corresponding to the image type, while Chukka automatically analyzes digital images of biological samples stained for the presence of protein and/or nucleic acid biomarkers and automatically detects and quantifies signals corresponding to one or more biomarkers (see Minghao et al., English Translation of CN 113763370, page 3, [p][003]; Chukka et al., US20210285056A1, abstract and [p][0057]).
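For illustration only (this sketch is not part of the record or of either reference, and all identifiers are hypothetical), the color-channel decomposition mapped above typically begins by converting each RGB pixel to optical-density space under the Beer-Lambert relation, after which staining color matrixes can be predicted:

```python
import math

def optical_density(rgb, background=255.0):
    # Beer-Lambert conversion: absorbance per RGB channel, the usual
    # first step of color-channel (stain) decomposition.
    # Pixel values are clamped to at least 1 to avoid log(0).
    return [-math.log10(max(c, 1.0) / background) for c in rgb]

# A brown (DAB-like) stained pixel absorbs most strongly in blue,
# so its optical density is highest in that channel.
od = optical_density([200, 120, 60])
```

In a full decomposition, these per-pixel optical-density vectors would then be projected onto per-stain color vectors to split the staining channels.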
Regarding claim 2, Minghao in view of Chukka teaches the method according to claim 1. Minghao further teaches wherein the physiological image processing model comprises a detection network (detection model – see page 4, [p][006]) and a decomposition network (a convolution module – see page 10, 3rd full para), and the method further comprises: calling the detection network to perform the position detection on the physiological image to obtain the position information of the at least one mutated object in the physiological image (the detection model is adopted to detect the minimum circumscribed rectangle of the target biological tissue in the target digital image – see page 4, [p][006]); and calling the decomposition network to perform the color channel decomposition on the physiological image (finally converting the staining color matrixes into corresponding staining channels – see page 10, 2nd full para) to obtain the staining information corresponding to the physiological image (first classification model determines the target biological tissue in the target digital pathology image according to the channel properties of the staining channel – see page 3, [p][004]).
Regarding claim 3, Minghao in view of Chukka teaches the method according to claim 2. Minghao further teaches wherein the detection network comprises a position detection subnetwork (detection model – see page 4, [p][006]) and a region segmentation subnetwork (detection model – see page 4, [p][006]); and the calling the detection network to perform position detection on the physiological image to obtain the position information of the at least one mutated object in the physiological image comprises: calling the position detection subnetwork to perform the position detection on the physiological image to obtain the position information of physiological objects in the physiological image (the detection model is adopted to detect the minimum circumscribed rectangle of the target biological tissue in the target digital image – see page 11, [p][006]); calling the region segmentation subnetwork to perform image segmentation on the physiological image to obtain a diseased region in the physiological image (the target digital pathological image is segmented according to the minimum circumscribed rectangle to obtain the target image of the target biological tissue – see page 11, [p][006]), the diseased region comprising the at least one mutated object (biological tissue sample can be the channel attribute of the tumor tissue sample corresponding to the staining channel – see page 11, [p][002]); and determining the physiological objects belonging to the diseased region as the mutated objects according to the diseased region and the position information of the physiological objects, and determining the position information of the mutated objects (the target staining channel can be called by the second classification model to clarify the position information and the boundary information of the cell nucleus – see page 11, [p][008]).
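For illustration only (not from either reference; all identifiers are hypothetical), the claim 3 flow mapped above, in which detected physiological objects falling within the segmented diseased region are determined to be the mutated objects, reduces to a membership test of object positions against the segmented region:

```python
def mutated_objects(positions, diseased_region):
    # positions: list of (x, y) object centers from the
    # position-detection step (hypothetical representation).
    # diseased_region: set of (x, y) pixels from the segmentation
    # step (hypothetical representation).
    # Objects inside the diseased region are the mutated objects.
    return [p for p in positions if p in diseased_region]

region = {(0, 0), (0, 1), (1, 0), (1, 1)}
hits = mutated_objects([(0, 1), (5, 5), (1, 0)], region)
# → [(0, 1), (1, 0)]
```

A real pipeline would use segmentation masks and bounding boxes rather than point sets, but the intersection logic is the same.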
Regarding claim 4, Minghao in view of Chukka teaches the method according to claim 2. Minghao further teaches wherein the calling the decomposition network to perform the color channel decomposition on the physiological image to obtain the staining information corresponding to the physiological image comprises: calling the decomposition network to perform the color channel decomposition on the physiological image to obtain color information of the physiological image in at least two color channels (performing convolution calculation on the characteristic information to predict at least two staining color matrixes corresponding to the characteristic matrix – see page 10, 4th full para); and determining the color information of the physiological image in the first color channel as the staining information (establishing a color classifier according to the staining intensity of each biomarker – see page 12, [p][008]).
Regarding claim 5, Minghao in view of Chukka teaches the method according to claim 1. Minghao further teaches the staining information being used for indicating at least two staining states of the physiological objects in the physiological image (wherein the biological tissue sample image comprises a biological tissue sample carrying at least two staining channels – see page 7, 1st full para).
Minghao does not explicitly disclose wherein the staining counting result comprises staining counting information; and the making statistics according to the position information and the staining information to obtain a staining counting result of the mutated objects comprises: obtaining a first counting result of the mutated objects belonging to a first staining state according to the position information of the mutated objects and the staining information; obtaining a second counting result of the mutated objects according to the position information of the mutated objects; and determining a ratio of the first counting result to the second counting result as the staining counting information of the mutated objects.
However, Chukka explicitly teaches wherein the staining counting result comprises staining counting information (the biological sample is stained for the presence of at least two nucleic acid biomarkers, and wherein dots corresponding to each of the at least two nucleic acid biomarkers are detected and counted for each nucleus – see [p][0016]); and the making statistics according to the position information and the staining information to obtain a staining counting result of the mutated objects (metrics derived from color include local statistics of each of the colors (mean/median/variance/std dev) and/or color intensity correlations in a local image window – see [p][0111]) comprises: obtaining a first counting result of the mutated objects belonging to a first staining state according to the position information of the mutated objects and the staining information (the detection of first and second dots representing in-situ hybridization signals of different colors comprises generating a first color channel image and a second color channel image via color deconvolution (e.g. using the unmixing module 203) of the digital image, the first color channel image corresponding to the color spectrum contribution of the first stain and the second color channel image corresponding to the color spectrum contribution of the second stain – see [p][0150]); obtaining a second counting result of the mutated objects according to the position information of the mutated objects (the detection of first and second dots representing in-situ hybridization signals of different colors comprises generating a first color channel image and a second color channel image via color deconvolution (e.g. using the unmixing module 203) of the digital image, the first color channel image corresponding to the color spectrum contribution of the first stain and the second color channel image corresponding to the color spectrum contribution of the second stain – see [p][0150]); and determining a ratio of the first counting result to the second counting result as the staining counting information of the mutated objects (a ratio of the first to second dots is counted for each nucleus – see [p][0163]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Minghao's physiological image processing method with Chukka's teaching wherein the staining counting result comprises staining counting information, and the making statistics according to the position information and the staining information to obtain a staining counting result of the mutated objects comprises: obtaining a first counting result of the mutated objects belonging to a first staining state according to the position information of the mutated objects and the staining information; obtaining a second counting result of the mutated objects according to the position information of the mutated objects; and determining a ratio of the first counting result to the second counting result as the staining counting information of the mutated objects.
The motivation for the modification would have been to determine a detection result of a target biological tissue according to the cell information, thereby enhancing analysis and improving patient treatment and outcomes, because Minghao and Chukka are both methods/systems for analyzing tissue samples: Minghao determines a detection result of a target biological tissue according to the cell information by acquiring a dyeing-channel processing strategy corresponding to the image type, while Chukka automatically analyzes digital images of biological samples stained for the presence of protein and/or nucleic acid biomarkers and automatically detects and quantifies signals corresponding to one or more biomarkers (see Minghao et al., English Translation of CN 113763370, page 3, [p][003]; Chukka et al., US20210285056A1, abstract and [p][0057]).
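For illustration only (not from either reference; the data representation is hypothetical), the claim 5 ratio mapped above, a first counting result over a second counting result as the staining counting information, is simple arithmetic over the per-object position and staining data:

```python
def staining_counting_info(objects):
    # objects: list of (x, y, stained) tuples, i.e. position
    # information plus a per-object staining state (hypothetical
    # representation of the detection output).
    first_count = sum(1 for (_, _, stained) in objects if stained)
    second_count = len(objects)  # all detected mutated objects
    # Ratio of the first counting result to the second counting
    # result; 0.0 when no objects were detected.
    return first_count / second_count if second_count else 0.0

ratio = staining_counting_info(
    [(1, 2, True), (3, 4, False), (5, 6, True), (7, 8, True)]
)
# → 0.75
```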
Regarding claim 6, Minghao in view of Chukka teaches the method according to claim 1. Minghao teaches the making statistics according to the position information of the mutated objects comprising: marking at least one staining state of the mutated objects in the physiological image according to the position information of the mutated objects (the target staining channel can be called by the second classification model to clarify the position information and the boundary information of the cell nucleus – see page 11, [p][006]).
Minghao does not explicitly teach wherein the staining counting result comprises a staining counting image, and making statistics according to the position information and the staining information to obtain the staining counting image.
However, Chukka explicitly teaches wherein the staining counting result comprises a staining counting image (counting all detected dots within each tumor nucleus in each mapped tissue region – see [p][0015]) and making statistics according to the staining information to obtain the staining counting image (see [p][0163]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Minghao's physiological image processing method with Chukka's teaching wherein the staining counting result comprises a staining counting image obtained according to the staining information.
The motivation for the modification would have been to determine a detection result of a target biological tissue according to the cell information, thereby enhancing analysis and improving patient treatment and outcomes, because Minghao and Chukka are both methods/systems for analyzing tissue samples: Minghao determines a detection result of a target biological tissue according to the cell information by acquiring a dyeing-channel processing strategy corresponding to the image type, while Chukka automatically analyzes digital images of biological samples stained for the presence of protein and/or nucleic acid biomarkers and automatically detects and quantifies signals corresponding to one or more biomarkers (see Minghao et al., English Translation of CN 113763370, page 3, [p][003]; Chukka et al., US20210285056A1, abstract and [p][0057]).
Regarding independent claim 8, Minghao teaches a computer device (system for processing a digital pathological image – see page 3, [p][003]), comprising: a processor (see page 8, 2nd full para) and a memory (see page 8, 2nd full para), the memory storing at least one program (a computer program – see page 8, 2nd full para), and the processor being configured to execute the at least one program in the memory and causing the computer device to implement a physiological image processing method (see page 8, 2nd full para) including: obtaining a physiological image (step a1, an initial digital pathology image to be detected is acquired – see page 9, [p][005]); determining position information of at least one mutated object in the physiological image (the feature information of the initial digital pathological image is extracted, and the image type of the initial digital pathological image is determined according to the feature information – see page 9, [p][005]) based on a physiological image processing model (a classification module for inputting the target digital pathology image – see page 4, [p][002]); performing color channel decomposition on the physiological image to obtain staining information corresponding to the physiological image (performing convolution calculation on the characteristic information to predict at least two staining color matrixes corresponding to the characteristic matrix – see page 10, 3rd full para); and making statistics according to the position information and the staining information (determining an optical density vector corresponding to an initial staining channel in a bright field type digital pathological image and predicting a dyeing color matrix and probability distribution corresponding to each pixel according to the second convolution result, wherein the dyeing color matrix to which each pixel belongs can be visually expressed through the probability distribution, so as to achieve the purpose of splitting the dyeing channel – see page 10, 3rd and 4th full para) to obtain a staining result of the mutated objects in the physiological image (converting the staining color matrixes into corresponding staining channels to finally obtain a target digital pathological image – see page 10, 4th full para).
Minghao does not explicitly teach counting.
However, Chukka explicitly teaches counting (counting all detected dots within each tumor nucleus in each mapped tissue region – see [p][0015]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Minghao's physiological image processing method, in which a physiological image is obtained and processed, with Chukka's teaching of counting, such that Minghao's making of statistics includes counting.
The motivation for the modification would have been to determine a detection result of a target biological tissue according to the cell information, thereby enhancing analysis and improving patient treatment and outcomes, because Minghao and Chukka are both methods/systems for analyzing tissue samples: Minghao determines a detection result of a target biological tissue according to the cell information by acquiring a dyeing-channel processing strategy corresponding to the image type, while Chukka automatically analyzes digital images of biological samples stained for the presence of protein and/or nucleic acid biomarkers and automatically detects and quantifies signals corresponding to one or more biomarkers (see Minghao et al., English Translation of CN 113763370, page 3, [p][003]; Chukka et al., US20210285056A1, abstract and [p][0057]).
Regarding claim 9, the claim corresponds to claim 2 except for reciting a different statutory category (a computer device). Therefore, the rejection analysis of claim 2 is fully applicable to claim 9.
Regarding claim 10, the claim corresponds to claim 3 except for reciting a different statutory category (a computer device). Therefore, the rejection analysis of claim 3 is fully applicable to claim 10.
Regarding claim 11, the claim corresponds to claim 4 except for reciting a different statutory category (a computer device). Therefore, the rejection analysis of claim 4 is fully applicable to claim 11.
Regarding claim 12, the claim corresponds to claim 5 except for reciting a different statutory category (a computer device). Therefore, the rejection analysis of claim 5 is fully applicable to claim 12.
Regarding claim 13, the claim corresponds to claim 6 except for reciting a different statutory category (a computer device). Therefore, the rejection analysis of claim 6 is fully applicable to claim 13.
Regarding independent claim 15, Minghao teaches a non-transitory computer-readable storage medium (memory – see page 8, 2nd full para) storing at least one program (a computer program – see page 8, 2nd full para), the at least one program being loaded and executed by a processor (see page 8, 2nd full para) of a computer device (computer – see page 8, 3rd full para) and causing the computer device to implement a physiological image processing method (see page 8, 2nd full para) including: obtaining a physiological image (step a1, an initial digital pathology image to be detected is acquired – see page 9, [p][005]); determining position information of at least one mutated object in the physiological image (the feature information of the initial digital pathological image is extracted, and the image type of the initial digital pathological image is determined according to the feature information – see page 9, [p][005]) based on a physiological image processing model (a classification module for inputting the target digital pathology image – see page 4, [p][002]); performing color channel decomposition on the physiological image to obtain staining information corresponding to the physiological image (performing convolution calculation on the characteristic information to predict at least two staining color matrixes corresponding to the characteristic matrix – see page 10, 3rd full para); and making statistics according to the position information and the staining information (determining an optical density vector corresponding to an initial staining channel in a bright field type digital pathological image and predicting a dyeing color matrix and probability distribution corresponding to each pixel according to the second convolution result, wherein the dyeing color matrix to which each pixel belongs can be visually expressed through the probability distribution, so as to achieve the purpose of splitting the dyeing channel – see page 10, 3rd and 4th full para) to obtain a staining result of the mutated objects in the physiological image (converting the staining color matrixes into corresponding staining channels to finally obtain a target digital pathological image – see page 10, 4th full para).
Minghao does not explicitly teach counting.
However, Chukka explicitly teaches counting (counting all detected dots within each tumor nucleus in each mapped tissue region – see [p][0015]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Minghao of a physiological image processing method performed by a computer device, the method comprising obtaining a physiological image, with the teaching of Chukka of counting.
Wherein the method of Minghao, as modified, would further include counting.
The motivation behind the modification would have been to determine a detection result of a target biological tissue according to the cell information, thereby enhancing analysis and improving patient treatment and outcomes, because Minghao and Chukka are both methods/systems for analyzing tissue samples. Minghao determines a detection result of a target biological tissue according to the cell information by acquiring a dyeing channel processing strategy corresponding to the image type, while Chukka automatically analyzes digital images of biological samples stained for the presence of protein and/or nucleic acid biomarkers and automatically detects and quantifies signals corresponding to one or more biomarkers, thus enhancing analysis and improving patient treatment and outcome (see Minghao et al (English Translation of CN 113763370), page 3, [p][003]; and Chukka et al (Pub No.: US20210285056A1), abstract and [p][0057]).
Regarding claim 16, this claim corresponds to claim 2 except for reciting a different statutory category (a non-transitory computer-readable storage medium). Therefore, the rejection analysis of claim 2 is fully applicable to claim 16.
Regarding claim 17, this claim corresponds to claim 3 except for reciting a different statutory category (a non-transitory computer-readable storage medium). Therefore, the rejection analysis of claim 3 is fully applicable to claim 17.
Regarding claim 18, this claim corresponds to claim 4 except for reciting a different statutory category (a non-transitory computer-readable storage medium). Therefore, the rejection analysis of claim 4 is fully applicable to claim 18.
Regarding claim 19, this claim corresponds to claim 5 except for reciting a different statutory category (a non-transitory computer-readable storage medium). Therefore, the rejection analysis of claim 5 is fully applicable to claim 19.
Regarding claim 20, this claim corresponds to claim 6 except for reciting a different statutory category (a non-transitory computer-readable storage medium). Therefore, the rejection analysis of claim 6 is fully applicable to claim 20.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Minghao et al (English Translation of CN 113763370) in view of Chukka et al (US20210285056A1) as applied to claims 1 and 8 above, and further in view of Su et al (NPL titled: Automatic Detection Method for Cancer Cell Nucleus Image Based on Deep-Learning Analysis and Color Layer Signature Analysis Algorithm).
Regarding claim 7, Minghao teaches the method according to claim 1, wherein the physiological image processing model is trained (see page 10, last two lines and see page 11, [p][001]) by: obtaining a sample physiological image and mark information of the sample physiological image (acquiring label information corresponding to the biological tissue sample, wherein the label information is used for identifying the channel attribute of the staining channel corresponding to the biological tissue sample – see page 10, last two lines); and calling a detection network (detection model – see page 4, [p][006]) in the physiological image processing model to perform position detection on the sample physiological image to obtain a predicted detection result of the sample physiological image (training the initial classification model by using the biological tissue sample image and the label information so as to enable the initial classification model to learn the corresponding relation between the channel attribute and the biological tissue sample and obtain a first classification model – see page 11, [p][001]).
Minghao does not explicitly teach training the physiological image processing model according to an error between the predicted detection result and the mark information to obtain a trained physiological image processing model.
However, Su explicitly teaches training the physiological image processing model according to an error between the predicted detection result and the mark information to obtain a trained physiological image processing model ([t]he loss function includes calculating the errors of the bounding boxes’ coordinate regression, the source prediction, and the class score prediction – see section 3.1, [p][002]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Minghao of a physiological image processing method performed by a computer device, the method comprising obtaining a physiological image, with the teaching of Su of training the physiological image processing model according to an error between the predicted detection result and the mark information to obtain a trained physiological image processing model.
Wherein the method of Minghao, as modified, would further include training the physiological image processing model according to an error between the predicted detection result and the mark information to obtain a trained physiological image processing model.
The motivation behind the modification would have been to determine a detection result of a target biological tissue according to the cell information while integrating the application of a convolutional neural network for normal cell identification and the proposed color layer signature analysis, because Minghao and Su are both methods/systems for analyzing tissue samples. Minghao determines a detection result of a target biological tissue according to the cell information by acquiring a dyeing channel processing strategy corresponding to the image type, while Su integrates the application of a convolutional neural network for normal cell identification and the proposed color layer signature analysis (see Minghao et al (English Translation of CN 113763370), page 3, [p][003]; and Su et al (NPL titled: Automatic Detection Method for Cancer Cell Nucleus Image Based on Deep-Learning Analysis and Color Layer Signature Analysis Algorithm), abstract).
Regarding claim 14, this claim corresponds to claim 7 except for reciting a different statutory category (a computer device). Therefore, the rejection analysis of claim 7 is fully applicable to claim 14.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Zhang et al (Pub No.: 20240233416) discloses methods and systems for analyzing the cellular composition of a sample, comprising: providing an image of the sample in which a plurality of cellular populations are associated with respective signals, and classifying a plurality of query cells in the image between a plurality of classes corresponding to respective cellular populations in the plurality of cellular populations. This is performed by providing a query single cell image to an encoder module of a machine learning model to produce a feature vector for the query image, and assigning the query cell to one of the plurality of classes based on the feature vector for the query image and feature vectors produced by the encoder module for each of a plurality of reference single cell images. The machine learning model comprises: the encoder module, configured to take as input a single cell image and to produce as output a feature vector for the single cell image, and a similarity module configured to take as input a pair of feature vectors for a pair of single cell images and to produce as output a score indicative of the similarity between the single cell images. Thus, the machine learning model can be obtained without the need for an extensively annotated dataset. The methods find use in the analysis of multiplex immunohistochemistry/immunofluorescence in a variety of clinical contexts.
Maher et al (Pub No.: 20210310075) discloses methods and systems for detecting cancer and/or determining a cancer tissue of origin. A multiclass cancer classifier is disclosed that is trained with a plurality of biological samples containing cfDNA fragments and at least one synthetic training sample generated from the biological samples. The analytics system generates the synthetic training sample by sampling fragments from a training sample labeled as cancer and sampling fragments from another training sample labeled as non-cancer. The sampling probability is determined based on a limit of detection of the cancer classifier, e.g., in order to generate synthetic training samples with cancer tumor fraction proximate to the limit of detection.
Gupta et al (US Patent No.: 10718694) discloses counterstains for staining a biological sample on a single slide in preparation for microscopic examination. The counterstains are used to analyze the sample on the single slide using both brightfield and fluorescent illumination. The counterstains can identify both morphological details and molecular structures within the cells contained in the sample. The counterstains can be used in conjunction with other molecular stains.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRAE S ALLISON whose telephone number is (571)270-1052. The examiner can normally be reached on Monday-Friday 9am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached on (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDRAE S ALLISON/Primary Examiner, Art Unit 2673
January 20, 2026