DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP-2020-163915, filed on 04/28/2023.
Response to Amendment
Claim Objection: The amended claims filed on 10/14/2025 overcome the claim objection in the previous Office action.
With respect to the 35 U.S.C. 112(b) Rejection: The amended claims filed on 10/14/2025 overcome the 35 U.S.C. 112(b) rejection in the previous Office action.
Applicant Arguments
With respect to the 35 U.S.C. 101 Rejection: Applicant argues that “The present application is directed to appropriately detecting an object region in a medical image, and more particularly to avoid artifacts or virtual images that are not intended or do not actually exist and is an image that is formed due to a device capturing a medical image, imaging conditions, or the like. (See para. [0005] of U.S. Patent Application Publication No. 2023/0230244 A1). … "an evaluation index regarding detection accuracy of an object region included in a medical image. The information processing device 1 derives determination information using the second learning model 142. The second learning model 142 can be a machine learning model configured to output an evaluation index regarding detection accuracy of an object region included in a medical image when the medical image is input. The auxiliary storage unit 14 of the information processing device 1 stores definition information regarding the second learning model 142. The evaluation index regarding the detection accuracy of the object region is information indicating detection accuracy estimated when the object region is detected from the medical image. For example, as the evaluation index, a value of the loss function for an output value of the first learning model 141 that detects the object region included in the medical image may be used. As described above, the loss function indicates an error between the output value of the first learning model 141 and the correct answer label of the training data and is an index of accuracy of the first learning model 141." (See para. [0074] of U.S. Patent Application Publication No. 2023/0230244 A1). … In addition, "by executing the preprocessing before detection of the object region, the object region can be appropriately detected using only the medical image estimated to have high detection accuracy of the object region. An influence of the artifact in the medical image can be reduced and the accuracy of the detection result obtained by the first learning model 141 can be improved. Since a warning is displayed for the medical image estimated to have a relatively low detection accuracy of the object region, it is possible to reliably notify the doctor or the like that the object region is not detected." (See para. [0072] of U.S. Patent Application Publication No. 2023/0230244 A1). … Accordingly, Applicants respectfully submit that since claims 1, 9, and 15 at a minimum recite additional elements that integrate the judicial exception into a practical application (Prong Two of Step 2A) and further recite additional elements that amount to significantly more than the judicial exception (Step 2B), withdrawal of the rejection under 35 U.S.C. 101 is respectfully requested.” (Remarks, pages 4-6)
With respect to the 35 U.S.C. 103 Rejection: Applicant argues that “As set forth in Wujek, "[e]valuating the quality of the output may include passing the output to a loss layer in order to calculate an error. For example, an error between the training data and the output can be computed." … In other words, Wujek merely calculates the difference by comparing the output with the training data, rather than acquiring (or acquire) the evaluation index regarding detection accuracy of the object region included in the medical image by inputting the acquired medical image into a second model trained for outputting the evaluation index regarding detection accuracy of the object region included in the medical image as recited in claims 1, 9, and 15 as amended. In addition, Takenouchi teaches away from Wujek by specifically omitting the classification process when an image is determined to be inappropriate. (See para. [0113] of Takenouchi). In contrast, Wujek generates the output of the deep learning model once to examine the quality of the output as set forth in para. [0028] and shown in Fig. 2 of Wujek, which deviates from the purpose of Takenouchi, which is "... avoid that a classification result with low truth (false recognition result) is generated from an image unsuitable for recognition and provided to the user." (See para. [0113] in Takenouchi).” (Remarks, pages 7-8)
Response to Arguments
Claim Rejections - 35 USC § 101: With respect to claims 1, 9, and 15, Applicant's arguments filed 10/14/2025 have been fully considered but they are not persuasive. Applicant respectfully traverses and submits that amended claims 1, 9, and 15 are not directed to an abstract idea, arguing that claim 1 includes the following limitation: “an evaluation index regarding detection accuracy of an object region included in a medical image … The evaluation index regarding the detection accuracy of the object region is information indicating detection accuracy estimated when the object region is detected from the medical image. For example, as the evaluation index, a value of the loss function for an output value of the first learning model 141 that detects the object region included in the medical image may be used. As described above, the loss function indicates an error between the output value of the first learning model 141 and the correct answer label of the training data and is an index of accuracy of the first learning model 141”. In addition, “by executing the preprocessing before detection of the object region, the object region can be appropriately detected using only the medical image estimated to have high detection accuracy of the object region. An influence of the artifact in the medical image can be reduced and the accuracy of the detection result obtained by the first learning model 141 can be improved. Since a warning is displayed for the medical image estimated to have a relatively low detection accuracy of the object region, it is possible to reliably notify the doctor or the like that the object region is not detected.” (Remarks, pages 4-5)
The Examiner respectfully disagrees. The claim amendments do not include the limitations: “the evaluation index, a value of the loss function for an output value of the first learning model … the loss function indicates an error between the output value of the first learning model 141 and the correct answer label of the training data and is an index of accuracy of the first learning model” and/or “An influence of the artifact in the medical image can be reduced and the accuracy of the detection result obtained by the first learning model 141 can be improved” in claims 1, 9, and 15. Rather, the claim amendments recite the limitation: “the determination information includes an evaluation index regarding detection accuracy of the object region included in the medical image; and acquiring the evaluation index regarding detection accuracy of the object region included in the medical image by inputting the acquired medical image into a second model trained for outputting the evaluation index regarding detection accuracy of the object region included in the medical image.” These limitations are a mental process that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions: a person can see/identify an object region of interest in an image and/or collect information on the detection accuracy of the region of interest, and then make an evaluation/judgment about the object region of interest in the image. For further detail, see the Claim Rejections - 35 USC § 101 below.
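As a purely illustrative aside, the claimed mechanism under discussion (a second model trained to output a loss-function value as an evaluation index of the first model's detection accuracy, applied as preprocessing before detection) can be sketched in Python as follows. This is a minimal sketch assuming stand-in models, an arbitrary threshold, and an arbitrary image shape; it is not applicant's disclosed implementation.

import numpy as np

# Hypothetical stand-ins for the two trained models discussed above; the
# function bodies, threshold, and shapes are assumptions for illustration.
def first_model(image):
    # Detector: returns a per-pixel object-region probability map.
    return 1.0 / (1.0 + np.exp(-image))

def second_model(image):
    # Regressor trained to output an evaluation index, e.g., the expected
    # loss of first_model on this image (higher loss = lower accuracy).
    return float(np.clip(image.std() - 0.5, 0.0, 1.0))

image = np.random.randn(64, 64)           # acquired medical image
evaluation_index = second_model(image)    # derived determination information
LOSS_THRESHOLD = 0.6                      # assumed cutoff for high accuracy
if evaluation_index < LOSS_THRESHOLD:     # detection accuracy estimated high
    object_region = first_model(image)    # detect the object region
else:
    print("warning: object region is not detected (low estimated accuracy)")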
Claim Rejections - 35 USC § 103: Applicant's arguments filed on 10/14/2025 have been fully considered and are persuasive. Therefore, the previous rejection has been withdrawn. However, upon further consideration of the amendments, a new ground of rejection is made in view of Hsieh et al. (U.S. 20190340470 A1; “Hsieh”). Since this new ground of rejection was not necessitated by the amendment, this action is NON-FINAL.
Claim Status
Claims 9-14 are interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claims 18-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
Claims 1-5, 9-13, 15-19, and 21-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-5, 9-13, 15-19, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Takenouchi (WO-2020054543 A1) in view of Hsieh et al. (U.S. 20190340470 A1; “Hsieh”).
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation is: “a control unit” in claims 9-14.
Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification, which discloses in Paragraph 38: “The control unit 11 can include, for example, one or a plurality of arithmetic processing units such as a central processing unit (CPU), a micro-processing unit (MPU), and a graphics processing unit (GPU) and executes various types of information processing, control processing, and the like by reading and executing a program P stored in the auxiliary storage unit 14.”, as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 18-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 18 recites the limitation “a third model” in line 5. There is insufficient antecedent basis for this limitation in the claim, and it is unclear to the Examiner whether this recitation refers to the same “a third model” recited in line 15 of claim 15.
Claim 19 is also rejected under 35 U.S.C. 112(b) because it depends from claim 18.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5, 9-13, 15-19, and 21-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Based upon consideration of all of the relevant factors with respect to the claims as a whole, claims 1-5, 9-13, 15-19, and 21-23 are held to claim an abstract idea without reciting elements that amount to significantly more than the abstract idea, and are therefore rejected as ineligible subject matter under 35 U.S.C. 101. The Examiner will analyze claim 1; similar rationale applies to independent claims 9 and 15. The rationale for this finding, under MPEP § 2106, is explained below:
The claimed invention (1) must be directed to one of the four statutory categories, and (2) must not be wholly directed to subject matter encompassing a judicially recognized exception, as defined below. The following two-step analysis is used to evaluate these criteria.
Step 1: Is the claim directed to one of the four patent-eligible subject matter categories: process, machine, manufacture, or composition of matter?
When examining the claims under 35 U.S.C. 101, the Examiner finds that the claims fall within the four statutory categories: claim 1 is directed to a manufacture (a non-transitory computer-readable medium storing a computer program executed by a computer processor to execute a process), claim 9 is directed to a machine (an information processing device), and claim 15 is directed to a process (an information processing method).
Step 2a, Prong 1: Does the claim wholly embrace a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or is it a particular practical application of a judicial exception?
The Examiner interprets that a judicial exception applies because the following limitations of Claim 1 recite an abstract idea: acquiring a medical image generated based on a signal detected by a catheter inserted into a luminal organ (insignificant pre-/post-solution extra activity of generating/gathering data);
deriving determination information for determining whether to detect an object region from the medical image based on the acquired medical image, the determination information including an evaluation index regarding detection accuracy of the object region included in the medical image (a mental process including observation and evaluation that can be performed in the human mind, or by a human using pen and paper, such as evaluating/observing an object region of interest in an image);
determining whether to detect the object region from the medical image based on the derived determination information (a mental process including observation and evaluation that can be performed in the human mind, or by a human using pen and paper, such as evaluating/observing an object region of interest in an image);
detecting the object region included in the medical image by inputting the acquired medical image into a first model trained for detecting the object region included in the medical image when the medical image is input, in a case where the object region is determined to be detected from the medical image (a mental process including observation and evaluation that can be performed in the human mind, or by a human using pen and paper, such as evaluating/observing an object region of interest in an image); and
acquiring the evaluation index regarding detection accuracy of the object region included in the medical image by inputting the acquired medical image into a second model trained for outputting the evaluation index regarding detection accuracy of the object region included in the medical image (a mental process that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, such as a person seeing/identifying an object region of interest in an image and/or collecting information on the detection accuracy of the region of interest and then making an evaluation/judgment about that object region). These limitations, taken together, are directed to an abstract idea.
The claim is analogous to a mental process of “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind. Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016). The claim also performs a mental process in a computer environment; an example of a case identifying a mental process performed in a computer environment as an abstract idea is Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316-18, 120 USPQ2d 1353, 1360 (Fed. Cir. 2016).
If the claim recites a judicial exception (i.e., an abstract idea enumerated in MPEP § 2106.04(a)(2), a law of nature, or a natural phenomenon), the claim requires further analysis in Prong Two.
Step 2a, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
The Examiner interprets that Claim 1 does not recite additional elements, or a combination of additional elements, that integrate the judicial exception into a practical application. The steps are performed by a generic processor, amounting to insignificant extra-solution activity (see MPEP § 2106.05(g)) or to generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). See MPEP § 2106.04(a). Because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) (“Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract”).
If there are no additional elements in the claim, then it cannot be eligible. In such a case, after making the appropriate rejection (see MPEP § 2106.07 for more information on formulating a rejection for lack of eligibility), it is a best practice for the examiner to recommend an amendment, if possible, that would resolve eligibility of the claim.
Step 2b: If the claim does not integrate the judicial exception into a practical application, the Examiner must determine whether the claim recites additional elements that amount to significantly more than the judicial exception.
The Examiner interprets that claim 1 does not amount to significantly more than the judicial exception because the additional elements add only insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea, such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)).
Furthermore, the first trained model, the second trained model, and the generic computer components (the processor) are recited as performing generic computer functions that are well-understood, routine, and conventional activities, and amount to no more than implementing the abstract idea with a computerized system.
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Claims 2-5, 10-13, 16-19, and 21-23 depend from the independent claims and include all the limitations of the independent claims. The Examiner finds that claims 2-5, 10-13, 16-19, and 21-23 do not recite significantly more, since these claims only recite deriving determination information and outputting the result from the input image.
Thus, claims 1-5, 9-13, 15-19, and 21-23 recite the same abstract idea and are not drawn to eligible subject matter, as they are directed to the abstract idea without significantly more.
Therefore, claims 1-5, 9-13, 15-19, and 21-23 are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 9-13, 15-19, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Takenouchi (WO-2020054543 A1) in view of Hsieh et al. (U.S. 20190340470 A1; “Hsieh”).
Regarding claim 1, Takenouchi discloses a non-transitory computer-readable medium storing a computer program executed by a computer processor (Paragraph 36: “The CPU 70 controls each unit in the processor device 16 and totally controls the entire endoscope system 10. The ROM 72 stores various programs and control data for controlling the operation of the processor device 16. The program and data executed by the CPU 70 are temporarily stored in the RAM 74.”) to execute a process comprising:
acquiring a medical image generated based on a signal detected by a catheter inserted into a luminal organ; (Paragraphs 51-53: “while inserting the insertion section 20 of the electronic endoscope 12 into the body cavity and illuminating the inside of the body cavity with the illumination light from the light source device 14, an image of the inside of the body cavity captured by the imaging element 62 is displayed on the screen of the display device 18. … visualization in which a blood vessel in a specific depth region of the observation target is emphasized. In this mode, an image is generated and an image suitable for observing a blood vessel is displayed on the display device 18.”; Paragraph 99: “In step S11, the medical image processing apparatus 160 receives the current image via the image acquisition unit 162. The image acquired by the image acquisition unit 162 is a medical image including a subject image captured using the electronic endoscope 12, and is one image of a time-series image sequentially captured in a time-series manner.”)
deriving determination information (calculated feature amount) for determining whether to detect an object region from the medical image based on the acquired medical image; (Paragraph 66: “The availability determination unit 164 includes a recognition unit 164A. The recognizing unit 164A recognizes whether the input image is appropriate or inappropriate for image recognition.”)
determining whether to detect the object region from the medical image based on the derived determination information; (Paragraph 101; Paragraph 68: “the availability determination unit 164 determines whether the image acquired from the image acquisition unit 162 is an image inappropriate for recognition. The availability determination unit 164 includes a recognition unit 164A. The recognizing unit 164A recognizes whether the input image is appropriate or inappropriate for image recognition. Here, “appropriate for image recognition” means that the image is suitable for recognition processing for classifying lesions, which is the main purpose of recognition. “Inappropriate for image recognition” means that the image is inappropriate for recognition such as classification of lesions, … the recognizing unit 164A is configured using a first learned model learned by machine learning so as to perform a task of two classifications of an image suitable for recognition and an image unsuitable for recognition. … and determine whether or not the image is inappropriate for recognition using the calculated feature amount.” The Examiner interprets “the medical image” and “the acquired medical image” as the same medical image, and interprets the recognition/classification of a lesion region as the claimed “object region.”) and
detecting the object region included in the medical image by inputting the acquired medical image into a first model trained (the classification unit 170) for detecting the object region included in the medical image when the medical image is input, in a case where the object region is determined to be detected from the medical image; (Paragraphs 90-91: “For the classification processing of the classification unit 170 shown in FIG. 4, for example, a convolutional neural network (CNN) is used. The classification unit 170 is configured using a second learned model learned by machine learning so as to perform an image classification task of classifying the image into a specific class … When the determination result obtained from the availability determination unit 164 is “an image suitable for recognition”, the classification unit 170 executes a classification process. The classification unit 170 extracts a feature amount from the image and classifies the image. The classification unit 170 may detect a region of interest (eg, a lesion region), detect a lesion region, and/or perform segmentation based on the calculated feature amount. Further, the classification unit 170 may perform the classification process using the feature amount calculated by the recognition unit 164A.”; Paragraphs 101-104: “If the availability determination unit 164 determines in the determination process of step S14 that the classification is possible, the process proceeds to steps S16 and S20. … in step S20, the classification unit 170 performs processing for recognizing a lesion area from within the image and classifying the lesion area into a predetermined class.”) and
acquiring detection accuracy of the object region included in the medical image by inputting the acquired medical image into a second model (the recognizing unit 164A) trained for outputting the detection accuracy of the object region included in the medical image. (Paragraph 68: “The recognition unit 164A can be configured using, for example, a convolutional neural network (CNN). For example, the recognizing unit 164A is configured using a first learned model learned by machine learning so as to perform a task of two classifications of an image suitable for recognition and an image unsuitable for recognition.”)
However, Takenouchi does not disclose that the determination information includes an evaluation index regarding detection accuracy of the object region included in the medical image; and
acquiring the evaluation index regarding detection accuracy of the object region included in the medical image by inputting the acquired medical image into a second model trained for outputting the evaluation index regarding detection accuracy of the object region included in the medical image.
Hsieh discloses that the determination information includes an evaluation index regarding detection accuracy of the object region included in the medical image; (Figs. 21A-B; Fig. 22B; Paragraph 217: “hyper parameters are tuned by inputting unlabeled images 2251 to the learning layers of the convolutional network 2245. After classification 2247, one or more image quality indices 2259 are generated.”; Paragraph 203: “image quality is generated for computer analysis as a change in probabilistic values of image classification. On a scale of 1-5, for example, a 3 indicates the image is diagnosable (e.g., is of diagnostic quality), a 5 indicates a perfect image (e.g., probably at too high of a dose), and a 1 indicates the image data is not usable for diagnosis. As a result, a preferred score is 3-4. The DDLD 1532 can generate an IQI based on acquired image data by mimicking radiologist behavior and the 1-5 scale. Using image data attributes, the DDLD 1532 can analyze an image and determine features (e.g., a small lesion) and evaluate diagnostic quality of each feature in the image data”) and
acquiring the evaluation index regarding detection accuracy of the object region included in the medical image by inputting the acquired medical image into a second model trained (the convolutional network 2245) for outputting the evaluation index regarding detection accuracy of the object region included in the medical image. (Fig. 22B; Paragraph 217: “hyper parameters are tuned by inputting unlabeled images 2251 to the learning layers of the convolutional network 2245. After classification 2247, one or more image quality indices 2259 are generated.”; Paragraph 203: “image quality is generated for computer analysis as a change in probabilistic values of image classification. On a scale of 1-5, for example, a 3 indicates the image is diagnosable (e.g., is of diagnostic quality), a 5 indicates a perfect image (e.g., probably at too high of a dose), and a 1 indicates the image data is not usable for diagnosis. As a result, a preferred score is 3-4. The DDLD 1532 can generate an IQI based on acquired image data by mimicking radiologist behavior and the 1-5 scale. Using image data attributes, the DDLD 1532 can analyze an image and determine features (e.g., a small lesion) and evaluate diagnostic quality of each feature in the image data”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Takenouchi by including the learning and testing/evaluation phases for an image quality deep learning network taught by Hsieh, in order to provide image quality assessment and feedback using a deployed network model. One of ordinary skill in the art would have been motivated to combine the references because doing so would improve the determination of image quality and reconstruction feedback based on acquired image data, as well as improve the operation of imaging and/or other healthcare systems using a plurality of deep learning and/or other machine learning techniques.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
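As a purely illustrative aside, the combination mapped above (Takenouchi's availability determination gating the first model, with a Hsieh-style image quality index serving as the claimed evaluation index) can be sketched as follows. This is a minimal sketch assuming invented function names, a crude quality proxy, and a cutoff of 3 on Hsieh's 1-5 scale; it is not the references' actual implementation.

import numpy as np

def iqi_model(image):
    # Second model (cf. Hsieh's convolutional network 2245): outputs an
    # image quality index on a 1-5 scale, where 3 or more is diagnosable.
    snr = image.mean() / (image.std() + 1e-8)   # crude quality proxy (assumed)
    return int(np.clip(round(2 + snr), 1, 5))

def detector(image):
    # First model (cf. Takenouchi's classification unit 170): detects the
    # object (lesion) region only when the image is determined suitable.
    return (image > image.mean()).astype(np.uint8)

image = np.abs(np.random.randn(64, 64))         # acquired medical image
iqi = iqi_model(image)                          # evaluation index
if iqi >= 3:                                    # image suitable for recognition
    region_mask = detector(image)               # detect the object region
else:                                           # skip detection and warn
    print("warning: IQI = %d; image unsuitable for region detection" % iqi)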
Regarding claim 2, Takenouchi, as modified by Hsieh, discloses all of the limitations of the parent claim. Takenouchi further discloses outputting a warning indicating that the object region is not detected from the medical image when the object region is determined not to be detected from the medical image. (Paragraphs 67-71: “Inappropriate for image recognition” means that the image is inappropriate for recognition such as classification of lesions, which is the main purpose. … when the availability determination unit 164 determines that the classification is difficult, the process proceeds to the notification control unit 172 without performing the processes by the motion estimation unit 166, the behavior determination unit 168, and the classification unit 170.”)
Regarding claim 3, Takenouchi, as modified by Hsieh, discloses all of the limitations of the parent claim. Takenouchi further discloses that the determination information includes an output of an activation function included in the first model; and acquiring an output of the activation function included in the first model using the first model. (Paragraph 68: “The recognition unit 164A can be configured using, for example, a convolutional neural network (CNN). For example, the recognizing unit 164A is configured using a first learned model learned by machine learning so as to perform a task of two classifications of an image suitable for recognition and an image unsuitable for recognition. … and determine whether or not the image is inappropriate for recognition using the calculated feature amount.” One of ordinary skill in the art would understand that the output layer of a CNN includes an activation function.)
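As a purely illustrative note on this mapping, the output of a final-layer activation function can itself serve as determination information, since softmax or sigmoid activations encode the network's confidence. The following is a minimal sketch with invented logits and an assumed 0.5 cutoff, not the cited references' code.

import numpy as np

def softmax(logits):
    # Numerically stable softmax: a typical output-layer activation of a CNN.
    e = np.exp(logits - logits.max())
    return e / e.sum()

logits = np.array([2.1, 0.3])              # final-layer scores: [region, no region]
probs = softmax(logits)                    # activation output of the first model
determination_information = probs[0]       # confidence an object region exists
detect = determination_information > 0.5   # whether to proceed with detection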
Regarding claim 4, Takenouchi, as modified by Hsieh, discloses all of the limitations of the parent claim. Hsieh further discloses that the determination information includes presence or absence of an artifact in the medical image; and acquiring the presence or absence of the artifact in the medical image by inputting the acquired medical image into a third model trained for detecting the presence or absence of the artifact in the medical image. (Paragraph 145: “each DDLD 1522, 1532, 1542 determines a signature. For example, the DDLD 1522, 1532, 1542 determines signature(s) for machine (e.g., imaging device 1410, information subsystem 1420, etc.) service issues, clinical issues related to patient health, noise texture issues, artifact issues, etc.”; Paragraph 230: “a regression and/or classification method can be used to generate image quality metrics by labeling the training data with an absolute value and/or level of the corresponding image IQ metric. That is, metrics can include quantitative measures of image quality (e.g., noise level, detectability, etc.), descriptive measures of image quality (e.g., Likert score, etc.), a classification of image quality (e.g., whether the image is diagnostic or not, has artifacts or not, etc.)”)
Regarding claim 5, Takenouchi, as modified by Hsieh, discloses all of the limitations of the parent claim. Hsieh further discloses that the third model includes a model trained by unsupervised learning using a medical image with no artifact. (Paragraph 214: “hyper parameters are tuned by inputting unlabeled images 2221 to the unsupervised learning layer 2213 and then to the supervised learning layers of the convolutional network 2215. After classification 2217, one or more image quality indices 2219 are generated”; Paragraph 230: “a regression and/or classification method can be used to generate image quality metrics by labeling the training data with an absolute value and/or level of the corresponding image IQ metric. That is, metrics can include quantitative measures of image quality (e.g., noise level, detectability, etc.), descriptive measures of image quality (e.g., Likert score, etc.), a classification of image quality (e.g., whether the image is diagnostic or not, has artifacts or not, etc.)”)
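As a purely illustrative note on the claim 5 technique (a third model trained by unsupervised learning using only artifact-free medical images), one minimal sketch is an anomaly detector that fits a low-rank basis to clean images and flags an artifact when the reconstruction error is anomalously high. The PCA basis, rank, data shapes, and percentile threshold below are assumptions for illustration, not Hsieh's implementation.

import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(size=(200, 256))    # artifact-free training images, flattened
mu = clean.mean(axis=0)
_, _, vt = np.linalg.svd(clean - mu, full_matrices=False)
basis = vt[:16]                        # top-16 principal components (assumed rank)

def reconstruction_error(image):
    # Project onto the clean-image basis and measure what is left unexplained;
    # artifacts fall outside the basis and yield a large residual.
    centered = image - mu
    reconstructed = (centered @ basis.T) @ basis
    return float(np.linalg.norm(centered - reconstructed))

errors = [reconstruction_error(x) for x in clean]
threshold = np.percentile(errors, 99)          # calibrated on clean data only
test_image = rng.normal(size=256) * 3.0        # exaggerated artifact-like input
has_artifact = reconstruction_error(test_image) > threshold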
Regarding claim 9, Takenouchi discloses an information processing device (Paragraph 5: “image processing that provides information that supports diagnosis by processing time-series medical images.”) comprising: a control unit configured to:
acquire a medical image generated based on a signal detected by a catheter inserted into a luminal organ; (Paragraphs 51-53: “while inserting the insertion section 20 of the electronic endoscope 12 into the body cavity and illuminating the inside of the body cavity with the illumination light from the light source device 14, an image of the inside of the body cavity captured by the imaging element 62 is displayed on the screen of the display device 18. … visualization in which a blood vessel in a specific depth region of the observation target is emphasized. In this mode, an image is generated and an image suitable for observing a blood vessel is displayed on the display device 18.”; Paragraph 99: “In step S11, the medical image processing apparatus 160 receives the current image via the image acquisition unit 162. The image acquired by the image acquisition unit 162 is a medical image including a subject image captured using the electronic endoscope 12, and is one image of a time-series image sequentially captured in a time-series manner.”)
derive determination information (calculated feature amount) for determining whether to detect an object region from the medical image based on the acquired medical image; (Paragraph 66: “The availability determination unit 164 includes a recognition unit 164A. The recognizing unit 164A recognizes whether the input image is appropriate or inappropriate for image recognition.”)
determine whether to detect the object region from the medical image based on the derived determination information; (Paragraph 101; Paragraph 68: “the availability determination unit 164 determines whether the image acquired from the image acquisition unit 162 is an image inappropriate for recognition. The availability determination unit 164 includes a recognition unit 164A. The recognizing unit 164A recognizes whether the input image is appropriate or inappropriate for image recognition. Here, “appropriate for image recognition” means that the image is suitable for recognition processing for classifying lesions, which is the main purpose of recognition. “Inappropriate for image recognition” means that the image is inappropriate for recognition such as classification of lesions, … the recognizing unit 164A is configured using a first learned model learned by machine learning so as to perform a task of two classifications of an image suitable for recognition and an image unsuitable for recognition. … and determine whether or not the image is inappropriate for recognition using the calculated feature amount.” The Examiner interprets “the medical image” and “the acquired medical image” as the same medical image, and interprets the recognition/classification of a lesion region as the claimed “object region.”) and
detect the object region included in the medical image by inputting the acquired medical image into a first model trained (the classification unit 170) for detecting the object region included in the medical image when the medical image is input, in a case where the object region is determined to be detected from the medical image; (Paragraphs 90-91: “For the classification processing of the classification unit 170 shown in FIG. 4, for example, a convolutional neural network (CNN) is used. The classification unit 170 is configured using a second learned model learned by machine learning so as to perform an image classification task of classifying the image into a specific class … When the determination result obtained from the availability determination unit 164 is “an image suitable for recognition”, the classification unit 170 executes a classification process. The classification unit 170 extracts a feature amount from the image and classifies the image. The classification unit 170 may detect a region of interest (eg, a lesion region), detect a lesion region, and/or perform segmentation based on the calculated feature amount. Further, the classification unit 170 may perform the classification process using the feature amount calculated by the recognition unit 164A.”; Paragraphs 101-104: “If the availability determination unit 164 determines in the determination process of step S14 that the classification is possible, the process proceeds to steps S16 and S20. … in step S20, the classification unit 170 performs processing for recognizing a lesion area from within the image and classifying the lesion area into a predetermined class.”) and
acquire detection accuracy of the object region included in the medical image by inputting the acquired medical image into a second model (the recognizing unit 164A) trained for outputting the detection accuracy of the object region included in the medical image. (Paragraph 68: “The recognition unit 164A can be configured using, for example, a convolutional neural network (CNN). For example, the recognizing unit 164A is configured using a first learned model learned by machine learning so as to perform a task of two classifications of an image suitable for recognition and an image unsuitable for recognition.”)
However, Takenouchi does not disclose that the determination information includes an evaluation index regarding detection accuracy of the object region included in the medical image; and acquire the evaluation index regarding detection accuracy of the object region included in the medical image by inputting the acquired medical image into a second model trained for outputting the evaluation index regarding detection accuracy of the object region included in the medical image.
Hsieh discloses that the determination information includes an evaluation index regarding detection accuracy of the object region included in the medical image; (Figs. 21A-B; Fig. 22B; Paragraph 217: “hyper parameters are tuned by inputting unlabeled images 2251 to the learning layers of the convolutional network 2245. After classification 2247, one or more image quality indices 2259 are generated.”; Paragraph 203: “image quality is generated for computer analysis as a change in probabilistic values of image classification. On a scale of 1-5, for example, a 3 indicates the image is diagnosable (e.g., is of diagnostic quality), a 5 indicates a perfect image (e.g., probably at too high of a dose), and a 1 indicates the image data is not usable for diagnosis. As a result, a preferred score is 3-4. The DDLD 1532 can generate an IQI based on acquired image data by mimicking radiologist behavior and the 1-5 scale. Using image data attributes, the DDLD 1532 can analyze an image and determine features (e.g., a small lesion) and evaluate diagnostic quality of each feature in the image data”) and
acquire the evaluation index regarding detection accuracy of the object region included in the medical image by inputting the acquired medical image into a second model trained (the convolutional network 2245) for outputting the evaluation index regarding detection accuracy of the object region included in the medical image. (Fig. 22B; Paragraph 217: “hyper parameters are tuned by inputting unlabeled images 2251 to the learning layers of the convolutional network 2245. After classification 2247, one or more image quality indices 2259 are generated.”; Paragraph 203: “image quality is generated for computer analysis as a change in probabilistic values of image classification. On a scale of 1-5, for example, a 3 indicates the image is diagnosable (e.g., is of diagnostic quality), a 5 indicates a perfect image (e.g., probably at too high of a dose), and a 1 indicates the image data is not usable for diagnosis. As a result, a preferred score is 3-4. The DDLD 1532 can generate an IQI based on acquired image data by mimicking radiologist behavior and the 1-5 scale. Using image data attributes, the DDLD 1532 can analyze an image and determine features (e.g., a small lesion) and evaluate diagnostic quality of each feature in the image data”)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Takenouchi by including the learning and testing/evaluation phases for an image quality deep learning network taught by Hsieh, in order to provide image quality assessment and feedback using a deployed network model. One of ordinary skill in the art would have been motivated to combine the references because doing so would improve the determination of image quality and reconstruction feedback based on acquired image data, as well as improve the operation of imaging and/or other healthcare systems using a plurality of deep learning and/or other machine learning techniques.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 10, Takenouchi, as modified by Hsieh, discloses all of the limitations of the parent claim. Takenouchi further discloses that the control unit is configured to output a warning indicating that the object region is not detected from the medical image when the object region is determined not to be detected from the medical image. (Paragraphs 67-71: “Inappropriate for image recognition” means that the image is inappropriate for recognition such as classification of lesions, which is the main purpose. … when the availability determination unit 164 determines that the classification is difficult, the process proceeds to the notification control unit 172 without performing the processes by the motion estimation unit 166, the behavior determination unit 168, and the classification unit 170.”)
Regarding claim 11, Takenouchi, as modified by Hsieh, discloses all of the limitations of the parent claim. Takenouchi further discloses that the determination information includes an output of an activation function included in the first model; and the control unit is configured to acquire an output of the activation function included in the first model using the first model. (Paragraph 68: “The recognition unit 164A can be configured using, for example, a convolutional neural network (CNN). For example, the recognizing unit 164A is configured using a first learned model learned by machine learning so as to perform a task of two classifications of an image suitable for recognition and an image unsuitable for recognition. … and determine whether or not the image is inappropriate for recognition using the calculated feature amount.” One of ordinary skill in the art would understand that the output layer of a CNN includes an activation function.)
Regarding claim 12, Takenouchi, as modified by Hsieh, discloses all of the limitations of the parent claim. Hsieh further discloses that the determination information includes presence or absence of an artifact in the medical image; and the control unit is configured to acquire the presence or absence of the artifact in the medical image by inputting the acquired medical image into a third model trained for detecting the presence or absence of the artifact in the medical image. (Paragraph 145: “each DDLD 1522, 1532, 1542 determines a signature. For example, the DDLD 1522, 1532, 1542 determines signature(s) for machine (e.g., imaging device 1410, information subsystem 1420, etc.) service issues, clinical issues related to patient health, noise texture issues, artifact issues, etc.”; Paragraph 230: “a regression and/or classification method can be used to generate image quality metrics by labeling the training data with an absolute value and/or level of the corresponding image IQ metric. That is, metrics can include quantitative measures of image quality (e.g., noise level, detectability, etc.), descriptive measures of image quality (e.g., Likert score, etc.), a classification of image quality (e.g., whether the image is diagnostic or not, has artifacts or not, etc.)”)
Regarding claim 13, Takenouchi, as modified by Hsieh, discloses all of the limitations of the parent claim. Hsieh further discloses that the third model includes a model trained by unsupervised learning using a medical image with no artifact. (Paragraph 214: “hyper parameters are tuned by inputting unlabeled images 2221 to the unsupervised learning layer 2213 and then to the supervised learning layers of the convolutional network 2215. After classification 2217, one or more image quality indices 2219 are generated”; Paragraph 230: “a regression and/or classification method can be used to generate image quality metrics by labeling the training data with an absolute value and/or level of the corresponding image IQ metric. That is, metrics can include quantitative measures of image quality (e.g., noise level, detectability, etc.), descriptive measures of image quality (e.g., Likert score, etc.), a classification of image quality (e.g., whether the image is diagnostic or not, has artifacts or not, etc.)”)
Regarding claim 15, Takenouchi discloses an information processing method (Paragraph 5: “image processing that provides information that supports diagnosis by processing time-series medical images.”) comprising:
acquiring a medical image generated based on a signal detected by a catheter inserted into a luminal organ; (Paragraphs 51-53: “while inserting the insertion section 20 of the electronic endoscope 12 into the body cavity and illuminating the inside of the body cavity with the illumination light from the light source device 14, an image of the inside of the body cavity captured by the imaging element 62 is displayed on the screen of the display device 18. … visualization in which a blood vessel in a specific depth region of the observation target is emphasized. In this mode, an image is generated and an image suitable for observing a blood vessel is displayed on the display device 18.”; Paragraph 99: “In step S11, the medical image processing apparatus 160 receives the current image via the image acquisition unit 162. The image acquired by the image acquisition unit 162 is a medical image including a subject image captured using the electronic endoscope 12, and is one image of a time-series image sequentially captured in a time-series manner.”)
deriving determination information (calculated feature amount) for determining whether to detect an object region from the medical image based on the acquired medical image; (Paragraph 66: “The availability determination unit 164 includes a recognition unit 164A. The recognizing unit 164A recognizes whether the input image is appropriate or inappropriate for image recognition.”)
determining whether to detect the object region from the medical image based on the derived determination information; (Paragraph 101; Paragraph 68: “the availability determination unit 164 determines whether the image acquired from the image acquisition unit 162 is an image inappropriate for recognition. The availability determination unit 164 includes a recognition unit 164A. The recognizing unit 164A recognizes whether the input image is appropriate or inappropriate for image recognition. Here, “appropriate for image recognition” means that the image is suitable for recognition processing for classifying lesions, which is the main purpose of recognition. “Inappropriate for image recognition” means that the image is inappropriate for recognition such as classification of lesions, … the recognizing unit 164A is configured using a first learned model learned by machine learning so as to perform a task of two classifications of an image suitable for recognition and an image unsuitable for recognition. … and determine whether or not the image is inappropriate for recognition using the calculated feature amount.” The Examiner interprets “the medical image” and “the acquired medical image” as the same medical image, and interprets the recognition/classification of a lesion region as the claimed “object region.”) and
detecting the object region included in the medical image by inputting the acquired medical image into a first model trained (the classification unit 170) for detecting the object region included in the medical image when the medical image is input, in a case where the object region is determined to be detected from the medical image; (Paragraphs 90-91: “For the classification processing of the classification unit 170 shown in FIG. 4, for example, a convolutional neural network (CNN) is used. The classification unit 170 is configured using a second learned model learned by machine learning so as to perform an image classification task of classifying the image into a specific class … When the determination result obtained from the availability determination unit 164 is “an image suitable for recognition”, the classification unit 170 executes a classification process. The classification unit 170 extracts a feature amount from the image and classifies the image. The classification unit 170 may detect a region of interest (eg, a lesion region), detect a lesion region, and/or perform segmentation based on the calculated feature amount. Further, the classification unit 170 may perform the classification process using the feature amount calculated by the recognition unit 164A.”; Paragraphs 101-104: “If the availability determination unit 164 determines in the determination process of step S14 that the classification is possible, the process proceeds to steps S16 and S20. … in step S20, the classification unit 170 performs processing for recognizing a lesion area from within the image and classifying the lesion area into a predetermined class.”) and
acquiring detection accuracy of the object region included in the medical image by inputting the acquired medical image into a third model (the recognizing unit 164A) trained for outputting the detection accuracy of the object region included in the medical image. (Paragraph 68: “The recognition unit 164A can be configured using, for example, a convolutional neural network (CNN). For example, the recognizing unit 164A is configured using a first learned model learned by machine learning so as to perform a task of two classifications of an image suitable for recognition and an image unsuitable for recognition.”)
However, Takenouchi does not disclose that the determination information includes an evaluation index regarding detection accuracy of the object region included in the medical image; and
acquiring the evaluation index regarding detection accuracy of the object region included in the medical image by inputting the acquired medical image into a third model trained for outputting the evaluation index regarding detection accuracy of the object region included in the medical image.
Hsieh discloses that the determination information includes an evaluation index regarding detection accuracy of the object region included in the medical image; (Figs. 21A-21B; Fig. 22B; Paragraph 217: “hyper parameters are tuned by inputting unlabeled images 2251 to the learning layers of the convolutional network 2245. After classification 2247, one or more image quality indices 2259 are generated.”; Paragraph 203: “image quality is generated for computer analysis as a change in probabilistic values of image classification. On a scale of 1-5, for example, a 3 indicates the image is diagnosable (e.g., is of diagnostic quality), a 5 indicates a perfect image (e.g., probably at too high of a dose), and a 1 indicates the image data is not usable for diagnosis. As a result, a preferred score is 3-4. The DDLD 1532 can generate an IQI based on acquired image data by mimicking radiologist behavior and the 1-5 scale. Using image data attributes, the DDLD 1532 can analyze an image and determine features (e.g., a small lesion) and evaluate diagnostic quality of each feature in the image data”) and
acquiring the evaluation index regarding detection accuracy of the object region included in the medical image by inputting the acquired medical image into a third model trained (the convolutional network 2245) for outputting the evaluation index regarding detection accuracy of the object region included in the medical image. (Fig. 22B; Paragraph 217: “hyper parameters are tuned by inputting unlabeled images 2251 to the learning layers of the convolutional network 2245. After classification 2247, one or more image quality indices 2259 are generated.”; Paragraph 203: “image quality is generated for computer analysis as a change in probabilistic values of image classification. On a scale of 1-5, for example, a 3 indicates the image is diagnosable (e.g., is of diagnostic quality), a 5 indicates a perfect image (e.g., probably at too high of a dose), and a 1 indicates the image data is not usable for diagnosis. As a result, a preferred score is 3-4. The DDLD 1532 can generate an IQI based on acquired image data by mimicking radiologist behavior and the 1-5 scale. Using image data attributes, the DDLD 1532 can analyze an image and determine features (e.g., a small lesion) and evaluate diagnostic quality of each feature in the image data”)
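For illustration only, one plausible reading of Hsieh's paragraph 203, in which an image quality index (IQI) on a 1-5 scale is derived from probabilistic values of image classification, is sketched below. The confidence thresholds are invented for illustration and do not appear in Hsieh.

# Hypothetical mapping from classifier confidence to a 1-5 IQI (thresholds invented).
def iqi_from_probabilities(class_probs):
    """Map a classifier's peak probability to a 1-5 IQI; 3-4 is the 'preferred' band
    (diagnosable without excess dose), 1 means unusable, 5 suggests too high a dose."""
    confidence = max(class_probs)            # peak probability as a quality proxy
    if confidence < 0.2:
        return 1                             # not usable for diagnosis
    if confidence < 0.4:
        return 2
    if confidence < 0.7:
        return 3                             # diagnosable
    if confidence < 0.9:
        return 4
    return 5                                 # "perfect" image, possibly over-dosed

assert iqi_from_probabilities([0.1, 0.75, 0.15]) == 4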
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Takenouchi by including the learning and testing/evaluation phases for an image quality deep learning network taught by Hsieh, in order to provide image quality assessment and feedback using a deployed network model. One of ordinary skill in the art would have been motivated to combine the references because doing so would improve the determination of image quality and the reconstruction feedback based on acquired image data, as well as improve the operation of imaging and/or other healthcare systems using a plurality of deep learning and/or other machine learning techniques.
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 16, Takenouchi, as modified by Hsieh, discloses all the limitations of the claimed invention. Takenouchi further discloses outputting a warning indicating that the object region is not detected from the medical image when the object region is determined not to be detected from the medical image. (Paragraphs 67-71: “Inappropriate for image recognition” means that the image is inappropriate for recognition such as classification of lesions, which is the main purpose. … when the availability determination unit 164 determines that the classification is difficult, the process proceeds to the notification control unit 172 without performing the processes by the motion estimation unit 166, the behavior determination unit 168, and the classification unit 170.”)
Regarding claim 17, Takenouchi, as modified by Hsieh, discloses all the limitations of the claimed invention. Takenouchi further discloses that the determination information includes an output of an activation function included in the first model, and that the method further comprises acquiring an output of the activation function included in the first model using the first model. (Paragraph 68: “The recognition unit 164A can be configured using, for example, a convolutional neural network (CNN). For example, the recognizing unit 164A is configured using a first learned model learned by machine learning so as to perform a task of two classifications of an image suitable for recognition and an image unsuitable for recognition. … and determine whether or not the image is inappropriate for recognition using the calculated feature amount.” One having ordinary skill in the art would understand that the output layer of a CNN includes an activation function.)
Regarding claim 18, Takenouchi, as modified by Hsieh, discloses all the limitations of the claimed invention. Hsieh further discloses that the determination information includes presence or absence of an artifact in the medical image; and acquiring the presence or absence of the artifact in the medical image by inputting the acquired medical image into a third model trained for detecting the presence or absence of the artifact in the medical image. (Paragraph 145: “each DDLD 1522, 1532, 1542 determines a signature. For example, the DDLD 1522, 1532, 1542 determines signature(s) for machine (e.g., imaging device 1410, information subsystem 1420, etc.) service issues, clinical issues related to patient health, noise texture issues, artifact issues, etc.”; Paragraph 230: “a regression and/or classification method can be used to generate image quality metrics by labeling the training data with an absolute value and/or level of the corresponding image IQ metric. That is, metrics can include quantitative measures of image quality (e.g., noise level, detectability, etc.), descriptive measures of image quality (e.g., Likert score, etc.), a classification of image quality (e.g., whether the image is diagnostic or not, has artifacts or not, etc.)”)
Regarding claim 19, Takenouchi, as modified by Hsieh, discloses all the limitations of the claimed invention. Hsieh further discloses that the third model includes a model trained by unsupervised learning using a medical image with no artifact. (Paragraph 214: “hyper parameters are tuned by inputting unlabeled images 2221 to the unsupervised learning layer 2213 and then to the supervised learning layers of the convolutional network 2215. After classification 2217, one or more image quality indices 2219 are generated”; Paragraph 230: “a regression and/or classification method can be used to generate image quality metrics by labeling the training data with an absolute value and/or level of the corresponding image IQ metric. That is, metrics can include quantitative measures of image quality (e.g., noise level, detectability, etc.), descriptive measures of image quality (e.g., Likert score, etc.), a classification of image quality (e.g., whether the image is diagnostic or not, has artifacts or not, etc.)”)
Regarding claim 21, Takenouchi, as modified by Hsieh, discloses all the limitations of the claimed invention. Takenouchi further discloses displaying the acquired medical image generated based on the signal detected by the catheter inserted into the luminal organ on a screen of a display device. (Paragraph 21: “The image data converted by the processor device 16 is displayed on the display device 18 as an endoscopic photographed image (observed image).”; Paragraphs 51-53: “while inserting the insertion section 20 of the electronic endoscope 12 into the body cavity and illuminating the inside of the body cavity with the illumination light from the light source device 14, an image of the inside of the body cavity captured by the imaging element 62 is displayed on the screen of the display device 18. … visualization in which a blood vessel in a specific depth region of the observation target is emphasized. In this mode, an image is generated and an image suitable for observing a blood vessel is displayed on the display device 18.”)
Regarding claim 22, Takenouchi, as modified by Hsieh, discloses all the limitations of the claimed invention. Hsieh further discloses outputting the warning indicating that the object region is not detected from the medical image, when the object region is determined not to be detected from the medical image, on a screen of a display device or via an audio warning. (Paragraph 246: “at block 2912, the reconstructed image is analyzed. For example, the image is analyzed by the DDLD 1532 for quality, IQI, data quality index, other image quality metric(s), etc. … At block 2914, the reconstructed image is sent to the diagnosis engine 1450. The image can be displayed and/or further processed by the diagnosis engine 1450 and its DDLD 1542 to facilitate diagnosis of the patient 1406, for example. … As described above, an IQI, other data quality index, detectability index, diagnostic index, etc., can be generated to represent a reliability and/or usefulness of the data for diagnosis of the patient 1406. … At block 2922, if the acquired and processed image and/or image data is not of sufficient quality, then the reconstruction DDLD 1532 sends feedback to the acquisition learning and improvement factory 1520 indicating that the image data obtained is not of sufficient quality for analysis and diagnosis”. One having ordinary skill in the art would understand that image data that is not of sufficient quality (detectability index) is interpreted as the object region not being detected.)
Regarding claim 23, Takenouchi, as modified by Hsieh, discloses all the limitations of the claimed invention. Hsieh further discloses that the second model includes an input layer to which the medical image is input, an intermediate layer that extracts a feature amount of the medical image, and an output layer that outputs output data indicating the evaluation index for the medical image; and that the input layer includes a plurality of nodes that receive an input of a pixel value of each pixel included in the medical image and deliver the input pixel value to the intermediate layer, the intermediate layer includes a plurality of nodes that extract a feature amount of input data and deliver the feature amount extracted using various parameters to the output layer, and the output layer outputs continuous values indicating the evaluation index. (Figs. 21A-21B; Paragraph 208: “FIGS. 21A-21B illustrate example learning and testing/evaluation phases for an image quality deep learning network. As shown in the example of FIG. 21A, known, labeled images 2110 are applied to a convolution network 2120. The images 2110 are obtained using multiple users, and their image quality indices are known. As discussed above with respect to FIGS. 1-3, the convolution 2120 is applied to the input images 2110 to generate a feature map, and pooling 2130 reduces image size to isolate portions 2125 of the images 2110 including features of interest to form a fully connected layer 2140. A classifier 2150 (e.g., a softmax classifier, etc.) associates weights with nodes representing features of interest. The classifier 2150 provides weighted features that can be used to generate a known image quality index 2160.”; Paragraph 211)
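For illustration only, the layer structure recited above (an input layer receiving per-pixel values, an intermediate layer extracting a feature amount, and an output layer emitting continuous values indicating the evaluation index) can be sketched as a small regression CNN. The architecture below is hypothetical and is not taken from Hsieh or the application.

# Minimal sketch of a network whose output layer emits a continuous evaluation index.
import torch
import torch.nn as nn

class EvaluationIndexNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Intermediate layers: extract a feature amount from the input pixel values.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Output layer: a single continuous value indicating the evaluation index.
        self.regressor = nn.Linear(32, 1)

    def forward(self, pixels):
        feature_amount = self.features(pixels).flatten(1)
        # Continuous value; in training it could be fit against a known quality index.
        return self.regressor(feature_amount)

if __name__ == "__main__":
    image = torch.randn(1, 1, 64, 64)            # grayscale medical image stand-in
    print(EvaluationIndexNet()(image).item())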
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Podilchuk et al. (U.S. 20180053300 A1), “Method and System of Computer-Aided Detection Using Multiple Images From Different Views of a Region of Interest to Improve Detection Accuracy”, teaches a system and method of computer-aided detection (CAD or CADe) of medical images that utilizes persistence between images of a sequence to identify regions of interest detected with low interference from artifacts, thereby reducing false positives and improving the probability of detection of true lesions, and providing improved performance over static CADe methods for automatic ROI lesion detection.
Kamiyama et al. (U.S. 20180070798 A1), “Image Processing Apparatus, Image Processing Method, and Computer-Readable Recording Medium”, teaches an image processing method that includes: detecting, from an image acquired by imaging the inside of a lumen of a living body, a candidate region for a specific region that is a region where a specific part in the lumen has been captured; acquiring information related to the detected candidate region; determining an identification means for identifying, based on the information related to the candidate region, whether or not the candidate region is the specific region; and identifying whether or not the candidate region is the specific region by using the determined identification means.
Liang et al. (U.S. 20180225820 A1), “Method, Systems, and Media for Simultaneously Monitoring Colonoscopic Video Quality And Detecting Polyps in Colonoscopy”, teaches mechanisms that can include a quality monitoring system that uses a first trained classifier to monitor image frames from a colonoscopic video and determine which image frames are informative and which are non-informative. The informative image frames can be passed to an automatic polyp detection system that uses a second trained classifier to localize and identify whether a polyp or any other suitable object is present in one or more of the informative image frames.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Duy A Tran whose telephone number is (571)272-4887. The examiner can normally be reached Monday-Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL R MISTRY can be reached at (313)-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUY TRAN/Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674