Prosecution Insights
Last updated: April 19, 2026
Application No. 18/334,779

MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING METHOD

Non-Final Office Action: §102, §103, §112
Filed
Jun 14, 2023
Examiner
SOFRONIOU, MICHAEL MARIO
Art Unit
2661
Tech Center
2600 — Communications
Assignee
Canon Medical Systems Corporation
OA Round
1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 11 across all art units (11 currently pending)

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 37.8% (-2.2% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)
Deltas shown against the Tech Center average estimate. Based on career data from 0 resolved cases.

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Typographic Conventions

Throughout this office action, shorthand notation is used to reference the locations of elements in documents. The following is a brief summary of the shorthand used:

Sec. – denotes a section with an associated header in non-patent literature
¶ – denotes the number and location of a paragraph
col. – denotes a column number
ln. – denotes a line; if line numbers are not demarcated in a document, numbering is assumed to start at 1 for each paragraph. Each claim is considered to stand as its own paragraph.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 06/14/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The title of the invention is not sufficiently descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The present invention relates to providing labeling tools and workflows for analyzing structures in a medical image. Adding sufficient detail to the title relating to the field of endeavor in medical image analysis would properly reflect the purpose or context of the present invention.

The disclosure is objected to because of the following informality: on pg. 14, ln. 12, the word "venders" should be spelled "vendors". Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claims 18 & 19 recite limitations that use words like "means" (or "step") or similar terms with functional language and invoke 35 U.S.C. § 112(f):

Claim 18 – first recites the limitation "an obtaining step of obtaining…" [ln. 2], which is later recited in claim 19 [ln. 2]
Claim 18 – first recites the limitation "a receiving step of receiving…" [ln. 4], which is later recited in claim 19 [ln. 4]
Claim 18 – recites the limitation "an analyzing step of analyzing…" [ln. 6]
Claim 18 – recites the limitation "a tool set generating step of generating…" [ln. 10]
Claim 19 – recites the limitation "a workflow generating step of generating…" [ln. 6]

Because these claim limitations are being interpreted under 35 U.S.C. § 112(f) or pre-AIA 35 U.S.C. § 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof:

"an obtaining step" (Fig. 1; element 10 [pg. 8; ln. 21-31]) – while "an obtaining step" is not explicitly recited in the specification, the "obtaining function 10" is described as being configured to obtain a medical image, acquired by scanning an examined subject, that needs to be labeled. The obtaining function is included as part of the processing circuitry 205, which utilizes a processor [pg. 7; ln. 8-13].

"a receiving step" (Fig. 1; element 20 [pg. 8; ln. 32-35 & pg. 9; ln. 1-24]) – while "a receiving step" is not explicitly recited in the specification, the "receiving function 20" is described as being configured to receive labeling steps in a labeling task performed on the medical image. Labeling steps can be indicated by adding a symbol to the image in the region of interest or target structure for subsequent segmentation, classification, or detection. The receiving function is included as part of the processing circuitry 205, which utilizes a processor [pg. 7; ln. 8-13].

"an analyzing step" (Fig. 1; element 40 [pg. 13; ln. 1-28]) – while "an analyzing step" is not explicitly recited in the specification, the "analyzing function 40" is described as being configured to analyze a local characteristic of the target structure serving as a labeling target in the medical image. The local characteristic can be set in advance and used to determine appropriate tools for the labeling task. As illustrated in Figs. 3A and 3B, the analyzing function extracts gradation value ranges of labeled blood vessels and their associated peripheral regions [pg. 14; ln. 31-37 & pg. 15; ln. 1-16]. The analyzing function is included as part of the processing circuitry 205, which utilizes a processor [pg. 7; ln. 8-13].

"a tool set generating step" (Fig. 1; element 50 [pg. 13; ln. 4-7; 29-35 & pg. 14; ln. 1-30]) – while "a tool set generating step" is not explicitly recited in the specification, the "tool set generating function 50" is described as being configured to generate a usable tool set corresponding to the medical image labeling task, based on the local characteristic analyzed by analyzing function 40. These can be either display tools for viewing the image or labeling tools for labeling the image. The tool set generating function is included as part of the processing circuitry 205, which utilizes a processor [pg. 7; ln. 8-13].

"a workflow generating step" (Fig. 1; element 60 [pg. 19; ln. 1-18]) – while "a workflow generating step" is not explicitly recited in the specification, the "workflow generating function 60" is described as being configured to record the labeling steps in the labeling task to generate a workflow indicating the medical image labeling steps. More specifically, it records the sequential details of user actions, use of tools, and results of use in the workflow. The workflow generating function is included as part of the processing circuitry 205, which utilizes a processor [pg. 7; ln. 8-13].

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 4-14 & 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhao et al (US 2018/0330525 A1).
Regarding claim 1, Zhao et al disclose an interactive annotation wizard for annotating medical imaging data. More specifically, Zhao et al teach a medical image processing apparatus (image processing system 100 [¶0028; Fig. 1]) comprising processing circuitry (processors 303 of the advanced image processing system 140 [¶0064; Fig. 3]) configured to obtain a medical image subject to a labeling process (medical data store 206 obtains and stores medical images and data [¶0048; Fig. 2] for subsequent image processing via a variety of image processing tools [¶0042; illustrated in Fig. 8H], including tools to label images); to receive a labeling step in a labeling task performed on the medical image (during image process 722, a colonoscopy flythrough (a kind of labeling task to identify abnormal structures in a colon) is performed for labeling potential polyps through the digestive tract [¶0087-89; Fig. 8A-K]); to analyze, while the labeling step in the labeling task is received, a local characteristic of a target structure serving as a labeling target in the medical image (during image process 722, potential polyp candidates (in this case, a target structure for a colonoscopy flythrough) are displayed in a GUI to denote their position in the colon (in this case, a local characteristic of a target structure) [¶0092; Fig. 8D]); and to generate a usable tool set corresponding to the labeling task performed on the medical image, on a basis of the local characteristic (during image process 722 (the colonoscopy flythrough), a "pick polyp" tool is presented that allows a user to both identify and measure the volume of individual polyps (labeled as polyps #1-3 based on their position) that have been identified during the colonoscopy flythrough [¶0070; Fig. 8H]).

Regarding claim 2, Zhao et al teach the medical image processing apparatus according to claim 1 (as described above), wherein the usable tool set is a tool set including a plurality of labeling tools (refer to Fig. 8H, which denotes particular tools when the labeling task is related to selecting polyps in the digestive tract, with a "pick polyp" tool that allows a user to denote a location, shape, or size of a polyp in an image, or camera tools to adjust the view in the image [¶0074 & 92]).

Regarding claim 4, Zhao et al teach the medical image processing apparatus according to claim 1 (as previously described), wherein the processing circuitry is configured to record the received labeling step in the labeling task (during the labeling step of annotating polyps in the colon during the colonoscopy flythrough 713 (the labeling task), labeled polyps and their locations in the colon are recorded [¶0089; Fig. 8A]) and to generate a workflow indicating the labeling step performed on the medical image corresponding to the labeling task performed on the medical image (during the colonoscopy flythrough 713 (the labeling task), the workflow 703 is presented to a user to iterate through and pick polyps in the colon [¶0081; Fig. 7A & 8A-H]).

Regarding claim 5, Zhao et al teach the medical image processing apparatus according to claim 1 (as previously described), wherein the processing circuitry is configured to conduct a search to determine whether or not a usable tool set corresponding to the labeling task performed on the medical image is present (for each image processing step, advanced tools specific to that step are presented to the user, as seen in Fig. 8H, and are determined by the image processing server based on the types of images, body parts, medical procedures, and medical conditions associated with the image (the examiner notes that in order to output a tool, the tools must first be searched to find the appropriate tool) [¶0095]).

Regarding claim 6, Zhao et al teach the medical image processing apparatus according to claim 5 (as previously described), wherein, upon finding the usable tool set in the search, the processing circuitry is configured to output the usable tool set as a candidate labeling tool (for each image processing step, advanced tools specific to that step are presented to the user, as seen in Fig. 8H, and are determined by the image processing server based on the types of images, body parts, medical procedures, and medical conditions associated with the image (the examiner notes that in order to output a tool, the tools must first be searched to find the appropriate tool) [¶0095]).

Regarding claim 7, Zhao et al teach a medical image processing apparatus (image processing system 100 [¶0028; Fig. 1]) comprising processing circuitry (processors 303 of the advanced image processing system 140 [¶0064; Fig. 3]) configured to obtain a medical image subject to a labeling process (medical data store 206 obtains and stores medical images and data for subsequent image processing [¶0048; Fig. 2] via a variety of image processing tools [¶0042; illustrated in Fig. 8H], including tools to label images); to receive a labeling step in a labeling task performed on the medical image (during image process 722, polyps in a digestive tract are labeled [¶0089; Fig. 8A]); and to record the received labeling step in the labeling task and to generate a workflow indicating the labeling step performed on the medical image (a process line or workflow 703 is illustrated at the top of the GUI in Fig. 7A [¶0081]; when the flythrough option 713 is selected as the labeling task, the GUI is updated for polyp picking [¶0088-89; Fig. 8A-H]).

Regarding claim 8, Zhao et al teach the medical image processing apparatus according to claim 7 (as described above), wherein the processing circuitry is configured to conduct a search to determine whether or not an existing workflow corresponding to the labeling task performed on the medical image is present (referring to Fig. 7A, a series of selectable tags 711-714 corresponding to different imaging procedures associated with the abdomen or pelvis are selectable, each with its own corresponding workflow, such as flythrough 713 [¶0080]).

Regarding claim 9, Zhao et al teach the medical image processing apparatus according to claim 7 (as previously described), wherein the processing circuitry is configured to assist the labeling process performed on the medical image on a basis of the workflow and to cause at least a part of the labeling step performed on the medical image to conform to the workflow (in Fig. 8A, when the workflow 703 corresponding to a colonoscopy flythrough is selected, automatic image processing is performed to pre-identify potential polyps in the colon to assist a user during the polyp picking process; the user is then allowed to manually adjust and confirm accurate polyp picking [¶0089]).

Regarding claim 10, Zhao et al teach the medical image processing apparatus according to claim 7 (as previously described), wherein the workflow includes a display part indicating a step for adjusting a display state and a labeling part indicating a step for performing the labeling process by using a labeling tool (in Fig. 8H, several different views of the colon are presented, with a variety of "camera" tools to select from the GUI, such as "lock on path", "orbit", and "forward view", which allow a user to adjust the display during the colonoscopy flythrough image processing [¶0086-89]).
Regarding claim 11, Zhao et al teach the medical image processing apparatus according to claim 8 (as previously described), wherein the workflow includes a display part indicating a step for adjusting a display state (in Fig. 8H, several different views of the colon are presented, with a variety of "camera" tools to select from the GUI, such as "lock on path", "orbit", and "forward view", which allow a user to adjust the display during the colonoscopy flythrough image processing [¶0086-89]), and, upon finding the existing workflow in the search, the processing circuitry is configured to adjust a display state of the medical image being displayed, in accordance with a final display result in the display part of the existing workflow (referring to Fig. 7A, a series of selectable tags 711-714 corresponding to different imaging procedures associated with the abdomen or pelvis are selectable, each with its own corresponding workflow, such as flythrough 713 [¶0080]; multiple display areas 801-804 are presented [¶0088-89; Fig. 8A]).

Regarding claim 12, Zhao et al teach the medical image processing apparatus according to claim 8 (as previously described), wherein the workflow includes a labeling part indicating a step for performing the labeling process by using a labeling tool (when the workflow corresponds to a colonoscopy flythrough, the GUI provides particular tools for selecting polyps in the colon, with a "pick polyp" tool that allows a user to denote a location, shape, or size of a polyp in an image, or camera tools to adjust the view in the image [¶0074 & 92; Fig. 8H]), and, upon finding the existing workflow in the search, the processing circuitry is configured to form and display a flowchart for performing the labeling process by using the labeling tool, according to the labeling part of the existing workflow (when the workflow corresponds to a colonoscopy flythrough, the GUI displays a flowchart 703, which walks the user through the different steps associated with the existing workflow [¶0088-92]).

Regarding claim 13, Zhao et al teach the medical image processing apparatus according to claim 7 (as previously described), wherein the processing circuitry is configured to analyze a global characteristic of a labeled result from the received labeling task (after automatic preliminary labeling of potential polyps is performed for the colonoscopy flythrough, the relevant organ of the labeling task (in this case, the colon) is treated as the global characteristic [¶0088-89; Fig. 8A]), and the processing circuitry is configured to optimize the workflow by correcting the existing workflow (in the colonoscopy flythrough 703, a user is able to view results during step 723 and either confirm the results (724) or, if the results are not satisfactory, iterate back to process step 722 for optimal polyp picking [¶0088-89; Fig. 8A]).

Regarding claim 14, Zhao et al teach the medical image processing apparatus according to claim 13 (as described above), wherein the processing circuitry is configured to correct a parameter used in the workflow, on a basis of the analyzed global characteristic of the labeled result (a user may be prompted with questions relating to updating parameters or processing preferences associated with the workflow (in this case, a colonoscopy flythrough), with one example of a parameter being the default display windows, which can also be "remembered" for subsequent processing using the same workflow [¶0086]).
Regarding claim 19, Zhao et al teach a medical image processing method (image processing method 600 [¶0078; Fig. 6]) comprising: an obtaining step of obtaining a medical image subject to a labeling process (medical data store 206 obtains and stores medical images and data for subsequent image processing [¶0048; Fig. 2] via a variety of image processing tools [¶0042; illustrated in Fig. 8H], including tools to label images); a receiving step of receiving a labeling step in a labeling task performed on the medical image (during image process 722, polyps in a digestive tract are labeled [¶0089; Fig. 8A]); and a workflow generating step of recording the labeling step in the labeling task received at the receiving step and generating a workflow indicating the labeling step performed on the medical image (a process line or workflow 703 is illustrated at the top of the GUI in Fig. 7A [¶0081]; when the flythrough option 713 is selected as the labeling task, the GUI is updated for polyp picking [¶0088-89; Fig. 8A-H]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al (US 2018/0330525 A1) in view of Liu, Yaoyong (WO 2019/233392 A1).

Regarding claim 15, Zhao et al teach the medical image processing apparatus according to claim 13 (as previously described), wherein the workflow is structured with a plurality of operation steps (a process line or workflow 703 is illustrated at the top of the GUI in Fig. 7A [¶0081], comprising a series of image processing steps), but do not teach that the processing circuitry is configured to add new operations to the workflow on the basis of an analyzed global characteristic of the labeled result. Liu, Yaoyong, however, is analogous art in the same field of endeavor as the present application and also discloses adding new operations to an existing workflow based on a global characteristic of an image label. More specifically, Liu, Yaoyong teaches the processing circuitry being configured to add a new operation step to the workflow, on a basis of the analyzed global characteristic of the labeled result (Liu, Yaoyong: Operation 108 first determines an image label (a global characteristic) according to the target scene or subject label, and then performs image processing on the to-be-detected image on the basis of the image label. (The examiner notes that the processing to be performed based on the image label implies adding a new step based on that label.) [¶0041-0045; Fig. 1]). Liu, Yaoyong also discloses that implementing different types of processing based on the different image labels improves the overall processing efficiency of the image processing system and enables a degree of automation during the workflow [¶0048]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the annotation wizard and labeling workflow provided by Zhao et al, and implement the workflow optimization provided by Liu, Yaoyong for improved processing efficiency, to arrive at the invention of the present application.

Regarding claim 16, Zhao et al teach the medical image processing apparatus according to claim 13 (as previously described), wherein the workflow is structured with a plurality of operation steps (a process line or workflow 703 is illustrated at the top of the GUI in Fig. 7A [¶0081], comprising a series of image processing steps), but do not explicitly teach that the processing circuitry is configured to replace steps in the workflow. Liu, Yaoyong, on the other hand, teaches the processing circuitry being configured to replace a part of the operation steps in the workflow with one or more new operation steps (Liu, Yaoyong: Operation 108 first determines an image label (a global characteristic) according to the target scene or subject label, and then performs image processing on the to-be-detected image on the basis of the image label. (Performing image processing specific to the image label implies replacing particular image processing steps.) [¶0041-0045; Fig. 1]). Liu, Yaoyong also discloses that implementing different types of processing based on the different image labels improves the overall processing efficiency of the image processing system and enables a degree of automation during the workflow [¶0048].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the annotation wizard and labeling workflow provided by Zhao et al, and implement the workflow optimization provided by Liu, Yaoyong for improved processing efficiency, to arrive at the invention of the present application.

Regarding claim 17, Zhao et al in view of Liu, Yaoyong teach the medical image processing apparatus according to claim 16 (as described above), wherein the workflow includes an operation step for adjusting a display range (Zhao et al: camera tools available in the GUI during the colonoscopy flythrough allow a user to adjust the display view [¶0095-98; Fig. 8H]), and the processing circuitry is configured to replace the operation step for adjusting the display range with an operation step for identifying a display range by detecting a landmark (Zhao et al: when polyps (a detected landmark) are automatically detected in the colonoscopy flythrough, a view of the detected polyp (polyp #1) is automatically displayed to the user [¶0088-90; Fig. 8A]).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al (US 2018/0330525 A1) in view of Dai et al (US 2024/0177299).

Regarding claim 18, Zhao et al teach a medical image processing method (image processing method 600 [¶0078; Fig. 6]) comprising: an obtaining step of obtaining a medical image subject to a labeling process (medical data store 206 obtains and stores medical images and data for subsequent image processing [¶0048; Fig. 2] via a variety of image processing tools [¶0042; illustrated in Fig. 8H], including tools to label images); a receiving step of receiving a labeling step in a labeling task performed on the medical image (during image process 722, polyps in a digestive tract are labeled [¶0089; Fig. 8A]); and a tool set generating step of generating a usable tool set corresponding to the labeling task performed on the medical image, on a basis of the local characteristic (during image process 722 (the colonoscopy flythrough), a "pick polyp" tool is presented that allows a user to both identify and measure the volume of individual polyps (labeled as polyps #1-3 based on their position) that have been identified during the colonoscopy flythrough [¶0070; Fig. 8H]). Zhao et al do not teach the analyzing step of analyzing a local characteristic of a target structure. Dai et al, however, are analogous art in the same field of endeavor as the present application and also disclose an analyzing step of analyzing a local characteristic of a target structure. More specifically, Dai et al teach an analyzing step of analyzing, while the labeling step in the labeling task is received at the receiving step, a local characteristic of a target structure serving as a labeling target in the medical image (under 35 U.S.C. § 112(f), the analyzing step is being interpreted as extracting gradation values of blood vessels – Dai et al: acquiring average image brightness of individual blood vessel segments [¶0099; steps S200 & S300 in Fig. 1]). Thus, in accordance with KSR rationales (see MPEP § 2143), the prior art includes all of the claimed elements in the present application, with the only difference being the lack of combination. Furthermore, one of ordinary skill in the art could have easily combined the elements by known methods, and each element would merely perform the same function as it does separately. For example, including the extraction of brightness values of blood vessel segments allows for the accurate blood vessel identification proposed by Dai et al, and merely adds an additional mode of functionality for blood vessel segmentation and analysis to the generalized image processing wizard proposed by Zhao et al. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to take the analysis of blood vessels proposed by Dai et al and incorporate it into the analyses provided by Zhao et al to arrive at the invention of the present application.

Allowable Subject Matter

Claim 3 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 3, the primary reason for the indication of allowable subject matter is that the prior art fails to teach or reasonably suggest analyzing a global characteristic of a target structure after labeling has been conducted, and then updating usable tools based on the analysis of the global characteristic. The closest prior art used in the present office action, Zhao et al (US 2018/0330525 A1), discloses providing a set of usable tools based on a labeling task for a given workflow, which can correspond to a global characteristic of the target structure (i.e., an image type or tag); however, these tools are made available prior to the start of labeling and are not optimized or updated as a result of the labeling.

Conclusion

The prior art made of record and not relied upon is considered pertinent to the applicant's disclosure:

Hannes et al (WO 2019/170493 A1) disclose an interactive self-improving annotation system for assessing plaque burden in blood vessels.
Vaillant et al (US 2023/0021332 A1) disclose a system and method for dynamically annotating a variety of different types of medical images.
Baker et al (US 2021/0398650 A1) disclose a medical imaging detection workflow and AI management system for analyzing medical image data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael M. Sofroniou, whose telephone number is (571) 272-0287. The examiner can normally be reached M-F, 8:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John M. Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL M SOFRONIOU/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661

Prosecution Timeline

Jun 14, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §102, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
