Prosecution Insights
Last updated: April 18, 2026
Application No. 18/658,365

MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING METHOD, AND MEMORY MEDIUM

Non-Final OA: §101, §103
Filed
May 08, 2024
Examiner
BARNES JR, CARL E
Art Unit
2178
Tech Center
2100 — Computer Architecture & Software
Assignee
Canon Medical Systems Corporation
OA Round
1 (Non-Final)
32%
Grant Probability
At Risk
1-2
OA Rounds
4y 4m
To Grant
57%
With Interview

Examiner Intelligence

Grants only 32% of cases
32%
Career Allow Rate
65 granted / 202 resolved
-22.8% vs TC avg
Strong +25% interview lift
+25.2%
Interview Lift
[Chart: allow rate without vs. with examiner interview, among resolved cases that had an interview]
Typical timeline
4y 4m
Avg Prosecution
32 currently pending
Career history
234
Total Applications
across all art units
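
For reference, the headline figures on these cards reduce to simple arithmetic on the counts shown above. The following is a minimal sketch, assuming the dashboard derives them exactly this way; the only raw inputs on this page are the 65 granted / 202 resolved counts, so the Tech Center average is back-calculated from the displayed delta, and the "with interview" figure is treated as base rate plus lift (both assumptions).

```python
# Minimal sketch of how the card figures appear to be derived. Only the
# 65 granted / 202 resolved counts are shown as raw data on this page;
# the TC average and with-interview figures are back-calculated from the
# displayed deltas, which is an assumption about how the dashboard works.

granted, resolved = 65, 202
career_allow_rate = granted / resolved                    # 0.3218 -> shown as "32%"

delta_vs_tc = -0.228                                      # "-22.8% vs TC avg"
implied_tc_average = career_allow_rate - delta_vs_tc      # ~0.550

interview_lift = 0.252                                    # "+25.2% Interview Lift"
with_interview_rate = career_allow_rate + interview_lift  # ~0.574 -> shown as "57%"

print(f"career allow rate:  {career_allow_rate:.1%}")
print(f"implied TC average: {implied_tc_average:.1%}")
print(f"with interview:     {with_interview_rate:.1%}")
```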

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 62.6% (+22.6% vs TC avg)
§102: 9.0% (-31.0% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 202 resolved cases
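
The per-statute deltas are consistent with a single Tech Center baseline: treating each "vs TC avg" figure as a plain difference from the black-line estimate, every card implies the same ~40% average. A small sketch of that check follows; the difference relationship itself is an assumption, not something stated on the page.

```python
# Check that each statute card is consistent with one TC baseline, assuming
#   delta = examiner_rate - tc_average   (this relationship is an assumption).

statute_cards = {          # statute: (examiner rate, delta vs TC avg)
    "§101": (0.143, -0.257),
    "§103": (0.626, +0.226),
    "§102": (0.090, -0.310),
    "§112": (0.087, -0.313),
}

for statute, (rate, delta) in statute_cards.items():
    implied_tc_average = rate - delta      # every row works out to ~0.400
    print(f"{statute}: examiner {rate:.1%}, implied TC average {implied_tc_average:.1%}")
```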

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP2023-077961, filed on 05/10/2023.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/08/2024 was filed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter. Claim 12 recites the limitation of “memory medium” and is not limited to a non-transitory computer readable medium. Page 27 recites that the program can be distributed over a network such as the internet, and does not disavow the transitory type.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over Miyasa (US PGPUB: US 20160239950 A1, Pub. Date: Aug. 18, 2016) in view of HATSUTANI (US PGPUB: US 20240096086 A1, Filed Date: Sep. 21, 2022).
Regarding independent claim 1, Miyasa teaches: A medical image processing apparatus comprising a processing circuitry that obtains a first-type medical image, (Miyasa − [0032] The first and second images may be, for example, images captured at the same time using different modalities or imaging modes. The images may be obtained by capturing the same patient in the same position using the same modality at different dates/times for follow-up. Examiner Note: the first image is captured prior to the follow-up; that prior capture is a first-type medical image taken at a different time than the second image.) and obtains a second-type medical image (Miyasa − [0032] The first and second images may be, for example, images captured at the same time using different modalities or imaging modes. The images may be obtained by capturing the same patient in the same position using the same modality at different dates/times for follow-up. Examiner Note: the second image is captured after the first image; the later capture is a second-type medical image.) which is taken at a different timing than the first-type medical image and which includes a region captured in the first-type medical image, (Miyasa − [0032] The images may be obtained by capturing the same patient in the same position using the same modality at different dates/times for follow-up.) extracts a first-type feature point from the first-type medical image, and extracts a second-type feature point from the second-type medical image, (Miyasa − [0036-0037] [0036] The data obtaining unit 102 outputs the first image and the second image to a feature point extraction unit 104. [0037] The feature point extraction unit 104 processes the first image and the second image, and extracts feature points in the first image and the second image. The feature point extraction unit 104 also obtains sets of feature points (sets of corresponding point coordinates) defined by associating the feature points between the images. A feature point extracted by the feature point extraction unit 104 will be referred to as an extracted feature point. A set of extracted feature points associated between the two images will be referred to as an extracted feature point pair.) based on result of position adjustment between the first-type medical image and the second-type medical image, (Miyasa − [0052] On each cross-sectional image displayed on the display unit 190, the operator inputs the position of a feature point by operation input (for example, mouse click) on the input unit. The operation input of the operator is input to the feature point obtaining unit 106 via the operation unit 180. Examiner Note: adjustment to feature points from operation on display unit 190.) associates a first-type feature point and a second-type feature point captured in corresponding part in the first-type medical image and in the second-type medical image, (Miyasa − [0052] The feature point obtaining unit 106 converts the positions of feature points input (designated) on each cross-sectional image into 3D position information (3D coordinates) using the position and orientation of the cross-sectional image in the image. The feature point obtaining unit 106 performs this processing for the corresponding feature points between the first image and the second image, and obtains them as input feature point pairs.) receives a correction input that, from among pairs of the first-type feature point and the second-type feature point associated with each other, is meant for associating the first-type feature point with other second-type feature point, (Miyasa − [0159] FIG. 14 is a view exemplifying a display screen 300 of a display unit 190 used by the operator to set a priority for the input feature points. Examiner Note: the correction input raises the priority of certain feature point pairs and lowers the priority of the other feature pairs.) calculates position difference between pre-correction position and post-correction position of second-type feature point captured in the second-type medical image (Miyasa − [0049] Next, the feature point extraction unit 104 performs processing of generating sets of corresponding extracted feature points (extracted feature point pairs) between the images by associating the extracted feature points extracted from the first image and the second image in a one-to-one correspondence. The feature point extraction unit 104 can use a known method, for example, Sum of Squared Difference (SSD), Sum of Absolute Difference (SAD), or cross-correlation function as processing of calculating the image similarity. SAD calculates the position difference between feature points.)

Miyasa does not explicitly teach: corrects result of position adjustment between the first-type medical image and the second-type medical image.

However, HATSUTANI teaches: and specified in the correction input, (HATSUTANI − [0018] correction instruction) and based on the position difference, corrects result of position adjustment between the first-type medical image and the second-type medical image. (HATSUTANI − [0018] displaying, on a display, a figure indicating a first region of interest included in the image in a superimposed manner on the image; receiving a correction instruction for at least a part of the figure; and specifying a second region of interest that at least partially overlaps with the first region of interest based on an image feature of the image and the correction instruction)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Miyasa and HATSUTANI, as each invention is in the same field of image processing of determining feature points of imagery data. One of ordinary skill in the art would have been motivated to improve professional interpretation of medical images for diagnosis of the human anatomy.

Regarding dependent claim 2, which depends on claim 1, Miyasa teaches: wherein the processing circuitry associates the first-type feature point with the second-type feature point based on the corrected result of position adjustment, and from among the pairs subjected to association based on the corrected result of position adjustment, presents, as candidates for correction in association, the pairs in which combination of the first-type feature point and the second-type feature point is different than the pairs in which the first-type feature point and the second-type feature point are associated based on result of position adjustment.
(Miyasa − [0052] On each cross-sectional image displayed on the display unit 190, the operator inputs the position of a feature point by operation input (for example, mouse click) on the input unit. The operation input of the operator is input to the feature point obtaining unit 106 via the operation unit 180. Examiner Note: adjustment to feature points from operation on display unit 190.)

Regarding dependent claim 3, which depends on claim 2, Miyasa teaches: wherein the processing circuitry displays, in a display unit, the extracted first-type feature point, the extracted second-type feature point, and association information indicating the pairs subjected to association, (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point. [0052] On each cross-sectional image displayed on the display unit 190, the operator inputs the position of a feature point by operation input (for example, mouse click) on the input unit. The operation input of the operator is input to the feature point obtaining unit 106 via the operation unit 180. Examiner Note: adjustment to feature points from operation on display unit 190.) and after presenting the candidates for correction, displays, in the display unit, the association information equivalent to the presented candidates for correction in a distinguishable manner from the association information not corresponding to the candidates for correction. (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point. [0052] On each cross-sectional image displayed on the display unit 190, the operator inputs the position of a feature point by operation input (for example, mouse click) on the input unit. The operation input of the operator is input to the feature point obtaining unit 106 via the operation unit 180. Examiner Note: adjustment to feature points from operation on display unit 190.)

Regarding dependent claim 4, which depends on claim 3, Miyasa teaches: wherein the processing circuitry displays the association information in the display unit by displaying the first-type feature point and the second-type feature point, which corresponds to the first-type feature point, either one above other or side by side. (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point.)

Regarding dependent claim 5, which depends on claim 4, Miyasa teaches: wherein the processing circuitry displays, in the display unit and in order based on positions of first-type feature points, the first-type feature point and the second-type feature point along with displaying the association information, and associates, with the second-type feature point, only the first-type feature point lower in the order than the first-type feature point specified in the correction input. (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point. Order of priority feature points and non-priority feature points.)

Regarding dependent claim 6, which depends on claim 4, Miyasa teaches: wherein the processing circuitry displays the association information in the display unit by enclosing the first-type feature point and the second-type feature point, which corresponds to the first-type feature point, in a frame. (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point. Priority information in each frame.)

Regarding dependent claim 7, which depends on claim 4, Miyasa teaches: wherein the processing circuitry displays the association information in the display unit by joining the first-type feature point and the second-type feature point, which corresponds to the first-type feature point, by a line. (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point. Dash-line shown in the Figures.)

Regarding dependent claim 8, which depends on claim 3, Miyasa teaches: wherein, after presenting the candidates for correction, the processing circuitry displays, in the display unit, only the association information equivalent to the presented candidates for correction. (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point. Display of priority feature points and non-priority feature points.)

Regarding dependent claim 9, which depends on claim 3, Miyasa teaches: wherein, for each target body part of test subject to be examined, the processing circuitry displays the extracted first-type feature point, the extracted second-type feature point, and the association information in the display unit. (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point. [0036-0037] [0036] The data obtaining unit 102 outputs the first image and the second image to a feature point extraction unit 104. [0037] The feature point extraction unit 104 processes the first image and the second image, and extracts feature points in the first image and the second image. The feature point extraction unit 104 also obtains sets of feature points (sets of corresponding point coordinates) defined by associating the feature points between the images. A feature point extracted by the feature point extraction unit 104 will be referred to as an extracted feature point. A set of extracted feature points associated between the two images will be referred to as an extracted feature point pair.)

Regarding dependent claim 10, which depends on claim 3, Miyasa teaches: wherein the processing circuitry displays, in a tiled manner, the first-type medical image and the second-type medical image that have the target body part captured therein, (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point. Display of frame 1 and frame 2 as tiles in Fig. 3 and Fig. 14.)
and displays the association information in the display unit by joining the first-type feature point, which is displayed in the first-type medical image, and the second-type feature point, which is displayed in the second-type medical image and which corresponds to the first-type feature point, by a line. (Miyasa − [0016] FIG. 3 is a view showing a display screen used by an operator to input feature points; [0027] FIG. 14 is a view showing a screen used to set a priority for an input feature point. Display of frame 1 and frame 2 as tiles in Fig. 3 and Fig. 14, dash lines.)

Regarding independent claim 11, Miyasa teaches: A medical image processing method implemented in a medical image processing apparatus, comprising: obtaining a first-type medical image, (Miyasa − [0032] The first and second images may be, for example, images captured at the same time using different modalities or imaging modes. The images may be obtained by capturing the same patient in the same position using the same modality at different dates/times for follow-up. Examiner Note: the first image is captured prior to the follow-up; that prior capture is a first-type medical image taken at a different time than the second image.) and obtaining a second-type medical image (Miyasa − [0032] The first and second images may be, for example, images captured at the same time using different modalities or imaging modes. The images may be obtained by capturing the same patient in the same position using the same modality at different dates/times for follow-up. Examiner Note: the second image is captured after the first image; the later capture is a second-type medical image.) which is taken at a different timing than the first-type medical image and which includes a region captured in the first-type medical image, (Miyasa − [0032] The images may be obtained by capturing the same patient in the same position using the same modality at different dates/times for follow-up.) extracting a first-type feature point from the first-type medical image, and extracting a second-type feature point from the second-type medical image, (Miyasa − [0036-0037] [0036] The data obtaining unit 102 outputs the first image and the second image to a feature point extraction unit 104. [0037] The feature point extraction unit 104 processes the first image and the second image, and extracts feature points in the first image and the second image. The feature point extraction unit 104 also obtains sets of feature points (sets of corresponding point coordinates) defined by associating the feature points between the images. A feature point extracted by the feature point extraction unit 104 will be referred to as an extracted feature point. A set of extracted feature points associated between the two images will be referred to as an extracted feature point pair.) based on result of position adjustment between the first-type medical image and the second-type medical image, (Miyasa − [0052] On each cross-sectional image displayed on the display unit 190, the operator inputs the position of a feature point by operation input (for example, mouse click) on the input unit. The operation input of the operator is input to the feature point obtaining unit 106 via the operation unit 180. Examiner Note: adjustment to feature points from operation on display unit 190.) associating a first-type feature point and a second-type feature point captured in corresponding part in the first-type medical image and in the second-type medical image, (Miyasa − [0052] The feature point obtaining unit 106 converts the positions of feature points input (designated) on each cross-sectional image into 3D position information (3D coordinates) using the position and orientation of the cross-sectional image in the image. The feature point obtaining unit 106 performs this processing for the corresponding feature points between the first image and the second image, and obtains them as input feature point pairs.) receiving a correction input that, from among pairs of the first-type feature point and the second-type feature point associated with each other, is meant for associating the first-type feature point with other second-type feature point, (Miyasa − [0159] FIG. 14 is a view exemplifying a display screen 300 of a display unit 190 used by the operator to set a priority for the input feature points. Examiner Note: the correction input raises the priority of certain feature point pairs and lowers the priority of the other feature pairs.) and calculating position difference between pre-movement position and post-movement position of the other second-type feature point, (Miyasa − [0049] Next, the feature point extraction unit 104 performs processing of generating sets of corresponding extracted feature points (extracted feature point pairs) between the images by associating the extracted feature points extracted from the first image and the second image in a one-to-one correspondence. The feature point extraction unit 104 can use a known method, for example, Sum of Squared Difference (SSD), Sum of Absolute Difference (SAD), or cross-correlation function as processing of calculating the image similarity. SAD calculates the position difference between feature points.)

Miyasa does not explicitly teach: the correction input, to a position that is in the second-type medical image and that corresponds to the first-type feature point specified in the correction input.

However, HATSUTANI teaches: calculating that includes moving the other second-type feature point, which is specified in the correction input, to a position that is in the second-type medical image and that corresponds to the first-type feature point specified in the correction input, (HATSUTANI − [0018] [0061-0062] [0062] The reception unit 34 receives a correction instruction for at least a part of the figure (bounding box B1) indicating the first region of interest A1. Specifically, the reception unit 34 may receive, as a correction instruction, correction of at least one point of points forming the figure indicating the first region of interest A1.) and correcting the result of position adjustment based on the position difference.
(HATSUTANI − [0061-0062] [0018] displaying, on a display, a figure indicating a first region of interest included in the image in a superimposed manner on the image; receiving a correction instruction for at least a part of the figure; and specifying a second region of interest that at least partially overlaps with the first region of interest based on an image feature of the image and the correction instruction)

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Miyasa and HATSUTANI, as each invention is in the same field of image processing of determining feature points of imagery data. One of ordinary skill in the art would have been motivated to improve professional interpretation of medical images for diagnosis of the human anatomy.

Regarding independent claim 12, it is directed to a memory medium. Claim 12 has similar/same technical features/limitations as claim 11. Claim 12 is rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
STEHLE THOMAS HEIKO, WO 2014155299 A1: locating feature points of anatomy using visualization by overlaying past and present imagery.
Chouno, US 20100074475 A1: medical diagnostic device for the body.
Zhang, US 20100166319 A1: obtaining section obtains corresponding points within the other images that correspond to the extracted feature points.
Tsukagoshi, US 20180184997 A1: medical diagnostic device for the body.
IGARASHI, US 20200118265 A1: applying correction processing to the primary teaching data by an operator as the teaching data.
AOYAGI, US 20200380680 A1: performing inference of a disease by using the second medical image and the auxiliary information using machine learning.
KOZUKA, US 20210133231 A1: controlling an information terminal for searching for similar medical images that are similar to a medical image.
ISHII, US 20230363731 A1: performs radiographic image interpretation with respect to a medical image, an interval between specific positions.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARL E BARNES JR whose telephone number is (571) 270-3395. The examiner can normally be reached Monday-Friday, 9am-6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CARL E BARNES JR/
Examiner, Art Unit 2178

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178
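
The §103 mapping above turns on a feature-point registration workflow: extract feature points from two images of the same region taken at different times, pair them by patch similarity (the cited Miyasa passage names SSD, SAD, and cross-correlation), accept a manual correction that re-associates one pair, and feed the resulting position difference back into the position adjustment. The Python sketch below is purely illustrative of that workflow under those assumptions; it is not the applicant's claimed implementation nor the cited references' disclosure, and every function and variable name in it is hypothetical.

```python
# Illustrative toy of the mapped workflow only; all names are hypothetical.
import numpy as np


def sad(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Sum of Absolute Differences between two same-sized image patches."""
    return float(np.abs(patch_a.astype(float) - patch_b.astype(float)).sum())


def pair_feature_points(patches_a, patches_b):
    """Greedily pair each first-image feature point with the most similar
    (lowest-SAD) second-image feature point, one-to-one."""
    pairs, used = [], set()
    for i, pa in enumerate(patches_a):
        _, j = min((sad(pa, pb), j) for j, pb in enumerate(patches_b) if j not in used)
        used.add(j)
        pairs.append((i, j))
    return pairs


def apply_correction(pairs, points_b, pair_to_fix, new_second_index):
    """Re-associate one first-image point with a different second-image point
    and return the translation implied by the pre-/post-correction positions.
    A real system would fold this residual back into the registration
    (and would also re-pair the displaced point, which this toy skips)."""
    i, old_j = pair_to_fix
    position_difference = points_b[new_second_index] - points_b[old_j]
    updated = [(a, new_second_index) if a == i else (a, b) for a, b in pairs]
    return updated, position_difference


# Toy usage with random data standing in for the two medical images.
rng = np.random.default_rng(0)
points_b = rng.uniform(0, 64, size=(4, 2))                       # 2D positions in image 2
patches_a = [rng.integers(0, 255, size=(5, 5)) for _ in range(4)]
patches_b = [p + rng.integers(-5, 6, size=(5, 5)) for p in patches_a]

pairs = pair_feature_points(patches_a, patches_b)
pairs, residual = apply_correction(pairs, points_b, pair_to_fix=pairs[0], new_second_index=1)
print("corrected pairs:", pairs, "| registration residual:", residual)
```

In a full registration pipeline the returned residual would update a rigid-transform or deformation model rather than a single translation; the toy keeps it to one vector so the pre-correction versus post-correction difference in the claim language is easy to see.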

Prosecution Timeline

May 08, 2024
Application Filed
Apr 03, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12584932
SLIDE IMAGING APPARATUS AND A METHOD FOR IMAGING A SLIDE
2y 5m to grant Granted Mar 24, 2026
Patent 12541640
COMPUTING DEVICE FOR MULTIPLE CELL LINKING
2y 5m to grant Granted Feb 03, 2026
Patent 12536464
SYSTEM FOR CONSTRUCTING EFFECTIVE MACHINE-LEARNING PIPELINES WITH OPTIMIZED OUTCOMES
2y 5m to grant Granted Jan 27, 2026
Patent 12530765
SYSTEMS AND METHODS FOR CALCIUM-FREE COMPUTED TOMOGRAPHY ANGIOGRAPHY
2y 5m to grant Granted Jan 20, 2026
Patent 12530523
METHOD, APPARATUS, SYSTEM, AND COMPUTER PROGRAM FOR CORRECTING TABLE COORDINATE INFORMATION
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
32%
Grant Probability
57%
With Interview (+25.2%)
4y 4m
Median Time to Grant
Low
PTA Risk
Based on 202 resolved cases by this examiner. Grant probability derived from career allow rate.
