Prosecution Insights
Last updated: April 19, 2026
Application No. 18/494,371

ULTRASOUND IMAGE PROCESSING APPARATUS, ULTRASOUND IMAGE DIAGNOSIS SYSTEM, ULTRASOUND IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING ULTRASONIC IMAGE PROCESSING PROGRAM

Final Rejection — §102, §112
Filed
Oct 25, 2023
Examiner
MCDONALD, JAMES F
Art Unit
3797
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Konica Minolta Inc.
OA Round
2 (Final)
Grant Probability: 55% (Moderate)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 55% (42 granted / 76 resolved; -14.7% vs TC avg)
Interview Lift: strong, +44.3% (resolved cases with vs. without an interview)
Typical timeline: 3y 6m avg prosecution, 33 currently pending
Career history: 109 total applications across all art units
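The tiles above are simple arithmetic on the examiner's resolved-case counts. A minimal sketch reproducing them, assuming (the page does not define this) that "interview lift" is the percentage-point gap between allowance rates of resolved cases with and without an interview:

```python
# Reproduce the dashboard's headline examiner statistics.
# Assumption: "interview lift" = percentage-point difference between
# allowance rates of resolved cases with vs. without an interview.

granted, resolved = 42, 76            # from the "Career Allow Rate" tile
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # ~55.3%, displayed as 55%

rate_with_interview = 99.0            # "With Interview" tile
interview_lift = 44.3                 # "Interview Lift" tile
rate_without = rate_with_interview - interview_lift
print(f"Implied rate without interview: {rate_without:.1f}%")
```

The implied ~54.7% without-interview rate sits close to the 55.3% career average, which would be consistent with relatively few of this examiner's resolved cases having involved an interview.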

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 32.1% (-7.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 76 resolved cases
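Each "vs TC avg" delta lets one back out the Tech Center average (the black line) as tc_avg = examiner_rate - delta. A quick sketch using the figures from the tiles above (the page does not define exactly what the per-statute rate measures):

```python
# Recover the implied Tech Center average for each statute from the
# examiner's per-statute rate and the "vs TC avg" delta shown above.
stats = {  # statute: (examiner rate %, delta vs TC avg in percentage points)
    "101": (5.1, -34.9),
    "103": (41.5, +1.5),
    "102": (19.4, -20.6),
    "112": (32.1, -7.9),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC average ≈ {tc_avg:.1f}%")
```

All four statutes back out to roughly 40%, suggesting the dashboard compares each statute against a single Tech Center-wide estimate rather than separate per-statute averages.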

Office Action

§102 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to Applicant’s remarks, filed on 10/14/2025. The amendments to claim(s) 1, 5-6, and 8-10 have been entered. Claim(s) 5-13 is/are withdrawn by Applicant. New claim(s) 14 has been entered. Accordingly, claim(s) 1-4 and 14 remain pending for examination.

Response to Arguments

Applicant’s arguments, see p. 8-10, with respect to the rejection of claim(s) 1-4 and regarding new claim 14 have been fully considered. After review of the amendments to the claim(s) in view of the rejections under 35 U.S.C. §112, Examiner respectfully disagrees with the Applicant, and the rejection of claim 3 under 35 U.S.C. §112 has been maintained. Regarding the rejection(s) under 35 U.S.C. § 102, Examiner respectfully disagrees with the remarks and does not find Applicant’s arguments persuasive. New grounds of rejection are made in view of the following: new amendments provided by Applicant and attached remarks; updated search and review of pertinent, eligible prior art; newly added claims; and/or different interpretation of the previously applied references.

Regarding the rejection of claim(s) 1-4 under 35 U.S.C. § 102, Applicant provides the following:

Thus, the claimed invention selects one of the discriminators, which is used to acquire the discrimination result. Cho fails to disclose the above limitations because Cho does not teach or suggest selecting a discriminator to use. Cho discloses an apparatus 30 for visualizing anatomical elements including an image receiver 31 for receiving a medical image, and an anatomical element detector 33 and an analyzer 35 that detect anatomical elements by analyzing a medical image based on anatomical context information (paragraphs [0083]-[0087] of Cho).
In one embodiment, individual detectors 61, 63, 65, 67 detect different anatomic elements (paragraph [0110] of Cho). Cho discloses one anatomical element detector 33 or a plurality of detectors 61, 63, 65, 67. However, in each case the detectors are all used. In view of the above remarks, independent claim 1 is not anticipated by and is allowable over Cho.

Examiner respectfully disagrees with Applicant, and maintains that Cho teaches the limitations recited in the claim language, as drafted. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., ‘a selection’) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The determination of a discriminator and the application of the determined discriminator are not the same as the selection and application of a single determined discriminator. Accordingly, Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.

Cho teaches the implementation of anatomical element detectors which are individually trained (e.g., using deep learning) to identify respective anatomical elements within an original input image [see claim 1 rejection]. As provided by Cho:

“Referring to FIG. 6, multiple detectors 60 including a plurality of individual detectors 61, 63, 65, and 67 are an example of the anatomical element detector 33 shown in FIG. 3. The skin detector 61 detects only an area of skin 11 from an original image 10.
The subcutaneous fat detector 63 detects only an area of subcutaneous fat 12 from the original image 10. The glandular tissue detector 65 detects only an area of glandular tissue 13 from the original image 10. The pectoralis muscle detector 67 detects only an area of a pectoralis muscle 15 from the original image 10. In this example, each of the individual detectors 61, 63, 65, 67 may use any one of various deep learning techniques known to one of ordinary skill in the art.” Cho [0110], emphasis added.

Cho further teaches that “the anatomical elements are analyzed based on anatomical context information including domain knowledge indicating location relationships between the anatomical elements, probability information, and adjacent image information” in response to a received image (Cho [0147]). Upon receipt of a medical image and anatomical context information, the system automatically determines the appropriate anatomical element detector(s) for the image.

Cho also teaches that “the lesion verifier 38 receives an original image from the image receiver 31 and detects one or more ROIs from the original image. Then, with reference to anatomical element information determined by the analyzer 35, the lesion verifier 38 determines in which anatomical element each of the detected ROIs is located” (Cho [0101]). The detector identifying the anatomical element that has a high probability that a specific lesion is present (e.g., an individual detector trained to identify glandular tissue) is a determined discriminator whose output information may then be used by the lesion verifier to detect the presence of a lesion (Cho [0094]-[0110], [fig. 3]).

Applicant further argues the following:

Dependent claims 2-4 and 14 are allowable for the same reasons as is independent claim 1, as well as for the additional recitations contained therein.
New claim 14 recites "each of the plurality of discriminators being trained through machine learning for identifying respective discrimination targets using a specific ultrasound probe, a specific ultrasonic device, or a specific combination of ultrasound probe and ultrasonic device that is different from others of the plurality of discriminators." Cho does not teach or suggest training a detector with a specific probe or other equipment. Accordingly, claim 14 should be allowable for at least these additional reasons.

Examiner respectfully disagrees with the Applicant regarding the assertions concerning new claim 14. As discussed in the rejection to claim 14 below, Cho clearly discloses the use of a medical imaging device for the acquisition of ultrasound images. Specifically, Cho discloses:

“The image receiver 31 is a component for receiving a medical image. A medical image may be, for example, an ultrasound image of a human breast that is captured using ultrasonic waves as shown in FIG. 1. The medical image may be received from a medical image diagnostic/capturing device, from an imaging apparatus that captures a specific part of a human body using ultrasonic waves or X-rays, or from a storage device that stores medical images captured by the medical image diagnostic/capturing device or the imaging apparatus.” Cho [0084], emphasis added.

Examiner respectfully interprets the imaging apparatus that captures a specific part of a human body using ultrasonic waves, and therefore generates an ultrasound image, to be a ‘specific ultrasonic device’, e.g., an ultrasound probe. Examiner respectfully notes that Applicant’s arguments only address independent claim(s) 1, and no remarks regarding the subject matter of the dependent claim(s) have been presented. Accordingly, the rejections to dependent claims 2-4 are modified to address Applicant’s amendments and the new rejection to claim(s) 14, and are sustained. The rejections of claim(s) 1-4 and 14 under 35 U.S.C.
§ 102 are maintained.

Claim Rejections - 35 USC § 112

35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 3, the claim recites the limitations “wherein the discriminator determination information is the information on the external apparatus, and the information on the external apparatus is information on a manufacturer which manufactures the external apparatus”, which renders the claim indefinite. As discussed in the previous office action, the claim language is convoluted and unclear, and does not particularly define what the “information on the external apparatus” actually is. The claim appears to define the ‘information’ on the external apparatus as both the ‘discriminator determination information’ and ‘manufacturer information’. It is suggested to rewrite the claim to better conform with current U.S. practice and to clearly indicate the information which is contained within the external apparatus. For the purposes of examination, the broadest reasonable interpretation of the ‘information’ is applied to the claim limitations.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-4 and 14 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Cho et al. (US 2015/0265251 A1; 2015-09-24, hereinafter “Cho”).

Regarding claim 1, Cho teaches an ultrasound image processing apparatus (“An apparatus for visualizing anatomical elements in a medical image,” [clm 1]; “wherein the medical image is a breast ultrasound image of a human breast captured using ultrasonic waves;” [clm 2]; The apparatus and method implement computer-aided diagnosis to visualize anatomical elements in medical images generated from ultrasound [0067], [fig. 1, 3-6]), comprising: a receiver configured to receive an ultrasound image and discriminator determination information from an external apparatus (“an image receiver configured to receive a medical image; […] an analyzer configured to verify a location of each of the plurality of anatomical elements based on anatomical context information comprising location relationships between the plurality of anatomical elements” [clm 1]; “The image receiver 31 is a component for receiving a medical image.
A medical image may be, for example, an ultrasound image of a human breast that is captured using ultrasonic waves […] The medical image may be received from a medical image diagnostic/capturing device, from an imaging apparatus that captures a specific part of a human body using ultrasonic waves or X-rays, or from a storage device that stores medical images captured by the medical image diagnostic/capturing device or the imaging apparatus.” [0084]; “A computing device may be any of various devices, […] The computing device may be a single stand-alone device, or a plurality of computing devices that interoperate with each other in a distributed environment.” [0163]; The image receiver receives an ultrasound image captured by a medical image capturing device using ultrasound waves and further receives anatomical context information [0080-0085], [fig. 1, 3-6]); a storage device capable of writing and reading information (“a computing device that includes a processor, a memory, a user input device, and a presentation device” [0161]; The memory is a storage device [fig. 3-6]), the storage device storing a plurality of discriminators, each of the plurality of discriminators being an algorithm trained through machine learning for identifying respective discrimination targets (“the anatomical element detector further comprises a plurality of individual detectors each configured to detect a respective one of the plurality of anatomical elements from the medical image” [clm 3]; “each of the plurality of individual detectors is further configured to detect the respective one of the plurality of anatomical elements using any one of a deep learning technique” [clm 4]; “multiple detectors 60 including a plurality of individual detectors 61, 63, 65, and 67 are an example of the anatomical element detector 33 […] The pectoralis muscle detector 67 detects only an area of a pectoralis muscle 15 from the original image 10. 
In this example, each of the individual detectors 61, 63, 65, 67 may use any one of various deep learning techniques” [0110]; “The memory is a medium that stores computer-readable software, […] or instructions each capable of performing a specific task” [0161]; The individual detectors (i.e., plurality of discriminators) are individually trained with and apply deep learning techniques for detecting specific anatomical features within an image [0070-0110], [fig. 1, 3-6]); and one or more first hardware processors (“An apparatus for visualizing anatomical elements in a medical image may be implemented by a computing device that includes a processor, a memory, a user input device, and a presentation device.” [0161]; “The processor may read and execute computer-readable software, applications, program modules, routines, or instructions that are stored in the memory.” [0162]; A computing device comprising a processor/processing device may implement the CAD for visualizing anatomical elements and cysts in a medical image [0067, 0161-0172], [fig. 1, 3-6]), wherein the one or more first hardware processors are configured to determine, from among the plurality of the discriminators, a determined discriminator based on the discriminator determination information received from the external apparatus (“a lesion verifier configured to verify whether a region of interest (ROI) detected from the medical image is a lesion based on a lesion detection probability of an anatomical element in which the ROI is located” [clm 17]; “Anatomical elements detected in this manner, such as skin, fat, muscle, and bone, may be analyzed, i.e., verified and adjusted, based on anatomical context information indicating location relationships between the anatomical elements.” [0074]; “The analyzer 35 verifies and adjusts the anatomical elements detected by the anatomical element detector 33 based on the anatomical context information 37.
The anatomical context information 37 includes information indicating location relationships between anatomical elements. […] Based on the anatomical context information indicating the location relationships between the anatomical elements, the analyzer 35 verifies a location of each anatomical element and adjusts location relationships between the anatomical elements throughout the entire area of the medical image.” [0087]; “the lesion verifier 38 verifies whether an ROI is a lesion with respect to anatomical elements detected from an original image” [0100]; The anatomical element detector(s) and analyzer process the ultrasound image provided by image receiver, wherein the analyzer adjusts the individual detector(s) results based on the anatomical context information (i.e., determined discriminator) to highlight the desired anatomical element for a lesion verifier [0072, 0109-0112], [fig. 1, 3-6]), the one or more first hardware processors are configured to input the ultrasound image to the determined discriminator (“a method 160 of visualizing anatomical elements includes receiving in 161 a medical image input to the medical image receiver 31 of the apparatus 30 […] The medical image may be input from a medical image diagnostic device capturing a specific part of human body, a capturing device, or a storage device storing images” [0142]; “The received image is analyzed to detect at least one anatomical element […] The detection of anatomical elements may be performed by a plurality of detectors that respectively detect different anatomical elements” [0143]; The received image is separately analyzed by the individual detector to distinguish the anatomical element from the image using the anatomical context information [0080-0112], [fig. 
3-6, 16-20]) and the one or more first hardware processors are configured to cause the determined discriminator to perform automatic recognition on the ultrasound image and to acquire a discrimination result output from the determined discriminator (“an analyzer configured to verify a location of each of the plurality of anatomical elements based on anatomical context information comprising location relationships between the plurality of anatomical elements, and adjust the location relationships between the plurality of anatomical elements; and” [clm 1]; “various deep learning techniques […] may be used. In this example, a different detector may be used for each anatomical element, for example, a skin detector for detecting skin, a fat detector for detecting fat, a glandular tissue detector for detecting glandular tissue, a muscle detector for detecting muscle, and a bone detector for detecting bone,” [0072]; “The anatomical element detector 33 and the analyzer 35 detect anatomical elements by analyzing a medical image to identify the anatomical elements by verifying and adjusting the anatomical elements based on anatomical context information.” [0085]; The ultrasound image is input to the anatomical element detector and analyzer to detect, verify and adjust the anatomical elements present in the ultrasound image using deep learning techniques, wherein a single detector trained for the respective anatomical element automatically defines the anatomical element within the image [0161-0172], [fig. 1, 3-6]). 
Regarding claim 2, Cho teaches the ultrasound image processing apparatus according to claim 1, Cho further teaching wherein the discriminator determination information is at least one of information on the ultrasound probe, information on the ultrasound image, information on the discriminator, and information on the external apparatus (“the anatomical context information may include location relationships between the any two or more of skin, fat, glandular tissue, muscle, and bone;” [0008]; “The anatomical context information may include domain knowledge including, for example, “a specific part of a human body has anatomical elements with a predefined anatomical structure thereof” and “identical anatomical elements are gathered”, […] In addition, the anatomical context information may include a probability distribution of a location at which each anatomical element is located in a medical image of a specific body part. The probability distribution may be acquired based on pre-established training data. Further, if a medical image is one of a plurality of continuous two-dimensional (2D) frames or one of a plurality of three-dimensional (3D) images, the anatomical context information may include location information of an anatomical element acquired from an adjacent frame or an adjacent cross-section.” [0075]; Anatomical context information may describe information about the anatomical elements contained within the ultrasound image [0071-0092], [fig. 1, 3-6]). 
Regarding claim 3, Cho teaches the ultrasound image processing apparatus according to claim 2, Cho further teaching wherein the discriminator determination information is the information on the external apparatus, and the information on the external apparatus is information on a manufacturer which manufactures the external apparatus (“the anatomical context information may further include adjacent image information including location information of anatomical elements detected from an adjacent frame or an adjacent cross-section.” [0033]; “The image receiver 31, the anatomical element detector 33, the analyzer 35, the anatomical context information 37, […] may be implemented using one or more hardware components, one or more software components, or a combination of one or more hardware components and one or more software components.” [0167]; “A processing device may be implemented using one or more general-purpose or special-purpose computers, […], or any other device capable of running software or executing instructions. The processing device may run an operating system (OS), and may run one or more software applications that operate under the OS.” [0170]; The apparatus for visualizing anatomical elements in ultrasound images may be implemented with a processing device running an operating system (i.e., information on a manufacturer) [0161-0172]). Regarding claim 4, Cho teaches the ultrasound image processing apparatus according to claim 1, Cho further teaching wherein the discrimination result is at least one of a classification result of a measurement item, an automatic measurement result, and a region of interest (“The received image is analyzed to detect at least one anatomical element existing at a specific location within the image in 163. 
The detection of anatomical elements may be performed by a plurality of detectors that respectively detect different anatomical elements.” [0143]; The received ultrasound image is analyzed to detect anatomical elements (i.e., regions of interest) within the image [fig. 1, 3-6], [see claim 1 rejection]).

Regarding claim 14, Cho teaches the ultrasound image processing apparatus according to claim 1, Cho further teaching wherein each of the plurality of discriminators being trained through machine learning for identifying respective discrimination targets using a specific ultrasound probe, a specific ultrasonic device, or a specific combination of ultrasound probe and ultrasonic device that is different from others of the plurality of discriminators (“The image receiver 31 is a component for receiving a medical image. […] The medical image may be received from a medical image diagnostic/capturing device, from an imaging apparatus that captures a specific part of a human body using ultrasonic waves or X-rays, or from a storage device” [0084]; The image receiver may be an ultrasound probe or other device configured to generate ultrasound images from ultrasonic waves [fig. 1, 3-6], [see claim 1 rejection]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to James F. McDonald III whose telephone number is (571) 272-7296. The examiner can normally be reached M-F, 8AM-6PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski, can be reached at 571-272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JAMES FRANKLIN MCDONALD III
Examiner, Art Unit 3797

/BONIFACE N NGANGA/
Primary Examiner, Art Unit 3797

Prosecution Timeline

Oct 25, 2023
Application Filed
Jul 03, 2025
Non-Final Rejection — §102, §112
Oct 14, 2025
Response Filed
Dec 20, 2025
Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588809
Systems and Methods for Determining Tissue Inflammation Levels of the Eye from Blood Vessel Characteristics
2y 5m to grant Granted Mar 31, 2026
Patent 12582378
METHODS AND SYSTEMS FOR AN INVASIVE DEPLOYABLE DEVICE USING A SHAPE MEMORY MATERIAL TO RECONFIGURE TRANSDUCER ELEMENTS IN RESPONSE TO STIMULI
2y 5m to grant Granted Mar 24, 2026
Patent 12564388
Phase Change Insert for Ultrasound Imaging Probe
2y 5m to grant Granted Mar 03, 2026
Patent 12544003
SYSTEM, METHOD, AND APPARATUS FOR TEMPERATURE ASYMMETRY MEASUREMENT OF BODY PARTS
2y 5m to grant Granted Feb 10, 2026
Patent 12527542
ULTRASOUND IMAGING APPARATUS FOR BIPLANE IMAGING AND CONTROL METHOD THEREOF
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
55%
Grant Probability
99%
With Interview (+44.3%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 76 resolved cases by this examiner. Grant probability derived from career allow rate.
