Prosecution Insights
Last updated: April 19, 2026
Application No. 18/678,435

ULTRASONIC DIAGNOSTIC APPARATUS, DATA MANAGEMENT SYSTEM, DATA ESTIMATION METHOD, AND RECORDING MEDIUM

Non-Final OA: §101, §102, §112
Filed: May 30, 2024
Examiner: FARAG, AMAL ALY
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Konica Minolta Inc.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 66% (above average; 131 granted / 197 resolved; -3.5% vs TC avg)
Interview Lift: +38.3% (strong; allow rate among resolved cases with vs. without an interview)
Typical Timeline: 3y 1m average prosecution; 30 applications currently pending
Career History: 227 total applications across all art units
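The headline figures above follow directly from the raw career counts shown in this report (131 granted of 197 resolved, 227 total applications); a minimal sketch of the arithmetic, with variable names of my own choosing:

```python
# Reproduce the headline examiner statistics from the raw counts
# shown in this report.
granted = 131
resolved = 197
total_applications = 227

allow_rate = granted / resolved            # career allow rate
pending = total_applications - resolved    # applications still open

print(f"Career allow rate: {allow_rate:.1%}")   # ~66.5%, displayed as 66%
print(f"Currently pending: {pending}")          # 30
```

The 30 "currently pending" figure in the card is exactly the gap between total filings and resolved cases, which suggests the dashboard derives it the same way.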

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 25.2% (-14.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 197 resolved cases.
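Whatever the underlying per-statute metric is, the deltas let us back out the implied Tech Center baseline, assuming (my assumption, not stated in the report) that each delta is simply the examiner's rate minus the TC average:

```python
# Recover the implied Tech Center average from each examiner rate and
# its "vs TC avg" delta, assuming delta = examiner_rate - tc_average.
rates = {"101": 10.6, "103": 47.0, "102": 12.2, "112": 25.2}    # examiner %
deltas = {"101": -29.4, "103": 7.0, "102": -27.8, "112": -14.8}  # vs TC avg

tc_average = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_average)  # every statute implies the same 40.0% baseline
```

All four statutes imply an identical 40.0% baseline, consistent with the chart's "Tech Center average estimate" being a single reference value rather than a per-statute figure.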

Office Action

Rejections: §101, §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The abstract of the disclosure is objected to because claim language is used. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Objections

Claim 14 is objected to because of the following informalities: Claim 14 is written as an independent claim that depends on claim 1 limitations. Claim 14 is best written as an independent claim with the necessary claim limitations from claim 1. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Independent claim 1 recites, for example, the following abstract idea: “…to convert third data based on a reception signal for image generation received by the first ultrasonic probe into fourth data similar to the second data and output the converted data…”, which falls within the mental processes grouping. There is no specific machine or device that is not known or generic recited in the claim limitations; see MPEP § 2106.05(b). The limitations have no specifics as to the algorithmic foundation or dimensionality associated with the echo signal and as such can be considered computations that can be performed in the mind using visual inspection or simple pen and paper.
Further, the use of a machine-learned model does not provide the process/algorithm specifics in the recited claims and thus can be considered a process/algorithm that is generic and of the simplest form, one that can be performed by hand. Further, the limitations “…a first ultrasonic probe that transmits and receives ultrasonic waves to and from a subject…” and “…second data captured under a predetermined condition…” are considered extra-solution activity recited at a high level of generality, with no specific machine or device disclosed that is not generic or known to perform the limitations. Analogous limitations are found in claims 15 and 16.

The judicial exceptions are not integrated into a “practical application” as defined by the Subject Matter Eligibility Analysis documented in Federal Register 84(4), issued on 07 January 2019, and MPEP § 2106. The limitation of “…a hardware processor…” in claim 1 simply represents implementing the abstract ideas with a computer. Further, a mobile device is also considered a computer processing device and thus also represents implementing the abstract ideas with a computer. MPEP § 2106.05(f) notes that “using a computer as a tool to perform the abstract idea” is not sufficient to integrate a judicial exception into a practical application as interpreted by the court(s). Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972) held that simply implementing a mathematical principle on a physical machine, namely a computer, was not a patentable application of that principle, and Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318 (Fed. Cir. 2016) established that mental processes encompass acts which, absent anything beyond generic computer components, may be “performed by a human, mentally or with pen and paper.” Intellectual Ventures additionally established that if a claim, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components, then it is still in the mental processes category of abstract ideas unless the step(s) cannot be practically performed in the mind. Therefore, a positive recitation of the associated computer would not necessarily result in patent-eligible subject matter.

The dependent claims 2-14 do not sufficiently link the subject matter to a practical application or recite elements which constitute significantly more than the abstract ideas identified. The dependent claims are directed to additional limitations which encompass abstract ideas consistent with those identified above that are well-understood, routine and/or conventional activity. Further, dependent claims 2-14 merely include limitations that either further define the abstract idea (and thus do not make the abstract idea any less abstract) or amount to no more than generally linking the use of the abstract idea to a particular technological environment or field of use, because they are merely incidental or token additions to the claims that do not alter or affect how the process steps are performed.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1 and the analogous limitations found in claims 15-16: with regard to the limitation “…a hardware processor that uses a learned model machine-learned by using learning data including first data based on a reception signal for image generation received by a second ultrasonic probe of the same type as that of the first ultrasonic probe, and second data captured under a predetermined condition, so as to convert third data based on a reception signal for image generation received by the first ultrasonic probe into fourth data similar to the second data and output the converted data.”, it is unclear what connection the second data has, if any, to the first and/or second probe. There is a lack of antecedent basis for the third data, and it is unclear what the specifics of the third data are and what connection it has, if any, to the first and second probes. The term “convert” in its plain meaning is to adapt or change; it is unclear what is entailed in “converting” the third data, such as processing, an algorithm, a transformation, or something else, and also what the connection of the third data with the second data is for the conversion to occur. The Examiner, applying the broadest reasonable interpretation, will interpret “convert” for examination as converting image information into a diagnosis mode such as B-mode, Doppler mode, etc. The term “similar to” in claims 1, 11 and 15-16 is a relative term which renders the claim indefinite.
The term “similar” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. It is unclear how to ascertain that the data is in fact “similar to” other data, and it is unclear what parameters, system conditions, or information are considered in determining that two sets of data are similar to one another.

Regarding claim 2, with regard to the limitation “…wherein the first data and the third data are B-mode image data based on a predetermined reception signal or the reception signals.”, it is unclear what connection the second ultrasound probe has, if any, to the third data with respect to claim 1. It is unclear whether the first data is also converted data and, if so, how it is obtained by the second probe. The metes and bounds of the claims are unclear as a whole.

Regarding claim 4, for the limitation “…wherein the fourth data is ultrasonic data, MRI-style image data, or CT-style image data.”, the metes and bounds of “MRI-style” and “CT-style” are unclear; it is unclear how data can be of the essence of a data type but not specifically that type of data. It is unclear how the fourth data, which is connected to the first ultrasonic probe and the second data in claim 1, can be a hybrid of (i.e., in between) MRI or CT data when connected to both a first ultrasound probe and second data in claim 1. The metes and bounds of the claims are unclear as a whole.

Regarding claim 6, for the limitation “…wherein the hardware processor outputs information indicating that the fourth data is a processed image…”, it is unclear what is meant by “processed image”, as all data obtained by a probe or imager and formed into an image undergoes computer processing of the beams/waveforms/etc. It is unclear how the processor is able to indicate that the fourth data, which is connected to the third data in claim 1 that is “converted data” connected to the first ultrasonic probe, is processed.
The metes and bounds of the claim are unclear.

Regarding claim 8, the limitation “…wherein the hardware processor outputs information indicating that the third data is an original image…” is unclear as to what is meant by “original image” with respect to the third data, which is converted in claim 1. It is unclear how the processor is intended to provide the indication, and whether there is intended to be a connection with the machine-learning model to perform the indication of the data or something else. The metes and bounds of the claim are unclear.

Regarding claim 11, the connection of the first and second probes, if any, from claim 1 with respect to the fifth and sixth data is unclear. The connections, if any, of the first, second, third and fourth data with the fifth and sixth data are unclear. With respect to the second learned model and the fifth and sixth data, it is unclear how that model is connected to the first learned model and how the processor is intended to switch between the models. The conditions on which the processor bases its determination of which model to use, and the connections of the various data generated, converted, or obtained by each of the various probes, are unclear. The metes and bounds of the claim are unclear.

Regarding claim 12, it is unclear what the processor is discriminating the third data from. The metes and bounds of the “switching sections” with respect to the processor are unclear; it is unclear whether this is a hardware component within the processor or software-based switching with respect to the machine learning and, if with respect to the machine learning, what the various elements or layers being shifted through are and their connections to the first and second probes and the first through fourth data in claim 1. The metes and bounds of the claim are unclear.
Regarding claim 13, the limitation “…wherein the hardware processor outputs fifth data from the fourth data by using a discriminator that is machine-learned by using learning data in which the fourth data obtained by converting the first data using the learned model, and a predetermined correct label are made into a data set.” is unclear as to the algorithm or approach entailed in the machine learning used to utilize a discriminator, as it has not been established in the previous claims. A discriminator is one aspect of a neural network approach, such as a generative adversarial network, and it is unclear whether this is what is intended with respect to claim 13 in light of claim 1, from which it depends, which only recites using a machine-learned model of first and second data.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5 and 15-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Heo et al. (U.S. 2018/0168550, June 21, 2018) (hereinafter “Heo”).

Regarding claim 1, as provided in the 112(b) rejections above, the interpretations for the various limitations are mapped accordingly. Heo teaches: An ultrasonic diagnostic apparatus (Fig. 1) comprising: a first ultrasonic probe that transmits and receives ultrasonic waves to and from a subject (“FIG. 1, an ultrasound imaging apparatus 1 includes an ultrasound probe P configured to transmit ultrasonic waves to an object, receive ultrasonic echo signals from the object…” [0046]); and a hardware processor that uses a learned model machine-learned by using learning data including first data based on a reception signal for image generation received by a second ultrasonic probe of the same type as that of the first ultrasonic probe (“The controller 500 may recommend the most efficient ultrasound probe for the user to diagnose a disease of a patient…the controller 500 may learn information with respect to an organ of a patient diagnosed by the user and may recommended the second ultrasound probe based on the information.” [0082]), and second data captured under a predetermined condition, so as to convert third data based on a reception signal for image generation received by the first ultrasonic probe into fourth data similar to the second data and output the converted data (“…an ultrasound imaging apparatus 1 includes an ultrasound probe P configured to transmit ultrasonic waves to an object, receive ultrasonic echo signals from the object, and convert the ultrasonic echo signals into electrical signals and a main body M connected to the ultrasound probe P and configured to include an input portion 540 and a display portion 550 and display an ultrasonic image.” [0046]; “The signal processor 533 converts coherent image information formed by the imager former 531 into ultrasonic image information according to a diagnosis mode such as a brightness mode (B-mode), a Doppler mode (D-mode) or the like.” [0054]).

Regarding claim 2, Heo teaches the claim limitations as noted above.
Heo further teaches: wherein the first data and the third data are B-mode image data based on a predetermined reception signal or the reception signals (“The signal processor 533 converts coherent image information formed by the imager former 531 into ultrasonic image information according to a diagnosis mode such as a brightness mode (B-mode), a Doppler mode (D-mode) or the like.” [0054]).

Regarding claim 3, Heo teaches the claim limitations as noted above. Heo further teaches: wherein the second data includes ultrasonic data (“The image processor 530 generates an ultrasonic image of a target part inside the object based on ultrasonic signals focused by the receiver 200.” [0051]).

Regarding claim 5, Heo teaches the claim limitations as noted above. Heo further teaches: the first data is data based on the reception signal for image generation received by the second ultrasonic probe, and the second data is data based on a reception signal for image generation received by a third ultrasonic probe of a different type from that of the second ultrasonic probe (“…an ultrasound imaging apparatus 1 includes an ultrasound probe P configured to transmit ultrasonic waves to an object, receive ultrasonic echo signals from the object, and convert the ultrasonic echo signals into electrical signals and a main body M connected to the ultrasound probe P and configured to include an input portion 540 and a display portion 550 and display an ultrasonic image.” [0046]; “The signal processor 533 converts coherent image information formed by the imager former 531 into ultrasonic image information according to a diagnosis mode such as a brightness mode (B-mode), a Doppler mode (D-mode) or the like.” [0054]; “FIGS. 7A and 7B are views illustrating ultrasonic images of a liver taken with different types of ultrasound probes.” [0094]).

Regarding claim 15, as provided in the 112(b) rejections above, the interpretations for the various limitations are mapped accordingly.
A data estimation method using a learned model machine-learned by using learning data (Figs. 1 and 13-15) that includes: first data based on a reception signal for image generation received by a second ultrasonic probe of the same type as that of a first ultrasonic probe transmitting and receiving ultrasonic waves to and from a subject (“FIG. 1, an ultrasound imaging apparatus 1 includes an ultrasound probe P configured to transmit ultrasonic waves to an object, receive ultrasonic echo signals from the object…” [0046]); and second data captured under a predetermined condition so as to convert third data based on a reception signal for image generation received by the first ultrasonic probe into fourth data similar to the second data and output the converted data (“The signal processor 533 converts coherent image information formed by the imager former 531 into ultrasonic image information according to a diagnosis mode such as a brightness mode (B-mode), a Doppler mode (D-mode) or the like.” [0054]; “The controller 500 may recommend the most efficient ultrasound probe for the user to diagnose a disease of a patient…the controller 500 may learn information with respect to an organ of a patient diagnosed by the user and may recommended the second ultrasound probe based on the information.” [0082]).

Regarding claim 16, as provided in the 112(b) rejections above, the interpretations for the various limitations are mapped accordingly. A non-transitory computer-readable recording medium storing a program that causes a computer to use a learned model machine-learned by using learning data (Figs. 1 and 13-15; “…a read-only memory (ROM), a random-access memory (RAM), a magnetic tape, a magnetic disc, a flash memory, an optical data storage and the like.” [0115]) including: first data based on a reception signal for image generation received by a second ultrasonic probe of the same type as that of a first ultrasonic probe transmitting and receiving ultrasonic waves to and from a subject (“FIG. 1, an ultrasound imaging apparatus 1 includes an ultrasound probe P configured to transmit ultrasonic waves to an object, receive ultrasonic echo signals from the object…” [0046]); and second data captured under a predetermined condition so as to convert third data based on a reception signal for image generation received by the first ultrasonic probe into fourth data similar to the second data and output the converted data (“The signal processor 533 converts coherent image information formed by the imager former 531 into ultrasonic image information according to a diagnosis mode such as a brightness mode (B-mode), a Doppler mode (D-mode) or the like.” [0054]; “The controller 500 may recommend the most efficient ultrasound probe for the user to diagnose a disease of a patient…the controller 500 may learn information with respect to an organ of a patient diagnosed by the user and may recommended the second ultrasound probe based on the information.” [0082]).

Conclusion

The Examiner notes that prior art rejections are provided for claims for which a reasonable interpretation could be made to apply a reference under the broadest reasonable interpretation of one of ordinary skill in the art. This does not mean that claims not rejected under 102/103 cannot be rejected over prior art; rather, the deficiencies reflected in the 35 U.S.C. 112(b) rejections, compounded when a claim is examined with respect to the claims from which it depends, leave those claims unreasonable for examination and are the reason some claims are void of a prior art rejection. Claims 1-16 are rejected under 35 U.S.C. 101 and 112(b).

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Honjo et al. (U.S. 2020/0294230) teaches an ultrasound diagnostic system where a trained model allows probe-type correspondence to generated images to be made. Sato et al. (U.S. 2020/0281570) teaches an ultrasound system with trained models for image and data generation. Mailhe et al. (U.S. 2017/0372193) teaches a generative adversarial imaging system.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMAL FARAG, whose telephone number is (571) 270-3432. The examiner can normally be reached 8:30-5:30 M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Keith Raymond, can be reached at (571) 270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMAL ALY FARAG/
Primary Examiner, Art Unit 3798

Prosecution Timeline

May 30, 2024: Application Filed
Sep 25, 2025: Non-Final Rejection (§101, §102, §112)
Mar 27, 2026: Response Filed
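The intervals between these milestones are easy to verify with standard date arithmetic, using the dates exactly as listed above:

```python
from datetime import date

filed = date(2024, 5, 30)        # Application Filed
non_final = date(2025, 9, 25)    # Non-Final Rejection
response = date(2026, 3, 27)     # Response Filed

print((non_final - filed).days)      # 483 days (~16 months) to first action
print((response - non_final).days)   # 183 days from rejection to response
```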

Precedent Cases

Applications granted by this examiner in similar technology

Patent 12575744: DATA PROCESSING DEVICE AND METHOD (granted Mar 17, 2026; 2y 5m to grant)
Patent 12569220: BLOOD FLOW MEASUREMENT SYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564373: Spatially Aware Medical Device Configured for Performance of Insertion Pathway Approximation (granted Mar 03, 2026; 2y 5m to grant)
Patent 12564386: PROCESSING APPARATUS AND CONTROL METHOD (granted Mar 03, 2026; 2y 5m to grant)
Patent 12564387: ULTRASOUND DIAGNOSTIC APPARATUS AND ULTRASOUND DIAGNOSTIC SYSTEM (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 99% (+38.3%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 197 resolved cases by this examiner. Grant probability is derived from the career allow rate.
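The 66%, 99%, and +38.3% figures are mutually consistent if the "lift" is read as a percentage-point gap between with- and without-interview allow rates (my reading, not stated in the report):

```python
# Sanity-check the interview figures, assuming the +38.3% "lift" is a
# percentage-point gap between with- and without-interview allow rates.
with_interview = 99.0    # grant probability with interview (%)
lift = 38.3              # reported interview lift (points)

without_interview = round(with_interview - lift, 1)
print(without_interview)  # 60.7 -> implied no-interview baseline (%)

# The overall 66% career rate sits between the two values, as a blend
# of interviewed and non-interviewed cases would.
assert without_interview < 66.0 < with_interview
```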
