Prosecution Insights
Last updated: April 19, 2026
Application No. 18/348,592

LEARNING MODEL, STORAGE MEDIUM STORING DIAGNOSTIC PROGRAM, ULTRASONIC DIAGNOSTIC APPARATUS, ULTRASONIC DIAGNOSTIC SYSTEM, IMAGE DIAGNOSTIC APPARATUS, MACHINE LEARNING APPARATUS, LEARNING DATA CREATION APPARATUS, LEARNING DATA CREATION METHOD, AND STORAGE MEDIUM STORING LEARNING DATA CREATION PROGRAM

Non-Final OA (§101, §103, §112)
Filed
Jul 07, 2023
Examiner
KIM, KAITLYN EUNJI
Art Unit
3797
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Konica Minolta Inc.
OA Round
3 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (7 granted / 12 resolved; -11.7% vs TC avg)
Interview Lift: +65.7% across resolved cases with an interview
Avg Prosecution: 3y 2m
Currently Pending: 37 applications
Total Applications: 49 (across all art units)
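
The headline figures above are simple counts over the examiner's resolved cases. Below is a minimal sketch of how they could be reproduced, assuming the lift is the difference in allow rates between cases with and without an interview (the dashboard does not disclose its exact definition, and the per-case interview flags here are invented placeholders):

```python
# Illustration only: 12 resolved cases, 7 granted (58.3%), matching the card
# above. The interview flags are hypothetical, so the computed lift is a
# placeholder, not the dashboard's actual +65.7%.
cases = (
    [{"granted": True,  "interview": True}]  * 5 +
    [{"granted": True,  "interview": False}] * 2 +
    [{"granted": False, "interview": False}] * 5
)

def allow_rate(subset):
    # Fraction of cases in the subset that ended in a grant.
    return sum(c["granted"] for c in subset) / len(subset) if subset else float("nan")

career  = allow_rate(cases)                                     # 7/12 = 58.3%
with_iv = allow_rate([c for c in cases if c["interview"]])      # 5/5 in this toy data
no_iv   = allow_rate([c for c in cases if not c["interview"]])  # 2/7 in this toy data
print(f"career allow rate: {career:.1%}")
print(f"interview lift: {with_iv - no_iv:+.1%}")  # one plausible lift definition
```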

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 42.2% (+2.2% vs TC avg)
§102: 21.4% (-18.6% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 12 resolved cases.
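
One sanity check worth noting: every row's rate and delta imply the same Tech Center baseline, so the chart compares all four statutes against a single estimate. A quick worked check over the numbers shown above (the semantics of the rates themselves, e.g. allowance rate after receiving that rejection type, are the dashboard's and are not spelled out here):

```python
# Recover the implied Tech Center average from each displayed row: tc = rate - delta.
rows = {"§101": (11.9, -28.1), "§103": (42.2, 2.2),
        "§102": (21.4, -18.6), "§112": (22.5, -17.5)}
for statute, (rate, delta) in rows.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
# Every row yields 40.0%, i.e. a single ~40% Tech Center baseline.
```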

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1, 5, 16, 18, and 19 are amended. Claims 1-20 are pending in this application. Claims 1-12, 14, and 16-20 have been examined on the merits. Due to the applicant’s new amendments, the withdrawn independent claims 1 and 16 now include limitations similar to independent claim 5. Therefore, claims 1 and 16 are reinstated and examined on the merits, since they do not pose a burden in examination. It is also noted that claims 13 and 15 remain withdrawn due to the species nature of those claims.

Claim Objections

Claim 1 is objected to because of the following informalities: in Claim 1, line 2, “to execute outputting that is outputting a first inference result” should be “to output a first inference result”. Appropriate correction is required.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “P1” has been used to designate both “first ultrasonic image” and “third ultrasonic image”. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 5, and 16 recite performing inverse transformation of coordinate transformation on first correct answer data, and an inference result that includes a probability distribution image representing the probability of being a range of a structure. The limitation of an ultrasonic probe that transmits and receives ultrasonic waves is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. For example, but for the “by a processor” language, “transmits and receives ultrasonic waves” in the context of this claim encompasses generic steps an ultrasound probe completes with the user manually using the probe, which transmits and receives ultrasound waves. Similarly, first ultrasonic image data based on a reception signal for image generation received by the ultrasonic probe is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components.
For example, but for the “by a processor” language, “image generation” in the context of this claim encompasses the user viewing or obtaining the ultrasonic image. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element: using one or more generic processors for execution. The processors are recited at a high level of generality (i.e., a diagnostic apparatus, an ultrasonic probe, etc.) such that the element amounts to no more than mere instructions to apply the exception using processors. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using processors to perform the identifying and determining steps amounts to no more than mere instructions to apply the exception using generic processors. Mere instructions to apply an exception using generic processors, an ultrasonic probe that transmits and receives ultrasonic waves, and first ultrasonic image data based on a reception signal for image generation received by the ultrasonic probe cannot provide an inventive concept. The claim is not patent eligible.

Independent claims 1 and 16 recite limitations similar to those of claim 5 and are likewise not patent eligible, at least for the reasons noted above. Dependent claims 2-4, 6-15, and 17-20 are also directed to an abstract idea, as they do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The elements in those claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the dependent claims are also not patent eligible.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 1, 5, and 16 recite the limitation “the third ultrasonic image data”.
It is unclear what the third ultrasonic image data is, and whether it is the same as the first ultrasonic image data, as the specification denotes the first ultrasonic image data as P1. For purposes of examination, the limitation will be construed as the first ultrasonic image data, which is input into the learning model. However, further clarification is required. Dependent claims 2-4, 6-15, and 17-20 are rejected due to their dependency.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-12 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Osumi (US 20200234461 A1) in view of Zalev (US 20140036091 A1), and further in view of Stergiopoulos (US 20210251610 A1).

Regarding Claim 1, Osumi teaches a non-transitory storage medium storing a computer-readable diagnostic program that causes a computer to execute outputting that is outputting a first inference result from third ultrasonic image data using a learning model (corresponding disclosure in at least [0060], where there is a learning model (machine learning) which outputs an inference result using the first image data (third ultrasonic data that is not manipulated): “The trained model 402 is a model generated through the learning using the first image data 401 obtained during the previously executed ultrasound scan and the mask data (learning mask data) representing the area including the object in the first image data”); the third ultrasonic image data being based on a reception signal received by the ultrasonic probe and being intermediate image data that has not been subjected to processing including coordinate transformation (corresponding disclosure in at least [0074], where it is disclosed that the first image data (third ultrasonic image data) is data that has not been processed, including coordinate conversion (coordinate transformation): “the first image data 501 is image data obtained during the previously executed ultrasound scan on the object and is image data before the coordinate conversion corresponding to the format of the ultrasound scan”); wherein the learning model is machine-learned using learning data formed with a pair of: first ultrasonic image data based on a reception signal for image generation received by the ultrasonic probe (corresponding disclosure in at least Figure 4 and [0060], where there is a trained model (first ultrasonic image data, which goes into the learning model/estimation function): “The trained model 402 is a model generated through the learning using the first image data 401 obtained during the previously executed ultrasound scan and the mask data”); and second correct answer data obtained by performing coordinate transformation on first correct answer data determined by a user viewing second ultrasonic image data, the second correct answer data being obtained by performing processing including coordinate transformation on the first ultrasonic image data, the first correct answer data being determined by a user viewing the second ultrasonic image data (corresponding disclosure in at least [0058]-[0060] and Figure 4, where a second correct answer is determined through coordinate transformation (coordinate conversion) through viewing the second ultrasonic image, as denoted in Figure 4 of Osumi [figure reproduced in the original action]).

Osumi does not teach the use of performing inverse transformation. Zalev, in a similar field of endeavor, teaches a similar concept (image manipulation and conversion) of the application of an inverse function to a previously completed image transformation (corresponding disclosure in at least [0232], where inverse transformation is applied to the image frame that was previously transformed: “iii. (If an image transform was applied to frame_in, apply the inverse transform to the results of the previous calculation”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the combination above to include the inverse transformation. One of ordinary skill in the art would have been motivated to incorporate this because doing so would allow for further processing of the diagnostic image, which is advantageous because the original function or image can then be observed after undergoing the other transformations.

The combined references do not teach wherein the hardware processor outputs the first inference result including a probability distribution image representing the probability of being a range of a structure, subjects the probability distribution image to coordinate transformation, and calculates a characteristic value of the structure from the probability distribution image after the coordinate transformation. Stergiopoulos, in a similar field of endeavor, teaches a similar concept (ultrasound diagnostics) of an ultrasonic probe that transmits and receives ultrasonic waves (corresponding disclosure in at least [0084], where there is an ultrasound probe for transmission/receiving: “This is done with four (4) transmit operations—transmitting from all transducer elements—each followed by a receive operation”); a hardware processor for outputting an inference result from an ultrasound image through a learning model (corresponding disclosure in at least [0143], where an inference result is taken (classified organ of interest) from the ultrasound image using a learning model (neural network): “A 3D ultrasound image is provided at 1001. The image resolution is adjusted at 1002. At 1003, low level features are extracted. At 1004, region specific neural networks are used to classify the data to detect organs of interest”); ultrasound image data taken as one input of the machine learning model (corresponding disclosure in at least [0108]: “Providing computer aided automated or semi-automated diagnosis using 3D Images from 3D Ultrasound, CT and/or MRI output data as input data”); and a transformed correct second answer data (corresponding disclosure in at least [0145], where an input is an input image of the region of interest with transformation completed: “centroid of sub-volumes are high-level features, and each extracted high-level feature in an input image is paired with its corresponding sub-volume's centroid in a reference organ's alignment. Once high-level features are extracted for each resolution, a projective transformation is calculated to map the centroids in an input image to their corresponding centroids within the reference organ's alignment”).

Further, Stergiopoulos also teaches wherein the hardware processor outputs the first inference result including a probability distribution image representing the probability of being a range of a structure (corresponding disclosure in at least [0150], where there is a probability provided of the result being in range of the structure (iterative steps of determining the likelihood of the structure): “Once registration is performed with a neural network having a given resolution, a Bayes classifier is applied to validate that a registered shape is actually the organ shape of interest. If the Bayes classifier validates the organ's shape, the registration result is passed to the next iteration at a next, finer resolution, and the iterative process continues. But if the Bayes classifier does not validate that the organ shape exists, the iterations relating to that object of interest and class stop; the branch of the iterative tree is pruned as a likelihood of the object of interest being present is too low”); and subjects the probability distribution image to coordinate transformation, and calculates a characteristic value of the structure from the probability distribution image after the coordinate transformation (corresponding disclosure in at least [0158], where the image undergoes a coordinate transformation (transformed into the reference coordinate): “Regions are first specified as multi-resolution divisions in the reference objects, {Φr,j ref}. It is important to correctly delineate multi-resolution regions in training objects, according to the specified regions in the reference object. Otherwise, it results in a huge training error. Thus, the training volumes are first transformed into the reference coordinate, {Φreg i }, and then, their multi-resolutional regions, {Φr,j reg i }, are outlined according to the reference object regions”, and further values are calculated from the image (affine transformation is calculated): “The process of finding transformations to register training volumes on the reference alignment relies upon a human operator to provide input data. First, a set of landmarks are manually specified on the reference image. Then, for each training image, corresponding landmarks are manually selected on the training image. Afterwards, an affine transformation is calculated which maps the corresponding landmarks to the reference ones”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the processor to output an inference result with a probability distribution for the probability of a structure being in range, with a characteristic value calculated from the structure after coordinate transformation, as taught by Stergiopoulos. One of ordinary skill in the art would have been motivated to incorporate this because predicting the structure of interest using a probability distribution provides a more automated method of diagnosis and detection with high confidence.

Regarding Claim 2, the combined references noted above teach the limitations of Claim 1, and Osumi further teaches wherein the second ultrasonic image data is a B-mode image (corresponding disclosure in at least [0029], where there is B-mode processing circuitry (B-mode images): “The B-mode processing circuitry 120 performs various types of signal processing on reflected wave data generated from a reflected wave signal by the transmission/reception circuitry”).

Regarding Claim 3, the combined references noted above teach the limitations of Claim 1, and Zalev further teaches wherein the coordinate transformation includes interpolation between pixels (corresponding disclosure in at least [0204], where interpolation can be completed on the images: “if the sampling rate is too low to support the image resolution, then, in an embodiment, the sinogram can be upsampled and interpolated so to produce a higher quality images. While the two dimensional image can be any resolution, in an exemplary embodiment, the image can comprise 512×512 pixels. In an another exemplary embodiment, the image can comprise 1280×720 pixels”).

Regarding Claim 4, the combined references noted above teach the limitations of Claim 1, and Zalev further teaches wherein the first correct answer data is inversely transformed based on transmission direction information of an ultrasonic wave (corresponding disclosure in at least [0195], where the transformation is based on the transmission direction information: “an envelope can be extracted from the non-analytic reconstructed real image post-reconstruction. The envelope may be extracted by taking the envelope of a monogenic representation of the image, which may be carried out by computing the Hilbert-transformed directional derivative surfaces of the horizontal and vertical lines of the image and then computing for each pixel the square root of the square of the horizontal component plus square of the vertical component plus the square of the original image”).
Regarding Claim 5, Osumi teaches an ultrasonic diagnostic apparatus comprising: an ultrasonic probe that transmits and receives ultrasonic waves to and from a subject (corresponding disclosure in at least [0021], where there is an ultrasound probe for transmitting and receiving ultrasound waves); and a hardware processor that outputs a first inference result from third ultrasonic image data by using a learning model (corresponding disclosure in at least [0060], where there is a learning model (machine learning) which outputs an inference result using the first image data (third ultrasonic data that is not manipulated): “The trained model 402 is a model generated through the learning using the first image data 401 obtained during the previously executed ultrasound scan and the mask data (learning mask data) representing the area including the object in the first image data”); the third ultrasonic image data being based on a reception signal received by the ultrasonic probe and being intermediate image data that has not been subjected to processing including coordinate transformation (corresponding disclosure in at least [0074], where it is disclosed that the first image data (third ultrasonic image data) is data that has not been processed, including coordinate conversion (coordinate transformation): “the first image data 501 is image data obtained during the previously executed ultrasound scan on the object and is image data before the coordinate conversion corresponding to the format of the ultrasound scan”); wherein the learning model is machine-learned using learning data formed with a pair of: first ultrasonic image data based on a reception signal for image generation received by the ultrasonic probe (corresponding disclosure in at least Figure 4 and [0060], where there is a trained model (first ultrasonic image data, which goes into the learning model/estimation function): “The trained model 402 is a model generated through the learning using the first image data 401 obtained during the previously executed ultrasound scan and the mask data”); and second correct answer data obtained by performing coordinate transformation on first correct answer data determined by a user viewing second ultrasonic image data, the second correct answer data being obtained by performing processing including coordinate transformation on the first ultrasonic image data, the first correct answer data being determined by a user viewing the second ultrasonic image data (corresponding disclosure in at least [0058]-[0060] and Figure 4, where a second correct answer is determined through coordinate transformation (coordinate conversion) through viewing the second ultrasonic image, as denoted in Figure 4 of Osumi [figure reproduced in the original action]).

Osumi does not teach the use of performing inverse transformation. Zalev, in a similar field of endeavor, teaches a similar concept (image manipulation and conversion) of the application of an inverse function to a previously completed image transformation (corresponding disclosure in at least [0232], where inverse transformation is applied to the image frame that was previously transformed: “iii. (If an image transform was applied to frame_in, apply the inverse transform to the results of the previous calculation”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the combination above to include the inverse transformation.
One of ordinary skill in the art would have been motivated to incorporate this because doing so would allow for further processing of the diagnostic image, which is advantageous because the original function or image can then be observed after undergoing the other transformations.

The combined references do not teach wherein the hardware processor outputs the first inference result including a probability distribution image representing the probability of being a range of a structure, subjects the probability distribution image to coordinate transformation, and calculates a characteristic value of the structure from the probability distribution image after the coordinate transformation. Stergiopoulos, in a similar field of endeavor, teaches a similar concept (ultrasound diagnostics) of an ultrasonic probe that transmits and receives ultrasonic waves (corresponding disclosure in at least [0084], where there is an ultrasound probe for transmission/receiving: “This is done with four (4) transmit operations—transmitting from all transducer elements—each followed by a receive operation”); a hardware processor for outputting an inference result from an ultrasound image through a learning model (corresponding disclosure in at least [0143], where an inference result is taken (classified organ of interest) from the ultrasound image using a learning model (neural network): “A 3D ultrasound image is provided at 1001. The image resolution is adjusted at 1002. At 1003, low level features are extracted. At 1004, region specific neural networks are used to classify the data to detect organs of interest”); ultrasound image data taken as one input of the machine learning model (corresponding disclosure in at least [0108]: “Providing computer aided automated or semi-automated diagnosis using 3D Images from 3D Ultrasound, CT and/or MRI output data as input data”); and a transformed correct second answer data (corresponding disclosure in at least [0145], where an input is an input image of the region of interest with transformation completed: “centroid of sub-volumes are high-level features, and each extracted high-level feature in an input image is paired with its corresponding sub-volume's centroid in a reference organ's alignment. Once high-level features are extracted for each resolution, a projective transformation is calculated to map the centroids in an input image to their corresponding centroids within the reference organ's alignment”).

Further, Stergiopoulos also teaches wherein the hardware processor outputs the first inference result including a probability distribution image representing the probability of being a range of a structure (corresponding disclosure in at least [0150], where there is a probability provided of the result being in range of the structure (iterative steps of determining the likelihood of the structure): “Once registration is performed with a neural network having a given resolution, a Bayes classifier is applied to validate that a registered shape is actually the organ shape of interest. If the Bayes classifier validates the organ's shape, the registration result is passed to the next iteration at a next, finer resolution, and the iterative process continues. But if the Bayes classifier does not validate that the organ shape exists, the iterations relating to that object of interest and class stop; the branch of the iterative tree is pruned as a likelihood of the object of interest being present is too low”); and subjects the probability distribution image to coordinate transformation, and calculates a characteristic value of the structure from the probability distribution image after the coordinate transformation (corresponding disclosure in at least [0158], where the image undergoes a coordinate transformation (transformed into the reference coordinate): “Regions are first specified as multi-resolution divisions in the reference objects, {Φr,j ref}. It is important to correctly delineate multi-resolution regions in training objects, according to the specified regions in the reference object. Otherwise, it results in a huge training error. Thus, the training volumes are first transformed into the reference coordinate, {Φreg i }, and then, their multi-resolutional regions, {Φr,j reg i }, are outlined according to the reference object regions”, and further values are calculated from the image (affine transformation is calculated): “The process of finding transformations to register training volumes on the reference alignment relies upon a human operator to provide input data. First, a set of landmarks are manually specified on the reference image. Then, for each training image, corresponding landmarks are manually selected on the training image. Afterwards, an affine transformation is calculated which maps the corresponding landmarks to the reference ones”).

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the processor to output an inference result with a probability distribution for the probability of a structure being in range, with a characteristic value calculated from the structure after coordinate transformation, as taught by Stergiopoulos. One of ordinary skill in the art would have been motivated to incorporate this because predicting the structure of interest using a probability distribution provides a more automated method of diagnosis and detection with high confidence.

Regarding Claim 6, the combination of Osumi and Zalev teaches all the claimed limitations of Claim 5, and Osumi further teaches the ultrasonic diagnostic apparatus 1, wherein the hardware processor (i.e., image processing circuitry 140, [0018]) performs the coordinate transformation on the first inference result to obtain a second inference result, and outputs the second inference result after the coordinate transformation (corresponding disclosure in at least [0041] and [0018], where the processing circuitry 140 uses the training model through the learning using the first image data from the ultrasound scan, which performs various functions, including coordinate conversion, and [0063], where “the coordinate conversion function 144 executes the coordinate conversion on the second image data (second inference result) 453”).

Regarding Claim 7, the combined references of Osumi and Zalev teach the limitations of Claim 5, and Osumi further teaches the ultrasonic diagnostic apparatus 1, wherein the second ultrasound image generated is a B-mode image (corresponding disclosure in at least [0034] of Osumi).
Regarding Claim 8, the combined references of Osumi and Zalev teach the limitations of Claim 5, and Zalev further teaches the coordinate transformation (i.e., coordinate conversion, [0053] of Osumi) including linear interpolation between pixels (corresponding disclosure in at least [325] of Zalev, where the color of each pixel is set for intermediate values using linear interpolation).

Regarding Claim 9, the combined references of Osumi and Zalev teach the limitations of Claim 5, and further teach that the first correct answer data (Examiner notes that the present application discloses the first correct answer data to be a position range of the diagnostic ultrasound image, [0045] of Osumi) is inversely transformed (corresponding disclosure in at least [0232] of Zalev: “If an image transform was applied to frame_in, apply the inverse transform to the results of the previous calculation”) based on transmission direction information of an ultrasonic wave (corresponding disclosure in at least [0027] of Osumi, where a transmission/reception circuitry 110 exists for controlling the transmission of the ultrasound waves, and [0035] of Osumi, where coordinates are obtained from the ultrasound images).

Regarding Claim 10, the combination of Osumi and Zalev teaches the claimed limitations of Claim 6, and Osumi further teaches the ultrasonic diagnostic apparatus 1, wherein the hardware processor (i.e., image processing circuitry 140) performs the coordinate transformation (i.e., coordinate conversion) that is determined based on transmission direction information of an ultrasonic wave (corresponding disclosure in at least [0035], where “the image processing circuitry 140 conducts coordinate conversion (scan conversion)”, and [0027], where a transmission/reception circuitry 110 exists for controlling the transmission of the ultrasound waves) on the first inference result to obtain a second inference result, and outputs the second inference result after the coordinate transformation (corresponding disclosure in at least [0041] and [0018], where the processing circuitry 140 uses the training model through the learning using the first image data from the ultrasound scan, which performs various functions, including coordinate conversion, and [0063], where “the coordinate conversion function 144 executes the coordinate conversion on the second image data (second inference result) 453”).

Regarding Claim 11, the combination of Osumi and Zalev teaches all the claimed limitations of Claim 10, and Osumi further teaches transmission direction information of an ultrasonic wave (corresponding disclosure in at least [0027], where a transmission/reception circuitry 110 exists for controlling the transmission of the ultrasound waves, and [0028], where the transmission/reception circuitry performs operations and receives, such that “a reflected component in the direction corresponding to the receive directional characteristics of a reflected wave signal is enhanced”, and the data for the ultrasound transmission/reception are stored in a memory circuitry 160, [0046]).

Regarding Claim 12, the combination of Osumi and Zalev teaches all the claimed limitations of Claim 6, and Osumi teaches the ultrasonic diagnostic apparatus 1 further comprising a display part 103 capable of displaying the second inference result (corresponding disclosure in at least [0024] and further in [0064], where image data is displayed: “causes the display 103 to display the third image data 406”).
Regarding Claim 16, Osumi teaches an ultrasonic diagnostic system comprising: an ultrasonic probe that transmits and receives ultrasonic waves to and from a subject (corresponding disclosure in at least [0021], where there is an ultrasound probe for transmitting and receiving ultrasound waves); and a hardware processor that outputs a first inference result from third ultrasonic image data by using a learning model (corresponding disclosure in at least [0060], where there is a learning model (machine learning) which outputs an inference result using the first image data (third ultrasonic data that is not manipulated): “The trained model 402 is a model generated through the learning using the first image data 401 obtained during the previously executed ultrasound scan and the mask data (learning mask data) representing the area including the object in the first image data”); the third ultrasonic image data being based on a reception signal received by the ultrasonic probe and being intermediate image data that has not been subjected to processing including coordinate transformation (corresponding disclosure in at least [0074], where it is disclosed that the first image data (third ultrasonic image data) is data that has not been processed, including coordinate conversion (coordinate transformation): “the first image data 501 is image data obtained during the previously executed ultrasound scan on the object and is image data before the coordinate conversion corresponding to the format of the ultrasound scan”); wherein the learning model is machine-learned using learning data formed with a pair of: first ultrasonic image data based on a reception signal for image generation received by the ultrasonic probe (corresponding disclosure in at least Figure 4 and [0060], where there is a trained model (first ultrasonic image data, which goes into the learning model/estimation function): “The trained model 402 is a model generated through the learning using the first image data 401 obtained during the previously executed ultrasound scan and the mask data”); and second correct answer data obtained by performing coordinate transformation on first correct answer data determined by a user viewing second ultrasonic image data, the second correct answer data being obtained by performing processing including coordinate transformation on the first ultrasonic image data, the first correct answer data being determined by a user viewing the second ultrasonic image data (corresponding disclosure in at least [0058]-[0060] and Figure 4, where a second correct answer is determined through coordinate transformation (coordinate conversion) through viewing the second ultrasonic image, as denoted in Figure 4 of Osumi [figure reproduced in the original action]).

Osumi does not teach the use of performing inverse transformation. Zalev, in a similar field of endeavor, teaches a similar concept (image manipulation and conversion) of the application of an inverse function to a previously completed image transformation (corresponding disclosure in at least [0232], where inverse transformation is applied to the image frame that was previously transformed: “iii. (If an image transform was applied to frame_in, apply the inverse transform to the results of the previous calculation”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the combination above to include the inverse transformation.
One of ordinary skill in the art would have been motivated to incorporate this because doing so would allow for further processing of the diagnostic image, which is advantageous because the original function or image can then be observed after undergoing the other transformations.

The combined references do not teach wherein the hardware processor outputs the first inference result including a probability distribution image representing the probability of being a range of a structure, subjects the probability distribution image to coordinate transformation, and calculates a characteristic value of the structure from the probability distribution image after the coordinate transformation. Stergiopoulos, in a similar field of endeavor, teaches a similar concept (ultrasound diagnostics) of an ultrasonic probe that transmits and receives ultrasonic waves (corresponding disclosure in at least [0084], where there is an ultrasound probe for transmission/receiving: “This is done with four (4) transmit operations—transmitting from all transducer elements—each followed by a receive operation”); a hardware processor for outputting an inference result from an ultrasound image through a learning model (corresponding disclosure in at least [0143], where an inference result is taken (classified organ of interest) from the ultrasound image using a learning model (neural network): “A 3D ultrasound image is provided at 1001. The image resolution is adjusted at 1002. At 1003, low level features are extracted. At 1004, region specific neural networks are used to classify the data to detect organs of interest”); ultrasound image data taken as one input of the machine learning model (corresponding disclosure in at least [0108]: “Providing computer aided automated or semi-automated diagnosis using 3D Images from 3D Ultrasound, CT and/or MRI output data as input data”); and a transformed correct second answer data (corresponding disclosure in at least [0145], where an input is an input image of the region of interest with transformation completed: “centroid of sub-volumes are high-level features, and each extracted high-level feature in an input image is paired with its corresponding sub-volume's centroid in a reference organ's alignment. Once high-level features are extracted for each resolution, a projective transformation is calculated to map the centroids in an input image to their corresponding centroids within the reference organ's alignment”).

Further, Stergiopoulos also teaches wherein the hardware processor outputs the first inference result including a probability distribution image representing the probability of being a range of a structure (corresponding disclosure in at least [0150], where there is a probability provided of the result being in range of the structure (iterative steps of determining the likelihood of the structure): “Once registration is performed with a neural network having a given resolution, a Bayes classifier is applied to validate that a registered shape is actually the organ shape of interest. If the Bayes classifier validates the organ's shape, the registration result is passed to the next iteration at a next, finer resolution, and the iterative process continues. But if the Bayes classifier does not validate that the organ shape exists, the iterations relating to that object of interest and class stop; the branch of the iterative tree is pruned as a likelihood of the object of interest being present is too low”); and subjects the probability distribution image to coordinate transformation, and calculates a characteristic value of the structure from the probability distribution image after the coordinate transformation (corresponding disclosure in at least [0158], where the image undergoes a coordinate transformation (transformed into the reference coordinate): “Regions are first specified as multi-resolution divisions in the reference objects, {Φr,j ref}. It is important to correctly delineate multi-resolution regions in training objects, according to the specified regions in the reference object. Otherwise, it results in a huge training error. Thus, the training volumes are first transformed into the reference coordinate, {Φreg i }, and then, their multi-resolutional regions, {Φr,j reg i }, are outlined according to the reference object regions”, and further values are calculated from the image (affine transformation is calculated): “The process of finding transformations to register training volumes on the reference alignment relies upon a human operator to provide input data. First, a set of landmarks are manually specified on the reference image. Then, for each training image, corresponding landmarks are manually selected on the training image. Afterwards, an affine transformation is calculated which maps the corresponding landmarks to the reference ones”).

It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the processor to output an inference result with a probability distribution for the probability of a structure being in range, with a characteristic value calculated from the structure after coordinate transformation, as taught by Stergiopoulos. One of ordinary skill in the art would have been motivated to incorporate this because predicting the structure of interest using a probability distribution provides a more automated method of diagnosis and detection with high confidence.

Regarding Claim 17, the combined references of Osumi and Zalev teach the limitations of Claim 5, and further teach wherein inverse transformation ([0232] of Zalev) other than the coordinate transformation is not performed on the first correct answer data ([0074] of Osumi, where coordinate transformation (conversion) is performed and no other transformations are completed: “the first image data 501 is image data obtained during the previously executed ultrasound scan on the object and is image data before the coordinate conversion corresponding to the format of the ultrasound scan”).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Osumi (US 20200234461 A1) in view of Zalev (US 20140036091 A1) as applied to Claim 5, and further in view of Choi (US 20180330518 A1).

Regarding Claim 14, the combination of Osumi and Zalev teaches all the claimed limitations of Claim 6 and the second inference result (corresponding disclosure in at least [0063] of Osumi) but does not teach the binarizing. Choi, in a similar field of endeavor, teaches binarizing (corresponding disclosure in at least [0047], where a binarization probability algorithm is incorporated: “base unit 120 may use the probability map to segment the target region via a binarization process”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the binarization. One of ordinary skill in the art would have been motivated to incorporate this because binarization simplifies the data, making it easier to analyze.

Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Osumi (US 20200234461 A1), Zalev (US 20140036091 A1), and Stergiopoulos (US 20210251610 A1) as applied to Claim 5, and further in view of Toyonaga (US 20210219961 A1) and Kim (US 20140212014 A1).

Regarding Claim 18, the combined references of Osumi and Zalev teach the limitations of Claim 5, and further teach wherein the learning data is formed with a plurality of pairs of the first ultrasonic image data and each of a plurality of the second correct answer data (corresponding disclosure in at least [0073], where there are a plurality of ultrasonic image data (the trained model images, which are inputted) and second correct answer data (there are a plurality of data sets from which the second correct answer data would come): “wherein the learning data is formed with a plurality of pairs of the first ultrasonic image data and each of a plurality of the second correct answer data”); and the plurality of the second correct answer data is obtained by performing: generating a plurality of ultrasonic image data by processing a plurality of types of processes including coordinate transformation on the first ultrasonic image data (corresponding disclosure in at least [0098] and Figure 4, where there are a plurality of ultrasound image data, which undergo coordinate transformation: “Then, at Step S204, the coordinate conversion function 144 executes the coordinate conversion on the first image data 701 acquired by the acquisition function 141 to obtain second image data as post-coordinate conversion image data.”).

Osumi and Zalev do not teach merging the plurality of ultrasonic image data to generate the second ultrasonic image data, or processing a plurality of types of inverse transformation of coordinate transformation on the first correct answer data for the second ultrasonic image data. Toyonaga, in a similar field of endeavor, teaches a similar concept (ultrasound imaging) of merging the plurality of ultrasonic image data to generate the second ultrasonic image data (corresponding disclosure in at least [0042], where spatial compounding, or a merging of ultrasound images, is taught: “A set of ultrasound images for spatial compounding can be identified and selected from a larger set of ultrasound images… identifying and selecting a set of the ultrasound images can significantly decrease the amount of computation and time to perform spatial compounding to generate a compounded ultrasound image”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated merging of ultrasound images as taught by Toyonaga. One of ordinary skill in the art would have been motivated to incorporate this because averaging or compounding ultrasound images together provides a more robust ultrasound image.

The combined references noted above do not teach the plurality of types of inverse transformation of coordinate transformation on the first correct answer data for the second ultrasonic image data.
Kim, in a similar field of endeavor, teaches a similar concept (transforming medical images) of a plurality of different inverse transformations of coordinate transformation on a first correct answer (a position) (corresponding disclosure in at least [0077] and [0085], where there are two different transform functions completing either an inverse or a coordinate transformation: “The first coordinate transform unit 340 transforms or inverse transforms coordinates of the reference image 551 to coordinates of the second medical image 540 by using the first transform function”; “the second coordinate transform unit 430 transforms or inverse transforms coordinates of the reference image 551 to coordinates of the other first medical images 552, 553, and 554 by using the second transform function”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have a plurality of types of inverse transformation as taught by Kim. One of ordinary skill in the art would have been motivated to incorporate this because having multiple types of inverse transformation allows the user to choose the most suitable transformation for the dataset.

Regarding Claim 19, the combined references of Osumi and Zalev teach the limitations of Claim 5, including generating a plurality of ultrasonic image data by processing a plurality of types of processes including coordinate transformation on the first ultrasonic image data. Osumi and Zalev do not teach merging the plurality of ultrasonic image data to generate the second ultrasonic image data; processing a plurality of different inverse transformations of coordinate transformation on the first correct answer data for the second ultrasonic image data; or selecting an answer data among the plurality of the second correct answer data. Toyonaga, in a similar field of endeavor, teaches a similar concept (ultrasound imaging) of merging the plurality of ultrasonic image data to generate the second ultrasonic image data (corresponding disclosure in at least [0042], where spatial compounding, or a merging of ultrasound images, is taught: “A set of ultrasound images for spatial compounding can be identified and selected from a larger set of ultrasound images… identifying and selecting a set of the ultrasound images can significantly decrease the amount of computation and time to perform spatial compounding to generate a compounded ultrasound image”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated merging of ultrasound images as taught by Toyonaga. One of ordinary skill in the art would have been motivated to incorporate this because averaging or compounding ultrasound images together provides a more robust ultrasound image.

The combined references noted above do not teach processing a plurality of types of inverse transformation of coordinate transformation on the first correct answer data for the second ultrasonic image data to generate a plurality of the second correct answer data, or selecting an answer data among the plurality of the second correct answer data.
Kim, in a similar field of endeavor, teaches a similar concept (transforming medical images) of a plurality of types of inverse transformation of coordinate transformation on a first correct answer (a position) (corresponding disclosure in at least [0077] and [0085], where there are two different transform functions completing either an inverse or a coordinate transformation: “The first coordinate transform unit 340 transforms or inverse transforms coordinates of the reference image 551 to coordinates of the second medical image 540 by using the first transform function”; “the second coordinate transform unit 430 transforms or inverse transforms coordinates of the reference image 551 to coordinates of the other first medical images 552, 553, and 554 by using the second transform function”) and selecting an answer data among the plurality of the second correct answer data (corresponding disclosure in at least [0042], where ultrasound images are selected and identified among the data: “identifying and selecting a set of the ultrasound images can significantly decrease the amount of computation and time to perform spatial compounding to generate a compounded ultrasound image. Identifying and selecting a set of ultrasound images for spatial compounding can decrease the likelihood of incorrect registrations”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have a plurality of types of inverse transformation and to select an answer data (ultrasound data) as taught by Kim. One of ordinary skill in the art would have been motivated to incorporate this because having multiple types of inverse transformation allows the user to choose the most suitable transformation for the dataset, and selecting the data decreases the likelihood of artifacts in the images.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Osumi (US 20200234461 A1), Zalev (US 20140036091 A1), and Stergiopoulos (US 20210251610 A1) as applied to Claim 5, and further in view of Huang (US 20230086332 A1).

Regarding Claim 20, the combined references of Osumi and Zalev teach the limitations of Claim 5 and the hardware processor (corresponding disclosure in at least [0041] of Osumi), but do not teach applying a look-up table to the second inference result to change a luminance value distribution of the second inference result to a luminance value distribution that is easily viewable on a display screen. Huang, in a similar field of endeavor (ultrasound transducers), teaches applying a look-up table to the second inference result to change a luminance value distribution of the second inference result to a luminance value distribution that is easily viewable on a display screen (corresponding disclosure in at least [0059], where a look-up table is applied with scaling factors (luminance values) for ultrasound to better display the tissue: “A look-up table of scaling factors can also be established for different imaging settings and ultrasound transducers for different targeted tissues, which can enable real-time implementations”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated a look-up table of luminance values for better visibility on a display as taught by Huang. One of ordinary skill in the art would have been motivated to incorporate this because the user can adjust the scaling/luminance to better view the ultrasound image during ultrasound scans.
Response to Arguments

With regard to the claim objections, the updated claims filed 12/08/2025 have been fully considered, and the objections have been withdrawn. With regard to the 35 U.S.C. 112(b) rejection, the updated claims filed 12/08/2025 have been fully considered, but the rejection is maintained based on the new issues identified in the Office action above. Applicant's arguments filed 12/08/2025 with respect to the rejections under 35 U.S.C. 103 have been fully considered, but they are not persuasive. Regarding Claim 5, Applicant's arguments have been considered but are moot in view of the new grounds of rejection. With respect to Applicant's arguments to the remaining claims (see page 10, regarding claims 6-12, 14, and 17-20), these claims are not allowable based on their dependence from independent claim 5, for at least the reasons provided above.

Conclusion

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN KIM, whose telephone number is (571) 272-1821. The examiner can normally be reached Monday-Friday, 6-2 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anne Kozak, can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.E.K./
Examiner, Art Unit 3797

/SERKAN AKAR/
Primary Examiner, Art Unit 3797
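
The §103 dispute above centers on a data-flow idea: annotations are drawn on the scan-converted (Cartesian) B-mode display, then mapped back, via the inverse of the coordinate transformation, onto the pre-conversion beam/sample grid so they can pair with raw data for training. Purely as an illustration of that general idea, here is a minimal sketch; it is not the claimed method or any cited reference's implementation, and the sector geometry, function name, and parameters are invented for this example:

```python
# Illustrative sketch (not the claimed method): map an annotation mask drawn
# on a scan-converted Cartesian image back into (beam, sample) coordinates.
import numpy as np

def inverse_scan_convert(mask_xy, n_beams, n_samples, fov_deg=60.0, depth=1.0):
    """Sample a Cartesian annotation mask at each (beam angle, sample depth).

    mask_xy: 2D array over x in [-x_max, x_max] (cols) and z in [0, depth] (rows).
    Returns an (n_beams, n_samples) array of label values on the polar grid.
    """
    h, w = mask_xy.shape
    half_fov = np.deg2rad(fov_deg) / 2.0
    x_max = depth * np.sin(half_fov)
    thetas = np.linspace(-half_fov, half_fov, n_beams)  # beam steering angles
    radii = np.linspace(0.0, depth, n_samples)          # sample depths along a beam
    th, r = np.meshgrid(thetas, radii, indexing="ij")
    # Forward scan conversion maps (r, theta) -> (x, z); evaluating the
    # Cartesian mask at those points realizes the inverse mapping of the label.
    x = r * np.sin(th)
    z = r * np.cos(th)
    cols = np.clip(((x + x_max) / (2 * x_max) * (w - 1)).round().astype(int), 0, w - 1)
    rows = np.clip((z / depth * (h - 1)).round().astype(int), 0, h - 1)
    return mask_xy[rows, cols]

# Tiny usage example: a rectangular annotation on the 256x256 display image.
mask = np.zeros((256, 256))
mask[100:140, 110:150] = 1.0
polar_mask = inverse_scan_convert(mask, n_beams=128, n_samples=256)
print(polar_mask.shape)  # (128, 256): one label value per beam/sample pair
```

A production pipeline would typically use bilinear interpolation rather than nearest-neighbor rounding (compare the interpolation discussion for Claims 3 and 8 above), but the nearest-neighbor version keeps the geometry of the inverse mapping easy to follow.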

Prosecution Timeline

Jul 07, 2023: Application Filed
Mar 06, 2025: Non-Final Rejection — §101, §103, §112
Jun 11, 2025: Response after Non-Final Action
Jun 11, 2025: Response Filed
Jun 20, 2025: Response Filed
Sep 05, 2025: Final Rejection — §101, §103, §112
Dec 08, 2025: Request for Continued Examination
Dec 20, 2025: Response after Non-Final Action
Jan 06, 2026: Non-Final Rejection — §101, §103, §112 (current)

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 99% (+65.7%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
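
The dashboard does not disclose how the "With Interview" figure is combined from its inputs. One hedged guess that reproduces the displayed numbers is simple addition of the career allow rate and the interview lift, capped at 99%; treat this purely as an assumption, not the tool's actual model:

```python
# Assumption (not documented by the tool): with-interview probability is the
# base allow rate plus the interview lift, capped at 99%.
base = 0.58   # career allow rate (7/12 resolved cases)
lift = 0.657  # displayed interview lift
with_interview = min(base + lift, 0.99)
print(f"{with_interview:.0%}")  # 99%, matching the 'With Interview' card above
```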
