Prosecution Insights
Last updated: April 19, 2026
Application No. 18/387,500

IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD FOR ENABLING ESTIMATION ACCURACY OF A REFERENCE CROSS-SECTION OF A THREE-DIMENSIONAL IMAGE

Status: Non-Final OA (§103)
Filed: Nov 07, 2023
Examiner: LE, MICHAEL
Art Unit: 2614
Tech Center: 2600 (Communications)
Assignee: Canon Medical Systems Corporation
OA Round: 3 (Non-Final)
Grant Probability: 66% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 66% (568 granted / 864 resolved; +3.7% vs TC avg), above average
Interview Lift: +22.1% on resolved cases with interview (strong)
Typical Timeline: 3y 3m avg prosecution; 61 currently pending
Career History: 925 total applications across all art units
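The panel figures above are simple ratios; a minimal sketch of how they could be derived (function and field names are illustrative, not from any real data source):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift in allowance when an interview was held."""
    return rate_with - rate_without

# Figures shown in the panel: 568 granted out of 864 resolved.
print(f"{allow_rate(568, 864):.0f}%")  # 66%
```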

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 52.7% (+12.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 864 resolved cases
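As a cross-check, the implied Tech Center averages follow directly from each displayed (rate, delta) pair; the dictionary below is just a transcription of the panel, not an API:

```python
# (examiner allowance %, delta vs Tech Center average) per statute, as displayed.
examiner = {"101": (12.4, -27.6), "103": (52.7, +12.7),
            "102": (13.4, -26.6), "112": (15.9, -24.1)}

# Implied TC average = examiner rate minus the displayed delta.
tc_avg = {s: rate - delta for s, (rate, delta) in examiner.items()}
print(round(tc_avg["103"], 1))  # 40.0
```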

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/23/2026 has been entered.

Response to Amendment

3. Applicant's amendments filed on 02/23/2026 have been entered. Claims 1, 8-13, 15-18, and 21 have been amended. Claim 4 has been canceled. Claims 1-3, 5-13, and 15-22 are pending in this application, with claims 1 and 21 being independent.

Response to Arguments

4. Applicant's arguments, see page 10, filed 02/23/2026, with respect to the claim objection have been fully considered and are persuasive. The amendments to the claims are sufficient to overcome the informalities of the previous claims; the objection is therefore withdrawn.

5. Applicant's arguments filed on 10/01/2025 with respect to the §103 rejection have been fully considered but are moot in view of the new grounds of rejection. Examiner notes that independent claims 1 and 21 have been amended to include new limitations. Examiner finds these limitations to be unpatentable, as detailed in the action below. In light of the current Office Action, the Examiner respectfully submits that independent claims 1 and 21 are rejected in view of the newly discovered reference to Sakaguchi et al. (US-2020/0375565-A1). On page 13 of Applicant's Remarks, the Applicant argues that the dependent claims are not taught by the prior art insomuch as they depend from claims that are not taught by the prior art.
Examiner respectfully disagrees with these arguments, for the reasons discussed below.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 1, 3, 5-13, 15-19 and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (“Lu”) [US-2009/0074280-A1] in view of Nitta et al. (“Nitta”) [US-2015/0185302-A1], further in view of Sakaguchi et al. (“Sakaguchi”) [US-2020/0375565-A1].

Regarding claim 1, Lu discloses an image processing apparatus comprising at least one memory and at least one processor (Lu- ¶0009, at least discloses a system is provided for detecting plane positions for standard planes of a multiplanar reconstruction of a heart volume. A memory is operable to store ultrasound data representing the heart volume. A processor is operable to calculate first planar features for each of a plurality of translated plane positions; Fig. 1 and ¶0017-0018, at least disclose a medical diagnostic imaging system 10 for detecting a plane position of a desired view […] The system 10 includes a processor 12, a memory 14, a display 16, and a transducer 18) which function as: an image acquiring unit configured to acquire a three-dimensional image of an object (Lu- Fig.
1 and ¶0020-0021, at least disclose The system 10 uses the transducer 18 to scan a volume […] Ultrasound data representing a volume is provided in response to the scanning. The ultrasound data is beamformed, detected, and/or scan converted. The ultrasound data may be in any format, such as polar coordinate, Cartesian coordinate, a three-dimensional grid, two-dimensional planes in Cartesian coordinate with polar coordinate spacing between planes, or other format); an initial parameter estimating unit configured to reduce a resolution of the three-dimensional image (Lu- ¶0009, at least discloses A processor is operable to calculate first planar features for each of a plurality of translated plane positions; ¶0027, at least discloses the processor 12 receives acquired ultrasound data during or after scanning and determines locations of one or more planes relative to the volume represented by the data; ¶0053, at least discloses Volumetric or planar features may be used with the 3D echocardiographic data; ¶0061, at least discloses The features are calculated from the echocardiographic data representing the volume […] features are calculated from the data at different resolutions. A volume pyramid is provided, such that the data set is down sampled to different resolutions. For example, one set of data has fine resolution, such as the scan resolution, and another set of data has a coarse resolution, such as the fine set decimated by ¼ in each dimension (i.e., down sample by a factor of 4)) and estimate, as a plurality of initial parameters, initial values to define a reference cross section (Lu- ¶0047, at least discloses a limited set of hypotheses may be used based on any desired criteria, such as relative expected positions of different planes [a plurality of initial parameters]. 
By training a series of detectors that estimate plane or pose parameters at a number of sequential stages, the number of calculations may be reduced; ¶0064, at least discloses the plane detector determines if a given sub-volume sample (data for a possible plane position) is positive or negative. Positive and negative samples correspond to correct and incorrect plane parameters (positions) [a plurality of initial parameters], respectively; ¶0072, at least discloses The plane position for the A4C view is used to limit the search region for fine or coarse plane parameter estimation. An initial position of another of the standard view planes is determined as a function of the position of the A4C view. The initial plane parameters (position, orientation, and scale) [a plurality of initial parameters] for other views (e.g., A2C, A3C, SAXB, SAXM, and SAXA) with respect to the A4C view are based on empirical statistics); a cross sectional image acquiring unit configured to acquire a plurality of cross sectional images including a first cross sectional image and a second cross sectional image based on the three-dimensional image and the plurality of initial parameters (Lu- Fig. 1 and ¶0017, at least disclose a medical diagnostic imaging system 10 for detecting a plane position of a desired view; Fig. 4 shows example medical images of standard echocardiographic views and represents the relative plane positions for the views (corresponds to a plurality of cross sectional images including a first cross sectional image and a second cross sectional image); ¶0042, at least disclose detection of a plane, such as a standard multi-planar reconstruction plane, from three-dimensional echocardiographic data; Fig. 2 and ¶0064, at least disclose In act 36, a position of a plane is detected. The position associated with the desired view is detected. For example, one or more standard view planes are detected as a function of the output of the classifiers [...] 
The plane detectors are discriminative classifiers trained on the 3D echocardiographic volumes. The plane detector determines if a given sub-volume sample (data for a possible plane position) is positive or negative. Positive and negative samples correspond to correct and incorrect plane parameters (positions) [a plurality of initial parameters], respectively; ¶0072, at least discloses The plane position for the A4C view is used to limit the search region for fine or coarse plane parameter estimation. An initial position of another of the standard view planes is determined as a function of the position of the A4C view. The initial plane parameters (position, orientation, and scale) [a plurality of initial parameters] for other views (e.g., A2C, A3C, SAXB, SAXM, and SAXA) with respect to the A4C view are based on empirical statistics); an indicator estimating unit configured to estimate a first indicator which is used to correct a first parameter of the plurality of initial parameters from the first cross sectional image (Lu- ¶0009, at least discloses a processor is operable to calculate first planar features for each of a plurality of translated plane positions, rule out hypotheses corresponding to the translated plane positions with a translation classifier; ¶0032, at least discloses The translation classifier outputs a probability [indicator] of a given possible plane position being the correct or desired view based on the feature values. If the probability is above a threshold, the associated hypothesis is maintained. If the probability is below a threshold, the associated hypothesis is ruled out and discarded from the pool of hypotheses; ¶0064, at least discloses The plane detector determines if a given sub-volume sample (data for a possible plane position) is positive or negative.
Positive and negative samples correspond to correct and incorrect plane parameters (positions), respectively); and a correcting unit configured to correct the first parameter of the plurality of initial parameters based on the first indicator estimated by the indicator estimating unit (Lu- ¶0009, at least discloses a processor is operable to calculate first planar features for each of a plurality of translated plane positions, rule out hypotheses corresponding to the translated plane positions with a translation classifier; ¶0032, at least discloses The translation classifier outputs a probability [indicator] of a given possible plane position being the correct or desired view based on the feature values [correct the initial parameter based on the indicator]. If the probability is above a threshold, the associated hypothesis is maintained. If the probability is below a threshold, the associated hypothesis is ruled out and discarded from the pool of hypotheses).

Lu does not explicitly disclose estimate, as a plurality of initial parameters, initial values to be corrected to define a reference cross section; estimate a second indicator which is used to correct a second parameter of the plurality of initial parameters from the second cross sectional image, wherein the first parameter is a parameter of the plurality of initial parameters which is used to acquire one other cross sectional image of the plurality of cross sectional images that is different from the first cross sectional image, and wherein the second parameter is a parameter of the plurality of initial parameters which is used to acquire one other cross sectional image of the plurality of cross sectional images that is different from the second cross sectional image; correct the second parameter of the plurality of initial parameters based on the second indicator estimated by the indicator estimating unit.
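The volume pyramid Lu ¶0061 describes (a fine data set decimated by ¼ in each dimension) can be sketched as follows; this is an illustrative stand-in, not Lu's implementation, with the volume represented as a nested list indexed [z][y][x]:

```python
def decimate(volume, factor=4):
    """Down-sample by keeping every `factor`-th sample along each axis."""
    return [[row[::factor] for row in plane[::factor]]
            for plane in volume[::factor]]

# Toy 8x8x8 "fine" volume with a recognizable voxel value pattern.
fine = [[[100 * z + 10 * y + x for x in range(8)]
         for y in range(8)] for z in range(8)]
coarse = decimate(fine)  # 8x8x8 -> 2x2x2 coarse level for initial detection
print(len(coarse), len(coarse[0]), len(coarse[0][0]))  # 2 2 2
```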
However, Nitta discloses estimate initial values to be corrected to define a reference cross section (Nitta- ¶0112, at least discloses the generating module 133 c may normally display a display image in which the initial value of the cross-sectional position of the second cross-sectional image is superimposed on the first cross-sectional image, and may display a display image in which the corrected cross-sectional position of the second cross-sectional image is superimposed on the first cross-sectional image only when it has been corrected. Thus, each time the first cross-sectional image is acquired in imaging scans, the correcting module 133 d detects the cross-sectional position of the second cross-sectional image before acquisition by using the acquired first cross-sectional image, and corrects the initial value of the cross-sectional position of the second cross-sectional image. Furthermore, when the initial value is corrected, the generating module 133 c generates a display image in which the cross-sectional position of the second cross-sectional image after the correction is superimposed on the first cross-sectional image as necessary); a parameter of the plurality of initial parameters which is used to acquire one other cross sectional image of the plurality of cross sectional images (Nitta- Figs. 
13A-13C show diagrams for explaining the correction of cross-sectional positions; ¶0037, at least discloses When a correction operation concerning the cross-sectional position by the operator is received on the display image, the MRI apparatus 100 corrects the cross-sectional position [parameter] of the reference cross-sectional image planned to be acquired (second cross-sectional image) and the cross-sectional positions of other reference cross-sectional images [cross sectional images] relevant thereto, and moves on to the acquisition of subsequent reference cross-sectional images [cross sectional images]; ¶0043, at least discloses The correcting module 133 d receives a correction operation concerning the cross-sectional position of the second cross-sectional image from the operator performed on the above-described display image via the input device 134, and corrects the cross-sectional position of the second cross-sectional image. The correcting module 133 d further overwrites the cross-sectional position of the second cross-sectional image stored in the storage 132 with the cross-sectional position after the correction; Fig. 6 and ¶0056, at least disclose The detecting module 133 a generates, based on the detected cross-sectional positions [parameters] of reference cross-sectional images, reference cross-sectional images from the volume data by multi-planar reconstruction (MPR) processing. As illustrated in FIG. 6, the detecting module 133 a further displays the detected feature regions of heart (such as rhombuses, triangles, and x marks in FIG. 6), and cross lines with the cross-sectional positions [parameters] of other reference cross-sectional images (such as solid lines and dotted lines in FIG. 6), superimposed on the generated respective reference cross-sectional images [cross sectional images]. 
Such a display is effective for checking the detection result by the operator, and the operator can perform the correction of cross-sectional positions on the display screen as appropriate; Figs. 13A-13C and ¶0097-0098, at least disclose as illustrated in FIG. 13B, the operator corrects the position and angle [parameters] of the aorta valve image (“H”) via a mouse or the like of the input device 134, for example. The correcting module 133 d then receives the correction operation concerning the cross-sectional position of the aorta valve image by the modification operation to modify at least one of the position and the angle [parameters] thereof, calculates the location parameters of the aorta valve image after the correction, and overwrites the location parameters of the aorta valve image stored in the storage 132. The correcting module 133 d can further correct, based on the spatial correlation of a plurality of cross-sectional positions, the cross-sectional position of the other second cross-sectional image relevant to the cross-sectional position corrected by the operator (for example, the reference cross-sectional image that is in a crossing relation), in conjunction with the correction […] The correction of cross-sectional positions is not limited to the above-described method. For example, when the feature region of heart is also displayed superimposed on the display image, the correcting module 133 d can receive the modification operation of the position of the feature region of heart displayed superimposed, and in accordance with the feature region after the correction, can automatically correct (recalculate again) all of the location parameters of the reference cross-sectional images that are planned to be acquired at later stages). 
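Nitta's correction propagation (¶0097-0098) amounts to updating one cross-section's location parameters and moving spatially related cross-sections in conjunction. A hedged sketch; the plane names, parameters, and shared-offset model are assumptions for illustration only:

```python
# Location parameters of two spatially related reference cross-sections.
planes = {"aorta_valve": {"pos": 10.0, "angle": 30.0},
          "four_chamber": {"pos": 12.0, "angle": 32.0}}

def apply_correction(planes, target, d_pos, d_angle, related):
    """Apply the operator's correction to the target plane and
    propagate the same offset to spatially related planes."""
    for name in [target, *related]:
        planes[name]["pos"] += d_pos
        planes[name]["angle"] += d_angle

apply_correction(planes, "aorta_valve", d_pos=1.5, d_angle=-2.0,
                 related=["four_chamber"])
print(planes["four_chamber"])  # {'pos': 13.5, 'angle': 30.0}
```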
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu to incorporate the teachings of Nitta, and apply the correction of all of the location parameters to the first indicator, as taught by Lu, in order to estimate, as a plurality of initial parameters, initial values to be corrected to define a reference cross section; and to estimate a first indicator which is used to correct a first parameter of the plurality of initial parameters from the first cross sectional image. Doing so would improve the accuracy of the automatically detected cross-sectional position.

Nitta further discloses a first indicator which is used to correct a first parameter of the plurality of initial parameters (Nitta- ¶0037, at least discloses When a correction operation [an indicator] concerning the cross-sectional position by the operator is received on the display image, the MRI apparatus 100 corrects the cross-sectional position of the reference cross-sectional image planned to be acquired (second cross-sectional image) and the cross-sectional positions of other reference cross-sectional images relevant thereto, and moves on to the acquisition of subsequent reference cross-sectional images; ¶0043, at least discloses The correcting module 133 d receives a correction operation [indicator] concerning the cross-sectional position of the second cross-sectional image from the operator performed on the above-described display image via the input device 134, and corrects the cross-sectional position of the second cross-sectional image; Fig. 3 and ¶0095-0096, at least disclose Step S108: returning to FIG.
3, the correcting module 133 d then determines whether a correction operation from the operator is received on the display image displayed at Step S107 […] Step S109: meanwhile, if the correction operation [indicator] is received (Yes at Step S108), the correcting module 133 d corrects the cross-sectional position of the second cross-sectional image, and overwrites the cross-sectional position of the cross-sectional image information concerning the second cross-sectional image stored in the storage 132).

The prior art does not explicitly disclose, but Sakaguchi discloses estimate a first indicator which is used to correct a first parameter of the plurality of initial parameters from the first cross sectional image, and to estimate a second indicator which is used to correct a second parameter of the plurality of initial parameters from the second cross sectional image, wherein the first parameter is a parameter of the plurality of initial parameters which is used to acquire one other cross sectional image of the plurality of cross sectional images that is different from the first cross sectional image, and wherein the second parameter is a parameter of the plurality of initial parameters which is used to acquire one other cross sectional image of the plurality of cross sectional images that is different from the second cross sectional image (Sakaguchi- Fig. 6 shows marker 50 (corresponds to a first indicator) which is used to set and change to different positions 61 to 67 (corresponds to parameters) in the blood vessel indicated in the CPR image and the SPR image. The first parameter is different from the second parameter.
The display 340 displays cross-sectional images (corresponds to the plurality of cross sectional images) of the blood vessel; ¶0040, at least discloses the X-ray CT apparatus 100 generates pieces of three-dimensional CT image data in a time series on the basis of the acquired projection data; ¶0052, at least discloses the calculating function 352 extracts pieces of blood vessel shape data in a time series indicating the shape of the blood vessel, from three-dimensional CT image data. For example, the calculating function 352 extracts the pieces of blood vessel shape data in the time series by reading → “extracts the pieces of blood vessel shape data in the time series” suggests “acquiring one other cross sectional image of the plurality of cross sectional images”; Fig. 6 and ¶0081, at least disclose the display controlling function 353 causes the display 340 to display cross-sectional images [cross sectional image] of the blood vessel, separately from the representative FFR values of the blood vessel branches. In the present example, the images illustrated in FIG. 6 are a CPR image, an SPR image, and short-axis cross-sectional images (images of cross-sections that are each orthogonal to the central line) that are generated from the CT image data by the controlling function 351 […] The short-axis cross-sectional images illustrated on the far right of FIG. 6 are cross-sections taken in positions 61 to 67 [a first parameter of the plurality of initial parameters and a second indicator which is used to correct a second parameter of the plurality of initial parameters] indicated in the CPR image and the SPR image; Fig.
11B and ¶0123-0125, at least disclose the display controlling function 353 displays “FFR: 0.7; ΔFFR: 0.1” together with the short-axis cross-sectional image corresponding to a position 61 [the first parameter] in the blood vessel, “FFR: 0.88; ΔFFR: 0.05” together with the short-axis cross-sectional image corresponding to a position 62 [the second parameter], and “FFR: 0.81; ΔFFR: 0.15” together with the short-axis cross-sectional image corresponding to a position 63 […] the display controlling function 353 is also able to display an FFR value and the supplementary information (a ΔFFR value and/or a percentage diameter stenosis value) together with a short-axis cross-sectional image in the position indicated by the marker 50 [indicator]. In that situation, in response to an operation to move the marker 50 realized via the input interface 330, the display controlling function 353 changes the display of the short-axis cross-sectional image, the FFR value, and the supplementary information in conjunction with the moving operation → moving or changing to different positions suggests correcting a first parameter or a second parameter; ¶0157, at least discloses the display controlling function 353 arranges the marker 50 [estimate an indicator] to be displayed in a display image and has received a position designating operation as a result of the operator performing an operation to move the marker 50. In that situation, when the marker 50 is positioned in one of the abovementioned predetermined sites, the display controlling function 353 arranges the index value exhibited in the position designated by the marker 50 to be in a non-display state; ¶0170, at least discloses in accordance with the positions of the marker 50, the display controlling function 353 is able to change the display modes of the marker 50 [estimate indicator which is used to correct a parameter] and/or the numerals indicating the index values. For example, as illustrated in FIG.
18B, the display controlling function 353 changes the display modes of the marker 50 used for receiving designating operations and the index values […] the display controlling function 353 is able to vary the shape and/or the color of the marker 50 between positions in which the pressure wire is able to perform the measuring process and positions in which the pressure wire is unable to perform the measuring process, but an FFR examination using a fluid analysis is able to perform the measuring process); and correct the first parameter of the plurality of initial parameters based on the first indicator estimated by the indicator estimating unit, and to correct the second parameter of the plurality of initial parameters based on the second indicator estimated by the indicator estimating unit (Sakaguchi- Fig. 6 shows marker 50 (corresponds to a first indicator) which is used to set and change to different positions 61 to 67 (corresponds to parameters) in the blood vessel indicated in the CPR image and the SPR image; Fig. 2 and ¶0056, at least disclose the calculating function 352 calculates, for example, the indices [the plurality of initial parameters] indicating the pressure, the blood flow rate, the blood flow speed, the vector, the shearing stress, and the like, for each of the predetermined positions along the central line from the boundary at the entrance to the boundary at the exit of the target region LAD; Fig 6 and ¶0081, at least disclose the short-axis cross-sectional images illustrated on the far right of FIG. 6 are cross-sections taken in positions 61 to 67 [the plurality of initial parameters] indicated in the CPR image and the SPR image; Fig. 
11B and ¶0123-0125, at least disclose the display controlling function 353 displays “FFR: 0.7; ΔFFR: 0.1” together with the short-axis cross-sectional image corresponding to a position 61 [the first parameter] in the blood vessel, “FFR: 0.88; ΔFFR: 0.05” together with the short-axis cross-sectional image corresponding to a position 62 [the second parameter], and “FFR: 0.81; ΔFFR: 0.15” together with the short-axis cross-sectional image corresponding to a position 63 […] the display controlling function 353 is also able to display an FFR value and the supplementary information (a ΔFFR value and/or a percentage diameter stenosis value) together with a short-axis cross-sectional image in the position indicated by the marker 50 [indicator]. In that situation, in response to an operation to move the marker 50 realized via the input interface 330, the display controlling function 353 changes the display of the short-axis cross-sectional image, the FFR value, and the supplementary information in conjunction with the moving operation → moving or changing to different positions suggests correcting a first parameter and a second parameter; ¶0157, at least discloses the display controlling function 353 arranges the marker 50 [indicator estimated] to be displayed in a display image and has received a position designating operation as a result of the operator performing an operation to move the marker 50. In that situation, when the marker 50 is positioned in one of the abovementioned predetermined sites, the display controlling function 353 arranges the index value exhibited in the position designated by the marker 50 to be in a non-display state; ¶0170, at least discloses in accordance with the positions of the marker 50, the display controlling function 353 is able to change the display modes of the marker 50 [correct the parameter of the plurality of initial parameters based on the indicator] and/or the numerals indicating the index values.
For example, as illustrated in FIG. 18B, the display controlling function 353 changes the display modes of the marker 50 used for receiving designating operations and the index values […] the display controlling function 353 is able to vary the shape and/or the color of the marker 50 between positions in which the pressure wire is able to perform the measuring process and positions in which the pressure wire is unable to perform the measuring process, but an FFR examination using a fluid analysis is able to perform the measuring process).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Nitta to incorporate the teachings of Sakaguchi, and apply the positions of the marker 50 to Lu/Nitta’s teachings, in order to estimate a first indicator which is used to correct a first parameter of the plurality of initial parameters from the first cross sectional image, and to estimate a second indicator which is used to correct a second parameter of the plurality of initial parameters from the second cross sectional image, wherein the first parameter is a parameter of the plurality of initial parameters which is used to acquire one other cross sectional image of the plurality of cross sectional images that is different from the first cross sectional image, and wherein the second parameter is a parameter of the plurality of initial parameters which is used to acquire one other cross sectional image of the plurality of cross sectional images that is different from the second cross sectional image; and to correct the first parameter of the plurality of initial parameters based on the first indicator estimated by the indicator estimating unit, and to correct the second parameter of the plurality of initial parameters based on the second indicator. Doing so would improve the efficiency of diagnosing processes related to blood flow.
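The probability-threshold pruning quoted from Lu ¶0032 in the claim 1 analysis reduces, in essence, to the following; candidate names and probabilities are invented for illustration:

```python
# (candidate plane position, classifier probability of being the desired view)
hypotheses = [("plane_a", 0.91), ("plane_b", 0.12),
              ("plane_c", 0.67), ("plane_d", 0.05)]

def prune(hypotheses, threshold=0.5):
    """Keep hypotheses whose classifier probability clears the threshold;
    the rest are discarded from the pool."""
    return [(name, p) for name, p in hypotheses if p >= threshold]

print([name for name, _ in prune(hypotheses)])  # ['plane_a', 'plane_c']
```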
Regarding claim 3, Lu in view of Nitta and Sakaguchi discloses the image processing apparatus according to claim 1, and further discloses wherein a resolution of the plurality of cross sectional images acquired by the cross sectional image acquiring unit (see Claim 1 rejection for detailed analysis) is higher than a resolution of the three-dimensional image after reducing a resolution thereof (Lu- ¶0071, at least discloses An A4C detector is learned and applied at a coarse level in a low-resolution volume; ¶0073, at least discloses since the initial position limits the search space, higher resolution data may be used. At higher resolutions, a plane detector for more accurate parameter estimation trained for each plane is applied to search the best candidate only in a small neighborhood around their initial detection results. A different or the same A4C detector may be applied to the fine dataset to refine the A4C position; Nitta- ¶0020, at least discloses The memory stores processor-executable instructions that cause the processor to detect cross-sectional positions of a plurality of cross-sectional images to be acquired in an imaging scan from volume data). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Sakaguchi to incorporate the teachings of Nitta, and apply the detection of cross-sectional positions of a plurality of cross-sectional images to be acquired to Lu/Sakaguchi’s teachings, so that a resolution of the plurality of cross sectional images acquired by the cross sectional image acquiring unit is higher than a resolution of the three-dimensional image after reducing a resolution thereof. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim.
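The claim-3 relationship (detection on a reduced-resolution volume, cross-sections then sampled at full resolution) can be illustrated with an axis-aligned slice standing in for general MPR; the indices and the 4x decimation factor below are assumptions for illustration:

```python
# Toy 8x8x8 full-resolution volume, indexed [z][y][x].
fine = [[[100 * z + 10 * y + x for x in range(8)]
         for y in range(8)] for z in range(8)]

def slice_at(volume, z):
    """Extract the axial cross-section at index z, at full resolution."""
    return volume[z]

coarse_z = 1             # plane index found in a 4x-decimated (2x2x2) volume
fine_z = coarse_z * 4    # mapped back into the full-resolution volume
section = slice_at(fine, fine_z)
print(len(section), len(section[0]))  # 8 8
```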
Regarding claim 5, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 1, and further discloses wherein the initial parameter estimating unit is configured to estimate the plurality of initial parameters (see Claim 1 rejection for detailed analysis) based on the three-dimensional image reduced in the resolution (Lu- ¶0070, at least discloses to detect two or more (e.g., 6 standard) planes, a coarse-to-fine strategy is applied through a multi-scale hierarchy. A position of an apical four chamber view is detected with a down sampled set of the data (e.g., ¼ resolution). Because the target MPR planes have anatomic regularities with each other and with respect to the left ventricle (LV), an initial position of the possible plane positions for the other views is set based on the A4C plane position. An A4C detector is learned and applied at a coarse level in a low-resolution volume. Other views may be detected for the original or base position). Regarding claim 6, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 1, and further discloses wherein the at least one memory and the at least one processor (see Claim 1 rejection for detailed analysis) further function as: a learning model acquiring unit configured to acquire a learning model (Lu- ¶0031, at least discloses The machine learning process may operate to determine a desired subset or set of features to be used for a given classification task; ¶0061, at least discloses The machine learning may determine the determinative features. 
For each determinative feature, a data set at the corresponding resolution is provided), the learning model is at least any one of a learning model for initial parameter estimation created by learning processing using information including a set of a three-dimensional image for learning and a parameter for obtaining the reference cross section in the three-dimensional image for learning and a learning model for indicator estimation created by learning processing using information including a set of a cross sectional image for learning and an indicator a parameter for obtaining the reference cross section in the cross sectional image for learning (Lu- ¶0028, at least discloses the processor 12 performs machine learning and/or applies a machine-learnt algorithm. For application, the processor 12 calculates features for sequential classification. The detection algorithm implemented by the processor 12 searches through multiple hypotheses to identify the ones with high probabilities. Multiple hypotheses are maintained between algorithm stages. Each stage, such as a translation stage, an orientation stage, and a scale stage, quickly removes false hypotheses remaining from any earlier stages. The correct or remaining hypotheses propagate to the final stage. Only one hypothesis is selected as the final detection result or a plane position is detected from information for a combination of hypotheses (e.g., average of the remaining hypotheses after the final stage); ¶0031, at least discloses The machine learning process [learning model] may operate to determine a desired subset or set of features to be used for a given classification task; ¶0061, at least discloses The features are calculated from the echocardiographic data representing the volume […] features are calculated from the data at different resolutions. A volume pyramid is provided, such that the data set is down sampled to different resolutions […] The machine learning may determine the determinative features. 
For each determinative feature, a data set at the corresponding resolution is provided; Nitta- ¶0037, at least discloses upon starting of the execution of imaging scans, the MRI apparatus 100 acquires reference cross-sectional images in sequence in accordance with the set order of imaging. The MRI apparatus 100 further generates, each time a reference cross-sectional image (first cross-sectional image) is acquired, a display image in which the cross-sectional position (that has previously been detected automatically) of a reference cross-sectional image planned to be acquired at a later stage (second cross-sectional image) is displayed superimposed on the acquired reference cross-sectional image (first cross-sectional image) in accordance with the set combination, and displays the generated display image on the display 135. When a correction operation [an indicator] concerning the cross-sectional position by the operator is received on the display image, the MRI apparatus 100 corrects the cross-sectional position of the reference cross-sectional image planned to be acquired (second cross-sectional image) and the cross-sectional positions of other reference cross-sectional images relevant thereto, and moves on to the acquisition of subsequent reference cross-sectional images; ¶0043, at least discloses The correcting module 133 d receives a correction operation [indicator] concerning the cross-sectional position of the second cross-sectional image from the operator performed on the above-described display image via the input device 134, and corrects the cross-sectional position of the second cross-sectional image; Fig. 3 and ¶0095-0096, at least disclose Step S108: returning to FIG. 
3, the correcting module 133 d then determines whether a correction operation from the operator is received on the display image displayed at Step S107 […] Step S109: meanwhile, if the correction operation [indicator] is received (Yes at Step S108), the correcting module 133 d corrects the cross-sectional position of the second cross-sectional image, and overwrites the cross-sectional position of the cross-sectional image information concerning the second cross-sectional image stored in the storage 132), and at least any one of the initial parameter estimating unit and the indicator estimating unit performs the estimation based on the learning model (Lu- ¶0009, at least discloses A processor is operable to calculate first planar features for each of a plurality of translated plane positions). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Sakaguchi to incorporate the teachings of Nitta, and to apply the receiving of a correction operation into Lu/Sakaguchi’s teachings so that the learning model is at least any one of a learning model for initial parameter estimation created by learning processing using information including a set of a three-dimensional image for learning and a parameter for obtaining the reference cross section in the three-dimensional image for learning and a learning model for indicator estimation created by learning processing using information including a set of a cross sectional image for learning and an indicator a parameter for obtaining the reference cross section in the cross sectional image for learning. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. 
Regarding claim 7, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 1, and further discloses wherein the plurality of initial parameters are information related to a position or an attitude of the reference cross sections in the three-dimensional image (Lu- ¶0009, at least discloses a system is provided for detecting plane positions for standard planes of a multiplanar reconstruction of a heart volume […] A processor is operable to calculate first planar features for each of a plurality of translated plane positions, rule out hypotheses corresponding to the translated plane positions with a translation classifier and as a function of the first planar features, leaving first remaining hypotheses, to calculate second planar features for each of a plurality of rotated plane positions associated with the first remaining hypotheses; ¶0047, at least discloses a limited set of hypotheses may be used based on any desired criteria, such as relative expected positions of different planes. By training a series of detectors that estimate plane or pose parameters at a number of sequential stages, the number of calculations may be reduced. The stages are applied in the order of complexity as the parameter degrees of freedom increase (e.g., translation, then orientation, and then scale), but other orders may be used ("orientation" and "pose" are equated to "attitude")). 
Regarding claim 8, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 7, and further discloses wherein each of the first indicator and the second indicator estimated by the indicator estimating unit (see Claim 1 rejection for detailed analysis) is an indicator to be used in correction in an image plane of at least one of the plurality of cross sectional images (Lu- ¶0028, at least discloses The detection algorithm implemented by the processor 12 searches through multiple hypotheses to identify the ones with high probabilities. Multiple hypotheses are maintained between algorithm stages. Each stage, such as a translation stage, an orientation stage, and a scale stage, quickly removes false hypotheses remaining from any earlier stages. The correct or remaining hypotheses propagate to the final stage. Only one hypothesis is selected as the final detection result or a plane position is detected from information for a combination of hypotheses (e.g., average of the remaining hypotheses after the final stage); ¶0072, at least discloses The plane position for the A4C view is used to limit the search region for fine or coarse plane parameter estimation. An initial position of another of the standard view planes is determined as a function of the position of the A4C view. The initial plane parameters (position, orientation, and scale) for other views (e.g., A2C, A3C, SAXB, SAXM, and SAXA) with respect to the A4C view are based on empirical statistics. For example, the average relative position from the training data set is used. The initial position sets the search region. The possible plane positions may be limited in translation, rotation, and/or scale relative to the initial position; Nitta- ¶0037, at least discloses upon starting of the execution of imaging scans, the MRI apparatus 100 acquires reference cross-sectional images in sequence in accordance with the set order of imaging. 
The MRI apparatus 100 further generates, each time a reference cross-sectional image (first cross-sectional image) is acquired, a display image in which the cross-sectional position (that has previously been detected automatically) of a reference cross-sectional image planned to be acquired at a later stage (second cross-sectional image) is displayed superimposed on the acquired reference cross-sectional image (first cross-sectional image) in accordance with the set combination, and displays the generated display image on the display 135. When a correction operation [an indicator] concerning the cross-sectional position by the operator is received on the display image, the MRI apparatus 100 corrects the cross-sectional position of the reference cross-sectional image planned to be acquired (second cross-sectional image) and the cross-sectional positions of other reference cross-sectional images relevant thereto, and moves on to the acquisition of subsequent reference cross-sectional images; ¶0043, at least discloses The correcting module 133 d receives a correction operation [indicator] concerning the cross-sectional position of the second cross-sectional image from the operator performed on the above-described display image via the input device 134, and corrects the cross-sectional position of the second cross-sectional image; Sakaguchi- Fig. 6 shows marker 50 (corresponds to a first indicator) which is used to set and change to different positions 61 to 67 (corresponds to parameters) in the blood vessel indicated in the CPR image and the SPR image. The first parameter is different from the second parameter. The display 340 displays cross-sectional images (corresponds to the plurality of cross sectional images) of the blood vessel). 
Regarding claim 9, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 1, and further discloses wherein each of the first indicator and the second indicator estimated by the indicator estimating unit (see Claim 1 rejection for detailed analysis) is a correction amount for moving at least one of the plurality of cross sectional images to obtain at least one of the reference cross sections (Lu- ¶0028, at least discloses The detection algorithm implemented by the processor 12 searches through multiple hypotheses to identify the ones with high probabilities. Multiple hypotheses are maintained between algorithm stages. Each stage, such as a translation stage, an orientation stage, and a scale stage, quickly removes false hypotheses remaining from any earlier stages. The correct or remaining hypotheses propagate to the final stage. Only one hypothesis is selected as the final detection result or a plane position is detected from information for a combination of hypotheses (e.g., average of the remaining hypotheses after the final stage); ¶0072, at least discloses The plane position for the A4C view is used to limit the search region for fine or coarse plane parameter estimation. An initial position of another of the standard view planes is determined as a function of the position of the A4C view. The initial plane parameters (position, orientation, and scale) for other views (e.g., A2C, A3C, SAXB, SAXM, and SAXA) with respect to the A4C view are based on empirical statistics. For example, the average relative position from the training data set is used. The initial position sets the search region. The possible plane positions may be limited in translation, rotation, and/or scale relative to the initial position; Nitta- Fig. 
6 and ¶0056, at least disclose The detecting module 133 a generates, based on the detected cross-sectional positions of reference cross-sectional images, reference cross-sectional images from the volume data by multi-planar reconstruction (MPR) processing. As illustrated in FIG. 6, the detecting module 133 a further displays the detected feature regions of heart (such as rhombuses, triangles, and x marks in FIG. 6), and cross lines with the cross-sectional positions of other reference cross-sectional images (such as solid lines and dotted lines in FIG. 6), superimposed on the generated respective reference cross-sectional images [cross sectional images]. Such a display is effective for checking the detection result by the operator, and the operator can perform the correction of cross-sectional positions on the display screen as appropriate; Sakaguchi- Fig. 6 shows marker 50 (corresponds to a first indicator) which is used to set and change to different positions 61 to 67 (corresponds to parameters) in the blood vessel indicated in the CPR image and the SPR image. The first parameter is different from the second parameter. The display 340 displays cross-sectional images (corresponds to the plurality of cross sectional images) of the blood vessel). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu to incorporate the teachings of Nitta and Sakaguchi, and to apply the reference cross-sectional images and the plurality of cross sectional images into Lu’s teachings so that the indicator estimated by the indicator estimating unit is a correction amount for moving at least one of the plurality of cross sectional images to obtain at least one of the reference cross sections. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. 
Regarding claim 10, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 9, and further discloses wherein each of the first correction amount and the second correction amount is at least one of (i) an amount of displacement for moving a position of at least one of the plurality of cross sectional images to a target position of at least one of the reference cross sections and (ii) a rotation amount for rotating an attitude of at least one of the plurality of cross sectional images to a target attitude of at least one of the reference cross sections (Lu- ¶0028, at least discloses The detection algorithm implemented by the processor 12 searches through multiple hypotheses to identify the ones with high probabilities. Multiple hypotheses are maintained between algorithm stages. Each stage, such as a translation stage, an orientation stage, and a scale stage, quickly removes false hypotheses remaining from any earlier stages. The correct or remaining hypotheses propagate to the final stage. Only one hypothesis is selected as the final detection result or a plane position is detected from information for a combination of hypotheses (e.g., average of the remaining hypotheses after the final stage); ¶0072, at least discloses the plane position for the A4C view is used to limit the search region for fine or coarse plane parameter estimation. An initial position of another of the standard view planes is determined as a function of the position of the A4C view. The initial plane parameters (position, orientation, and scale) for other views (e.g., A2C, A3C, SAXB, SAXM, and SAXA) with respect to the A4C view are based on empirical statistics. For example, the average relative position from the training data set is used. The initial position sets the search region. The possible plane positions may be limited in translation, rotation, and/or scale relative to the initial position; Nitta- Fig. 
6 and ¶0056, at least disclose The detecting module 133 a generates, based on the detected cross-sectional positions of reference cross-sectional images, reference cross-sectional images from the volume data by multi-planar reconstruction (MPR) processing. As illustrated in FIG. 6, the detecting module 133 a further displays the detected feature regions of heart (such as rhombuses, triangles, and x marks in FIG. 6), and cross lines with the cross-sectional positions of other reference cross-sectional images (such as solid lines and dotted lines in FIG. 6), superimposed on the generated respective reference cross-sectional images [cross sectional images]. Such a display is effective for checking the detection result by the operator, and the operator can perform the correction of cross-sectional positions on the display screen as appropriate; Sakaguchi- Fig. 11B and ¶0123-0125, at least disclose the display controlling function 353 displays “FFR: 0.7; ΔFFR: 0.1” together with the short-axis cross-sectional image corresponding to a position 61 [the first parameter] in the blood vessel, “FFR: 0.88; ΔFFR: 0.05” together with the short-axis cross-sectional image corresponding to a position 62 [the second parameter], and “FFR: 0.81; ΔFFR: 0.15” together with the short-axis cross-sectional image corresponding to a position 63 […] the display controlling function 353 is also able to display an FFR value and the supplementary information (a ΔFFR value and/or a percentage diameter stenosis value) together with a short-axis cross-sectional image in the position indicated by the marker 50 [indicator]. 
In that situation, in response to an operation to move the marker 50 realized via the input interface 330, the display controlling function 353 changes the display of the short-axis cross-sectional image, the FFR value, and the supplementary information in conjunction with the moving operation; moving marker 50 at different positions suggests “an amount of displacement for moving a position of at least one of the plurality of cross sectional images to a target position of at least one of the reference cross sections”). Regarding claim 11, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 1, and further discloses wherein each of the first indicator and the second indicator estimated by the indicator estimating unit (see Claim 1 rejection for detailed analysis) is a parameter for obtaining the reference cross sections (Lu- ¶0072, at least discloses the plane position for the A4C view is used to limit the search region for fine or coarse plane parameter estimation. An initial position of another of the standard view planes is determined as a function of the position of the A4C view. The initial plane parameters (position, orientation, and scale) for other views (e.g., A2C, A3C, SAXB, SAXM, and SAXA) with respect to the A4C view are based on empirical statistics; Nitta- ¶0020, at least discloses The memory stores processor-executable instructions that cause the processor to detect cross-sectional positions of a plurality of cross-sectional images to be acquired in an imaging scan from volume data; Sakaguchi- Fig. 6 shows marker 50 (corresponds to a first indicator) which is used to set and change to different positions 61 to 67 (corresponds to parameters) in the blood vessel indicated in the CPR image and the SPR image. The first parameter is different from the second parameter. 
The display 340 displays cross-sectional images (corresponds to the plurality of cross sectional images) of the blood vessel), and the indicator estimating unit estimates a parameter for obtaining the reference cross sections based on the plurality of initial parameters (Lu- ¶0047, at least discloses a limited set of hypotheses may be used based on any desired criteria, such as relative expected positions of different planes. By training a series of detectors that estimate plane or pose parameters at a number of sequential stages, the number of calculations may be reduced; ¶0064, at least discloses the plane detector determines if a given sub-volume sample (data for a possible plane position) is positive or negative. Positive and negative samples correspond to correct and incorrect plane parameters (positions), respectively). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Nitta to incorporate the teachings of Sakaguchi, and to apply the marker at different positions into Lu/Nitta’s teachings so that each of the first indicator and the second indicator estimated by the indicator estimating unit is a parameter for obtaining the reference cross sections. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. Regarding claim 12, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 11, and further discloses wherein the parameter for obtaining the reference cross sections is at least one of an estimated value of a target position of at least one of the reference cross sections and an estimated value of an axis vector to be a target at least one of the reference cross section (Lu- ¶0066, at least discloses The different possible plane positions correspond to translation along different axes. 
Any step size or search strategy may be used, such as a coarse search with a fine search at the locations identified as likely in the coarse search; ¶0071, at least discloses the target MPR planes have anatomic regularities with each other). Regarding claim 13, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 11, and further discloses wherein the indicator estimating unit performs an estimation of positions of a plurality of prescribed landmarks on at least one of the cross sectional image and the second cross sectional image and estimates a parameter to be used to define the reference cross sections based on a result of the estimation of the positions (Lu- ¶0066, at least discloses The different possible plane positions correspond to translation along different axes. Any step size or search strategy may be used, such as a coarse search with a fine search at the locations identified as likely in the coarse search; Nitta- ¶0112, at least discloses the generating module 133 c may normally display a display image in which the initial value of the cross-sectional position of the second cross-sectional image is superimposed on the first cross-sectional image, and may display a display image in which the corrected cross-sectional position of the second cross-sectional image is superimposed on the first cross-sectional image only when it has been corrected. Thus, each time the first cross-sectional image is acquired in imaging scans, the correcting module 133 d detects the cross-sectional position of the second cross-sectional image before acquisition by using the acquired first cross-sectional image, and corrects the initial value of the cross-sectional position of the second cross-sectional image. 
Furthermore, when the initial value is corrected, the generating module 133 c generates a display image in which the cross-sectional position of the second cross-sectional image after the correction is superimposed on the first cross-sectional image as necessary; Sakaguchi- Fig. 6 shows marker 50 which is used to set and change to different positions 61 to 67 (positions of a plurality of prescribed landmarks) in the blood vessel indicated in the CPR image and the SPR image; ¶0082, at least discloses The input interface 330 is configured to receive a moving operation to move the marker 50. After that, the display controlling function 353 causes the FFR values corresponding to the positions of the marker 50 to be displayed in the upper left section of the display 340. In one example, the display controlling function 353 arranges the marker 50 to be positioned at the distal-side end of the target region when starting the display and causes the display 340 to display the FFR value exhibited at the distal-side end of the target region. After that, the input interface 330 receives a moving operation to move the marker 50 along the LAD. The display controlling function 353 then displays the FFR values corresponding to the positions of the marker 50 moved via the input interface 330, in conjunction with the moving of the marker 50). 
Regarding claim 15, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 1, and further discloses wherein the correcting unit determines whether or not to perform a correction of the initial parameters based on the indicator estimated by the indicator estimating unit, and performs the correction of the initial parameter based on a result of the determination (Lu- ¶0009, at least discloses a processor is operable to calculate first planar features for each of a plurality of translated plane positions, rule out hypotheses corresponding to the translated plane positions with a translation classifier; ¶0032, at least discloses The translation classifier outputs a probability [indicator] of a given possible plane position being the correct or desired view based on the feature values. If the probability is above a threshold, the associated hypothesis is maintained. If the probability is below a threshold, the associated hypothesis is ruled out and discarded from the pool of hypotheses [the indicator estimated]; ¶0040, at least discloses The processor 12 determines the plane position of one of the standard or other planes as a function of the remaining hypotheses. The detected view is a common or standard view (e.g., apical four chamber, apical two chamber, left parasternal, or sub-coastal), but other views may be recognized. The output of the classifier, such as the probabilistic boosting tree, is used to determine the plane position. The plane position associated with a highest probability is selected; ¶0074, at least discloses In act 38, an image is generated as a function of the detected plane position. Images are generated for each of the determined views. Data corresponding to the position of the plane is extracted from the volume. The data is used to generate an image for the view. For example, multi-planar reconstruction images are generated from the ultrasound data. The planes define the data to be used for imaging. 
Data associated with locations intersecting each plane or adjacent to each plane is used to generate a two-dimensional image. Data may be interpolated to provide spatial alignment to the plane, or a nearest neighbor selection may be used. The resulting images are generated as a function of the orientation of the multi-planar reconstruction and provide the desired views; Nitta- Fig. 6 and ¶0056, at least disclose The detecting module 133 a generates, based on the detected cross-sectional positions [parameters] of reference cross-sectional images, reference cross-sectional images from the volume data by multi-planar reconstruction (MPR) processing. As illustrated in FIG. 6, the detecting module 133 a further displays the detected feature regions of heart (such as rhombuses, triangles, and x marks in FIG. 6), and cross lines with the cross-sectional positions [parameters] of other reference cross-sectional images (such as solid lines and dotted lines in FIG. 6), superimposed on the generated respective reference cross-sectional images [cross sectional images]. Such a display is effective for checking the detection result by the operator, and the operator can perform the correction of cross-sectional positions on the display screen as appropriate; Sakaguchi- Fig. 6 shows marker 50 (corresponds to a first indicator) which is used to set and change to different positions 61 to 67 (corresponds to parameters) in the blood vessel indicated in the CPR image and the SPR image. The first parameter is different from the second parameter. The display 340 displays cross-sectional images (corresponds to the plurality of cross sectional images) of the blood vessel). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu to incorporate the teachings of Nitta and Sakaguchi, and to apply the detection of cross-sectional positions of reference cross-sectional images and the changing of the marker to different positions in the blood vessel into Lu’s teachings in order to determine whether or not to perform a correction of the plurality of initial parameters based on the indicators estimated by the indicator estimating unit, and to perform the correction of the plurality of initial parameters based on a result of the determination. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. Regarding claim 16, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 15, and further discloses wherein the correcting unit determines, for each indicator, whether or not to perform the correction (Nitta- ¶0095, at least discloses Step S108: returning to FIG. 3, the correcting module 133 d then determines whether a correction operation from the operator is received on the display image displayed at Step S107). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Sakaguchi to incorporate the teachings of Nitta, and to apply the determination of whether a correction operation from the operator is received into Lu/Sakaguchi’s teachings so that the correcting unit determines, for each indicator, whether or not to perform the correction. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. 
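The staged hypothesis pruning quoted above from Lu (¶0028 and ¶0032: each stage's classifier rules out hypotheses whose probability falls below a threshold, and per ¶0040 the highest-probability hypothesis after the final stage is selected) can likewise be sketched briefly. The stage classifiers, thresholds, and names below are hypothetical placeholders, not Lu's actual detectors:

```python
# Sketch of sequential hypothesis pruning (e.g., translation, then
# orientation, then scale stages). Hypothetical names; illustrates the
# scheme quoted from Lu ¶0028/¶0032/¶0040, not its implementation.

def prune_hypotheses(hypotheses, stages):
    """Apply (classifier, threshold) stages in order; each stage discards
    hypotheses whose probability falls below its threshold."""
    for classify, threshold in stages:
        hypotheses = [h for h in hypotheses if classify(h) >= threshold]
        if not hypotheses:
            break  # nothing left to propagate to later stages
    return hypotheses

def detect_position(hypotheses, stages, final_probability):
    """Select the single highest-probability hypothesis after the final stage."""
    survivors = prune_hypotheses(hypotheses, stages)
    return max(survivors, key=final_probability) if survivors else None
```

An averaging variant (the "combination of hypotheses" alternative in Lu ¶0028) would return the mean of the surviving positions instead of the arg-max.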
Regarding claim 17, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 16, and further discloses wherein the at least one memory and the at least one processor (see Claim 1 rejection for detailed analysis) further function as: a display processing unit configured to perform processing of displaying the three-dimensional image, the plurality of cross sectional images, and the indicators estimated by the indicator estimating unit (Lu- Fig. 1 and ¶0041, at least disclose the display 16 displays an image of the detected plane, such as an image of the detected standard plane (e.g., A4C). The data representing the volume is used for generating the image; ¶0032, at least discloses The translation classifier outputs a probability of a given possible plane position being the correct or desired view based on the feature values; Nitta- ¶0037, at least discloses When a correction operation concerning the cross-sectional position by the operator is received on the display image, the MRI apparatus 100 corrects the cross-sectional position of the reference cross-sectional image planned to be acquired (second cross-sectional image) and the cross-sectional positions of other reference cross-sectional images [cross sectional images] relevant thereto, and moves on to the acquisition of subsequent reference cross-sectional images [cross sectional images]), wherein the correcting unit is configured to acquire a selection result by a user with respect to an indicator among the indicators estimated by the indicator estimating unit based on which a correction of the plurality of initial parameters is to be performed, and performs the correction of the plurality of initial parameters based on the selection result (Lu- ¶0028, at least discloses Only one hypothesis is selected as the final detection result [acquire a selection result by a user] or a plane position is detected from information for a combination of hypotheses (e.g., average of the 
remaining hypotheses after the final stage); ¶0052, at least discloses Features that are relevant to the MPRs are extracted and learned in a machine algorithm based on the experts' annotations, resulting in a probabilistic model for MPRs; ¶0066, at least discloses features are calculated for different possible plane positions. The different possible plane positions correspond to translation along different axes. Any step size or search strategy may be used, such as a coarse search with a fine search at the locations identified as likely in the coarse search. The detector provides a probability for each possible position. The possible positions associated with sufficient probability are maintained in the hypotheses pool. Sufficient probability is determined by a threshold, by selecting the top X (where X is one or more) probabilities, or other test). Regarding claim 18, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 16, and further discloses wherein the at least one memory and the at least one processor (see Claim 1 rejection for detailed analysis) further function as: a display processing unit configured to perform processing of displaying the three-dimensional image, the plurality of cross sectional images, and the indicators estimated by the indicator estimating unit (Lu- Fig. 1 and ¶0041, at least disclose the display 16 displays an image of the detected plane, such as an image of the detected standard plane (e.g., A4C). 
The data representing the volume is used for generating the image; ¶0032, at least discloses The translation classifier outputs a probability of a given possible plane position being the correct or desired view based on the feature values; Nitta- ¶0037, at least discloses When a correction operation concerning the cross-sectional position by the operator is received on the display image, the MRI apparatus 100 corrects the cross-sectional position of the reference cross-sectional image planned to be acquired (second cross-sectional image) and the cross-sectional positions of other reference cross-sectional images [cross sectional images] relevant thereto, and moves on to the acquisition of subsequent reference cross-sectional images [cross sectional images]), wherein the correcting unit corrects, based on an instruction from a user, the indicators estimated by the indicator estimating unit and performs a correction of the plurality of initial parameters based on the corrected indicators (Lu- ¶0028, at least discloses the processor 12 calculates features for sequential classification. The detection algorithm implemented by the processor 12 searches through multiple hypotheses to identify the ones with high probabilities [indicators estimated]; ¶0032, at least discloses The translation classifier outputs a probability [indicator] of a given possible plane position being the correct or desired view based on the feature values. If the probability is above a threshold, the associated hypothesis is maintained If the probability is below a threshold, the associated hypothesis is ruled out and discarded from the pool of hypotheses [the indicator estimated]; ¶0052, at least discloses Features that are relevant to the MPRs are extracted and learned in a machine algorithm based on the experts' annotations, resulting in a probabilistic model for MPRs; ¶0064, at least discloses In act 36, a position of a plane is detected. 
The position associated with the desired view is detected. For example, one or more standard view planes are detected as a function of the output of the classifiers […] The plane detector determines if a given sub-volume sample (data for a possible plane position) is positive or negative. Positive and negative samples correspond to correct and incorrect plane parameters (positions), respectively; ¶0066, at least discloses features are calculated for different possible plane positions. The different possible plane positions correspond to translation along different axes. Any step size or search strategy may be used, such as a coarse search with a fine search at the locations identified as likely in the coarse search. The detector provides a probability for each possible position. The possible positions associated with sufficient probability are maintained in the hypotheses pool. Sufficient probability is determined by a threshold, by selecting the top X (where X is one or more) probabilities, or other test). Regarding claim 19, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 1, and further discloses wherein the plurality of cross sectional images are made up of a plurality of images of which positions and attitudes among the images have a prescribed relationship (Lu- Fig. 4 shows medical images of standard echocardiographic views and represents the relative plane positions for the views; ¶0016, at least discloses An automated supervised learning method detects standard or other multiplanar reformatted planes (MPRs) from a 3D echocardiographic volume in order to achieve fast, accurate, and consistent MPR detection. 
For example, a computer detects six major or standard MPR planes: A4C-apical four-chamber plane; A2C-apical two chamber plane; A3C-apical three chamber plane; SAXB-short axis basal plane; SAXM-short axis middle plane; and SAXA-short axis apex plane; Nitta- ¶0037, at least discloses When a correction operation concerning the cross-sectional position by the operator is received on the display image, the MRI apparatus 100 corrects the cross-sectional position of the reference cross-sectional image planned to be acquired (second cross-sectional image) and the cross-sectional positions of other reference cross-sectional images [cross sectional images] relevant thereto, and moves on to the acquisition of subsequent reference cross-sectional images [cross sectional images]), the cross sectional image acquiring unit is configured to acquire a cross sectional image after correction from the three-dimensional image based on a sectional parameter corrected by the correcting unit (Lu- Fig. 4 shows medical images of standard echocardiographic views and represents the relative plane positions for the views; ¶0028, at least discloses The detection algorithm implemented by the processor 12 searches through multiple hypotheses to identify the ones with high probabilities. Multiple hypotheses are maintained between algorithm stages. 
Each stage, such as a translation stage, an orientation stage, and a scale stage, quickly removes false hypotheses remaining from any earlier stages. The correct or remaining hypotheses propagate to the final stage. Only one hypothesis is selected as the final detection result or a plane position is detected from information for a combination of hypotheses (e.g., average of the remaining hypotheses after the final stage); ¶0064, at least discloses In act 36, a position of a plane is detected. The position associated with the desired view is detected. For example, one or more standard view planes are detected as a function of the output of the classifiers. The features are used to determine the most likely position of the plane for the view. The plane detectors are discriminative classifiers trained on the 3D echocardiographic volumes. The plane detector determines if a given sub-volume sample (data for a possible plane position) is positive or negative. Positive and negative samples correspond to correct and incorrect plane parameters (positions), respectively.), and the indicator estimating unit further performs estimation of the indicators based on the cross sectional image after correction (Lu- Fig. 1 and ¶0017, at least disclose a medical diagnostic imaging system 10 for detecting a plane position of a desired view; ¶0028, at least discloses the processor 12 calculates features for sequential classification. The detection algorithm implemented by the processor 12 searches through multiple hypotheses to identify the ones with high probabilities [estimation of the indicators]). 
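The staged hypothesis search quoted from Lu (¶0028, ¶0066) — score candidate plane positions coarsely, keep only the hypotheses whose probability clears a threshold, then refine around the survivors — can be illustrated with a minimal one-dimensional sketch. The scoring function, search range, step sizes, and threshold are illustrative assumptions:

```python
# Hypothetical sketch of the staged, coarse-to-fine hypothesis search that
# Lu ¶0028/¶0066 describe: evaluate widely spaced candidate plane positions,
# keep the hypotheses whose probability clears a threshold, then search a
# fine neighbourhood around each survivor. One translation axis only; the
# scorer, ranges, steps, and threshold are illustrative assumptions.

def search_plane(score, lo=0.0, hi=10.0, coarse=1.0, fine=0.1, keep=0.6):
    # coarse pass: widely spaced candidates, thresholded into a hypothesis pool
    candidates = [lo + i * coarse for i in range(int((hi - lo) / coarse) + 1)]
    hypotheses = [p for p in candidates if score(p) >= keep]
    # fine pass: refine around each surviving hypothesis, keep the best
    best, best_p = None, -1.0
    for h in hypotheses:
        steps = int(2 * coarse / fine) + 1
        for j in range(steps):
            p = h - coarse + j * fine
            s = score(p)
            if s > best_p:
                best, best_p = p, s
    return best, best_p

def prob(p):
    # toy "classifier" output: probability peaks at position 4.2
    return max(0.0, 1.0 - abs(p - 4.2) / 3.0)

pos, p = search_plane(prob)  # converges near 4.2
```

In Lu the same keep-or-discard logic is applied stage by stage (translation, orientation, scale); the sketch collapses that to a single translation axis.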
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu to incorporate the teachings of Nitta and Sakaguchi, and to apply the plurality of cross sectional images to Lu’s teachings so that the plurality of cross sectional images are made up of a plurality of images of which positions and attitudes among the images have a prescribed relationship. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. Regarding claim 21, Lu in view of Nitta and Sakaguchi, discloses an image processing method (see Claim 1 rejection for detailed analysis), comprising: an image acquiring step of acquiring a three-dimensional image of an object (see Claim 1 rejection for detailed analysis); an estimating step of reducing a resolution of the three-dimensional image (see Claim 1 rejection for detailed analysis) and estimating, as a plurality of initial parameters, initial values to be corrected to define reference cross sections (see Claim 1 rejection for detailed analysis); a cross sectional image acquiring step of acquiring a plurality of cross sectional images including a first cross sectional image and a second cross sectional image based on the three-dimensional image and the plurality of initial parameters (see Claim 1 rejection for detailed analysis); an indicator estimating step of estimating a first indicator which is used to correct a first parameter of the plurality of initial parameters from the first cross sectional image, and estimating a second indicator which is used to correct a second parameter of the plurality of initial parameters from the second cross sectional image (see Claim 1 rejection for detailed analysis), wherein the first parameter is a parameter of the plurality of initial parameters which is used to acquire one other cross sectional image of the plurality of cross sectional images that is different from the first cross sectional 
image, and wherein the second parameter is a parameter of the plurality of initial parameters which is used to acquire one other cross sectional image of the plurality of cross sectional images that is different from the second cross sectional image (see Claim 1 rejection for detailed analysis); and a correcting step of correcting the first parameter of the plurality of initial parameters based on the first indicator estimated by the indicator estimating step (see Claim 1 rejection for detailed analysis), and correcting the second parameter of the plurality of initial parameters based on the second indicator estimated by the indicator estimating step (see Claim 1 rejection for detailed analysis). Regarding claim 22, Lu in view of Nitta and Sakaguchi, discloses a non-transitory computer-readable medium that stores a program for causing a computer to execute any one of a plurality of steps (Lu- ¶0008, at least discloses a computer readable storage medium has stored therein data representing instructions executable by a programmed processor for detecting standard view planes in a volume represented by three-dimensional echocardiographic data) in the image processing method according to claim 21. 8. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Nitta et al., further in view of Sakaguchi, still further in view of Aben et al. (“Aben”) [US-2021/0035290-A1]. Regarding claim 2, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 1, and further discloses wherein the cross sectional image acquiring unit is configured to acquire the plurality of cross sectional images in association with a plurality of sliced volume images (Lu- ¶0064, at least discloses In act 36, a position of a plane is detected. The position associated with the desired view is detected. 
For example, one or more standard view planes [a plurality of sliced volume images] are detected as a function of the output of the classifiers. The features are used to determine the most likely position of the plane for the view. The plane detectors are discriminative classifiers trained on the 3D echocardiographic volumes. The plane detector determines if a given sub-volume sample (data for a possible plane position) is positive or negative. Positive and negative samples correspond to correct and incorrect plane parameters (positions), respectively). The prior art does not clearly disclose each one of which has a prescribed sliced thickness based on the three-dimensional image and the initial parameter. However, Aben discloses each one of which has a prescribed sliced thickness based on the three-dimensional image and the relating initial parameters (Aben- ¶0120, at least discloses Multi planar reconstruction is a post-processing technique to create new 2D images of arbitrary thickness from a stack of images (3D volumetric dataset) in planes other than that of the original stack […] Since we are using a 3D volumetric dataset, only a slice within this 3D volumetric dataset is shown, the thickness of this slice can be defined and the 2D orthogonal view can be generated by for instance bi-linear interpolation. To allow a different location within this orthogonal view, the user can for instance scroll with the mouse wheel to allow the system to generate a different 2D orthogonal view (at a different depth within the 3D volumetric dataset)). 
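Aben's ¶0120 description of multiplanar reconstruction with a definable slice thickness amounts to forming one 2D image from a slab of adjacent slices of the 3D volumetric dataset. A minimal pure-Python sketch, assuming simple axial slab averaging (a real MPR would resample along an arbitrary oblique plane, e.g. with the bi-linear interpolation the quote mentions; all names are illustrative):

```python
# Hypothetical sketch of thick-slab multiplanar reconstruction in the spirit
# of Aben ¶0120: average adjacent slices of a 3D volume (nested lists,
# indexed [z][y][x]) into one 2D image of prescribed thickness. The simple
# axial averaging is an illustrative assumption, not Aben's implementation.

def mpr_slab(volume, depth, thickness):
    half = thickness // 2
    z0 = max(0, depth - half)                 # clamp slab to the volume
    z1 = min(len(volume), depth + half + 1)
    rows, cols = len(volume[0]), len(volume[0][0])
    return [
        [sum(volume[z][y][x] for z in range(z0, z1)) / (z1 - z0)
         for x in range(cols)]
        for y in range(rows)
    ]

# 4 slices of a 2x2 volume; slice z holds the constant value z
vol = [[[z, z], [z, z]] for z in range(4)]
img = mpr_slab(vol, depth=1, thickness=3)  # averages slices 0, 1, 2
```

Scrolling to "a different depth within the 3D volumetric dataset", as Aben describes, corresponds to calling the function with a different `depth`.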
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Nitta/Sakaguchi to incorporate the teachings of Aben, and to apply the thickness of slices to the three-dimensional image and the initial parameter, as taught by Lu/Nitta/Sakaguchi, in order to acquire the plurality of cross sectional images in association with a plurality of sliced volume images, each one of which has a prescribed sliced thickness based on the three-dimensional image and the relating initial parameter. Doing so would provide improved methods and devices for accurate study and characterization of blood flow patterns in cardiac valves with minimal operator interaction. 9. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Nitta et al., further in view of Sakaguchi, still further in view of Yoshibayashi et al. (“Yoshibayashi”) [US-2015/0070469-A1]. Regarding claim 20, Lu in view of Nitta and Sakaguchi, discloses the image processing apparatus according to claim 19, and further discloses wherein the indicator estimating unit estimates the indicators (see Claim 19 rejection for detailed analysis) based on a part of the plurality of images that make up the plurality of cross sectional images (Lu- Fig. 1 and ¶0017, at least disclose a medical diagnostic imaging system 10 for detecting a plane position of a desired view; ¶0028, at least discloses the processor 12 calculates features for sequential classification. The detection algorithm implemented by the processor 12 searches through multiple hypotheses to identify the ones with high probabilities [indicators]; ¶0031, at least discloses Haar wavelet-like features represent the difference between different portions of a region). The prior art does not clearly disclose that the indicator estimating unit sequentially switches, in a prescribed order, which an image among the plurality of images is to be used as a basis when estimating the indicators. 
However, Yoshibayashi discloses sequentially switching, in a prescribed order, which an image among the plurality of images is to be used as a basis (Yoshibayashi- ¶0036, at least discloses the three-dimensional coordinates of a predetermined landmark on the MRI device coordinate system are acquired by selecting the landmark on the image while displaying a cross section image of the three-dimensional medical image by switching a section position; ¶0061, at least discloses When, for example, D represents the interval between the correspondence cross section and its adjacent cross section, αD (where α>1) represents the interval between the adjacent cross section and its adjacent cross section (in a direction away from the correspondence cross section), thereby calculating the intervals to the subsequent cross sections in the same manner. Even if the number of cross sections is fixed, it is only necessary to calculate the value of D so that the set number of cross sections falls within the image generation range. This makes it possible to mainly display a portion which is around a true correspondence cross section at high probability; ¶0072, at least discloses the process returns to step S2025 after step S2060, and the processes in the subsequent steps are executed for a newly acquired ultrasonic image; ¶0080, at least discloses a plurality of cross section images within the image generation range set based on the error estimation value are displayed side by side. However, one cross section image may be generated at each time while changing a cross section position within the image generation range at each time, and the generated images may be switched and displayed). 
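The interval scheme Yoshibayashi describes in ¶0061 — a spacing D to the nearest neighbouring cross section, then αD (α > 1) to the next, and so on — places cross sections at geometrically growing offsets, so sampling is densest near the likely true correspondence cross section. A small worked sketch under those assumptions (the function name and the example values are illustrative):

```python
# Hypothetical sketch of Yoshibayashi ¶0061: cross sections are placed
# symmetrically around the estimated correspondence cross section at
# intervals D, alpha*D, alpha^2*D, ... (alpha > 1), so sampling is densest
# where the true cross section is most likely. Names/values are illustrative.

def section_offsets(d, alpha, count):
    # count cross sections on each side of the correspondence cross section
    offsets, step, pos = [0.0], d, 0.0
    for _ in range(count):
        pos += step                         # next cross-section position
        offsets = [-pos] + offsets + [pos]  # keep the list symmetric
        step *= alpha                       # widen the next interval
    return offsets

print(section_offsets(d=1.0, alpha=1.5, count=3))
# intervals 1.0, 1.5, 2.25 -> offsets 0, ±1.0, ±2.5, ±4.75
```

As the quote notes, with a fixed number of cross sections one would solve for D so that the outermost offsets still fall within the image generation range.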
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Nitta/Sakaguchi to incorporate the teachings of Yoshibayashi, and to apply the displaying of a cross section image of the three-dimensional medical image by switching a section position, and the switching and displaying of the generated images, to Lu/Nitta/Sakaguchi’s teachings so that the indicator estimating unit sequentially switches, in a prescribed order, which an image among the plurality of images is to be used as a basis when estimating the indicators. Doing so would enable correct diagnosis by comparing a plurality of types of medical images, such as medical images captured by a plurality of modalities or captured at different dates and times. Conclusion 10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. These references are recited in the attached PTO-892 form. 11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE, whose telephone number is (571) 272-5330. The examiner can normally be reached 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL LE/Primary Examiner, Art Unit 2614

Prosecution Timeline

Nov 07, 2023
Application Filed
Jun 28, 2025
Non-Final Rejection — §103
Oct 01, 2025
Response Filed
Dec 17, 2025
Final Rejection — §103
Feb 23, 2026
Response after Non-Final Action
Mar 12, 2026
Request for Continued Examination
Mar 15, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579211
AUTOMATED SHIFTING OF WEB PAGES BETWEEN DIFFERENT USER DEVICES
2y 5m to grant Granted Mar 17, 2026
Patent 12579738
INFORMATION PRESENTING METHOD, SYSTEM THEREOF, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12579072
GRAPHICS PROCESSOR REGISTER FILE INCLUDING A LOW ENERGY PORTION AND A HIGH CAPACITY PORTION
2y 5m to grant Granted Mar 17, 2026
Patent 12573094
COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12558788
SYSTEM AND METHOD FOR REAL-TIME ANIMATION INTERACTIVE EDITING
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
88%
With Interview (+22.1%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 864 resolved cases by this examiner. Grant probability derived from career allow rate.
