Prosecution Insights
Last updated: April 19, 2026
Application No. 18/827,652

ULTRASOUND DIAGNOSTIC APPARATUS AND CONTROL METHOD OF ULTRASOUND DIAGNOSTIC APPARATUS

Final Rejection — §102, §103
Filed: Sep 06, 2024
Examiner: ALDARRAJI, ZAINAB MOHAMMED
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Fujifilm Corporation
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 83%

Examiner Intelligence

Grants 67% — above average

Career Allow Rate: 67% (81 granted / 121 resolved; -3.1% vs TC avg)
Interview Lift: +16.1% (strong; allow rate across resolved cases with an interview vs without)
Typical Timeline: 3y 5m avg prosecution; 29 applications currently pending
Career History: 150 total applications across all art units
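
As a quick consistency check, the headline figures above fit together by simple arithmetic: 81/121 is about 66.9%, displayed as the 67% career allow rate, and the 83% with-interview grant probability sits 16.1 points above it, matching the stated lift. A minimal Python sketch (the 0.83 with-interview figure and the rounding convention are read off this page, not recomputed from per-case data):

```python
# Reproduce the dashboard's headline examiner stats from the page's own inputs.
granted, resolved = 81, 121            # career grants / resolved cases (from page)
career_allow = granted / resolved      # 0.6694... -> displayed as 67%

with_interview = 0.83                  # "With Interview" grant probability (from page)
lift = with_interview - career_allow   # interview lift, in percentage points

print(f"career allow rate: {career_allow:.1%}")  # 66.9%
print(f"interview lift:    {lift:+.1%}")         # +16.1%
```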

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 21.6% (-18.4% vs TC avg)

Comparison baseline is the Tech Center average estimate. Based on career data from 121 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The reply filed on 10/15/2025 has been entered. Claims 1-5, 9-11, and 13-20 remain pending in the current application. The amendments to the claims have overcome the 35 USC 112 rejections.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-5, 9-11, 13-17, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fujisawa et al. (US 2021/0093303).

Regarding claim 1, Fujisawa teaches an ultrasound diagnostic apparatus comprising (figure 1, para. 0024): an ultrasound probe (figure 1, element 30, para. 0024; The ultrasonic diagnostic apparatus 1 includes a main body 10 as a medical image processing apparatus, an ultrasonic probe 30 as a scanner); a position and posture sensing device configured to acquire position and posture information of the ultrasound probe (para. 0054; The type of the position sensor 40 attached to the ultrasonic probe 30 detects position data of itself, and outputs the position data to the main body 10. The position data of the position sensor 40 can also be regarded as the position data of the ultrasonic probe 30. The position data of the ultrasonic probe 30 includes a coordinate (X, Y, Z) of the ultrasonic probe 30 and a tilt angle (posture) from each axis); and a processor configured to (para. 0025; The main body 10 of the ultrasonic diagnostic apparatus 1 includes a transmission/reception (T/R) circuit 11, a B-mode processing circuit 12, a Doppler processing circuit 13, an image generating circuit 14, an image memory 15, a network interface 16, processing circuitry 17, a main memory 18, an input interface 19, and a display 20.): acquire an ultrasound image representing a tomogram of a subject by transmitting and receiving an ultrasound beam using the ultrasound probe (paras. 0026 and 0029; The T/R circuit 11 has a transmitting circuit and a receiving circuit (not shown). Under the control of the processing circuitry 17, the T/R circuit 11 controls transmission directivity and reception directivity in transmission and reception of ultrasonic waves. Under the control of the processing circuitry 17, the B-mode processing circuit 12 receives the echo data from the receiving circuit, performs logarithmic amplification, envelope detection processing and the like, thereby generate data (two-dimensional (2D) or three-dimensional (3D) data) which signal intensity is presented by brightness of luminance.); generate three-dimensional ultrasound image data of the subject based on the position and posture information of the ultrasound probe acquired by the position and posture sensing device and the ultrasound image (paras. 0040 and 0104; the acquiring function 171 stores the ultrasonic image data acquired in step ST1 in a three-dimensional arrangement in the 3D memory of the image memory 15 on the basis of the position data of the ultrasonic image data (step ST3).); extract three-dimensional structure information regarding a three-dimensional structure included in the three-dimensional ultrasound image data from the three-dimensional ultrasound image data (para. 0107; the deriving function 172 derives the organ shape of the entire target organ in the subject and the imaged organ region on the basis of the ultrasonic image data of one or multiple cross-sections arranged in the 3D memory of the image memory 15 (step ST5).); have a position estimation model trained in a position of the three-dimensional structure in the three-dimensional image data obtained by imaging the three-dimensional structure (paras. 0086 and 0107; the deriving function 172 derives the organ shape of the entire target organ in the subject and the imaged organ region on the basis of the ultrasonic image data of one or multiple cross-sections arranged in the 3D memory of the image memory 15 (step ST5). The deriving function 172 inputs a large number of training data and performs learning to sequentially update the parameter data Pa. The training data is made up of a set of multiple ultrasonic image data (e.g., arbitrary cross-section data forming volume data) S1, S2, S3, . . . as training input data and organ shapes T1, T2, T3, . . . corresponding to each arbitrary cross-section data. The multiple ultrasonic image data S1, S2, S3, . . . constitutes a training input data group Ba. The organ shapes T1, T2, T3, . . . constitutes a training output data group Ca. The examiner notes that the deriving unit extracts the position and shape of the structure from the three-dimensional images); and estimate a position of an occupied region of the three-dimensional ultrasound image data of the subject by the position estimation model based on the extracted three-dimensional structure information (paras. 0107-0108; the deriving function 172 derives the organ shape of the entire target organ in the subject and the imaged organ region on the basis of the ultrasonic image data of one or multiple cross-sections arranged in the 3D memory of the image memory 15 (step ST5). The determining function 173 determines a three-dimensional unimaged organ region on the basis of the partial organ shape included in the ultrasonic image data of one or multiple cross-sections arranged three-dimensionally in step ST3 and the entire organ shape derived in step ST5 (step ST6). The determining function 173 extracts an organ contour from ultrasonic image data of cross-sections arranged three-dimensionally, arranges the extracted organ contour in a three-dimensional manner, and collates the arranged one with a 3D model of the entire organ.), wherein the processor is configured to, upon determining that the position of the occupied region of the three-dimensional ultrasound data is not able to be estimated (paras. 0082, 0095, 0107, and 0130; Note that the data-missing due to insufficient imaging is described, but the present invention is not limited to this case. For example, the present invention can be applied to the case where there is an artifact on the ultrasonic image data and a part of the tissue to be visually recognized cannot be visually recognized even if it is not the data-missing. In this case, the artifact portion on the ultrasonic image data can be detected and set the portion as the data-missing area. The examiner notes that the unimaged target region is a region that the processor cannot identify based on the image data and position data, which can be due to an artifact. Thus, the processor is unable to locate the missing target.), synthesize the three-dimensional ultrasound image data of which the position is not able to be estimated with an occupied region that is adjacent to the three-dimensional ultrasound image data and of which a position has already been estimated (paras. 0130-0131; After the unimaged target region is determined, virtual ultrasonic image data may be displayed as a model assuming that the unimaged target region is imaged. The virtual ultrasonic image data may be generated by synthesizing pixel values at positions close to the unimaged target region in the already existing data area, or may be an MPR image generated from the volume data when the CT image volume data and the like already exist. The examiner notes that the processor is unable to estimate the position and locate the region due to an artifact in the image; therefore, the processor synthesizes the unimaged region with a region adjacent to it that was located and whose position was estimated.).

Regarding claim 2, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 1, wherein the processor is configured to: extract a two-dimensional tomographic image from the three-dimensional ultrasound image data (para. 0042; in order to generate various 2D image data so as to display the volume data stored in the 3D memory on the display 20, the image generating circuit 14 performs processing for displaying the volume data on the 2D display and processing for displaying the 3D data three-dimensionally, with respect to the volume data. The image generating circuit 14 performs the processing such as volume rendering (VR) processing, surface rendering (SR) processing, MIP (Maximum Intensity Projection) processing, MPR (Multi Planer Reconstruction) processing, etc.); reconstruct the three-dimensional ultrasound image data of the subject based on the extracted two-dimensional tomographic image and the position and posture information of the ultrasound probe acquired by the position and posture sensing device (para. 0078; the deriving function 172 arranges the ultrasonic image data acquired by the acquiring function 171 in the image memory 15 as the 3D memory on the basis of the position data acquired by the position sensor 40); and extract the three-dimensional structure information from the reconstructed three-dimensional ultrasound image data (para. 0077; the deriving function 172 has a function of deriving a target shape and an imaged target region in the subject from multiple ultrasonic image data and multiple position data acquired by the acquiring function 171. For example, the deriving function 172 includes a function of deriving an organ shape and the imaged organ region in the subject from the ultrasonic image data of multiple cross-sections and their position data.).

Regarding claim 3, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 1, further comprising a monitor (figure 1, element 20, para. 0025), and wherein the processor is configured to: display a three-dimensional schema image (para. 0108; Note that the 3D model may be generated by acquiring the organ contour from volume data such as 3D-CT image data acquired in advance from the same subject (same patient), or may be a 3D model showing a general organ shape.); and display, in an emphasized manner in the three-dimensional schema image, the occupied region of the three-dimensional ultrasound image data (para. 0108; The determining function 173 extracts an organ contour from ultrasonic image data of cross-sections arranged three-dimensionally, arranges the extracted organ contour in a three-dimensional manner, and collates the arranged one with a 3D model of the entire organ. Then, the determining function 173 determines, as an unimaged organ region, a region acquired by removing the organ contour included in the already existing data area from the organ contour of the 3D model. Note that the 3D model may be generated by acquiring the organ contour from volume data such as 3D-CT image data acquired in advance from the same subject (same patient), or may be a 3D model showing a general organ shape. The examiner notes that the occupied region is emphasized by contouring the structure in the three-dimensional image and overlaying it with a 3D model).

Regarding claim 4, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 2, further comprising a monitor (figure 1, element 20, para. 0025), and wherein the processor is configured to: display a three-dimensional schema image (para. 0108; Note that the 3D model may be generated by acquiring the organ contour from volume data such as 3D-CT image data acquired in advance from the same subject (same patient), or may be a 3D model showing a general organ shape.); and display, in an emphasized manner in the three-dimensional schema image, the occupied region of the three-dimensional ultrasound image data (para. 0108; The determining function 173 extracts an organ contour from ultrasonic image data of cross-sections arranged three-dimensionally, arranges the extracted organ contour in a three-dimensional manner, and collates the arranged one with a 3D model of the entire organ. Then, the determining function 173 determines, as an unimaged organ region, a region acquired by removing the organ contour included in the already existing data area from the organ contour of the 3D model. Note that the 3D model may be generated by acquiring the organ contour from volume data such as 3D-CT image data acquired in advance from the same subject (same patient), or may be a 3D model showing a general organ shape. The examiner notes that the occupied region is emphasized by contouring the structure in the three-dimensional image and overlaying it with a 3D model).

Regarding claim 5, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 3, wherein the processor is configured to sequentially synthesize the three-dimensional ultrasound image data based on the three-dimensional structure included in the three-dimensional ultrasound image data of which the position of the occupied region is estimated (para. 0108; The determining function 173 extracts an organ contour from ultrasonic image data of cross-sections arranged three-dimensionally, arranges the extracted organ contour in a three-dimensional manner, and collates the arranged one with a 3D model of the entire organ.).

Regarding claim 9, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 3, wherein the processor is configured to specify a region that is not displayed in an emphasized manner and notify a user of the specified region (paras. 0108-0109; The determining function 173 extracts an organ contour from ultrasonic image data of cross-sections arranged three-dimensionally, arranges the extracted organ contour in a three-dimensional manner, and collates the arranged one with a 3D model of the entire organ. Then, the determining function 173 determines, as an unimaged organ region, a region acquired by removing the organ contour included in the already existing data area from the organ contour of the 3D model. Note that the 3D model may be generated by acquiring the organ contour from volume data such as 3D-CT image data acquired in advance from the same subject (same patient), or may be a 3D model showing a general organ shape. The determining function 173 determines whether or not there is an unimaged organ region (step ST7). If it is determined as “YES” in step ST7, that is, if it is determined that there is the unimaged organ region, the display control function 71 displays information regarding the unimaged organ region on the display 20 for the operator (step ST8). The information regarding the unimaged organ region may be the range of the ultrasonic beam determined to fill the unimaged organ region (display example (1) described below), or may include the body surface coordinate and posture (display examples (2) to (4) described later) of the ultrasonic probe 30 for imaging an unimaged organ region. Further, the display control function 71 displays the information for imaging the unimaged target region, thereby displaying the information regarding the unimaged target region, and/or three-dimensionally displays the unimaged target region.).

Regarding claim 10, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 4, wherein the processor is configured to specify a region that is not displayed in an emphasized manner and notify a user of the specified region (paras. 0108-0109; The determining function 173 extracts an organ contour from ultrasonic image data of cross-sections arranged three-dimensionally, arranges the extracted organ contour in a three-dimensional manner, and collates the arranged one with a 3D model of the entire organ. Then, the determining function 173 determines, as an unimaged organ region, a region acquired by removing the organ contour included in the already existing data area from the organ contour of the 3D model. Note that the 3D model may be generated by acquiring the organ contour from volume data such as 3D-CT image data acquired in advance from the same subject (same patient), or may be a 3D model showing a general organ shape. The determining function 173 determines whether or not there is an unimaged organ region (step ST7). If it is determined as “YES” in step ST7, that is, if it is determined that there is the unimaged organ region, the display control function 71 displays information regarding the unimaged organ region on the display 20 for the operator (step ST8). The information regarding the unimaged organ region may be the range of the ultrasonic beam determined to fill the unimaged organ region (display example (1) described below), or may include the body surface coordinate and posture (display examples (2) to (4) described later) of the ultrasonic probe 30 for imaging an unimaged organ region. Further, the display control function 71 displays the information for imaging the unimaged target region, thereby displaying the information regarding the unimaged target region, and/or three-dimensionally displays the unimaged target region.).

Regarding claim 11, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 5, wherein the processor is configured to specify a region that is not displayed in an emphasized manner and notify a user of the specified region (paras. 0108-0109; The determining function 173 extracts an organ contour from ultrasonic image data of cross-sections arranged three-dimensionally, arranges the extracted organ contour in a three-dimensional manner, and collates the arranged one with a 3D model of the entire organ. Then, the determining function 173 determines, as an unimaged organ region, a region acquired by removing the organ contour included in the already existing data area from the organ contour of the 3D model. Note that the 3D model may be generated by acquiring the organ contour from volume data such as 3D-CT image data acquired in advance from the same subject (same patient), or may be a 3D model showing a general organ shape. The determining function 173 determines whether or not there is an unimaged organ region (step ST7). If it is determined as “YES” in step ST7, that is, if it is determined that there is the unimaged organ region, the display control function 71 displays information regarding the unimaged organ region on the display 20 for the operator (step ST8). The information regarding the unimaged organ region may be the range of the ultrasonic beam determined to fill the unimaged organ region (display example (1) described below), or may include the body surface coordinate and posture (display examples (2) to (4) described later) of the ultrasonic probe 30 for imaging an unimaged organ region. Further, the display control function 71 displays the information for imaging the unimaged target region, thereby displaying the information regarding the unimaged target region, and/or three-dimensionally displays the unimaged target region.).

Regarding claim 13, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 9, wherein the processor is configured to present a reference ultrasound image to be observed in the specified region (para. 0109; The determining function 173 determines whether or not there is an unimaged organ region (step ST7). If it is determined as “YES” in step ST7, that is, if it is determined that there is the unimaged organ region, the display control function 71 displays information regarding the unimaged organ region on the display 20 for the operator (step ST8). The information regarding the unimaged organ region may be the range of the ultrasonic beam determined to fill the unimaged organ region (display example (1) described below), or may include the body surface coordinate and posture (display examples (2) to (4) described later) of the ultrasonic probe 30 for imaging an unimaged organ region. Further, the display control function 71 displays the information for imaging the unimaged target region, thereby displaying the information regarding the unimaged target region, and/or three-dimensionally displays the unimaged target region.).

Regarding claim 14, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 1, wherein the three-dimensional structure information includes a shape pattern of the three-dimensional structure (para. 0077; the deriving function 172 has a function of deriving a target shape and an imaged target region in the subject from multiple ultrasonic image data and multiple position data acquired by the acquiring function 171. For example, the deriving function 172 includes a function of deriving an organ shape and the imaged organ region in the subject from the ultrasonic image data of multiple cross-sections and their position data.).

Regarding claim 15, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 2, wherein the three-dimensional structure information includes a shape pattern of the three-dimensional structure (para. 0077; the deriving function 172 has a function of deriving a target shape and an imaged target region in the subject from multiple ultrasonic image data and multiple position data acquired by the acquiring function 171. For example, the deriving function 172 includes a function of deriving an organ shape and the imaged organ region in the subject from the ultrasonic image data of multiple cross-sections and their position data.).

Regarding claim 16, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 3, wherein the three-dimensional structure information includes a shape pattern of the three-dimensional structure (para. 0077; the deriving function 172 has a function of deriving a target shape and an imaged target region in the subject from multiple ultrasonic image data and multiple position data acquired by the acquiring function 171. For example, the deriving function 172 includes a function of deriving an organ shape and the imaged organ region in the subject from the ultrasonic image data of multiple cross-sections and their position data.).

Regarding claim 17, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 4, wherein the three-dimensional structure information includes a shape pattern of the three-dimensional structure (para. 0077; the deriving function 172 has a function of deriving a target shape and an imaged target region in the subject from multiple ultrasonic image data and multiple position data acquired by the acquiring function 171. For example, the deriving function 172 includes a function of deriving an organ shape and the imaged organ region in the subject from the ultrasonic image data of multiple cross-sections and their position data.).

Regarding claim 19, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 1, wherein the position and posture sensing device includes an inertial sensing device, a magnetic sensing device, or an optical sensing device (paras. 0054-0055; the posture of the ultrasonic probe 30 can be detected by a magnetic field transmitter (not shown) sequentially transmitting triaxial magnetic fields while the position sensor 40 sequentially receiving the magnetic fields.).

Regarding claim 20, Fujisawa teaches a control method of an ultrasound diagnostic apparatus, the control method comprising: acquiring position and posture information of an ultrasound probe (para. 0054; The type of the position sensor 40 attached to the ultrasonic probe 30 detects position data of itself, and outputs the position data to the main body 10. The position data of the position sensor 40 can also be regarded as the position data of the ultrasonic probe 30. The position data of the ultrasonic probe 30 includes a coordinate (X, Y, Z) of the ultrasonic probe 30 and a tilt angle (posture) from each axis); acquiring an ultrasound image representing a tomogram of a subject by transmitting and receiving an ultrasound beam using the ultrasound probe (paras. 0026 and 0029; The T/R circuit 11 has a transmitting circuit and a receiving circuit (not shown). Under the control of the processing circuitry 17, the T/R circuit 11 controls transmission directivity and reception directivity in transmission and reception of ultrasonic waves. Under the control of the processing circuitry 17, the B-mode processing circuit 12 receives the echo data from the receiving circuit, performs logarithmic amplification, envelope detection processing and the like, thereby generate data (two-dimensional (2D) or three-dimensional (3D) data) which signal intensity is presented by brightness of luminance.); generating three-dimensional ultrasound image data of the subject on the basis of the acquired position and posture information of the ultrasound probe and the acquired ultrasound image (paras. 0040 and 0104; the acquiring function 171 stores the ultrasonic image data acquired in step ST1 in a three-dimensional arrangement in the 3D memory of the image memory 15 on the basis of the position data of the ultrasonic image data (step ST3).); extracting three-dimensional structure information regarding a three-dimensional structure included in the three-dimensional ultrasound image data from the generated three-dimensional ultrasound image data (para. 0107; the deriving function 172 derives the organ shape of the entire target organ in the subject and the imaged organ region on the basis of the ultrasonic image data of one or multiple cross-sections arranged in the 3D memory of the image memory 15 (step ST5).); and estimating a position of a region occupied by the three-dimensional ultrasound image data of the subject by inputting the extracted three-dimensional structure information to a position estimation model trained in a position of the three-dimensional structure in the three-dimensional image data obtained by imaging the three-dimensional structure (paras. 0086 and 0107-0108; the deriving function 172 derives the organ shape of the entire target organ in the subject and the imaged organ region on the basis of the ultrasonic image data of one or multiple cross-sections arranged in the 3D memory of the image memory 15 (step ST5). The determining function 173 determines a three-dimensional unimaged organ region on the basis of the partial organ shape included in the ultrasonic image data of one or multiple cross-sections arranged three-dimensionally in step ST3 and the entire organ shape derived in step ST5 (step ST6). The determining function 173 extracts an organ contour from ultrasonic image data of cross-sections arranged three-dimensionally, arranges the extracted organ contour in a three-dimensional manner, and collates the arranged one with a 3D model of the entire organ. The deriving function 172 inputs a large number of training data and performs learning to sequentially update the parameter data Pa. The training data is made up of a set of multiple ultrasonic image data (e.g., arbitrary cross-section data forming volume data) S1, S2, S3, . . . as training input data and organ shapes T1, T2, T3, . . . corresponding to each arbitrary cross-section data. The multiple ultrasonic image data S1, S2, S3, . . . constitutes a training input data group Ba. The organ shapes T1, T2, T3, . . . constitutes a training output data group Ca. The examiner notes that the deriving unit extracts the position and shape of the structure from the three-dimensional images); and, upon determining that the position of the occupied region of the three-dimensional ultrasound data is not able to be estimated (paras. 0082, 0095, 0107, and 0130; Note that the data-missing due to insufficient imaging is described, but the present invention is not limited to this case. For example, the present invention can be applied to the case where there is an artifact on the ultrasonic image data and a part of the tissue to be visually recognized cannot be visually recognized even if it is not the data-missing. In this case, the artifact portion on the ultrasonic image data can be detected and set the portion as the data-missing area. The examiner notes that the unimaged target region is a region that the processor cannot identify based on the image data and position data, which can be due to an artifact. Thus, the processor is unable to locate the missing target.), synthesizing the three-dimensional ultrasound image data of which the position is not able to be estimated with an occupied region that is adjacent to the three-dimensional ultrasound image data and of which a position has already been estimated (paras. 0130-0131; After the unimaged target region is determined, virtual ultrasonic image data may be displayed as a model assuming that the unimaged target region is imaged. The virtual ultrasonic image data may be generated by synthesizing pixel values at positions close to the unimaged target region in the already existing data area, or may be an MPR image generated from the volume data when the CT image volume data and the like already exist. The examiner notes that the processor is unable to estimate the position and locate the region due to an artifact in the image; therefore, the processor synthesizes the unimaged region with a region adjacent to it that was located and whose position was estimated.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Fujisawa et al. (US 2021/0093303) in view of Yang et al. (US 2020/0229796).

Regarding claim 18, Fujisawa teaches the ultrasound diagnostic apparatus according to claim 14, but fails to explicitly teach wherein the three-dimensional structure information includes a brightness pattern of the three-dimensional structure. Yang, in the same field of endeavor, teaches that three-dimensional structure information includes a brightness pattern of the three-dimensional structure (paras. 0023-0024; 3D ultrasound can be used to measure aorta boundaries, such as estimate the AAA diameter perpendicular to the centering as well as the AAA volume. The systems and methods may perform 3D abdominal aorta segmentation based on a 3D vascular shape model and intensity model. The intensity model can also be defined by analyzing the ultrasound image brightness inside and outside the aorta structures.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the three-dimensional structure information of Fujisawa to incorporate the teaching of Yang to include a brightness pattern. Doing so would help in accurate segmentation of the structure based on an intensity model, as disclosed in Yang at para. 0023.

Response to Arguments

Applicant's arguments filed 10/15/2025 have been fully considered but they are not persuasive. The applicant argues that the unimaged target region is not the same as the three-dimensional ultrasound image of which the position is not able to be estimated. The examiner respectfully disagrees. Fujisawa teaches that a deriving function 172 performs a process of deriving a shape of the organ such as the liver in the subject from the ultrasonic image data of cross-sections arranged three-dimensionally and from position data, thereby deriving the organ shape and the imaged organ region. Then, the determining function 173, which will be described later, determines a region excluding the derived imaged organ region from the entire derived liver region R1 as an unimaged target region. The unimaged target region is a portion of the entire target region that cannot be visualized based on the data-missing area where the data areas P are insufficient. The unimaged target region means a region that should be continuously imaged in the present examination. The determining function 173 determines a three-dimensional unimaged organ region on the basis of the partial organ shape included in the ultrasonic image data of one or multiple cross-sections arranged three-dimensionally in step ST3 and the entire organ shape derived in step ST5 (step ST6). Note that the data-missing due to insufficient imaging is described, but the present invention is not limited to this case. For example, the present invention can be applied to the case where there is an artifact on the ultrasonic image data and a part of the tissue to be visually recognized cannot be visually recognized even if it is not the data-missing. In this case, the artifact portion on the ultrasonic image data can be detected and set the portion as the data-missing area [see paragraphs 0082, 0107, and 0130].

The examiner notes that the processor determines the position of an occupied region in the three-dimensional ultrasound image, contours the organ, and then determines, as an unimaged target region, a region that cannot be contoured or localized within the three-dimensional image due to the organ being partially imaged or having an artifact. Hence, the unimaged target region is a region in the three-dimensional ultrasound image whose position cannot be estimated and which cannot be contoured due to missing data. Thus, Fujisawa teaches determining a region in the three-dimensional ultrasound image that cannot be contoured or located, and defining this region as the unimaged target region.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAINAB M ALDARRAJI, whose telephone number is (571) 272-8726. The examiner can normally be reached Monday-Thursday, 7AM-5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael Carey, can be reached at (571) 270-7235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZAINAB MOHAMMED ALDARRAJI/
Patent Examiner, Art Unit 3797

/MICHAEL J CAREY/
Supervisory Patent Examiner, Art Unit 3795
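
For readers mapping the claim language to the cited art, the core of rejected claim 1 is a concrete pipeline: tracked 2D B-mode frames are placed into a 3D voxel grid using the probe's position and posture, and structure extraction and position estimation then operate on that volume. The Python sketch below illustrates only the first step (frame-to-volume compounding, loosely analogous to the "3D memory" arrangement Fujisawa describes at paras. 0040 and 0104). It is a minimal illustration under stated assumptions, not the applicant's or Fujisawa's actual code; the function names, pose convention (ZYX Euler angles), and resolutions are all assumed for the example.

```python
# Illustrative sketch: place one tracked 2D ultrasound frame into a 3D voxel
# grid using the probe's position (X, Y, Z) and tilt angles (posture), the
# kind of data the position sensor supplies per Fujisawa para. 0054.
# All names, shapes, and the pose convention are assumptions, not the
# reference's actual implementation.
import numpy as np

def pose_matrix(position, tilt_angles):
    """Build a 4x4 rigid transform from probe position and per-axis tilt
    angles (radians; ZYX composition order is an assumption)."""
    rx, ry, rz = tilt_angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = position
    return T

def insert_frame(volume, frame, pose, mm_per_px, mm_per_voxel):
    """Scatter one 2D frame's pixels into the 3D volume (the '3D memory')."""
    h, w = frame.shape
    # Pixel coordinates in the frame plane (z = 0 in probe space), in mm,
    # lifted to homogeneous coordinates.
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel() * mm_per_px,
                    ys.ravel() * mm_per_px,
                    np.zeros(h * w),
                    np.ones(h * w)])
    world = (pose @ pts)[:3]                       # probe space -> world space
    idx = np.round(world / mm_per_voxel).astype(int)
    # Keep only samples that land inside the voxel grid.
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, ok], idx[1, ok], idx[2, ok]] = frame.ravel()[ok]

# Usage: a 64^3 voxel grid filled from one synthetic, slightly tilted frame.
vol = np.zeros((64, 64, 64), dtype=np.float32)
frame = np.random.rand(48, 48).astype(np.float32)   # stand-in B-mode frame
T = pose_matrix(position=(10.0, 12.0, 8.0), tilt_angles=(0.1, 0.0, 0.2))
insert_frame(vol, frame, T, mm_per_px=0.5, mm_per_voxel=0.5)
print("non-empty voxels:", np.count_nonzero(vol))
```

A real compounding system would interpolate rather than scatter nearest-voxel samples and would blend overlapping frames, but the rigid-transform bookkeeping shown here is the part the "position and posture information" claim language turns on.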

Prosecution Timeline

Sep 06, 2024
Application Filed
Jul 23, 2025
Non-Final Rejection — §102, §103
Aug 27, 2025
Interview Requested
Sep 04, 2025
Applicant Interview (Telephonic)
Sep 04, 2025
Examiner Interview Summary
Oct 15, 2025
Response Filed
Jan 22, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this examiner in similar technology

Patent 12599331
Hyperspectral Image-Guided Ocular Imager for Alzheimer's Disease Pathologies
2y 5m to grant; Granted Apr 14, 2026
Patent 12594038
ESTIMATION OF CONTACT FORCE OF CATHETER EXPANDABLE ASSEMBLY
2y 5m to grant; Granted Apr 07, 2026
Patent 12588887
MEDICAL DEVICE POSITION SENSING COMPONENTS
2y 5m to grant; Granted Mar 31, 2026
Patent 12582479
METHOD AND SYSTEM FOR AUTOMATIC PLANNING OF A MINIMALLY INVASIVE THERMAL ABLATION AND METHOD FOR TRAINING A NEURAL NETWORK
2y 5m to grant; Granted Mar 24, 2026
Patent 12569189
DEVICE, METHOD AND COMPUTER PROGRAM FOR DETERMINING SLEEP EVENT USING RADAR
2y 5m to grant; Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 83% (+16.1%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate

Based on 121 resolved cases by this examiner. Grant probability derived from career allow rate.
