Prosecution Insights
Last updated: April 19, 2026
Application No. 18/827,769

ULTRASOUND DIAGNOSTIC APPARATUS, CONTROL METHOD OF ULTRASOUND DIAGNOSTIC APPARATUS, AND DISTANCE MEASUREMENT DEVICE

Final Rejection (§102, §103)
Filed: Sep 08, 2024
Examiner: SEBASTIAN, KAITLYN E
Art Unit: 3797
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: Fujifilm Corporation
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 1m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 73% (above average; 229 granted / 315 resolved; +2.7% vs TC avg)
Interview Lift: +20.7% (allow rate among resolved cases with an interview vs. without)
Avg Prosecution: 3y 1m
Currently Pending: 38
Total Applications: 353 (across all art units)

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 52.3% (+12.3% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 315 resolved cases.
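As a sanity check, the headline allow-rate figures above can be reproduced from the raw counts in the report (a minimal sketch; the per-interview case counts behind the +20.7% lift are not given, so only the career rate is derived here):

```python
# Recompute the examiner's career allow rate from the raw counts
# reported above (229 granted out of 315 resolved cases).
granted = 229
resolved = 315

allow_rate = granted / resolved * 100  # percent
print(f"Career allow rate: {allow_rate:.1f}%")  # ~72.7%, displayed rounded as 73%

# The stated +2.7% delta implies a Tech Center average of roughly 70%.
tc_avg = allow_rate - 2.7
print(f"Implied TC average: {tc_avg:.1f}%")
```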

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP 2022/036083, filed on 03/09/2022. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Acknowledgement of Amendment

The following office action is in response to the applicant’s amendment filed on 01/20/2026. Claims 1-20 are pending. Claims 1 and 18-20 are amended. Claims 1-20 are rejected under 35 U.S.C. 102/103 for the reasons stated in the Response to Arguments and 35 U.S.C. 102/103 sections below.

Response to Arguments

Applicant’s arguments, see Remarks page 8, filed 01/20/2026, with respect to the objections to the specification have been fully considered and are persuasive. The objections to the specification in the non-final rejection of 10/20/2025 have been withdrawn. Applicant’s arguments, see Remarks pages 8-10, filed 01/20/2026, with respect to the rejection of the claims under 35 U.S.C. 102 and 35 U.S.C. 103 have been fully considered and are persuasive.

Regarding claim 1, the examiner acknowledges that the claim has been amended to recite: “specify an examination position of a subject by an examiner based on first posture information of the examiner and second posture information of the subject, which are acquired by analyzing reflection signals of the examiner and the subject in a case where detection signals are transmitted from a sensor to the examiner and the subject, and associate the examination position of the subject and an ultrasound image of the subject with each other, the ultrasound image being acquired by the examiner; and memory configured to store the ultrasound image of the subject and the examination position”.
The examiner acknowledges that the amended claim 1 also specifies that the memory is configured to store the ultrasound image of the subject and the examination position. The examiner notes that claim 18 has been amended to recite similar features. The examiner recognizes that support for the above amendments can be found in paragraphs [0086] and [0092] of the published application (US 2024/423592). Furthermore, claim 19 also recites “a sensor” instead of “a distance measurement sensing device”. Amended claim 19 also specifies that the processor is configured to: analyze the reflection signals received by the sensor to acquire first posture information of the examiner and second posture information of the subject; and specify each of the examiner and the subject and specify an examination position of the subject, based on the first posture information and the second posture information. The examiner acknowledges that paragraphs [0086], [0088] and [0092] of the published application support these amendments.

The Applicant notes that Ohtake may describe calculating a spatial position of an ultrasound probe by using a magnetic sensor. However, the Applicant contends that Ohtake does not specify an examination position on a subject based on posture information of an examiner and the subject. Information acquired from the magnetic sensor is information of a position and a posture of the ultrasound probe. In this case, if a posture or a position of the subject is changed during an examination of the subject, the Applicant contends that a relative positional relationship between an arbitrary position on the subject and a position of the ultrasound probe is also changed and the examination position by the ultrasound probe cannot be calculated accurately.
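The Applicant's relative-position argument can be illustrated with a short geometric sketch (all coordinates are hypothetical, invented for the example): a probe contact point tracked only in world coordinates changes when the subject moves, while the same point expressed relative to the subject's tracked posture does not.

```python
# Hypothetical illustration of the Applicant's argument: a probe position
# tracked only in world coordinates (as with a magnetic sensor on the
# probe) shifts when the subject moves mid-exam, whereas the same contact
# point expressed relative to a tracked landmark on the subject does not.

def add(a, b):
    return tuple(ai + bi for ai, bi in zip(a, b))

def subtract(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

subject_ref = (0.0, 0.0)   # a tracked landmark on the subject (posture info)
probe_world = (0.3, 0.5)   # probe contact point in world coordinates

# Examination position expressed in the subject's own frame:
exam_pos_before = subtract(probe_world, subject_ref)

# The subject shifts 0.2 m along x during the exam; the probe stays on the
# same anatomical location, so it moves together with the subject.
shift = (0.2, 0.0)
subject_ref2 = add(subject_ref, shift)
probe_world2 = add(probe_world, shift)
exam_pos_after = subtract(probe_world2, subject_ref2)

print(probe_world2 == probe_world)        # world-frame position changed
print(exam_pos_after == exam_pos_before)  # subject-frame position unchanged
```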
According to the amended claims, because the examination position of a subject is specified based on the first posture information of the examiner and second posture information of the subject, even if a posture or a position of the subject is changed during an examination of the subject, a relative positional relationship between an arbitrary position on the subject and a position of the ultrasound probe is unchanged, and the examination position by the ultrasound probe can be calculated accurately. Therefore, the Applicant contends that the amended claims are distinguishable over Ohtake.

The examiner respectfully notes that Ohtake describes calculating a spatial position of an ultrasound probe by using a magnetic sensor. However, the examiner acknowledges that Ohtake does not specify an examination position on a subject based on posture information of an examiner and the subject. Therefore, the rejections of claims 1, 18 and 19 and their corresponding dependent claims (i.e. claims 2-17 and 20, respectively) have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Schwab US 2021/0045717 A1 “Schwab” as discussed in the 35 U.S.C. 102/103 sections below. The examiner notes that the amendments to claims 1, 18 and 19 required an updated search which resulted in a new primary reference (i.e. Schwab). These amendments were not previously presented in dependent claims and further limit the scope of the claims, thereby necessitating the new rejections presented below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-2, 11, and 15-20 is/are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Schwab US 2021/0045717 A1 “Schwab”.

Regarding claims 1, 18, and 19, Schwab teaches “An ultrasound diagnostic apparatus comprising:” (Claim 1) (“Turning now to the figures, FIG. 1 illustrates a block diagram of a system 100 according to one embodiment. In the illustrated embodiment, the system 100 is an imaging system and, more specifically, an ultrasound imaging system.” [0012]. The ultrasound imaging system (i.e. system 100) is included within the exam room 202 as shown in FIG. 2. Therefore, Schwab teaches an ultrasound diagnostic apparatus.); “A control method of an ultrasound diagnostic apparatus, comprising:” (Claim 18) (“In particular, methods and systems are provided for optical camera-aided ultrasound imaging system setup and exam control based on images acquired by the optical camera. The optical camera may capture images of an ultrasound exam room, including of the ultrasound imaging system, a patient, and a user of the ultrasound imaging system, such as illustrated in FIG. 2.
An electronic controller of the ultrasound imaging system may analyze the images to identify patient information (e.g., patient identity, patient body mass), user (e.g., sonographer) information, and ultrasound probe position/motion, for example, and build a corresponding spatial exam model in real-time, as illustrated in FIG. 3” [0011]; “Turning now to FIG. 4, a flow chart of an example method 400 for automating ultrasound exam setup and control via camera monitoring of the ultrasound exam is shown. The method 400 may be performed with an imaging system, such as the ultrasound imaging system 100 shown in FIGS. 1-3. More specifically, method 400 may be executed by a controller of the ultrasound imaging system (such as the system controller 116 shown in FIG. 1)” [0036]. Therefore, the method 400 shown in FIG. 4 represents a control method of an ultrasound diagnostic apparatus.); “A distance measurement device comprising:” (Claim 19) (See [0011] above. Therefore, since the optical camera (i.e. 130) in the system shown in FIG. 2 captures images of a patient and user such that a spatial exam model (i.e. indicating distance/relative position between objects in the ultrasound exam room) is built, the system shown in FIG. 2 includes a distance measurement device.); “a sensor configured to transmit detection signals and receive reflection signals with respect to an examiner and a subject” (Claim 19) (“As one example, the image analysis module may compare a real-time image received from the camera 130 to one stored in memory to identify the user, the patient, and the probe 106 within the ultrasound exam environment” [0022]. 
In this case, the camera 130 represents a sensor configured to transmit detection signals and receive reflection signals with respect to an examiner and a subject.); “a processor configured to: specify an examination position of a subject by an examiner based on first posture information of the examiner and second posture information of the subject, which are acquired by analyzing reflection signals of the examiner and the subject in a case where detection signals are transmitted from a sensor to the examiner and the subject” (Claim 1); “specifying an examination position of a subject by an examiner based on first posture information of the examiner and second posture information of the subject, which are acquired by analyzing reflection signals of the examiner and the subject in a case where detection signals are transmitted from a sensor to the examiner and the subject” (Claim 18); “a processor configured to: analyze the reflection signals received by the sensor to acquire first posture information of the examiner and second posture information of the subject; and specify each of the examiner and the subject and specify an examination position of the subject, based on the first posture information and the second posture information” (Claim 19) (“Specifically, FIG. 3 depicts how the system controller 116 may transform the ultrasound exam environment 200 shown in FIG. 2 into the spatial exam model 300. The spatial exam model 300 includes a user model 304, a probe model 306, and a patient model 308. As an example, the system controller 116 may use the image information received from camera 130 in combination with instructions stored in the optical image processing module 128 shown in FIG. 1 to identify the user 204, the selected probe 106a, and the patient 208 and build the user model 304, the probe model 306, and the patient model 308, respectively. In one example, the user model 304 and the patient model 308 may each include skeletal tracking. 
The skeletal tracking may identify various skeletal joints of a human subject (e.g., the user 204 or the patient 208), which may correspond to actual joints of the human subject, centroids of various anatomical structures, terminal ends of the human subject's extremities, and/or points without a direct anatomical link to the human subject, and map a virtual skeleton onto the human subject” [0031]; “The system controller 116 may further begin a scanning sequence in response to the spatial exam model 300 showing the user 204 positioning the probe 106a on the patient 208 (e.g., via skeletal tracking of the user model 304 and the patient model 308 and position tracking of the probe model 306)” [0033]. In this case, the camera 130 represents a sensor which is used to receive information about the user 204 and the patient 208 in order to build a user model 304 and a patient model 308, which together represent a spatial exam model 300. Since the user model 304 and a patient model 308 each include skeletal tracking, these models contain first posture information of the examiner (i.e. user) and second posture information of the subject, respectively. Therefore, the ultrasound diagnostic apparatus includes a processor (i.e. system controller 116) configured to: specify an examination position of a subject by an examiner (i.e. spatial exam model 300) based on first posture information of the examiner (i.e. user model 304) and second posture information of the subject (i.e. patient model 308), which are acquired by analyzing reflection signals of the examiner and the subject in a case where detection signals are transmitted from a sensor (i.e. camera 130) to the examiner and the subject. Furthermore, the method involves specifying an examination position of a subject by an examiner (i.e. spatial exam model 300) based on first posture information of the examiner (i.e. user model 304) and second posture information of the subject (i.e. 
patient model 308), which are acquired by analyzing reflection signals of the examiner and the subject in a case where detection signals are transmitted from a sensor (i.e. camera 130) to the examiner and the subject. Additionally, the distance measurement device includes a processor configured to: analyze the reflection signals received by the sensor (i.e. camera 130) to acquire first posture information of the examiner (i.e. user model 304) and second posture information of the subject (i.e. patient model 308); and specify each of the examiner (i.e. with the user model 304) and the subject (i.e. with the patient model 308) and specify an examination position (i.e. spatial exam model 300) of the subject, based on the first posture information and the second posture information (i.e. received by camera 130).). “associate(ing) the examination position of the subject and an ultrasound image of the subject with each other, the ultrasound image being acquired by the examiner” (Claims 1 and 18); “a memory configured to store the ultrasound image of the subject and the examination position” (Claim 1), “storing the ultrasound image of the subject and the examination position in a memory” (Claim 18) (See [0033] above and “Therefore, the system controller 116 may selectively tag acquired ultrasound images for storage based on the position of the selected probe 106a on the patient 208 (e.g., as determined via the probe model 306 and the patient model 308, respectively, within the spatial exam model 300) and/or a motion/gesture of the user 204 (e.g., as determined via the user model 304). Then, only the tagged images may be saved, or the tagged images may be saved to a different memory than images that do not contain the “save” tag” [0035]. Therefore, the processor is configured to perform the step of associating the examination position of the subject (i.e. 
spatial exam model 300) and an ultrasound image of the subject with each other, the ultrasound image being acquired by the examiner (i.e. through motion/gesture of the user 204 moving the probe 106). Furthermore, the ultrasound diagnostic apparatus includes a memory configured to store the ultrasound image of the subject (i.e. tagged images) and the examination position (i.e. spatial exam model 300). Furthermore, the method involves storing the ultrasound image of the subject (i.e. tagged images) and the examination position (i.e. spatial exam model 300) in a memory.).

Regarding claim 2, Schwab discloses all features of the claimed invention as discussed with respect to claim 1 above, and Schwab further teaches “further comprising: a monitor” (“Ultrasound images of the system 100 may be generated from the acquired data (at the controller 116) and displayed to the operator or user on the display device 118” [0018]. As shown in FIGS. 1 and 2, the ultrasound imaging system 100 includes the display device 118, which represents a monitor.); and “an ultrasound probe” (“In the illustrated embodiment, the ultrasound imaging system 100 includes a transmit beamformer 101 and transmitter 102 that drives an array of elements 104, for example, piezoelectric crystals, within a diagnostic ultrasound probe 106 (or transducer) to emit pulsed ultrasonic signals into a body or volume (not shown) of a subject. […] “The ultrasonic signals are back-scattered from structures in the body, for example, blood vessels and surrounding tissue, to produce echoes that return to the elements 104. The echoes are received by a receiver 108. The received echoes are provided to a receive beamformer 110 that performs beamforming and outputs a radio frequency (RF) signal” [0013]. Therefore, the apparatus further includes an ultrasound probe (i.e.
probe 106).); “wherein the processor is configured to: acquire the ultrasound image at the examination position of the subject by performing transmission and reception of an ultrasound beam using the ultrasound probe” (See [0013] as discussed in claim 2 above and [0033] as discussed with respect to claim 1 above. Therefore, the processor is configured to acquire the ultrasound image at the examination position of the subject by performing transmission and reception of an ultrasound beam using the ultrasound probe (i.e. 106).); and “display the ultrasound image on the monitor” (See [0018] above and “For example, the ultrasound image processing module 126 may process the ultrasound signals to generate slices or frames of ultrasound information (e.g., ultrasound images) for displaying to the operator” [0014]. Therefore, the processor is configured to display the ultrasound image on the monitor (i.e. display device 118).).

Regarding claim 11, Schwab discloses all features of the claimed invention as discussed with respect to claim 2 above, and Schwab further teaches “wherein the processor is configured to: set an ultrasound image acquisition condition corresponding to the examination position; and acquire the ultrasound image in accordance with the ultrasound image acquisition condition” (“In one embodiment, a method comprises acquiring images of an ultrasound exam via a camera, analyzing the acquired images in real-time to build a spatial exam model, and adjusting settings of the ultrasound exam in real-time based on the spatial exam model” [0003]; “Thus, settings for performing the ultrasound exam may be automatically adjusted based on the analyzed images received from the camera.
[…] By automatically adjusting the ultrasound exam settings based on the spatial exam model, a number of steps performed by the user prior to initiation of scanning during the ultrasound exam as well as during the scanning may be reduced, thereby saving time during the ultrasound exam process” [0004]; “The spatial exam model may be used to adjust settings of the ultrasound system prior to and during patient scanning, such as according to the example method of FIG. 4” [0011]; “At 422, the method includes adjusting the exam settings based on the spatial exam model. As one example, prior to scanning commencing, the controller may further adjust the ultrasound imaging system settings based on the approximate BMI of the patient determined via the patient model. For example, responsive to a high approximate BMI, the controller may adjust the settings to include pre-sets for obese patients. This may include, for example, adjusting a penetration depth of ultrasound waves produced by the selected ultrasound probe by adjusting the ultrasound frequency setting. As another example, the controller may adjust an amplitude and/or phase of the ultrasound waves produced by the selected ultrasound probe” [0050]. Therefore, since the setting for performing the ultrasound exam may be automatically adjusted based on the analyzed images received from the camera and/or based on the spatial exam model (i.e. 300) and the adjusted settings may be 1) the frequency setting (i.e. to adjust penetration depth of ultrasound waves), 2) amplitude, or 3) phase of the ultrasound waves, the processor is configured to set an ultrasound image acquisition condition corresponding to the examination position; and acquire the ultrasound image in accordance with the ultrasound image acquisition condition.). 
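The kind of setting adjustment Schwab [0050] describes for claim 11 can be sketched as a simple rule keyed off the patient model (the BMI threshold and preset values below are invented for illustration, not taken from Schwab):

```python
# Hypothetical sketch of adjusting ultrasound acquisition settings from a
# spatial exam model, in the spirit of Schwab [0050]. The threshold and
# preset values are invented for illustration only.

def select_acquisition_settings(approx_bmi: float) -> dict:
    """Pick frequency/depth presets from the patient model's approximate BMI.

    Lower frequencies penetrate deeper, so an obese-patient preset trades
    frequency for penetration depth.
    """
    if approx_bmi >= 30.0:  # obese-patient preset
        return {"frequency_mhz": 2.5, "depth_cm": 20.0}
    return {"frequency_mhz": 5.0, "depth_cm": 12.0}  # default preset

settings = select_acquisition_settings(approx_bmi=34.2)
print(settings)  # deeper penetration at a lower frequency
```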
Regarding claim 15, Schwab discloses all features of the claimed invention as discussed with respect to claim 11 above, and Schwab further teaches “wherein the processor is configured to select the ultrasound image acquisition condition corresponding to the specified examination position from among a plurality of the ultrasound image acquisition conditions preset according to a plurality of the examination positions” (See [0050] as discussed in claim 11 above and “As an example, the controller may determine a type of exam (such as prenatal, gynecological, etc.) based on the patient information and update the ultrasound imaging system settings according to a scan protocol associated with the type of exam and stored in memory, such as by beginning a specific scan workflow. As another example, the controller may additionally or alternatively determine an organ to be scanned during the ultrasound exam (such as ovaries or heart) based on the identified patient and adjust the ultrasound imaging system settings accordingly” [0042]. Therefore, since the controller may determine a type of exam (i.e. prenatal, gynecological, etc.) or an organ to be scanned (i.e. ovaries or heart) and adjust the ultrasound imaging system settings accordingly, the processor is configured to select the ultrasound image acquisition condition corresponding to the specified examination position (i.e. associated with the exam type/organ to be examined) from among a plurality of the ultrasound image acquisition conditions preset (i.e. scan protocols) according to a plurality of the examination positions (i.e. prenatal, gynecological, ovaries, heart, etc.).). 
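The "plurality of preset conditions" of claim 15, as mapped to Schwab's stored scan protocols, amounts to a lookup keyed by examination position or exam type; a minimal sketch (protocol names and values are hypothetical):

```python
# Hypothetical sketch of claim 15's preset selection: acquisition
# conditions preset per examination position, chosen by lookup in the
# spirit of Schwab's stored scan protocols [0042]. Values are invented.

SCAN_PROTOCOLS = {
    "heart":    {"mode": "B", "depth_cm": 15.0, "focus_cm": 8.0},
    "ovaries":  {"mode": "B", "depth_cm": 10.0, "focus_cm": 6.0},
    "prenatal": {"mode": "B", "depth_cm": 18.0, "focus_cm": 10.0},
}

GENERIC_PROTOCOL = {"mode": "B", "depth_cm": 12.0, "focus_cm": 7.0}

def protocol_for(examination_position: str) -> dict:
    # Fall back to a generic protocol if no preset exists for the position.
    return SCAN_PROTOCOLS.get(examination_position, GENERIC_PROTOCOL)

print(protocol_for("heart"))
```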
Regarding claims 16 and 17, Schwab discloses all features of the claimed invention as discussed with respect to claims 11 and 15 above, and Schwab further teaches “wherein the ultrasound image acquisition condition includes at least one of an ultrasound beam depth, a focus position, or image processing” (See [0050] as discussed with respect to claim 11 above. Therefore, since the penetration depth of the ultrasound waves can be adjusted depending on the approximate BMI of the patient, the ultrasound image acquisition condition includes at least one of an ultrasound beam depth, a focus position, or image processing, specifically ultrasound beam depth.). Regarding claim 20, Schwab discloses all features of the claimed invention as discussed with respect to claim 19 above, and Schwab further teaches “wherein the processor is configured to acquire the first posture information and the second posture information by using a machine learning model that has learned a reflection signal acquired by transmitting a detection signal to a human body by the sensor” (“The image analysis module of the optical image processing module 128 may access images/videos (e.g., an image library) stored in memory and analyze the images received from the camera 130 in real-time to identify one or more features within each of the received image. […] The image analysis module may further use a computer vision model or algorithm stored within a memory of the system controller 116, such as a biometric algorithm, to facially recognize the user (and, in some examples, the patient) in order to positively identify the user (and the patient). […] Further, the image analysis module may use conventional machine learning approaches, such as convolutional neural networks, to reduce pre-processing” [0022]. Therefore, since the image analysis module may access images/videos from the camera to identify one or more features within each received image and may use a computer vision model or algorithm (i.e. 
to facially recognize the user, or patient) and conventional machine learning approaches such as a convolutional neural network, the processor is configured to acquire the first posture information (i.e. used to generate the user model 304) and the second posture information (i.e. used to generate the patient model 308) by using a machine learning model that has learned a reflection signal acquired by transmitting a detection signal to a human body by the sensor.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 3-10, and 12-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schwab US 2021/0045717 A1 “Schwab” in view of Ohtake US 2005/0119569 A1 “Ohtake”.
Regarding claim 3, Schwab discloses all features of the claimed invention as discussed with respect to claim 2 above, however, Schwab does not teach “wherein the processor is configured to display the specified examination position on the monitor”. Ohtake is within a related field of endeavor to the claimed invention because it involves a medical ultrasound diagnosis apparatus which displays a reference image and a guidance display to provide probe operation support information (see [Abstract]). Ohtake further teaches “wherein the processor is configured to display the specified examination position on the monitor” (“FIG. 5 shows an example of a display screen 64. A living body image 66 and a reference image 68 are shown on the display screen 64. As described above, the reference image 68 includes a body mark 70, a recorded probe mark 73, and a current probe mark 72. […] The current probe mark 72 represents the current position and the current orientation of the probe and is displayed on the body mark 70 at a position based on the current coordinate data and with an orientation based on the current coordinate data. When the contact position of the probe is moved on the subject or the contact orientation of the probe is changed on the subject, the position or the orientation of the current probe mark is changed corresponding to the movement of [t]he probe. With such a structure, the user can change the contact position and contact orientation of the probe to match the current probe mark 72 to the recorded probe mark 73 to easily approximate or match the diagnosis part in the current diagnosis to the diagnosis part in the past diagnosis” [0062]. Therefore, the processor is configured to display the specified examination position (i.e. current probe mark 72 corresponding to the position of the probe as it moves) on the monitor.). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ultrasound diagnostic apparatus of Schwab such that the processor is configured to display the specified examination position (i.e. spatial exam model 300 including the user model 304, probe model 306 and patient model 308, see Schwab: [0031]) on the monitor as disclosed in Ohtake in order to allow a user to understand how the patient/user/probe were positioned when an ultrasound image was obtained such that the same configuration can be used in subsequent imaging (i.e. positioning can be replicated). Displaying the specified examination position on the monitor along with an ultrasound image is one of a finite number of techniques which can be used to allow a user to assess conditions at the time an ultrasound image was obtained with a reasonable expectation of success. Thus, modifying the ultrasound diagnostic apparatus of Schwab such that the processor is configured to display the specified examination position (i.e. spatial exam model 300 including the user model 304, probe model 306 and patient model 308, see Schwab: [0031]) on the monitor as disclosed in Ohtake would yield the predictable result of allowing a user to understand how the patient/user/probe were positioned when an ultrasound image was obtained such that the same configuration can be used in subsequent imaging.

Regarding claim 4, Schwab in view of Ohtake discloses all features of the claimed invention as discussed with respect to claim 3 above, and Ohtake further teaches “wherein the processor is configured to: generate a body mark indicating the specified examination position; and display the body mark on the monitor” (See [0062] as discussed in claim 3 above. Therefore, the processor is configured to: generate a body mark (i.e. body mark 70) indicating the specified examination position (i.e. corresponding to the current probe mark 72) and display the body mark (i.e. 70) on the monitor (i.e.
screen 64 in FIG. 5).). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ultrasound diagnostic apparatus of Schwab such that the processor is configured to generate a body mark indicating the specified examination position and display the body mark on the monitor as disclosed in Ohtake in order to allow a user to understand how the patient/user/probe were positioned when an ultrasound image was obtained such that the same configuration can be used in subsequent imaging (i.e. positioning can be replicated). Displaying the specified examination position in the form of a body mark on the monitor along with an ultrasound image is one of a finite number of techniques which can be used to allow a user to assess conditions at the time an ultrasound image was obtained with a reasonable expectation of success. Thus, modifying the ultrasound diagnostic apparatus of Schwab such that the processor is configured to generate a body mark indicating the specified examination position and display the body mark on the monitor as disclosed in Ohtake would yield the predictable result of allowing a user to understand how the patient/user/probe were positioned when an ultrasound image was obtained such that the same configuration can be used in subsequent imaging.
Regarding claim 5, Schwab in view of Ohtake discloses all features of the claimed invention as discussed with respect to claim 4 above, and Ohtake further teaches “wherein the processor is configured to correct a deviation of the examination position on the body mark caused by an individual difference in a physique of the subject” (“As a result, even when the position or the orientation of the patient lying on the bed differ between the past examination and the current examination, it is possible to provide, to the user, probe operation support information in the current examination according to the corrected coordinate system” [0045]; “As described above, the reference image 68 includes a body mark 70, a recorded probe mark 73, and a current probe mark 72. These marks are three-dimensional images having a perceived depth. The recorded probe mark 73 re-creates the position and the orientation of the probe in the past diagnosis and is displayed on the body mark 70 at a position based on the recorded coordinate data and with an orientation based on the recorded coordinate data […] With such a structure, the user can change the contact position and contact orientation of the probe to match the current probe mark 72 to the recorded probe mark 73 to easily approximate or match the diagnosis part in the current diagnosis to the diagnosis part in the past diagnosis” [0062]. Between a past diagnosis and a current diagnosis, the physique of a subject may change. For example, in an obstetrics ultrasound, the physique of the woman differs between the first trimester and the third trimester of pregnancy. Since the user can change the contact position and contact orientation of the probe such that the current probe mark 72 matches the recorded probe mark 73, the processor is configured to correct a deviation of the examination position on the body mark caused by an individual difference in a physique of the subject.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ultrasound diagnostic apparatus of Schwab such that the processor is configured to correct a deviation of the examination position on the body mark caused by an individual difference in a physique of the subject as disclosed in Ohtake in order to allow a user to obtain updated ultrasound images corresponding to the changing physique of a subject. Correcting a deviation in the examination position on the body mark in response to a change in physique of a subject is one of a finite number of techniques which can be used in order to alert a user in how to change the position of an ultrasound probe, for example, such that it can obtain an ultrasound image from a previously imaged region with a reasonable expectation of success. Thus, modifying the ultrasound diagnostic apparatus of Schwab such that the processor is configured to correct a deviation of the examination position on the body mark caused by an individual difference in a physique of the subject as disclosed in Ohtake would yield the predictable result of allowing a user to update the positioning of an ultrasound probe (i.e. corresponding to the changing physique of the subject) such that ultrasound images can be obtained from a previously imaged region within the subject.

Regarding claims 6 and 7, Schwab in view of Ohtake discloses all features of the claimed invention as discussed with respect to claims 4 and 5 above, and Ohtake further teaches “wherein the processor is configured to, upon that a freeze operation is performed by the examiner, automatically generate the body mark indicating the examination position and display the body mark on the monitor” (“As is known, when a user applies a freeze operation, transmission and reception of the ultrasound is terminated. At this point, the stored content in the cine-memory 28 is frozen.
When an ultrasound image is to be displayed in real time, it is possible to employ a configuration in which received data output from the signal processor unit 24 is temporarily stored in the cine-memory 28 and the received data is immediately read from the cine-memory 28” [0035]; “In other words, the present embodiment has an advantage that the body mark and the probe mark can be automatically generated and displayed using the coordinate data” [0037]. Therefore, the processor is configured to upon that a freeze operation is performed by the examiner (see [0035]), automatically generate the body mark (i.e. body mark 70 with the current probe mark 72) indicating the examination position and display the body mark on the monitor (i.e. screen 64, see FIG. 5).). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ultrasound diagnostic apparatus of Schwab such that the processor is configured to, upon that a freeze operation is performed by the examiner, automatically generate the body mark indicating the examination position and display the body mark on the monitor as disclosed in Ohtake, in order to allow a user to understand exactly how a patient/probe was positioned when an ultrasound image was obtained such that the same configuration can be used in subsequent imaging (i.e. positioning can be replicated). Displaying the specified examination position in the form of a body mark, upon performing a freeze operation, on the monitor along with an ultrasound image is one of a finite number of techniques which can be used to allow a user to assess conditions at the time an ultrasound image was obtained with a reasonable expectation of success. 
Thus, modifying the ultrasound diagnostic apparatus of Schwab such that the processor is configured to, upon that a freeze operation is performed by the examiner, automatically generate the body mark indicating the examination position and display the body mark on the monitor as disclosed in Ohtake would yield the predictable result of allowing a user to understand exactly how a patient/probe was positioned when an ultrasound image was obtained such that the same configuration can be used in subsequent imaging (i.e. positioning can be replicated).

Regarding claims 12-14, Schwab in view of Ohtake discloses all features of the claimed invention as discussed with respect to claims 3, 4 and 5 above, and Schwab further teaches “wherein the processor is configured to: set an ultrasound image acquisition condition corresponding to the examination position; and acquire the ultrasound image in accordance with the ultrasound image acquisition condition” (“In one embodiment, a method comprises acquiring images of an ultrasound exam via a camera, analyzing the acquired images in real-time to build a spatial exam model, and adjusting settings of the ultrasound exam in real-time based on the spatial exam model” [0003]; “Thus, settings for performing the ultrasound exam may be automatically adjusted based on the analyzed images received from the camera. […] By automatically adjusting the ultrasound exam settings based on the spatial exam model, a number of steps performed by the user prior to initiation of scanning during the ultrasound exam as well as during the scanning may be reduced, thereby saving time during the ultrasound exam process” [0004]; “The spatial exam model may be used to adjust settings of the ultrasound system prior to and during patient scanning, such as according to the example method of FIG. 4” [0011]; “At 422, the method includes adjusting the exam settings based on the spatial exam model.
As one example, prior to scanning commencing, the controller may further adjust the ultrasound imaging system settings based on the approximate BMI of the patient determined via the patient model. For example, responsive to a high approximate BMI, the controller may adjust the settings to include pre-sets for obese patients. This may include, for example, adjusting a penetration depth of ultrasound waves produced by the selected ultrasound probe by adjusting the ultrasound frequency setting. As another example, the controller may adjust an amplitude and/or phase of the ultrasound waves produced by the selected ultrasound probe” [0050]. Therefore, since the settings for performing the ultrasound exam may be automatically adjusted based on the analyzed images received from the camera and/or based on the spatial exam model (i.e. 300) and the adjusted settings may be 1) the frequency setting (i.e. to adjust penetration depth of ultrasound waves), 2) amplitude, or 3) phase of the ultrasound waves, the processor is configured to set an ultrasound image acquisition condition corresponding to the examination position; and acquire the ultrasound image in accordance with the ultrasound image acquisition condition.).

Claim(s) 8-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Schwab US 2021/0045717 A1 “Schwab” and Ohtake US 2005/0119569 A1 “Ohtake” as applied to claims 3-5 above, and further in view of Yoshiro WO 2020/008743 A1 “Yoshiro”.

Regarding claims 8-10, Schwab in view of Ohtake discloses all features of the claimed invention as discussed with respect to claims 3, 4 and 5 above; however, the combination does not teach “wherein the processor is configured to: perform a measurement on the subject at the examination position, and display a result of the measurement on the monitor”.
Yoshiro is within the same field of endeavor as the claimed invention because it involves an ultrasonic diagnostic apparatus with a display unit and a measuring unit, the measuring unit being configured to measure a measurement subject on the basis of a set measurement algorithm corresponding to a recognized measurement subject (see [Abstract]). Yoshiro teaches “wherein the processor is configured to: perform a measurement on the subject at the examination position, and display a result of the measurement on the monitor” (“The measurement position specification receiving unit 14 of the processor 22 receives the specification of the measurement position on the ultrasonic image displayed on the display unit 8 from the user via the operation unit 15. Here, the measurement position is an approximate position of the measurement target included in the ultrasonic image” [Page 4, Lines 18-21]; “The measurement target recognition unit 9 of the processor 22 performs image recognition on the ultrasonic image within the recognition range determined based on the measurement position received by the measurement position specification reception unit 14, and includes the ultrasonic image in the ultrasonic image. The measurement object to be measured. […] The measurement target may include a site to be measured, such as an organ, and a lesion site such as a tumor, a cyst, and a hemorrhage” [Page 4, Lines 24-31]; “The measurement algorithm setting unit 12 of the processor 22 sets a measurement algorithm for the measurement target recognized by the measurement target recognition unit 9. 
The measurement algorithm setting unit 12 previously stores measurement algorithms corresponding to a plurality of parts that can be a measurement target as an association table, and sets the measurement algorithm with reference to the association table when the measurement target is determined” [Page 5, Lines 4-8]; “The measurement unit 10 of the processor 22 measures the measurement target recognized by the measurement target recognition unit 9 based on the measurement algorithm set by the measurement algorithm setting unit 12, and displays the display unit 8 via the display control unit 7 [t]o display the measurement result. Here, the measurement result displayed by the measurement unit 10 on the display unit 8 may include the name of the measurement target, a measurement line used for measurement, a caliper, and the like, in addition to the measurement value for the measurement target” [Page 5, Lines 22-27]. Therefore, the measurement unit 10 of the processor 22 performs a measurement of the measurement target recognized by the measurement target recognition unit 9 based on the measurement algorithm set by the measurement algorithm setting unit and displays the corresponding measurement result on the display unit 8. Thus, the apparatus of Yoshiro includes a processor which is configured to: perform a measurement on the subject at the examination position, and display a result of the measurement on the monitor.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ultrasound diagnostic apparatus of Schwab in view of Ohtake such that the processor is configured to perform a measurement on the subject at the examination position and display a result of the measurement on the monitor as disclosed in Yoshiro in order to allow a user to be apprised of characteristics of the anatomical features included within the ultrasound image.
Performing a measurement on an anatomical feature included in an ultrasound image and displaying that measurement is one of a finite number of techniques which can be used to assess said anatomical feature with a reasonable expectation of success. Thus, modifying the ultrasound diagnostic apparatus of Schwab in view of Ohtake such that the processor is configured to perform a measurement on the subject at the examination position and display a result of the measurement on the monitor as disclosed in Yoshiro would yield the predictable result of allowing a user to assess/diagnose characteristics of an anatomical feature present within an ultrasound image obtained from a subject.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN E SEBASTIAN whose telephone number is (571)272-6190. The examiner can normally be reached Mon.- Fri. 7:30-4:30 (Alternate Fridays Off).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne M Kozak can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KAITLYN E SEBASTIAN/Examiner, Art Unit 3797

Prosecution Timeline

Sep 08, 2024
Application Filed
Oct 16, 2025
Non-Final Rejection — §102, §103
Dec 26, 2025
Interview Requested
Jan 06, 2026
Examiner Interview Summary
Jan 06, 2026
Applicant Interview (Telephonic)
Jan 20, 2026
Response Filed
Feb 11, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599359
ULTRASOUND DIAGNOSTIC APPARATUS AND METHOD OF CONTROLLING ULTRASOUND DIAGNOSTIC APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12594125
VISUALIZATION SYSTEM AND METHOD FOR ENT PROCEDURES
2y 5m to grant Granted Apr 07, 2026
Patent 12594052
METHOD AND DEVICE FOR LOCALIZING A VEIN WITHIN A LIMB
2y 5m to grant Granted Apr 07, 2026
Patent 12582385
SYSTEMS AND METHODS FOR ULTRASOUND IMAGING
2y 5m to grant Granted Mar 24, 2026
Patent 12575759
MEDICAL IMAGE DIAGNOSTIC APPARATUS, COUCH DEVICE, AND CONTROL METHOD
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
93%
With Interview (+20.7%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 315 resolved cases by this examiner. Grant probability derived from career allow rate.
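The projections above can be reproduced from the examiner's career data with simple ratios. The sketch below assumes the page derives its headline figures this way (the tool's actual model is not disclosed), using only the numbers shown: 229 grants out of 315 resolved cases, and a reported +20.7% interview lift.

```python
# Reproduce the headline figures from the examiner's career data shown above.
# Assumption: simple ratio plus additive interview lift; illustrative only.

granted = 229           # career grants
resolved = 315          # resolved cases
interview_lift = 0.207  # reported lift for cases with an interview

allow_rate = granted / resolved               # ~0.727, shown as 73%
with_interview = allow_rate + interview_lift  # ~0.934, shown as 93%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Grant probability with interview: {with_interview:.1%}")
```

This matches the 73% and 93% figures displayed, up to rounding.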
