Prosecution Insights
Last updated: April 19, 2026
Application No. 18/873,006

POINT OF CARE ULTRASOUND INTERFACE

Non-Final OA — §102, §103
Filed
Dec 09, 2024
Examiner
BYKHOVSKI, ALEXEI
Art Unit
3798
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
BFLY Operations, Inc.
OA Round
1 (Non-Final)
75%
Grant Probability
Favorable
1-2
OA Rounds
3y 2m
To Grant
99%
With Interview

Examiner Intelligence

Grants 75% — above average
75%
Career Allow Rate
261 granted / 346 resolved
+5.4% vs TC avg
Strong +29% interview lift
+28.7%
Interview Lift
[chart: allow rate, with vs. without interview, among resolved cases]
Typical timeline
3y 2m
Avg Prosecution
34 currently pending
Career history
380
Total Applications
across all art units
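
For readers who want to reproduce these headline figures, here is a minimal sketch of how a career allow rate and interview lift can be computed from per-case disposition records. The `Case` record and its fields are hypothetical — this is not the dashboard's actual pipeline — but the arithmetic matches the panel above (261 grants among 346 resolved ≈ 75%; lift = allow rate with interviews minus allow rate without).

```python
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool        # resolved by grant (True) or abandonment (False)
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases: list[Case]) -> float:
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[Case]) -> float:
    """Allow-rate gap between interviewed and non-interviewed cases."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data matching the headline numbers: 261 grants among 346 resolved cases.
resolved = [Case(granted=(i < 261), had_interview=False) for i in range(346)]
print(f"career allow rate: {allow_rate(resolved):.1%}")  # 75.4%, shown as 75%
```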

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§112: 23.6% (-16.4% vs TC avg)
TC averages are estimates • Based on career data from 346 resolved cases
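
A quick consistency check on these numbers: every displayed delta implies the same Tech Center baseline (7.1 + 32.9 = 13.2 + 26.8 = 51.5 − 11.5 = 23.6 + 16.4 = 40.0). The sketch below recomputes the deltas under that assumed uniform 40% baseline; the per-statute rates are from the panel above, while the single shared baseline is an inference, not something the page states.

```python
# Examiner's per-statute rates from the panel above, plus a TC baseline
# back-derived from the displayed deltas (all four imply 40.0%).
examiner_rate = {"§101": 0.071, "§102": 0.132, "§103": 0.515, "§112": 0.236}
TC_AVG = 0.400  # assumed uniform Tech Center average estimate

for statute, rate in examiner_rate.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```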

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 11, and 14-15 are objected to because of the following informalities:

In claim 1, lines 9, 15, and 18, the “a first preset… a search preset… a target preset” should read “a first preset of the presets… a search preset of the presets… a target preset of the presets” or “presets including a first preset, a search preset, and a target preset … the first preset… the search preset… the target preset”.

In claim 1, line 10; claim 14, line 7; and claim 15, line 8, “ultrasound data” should read “the ultrasound data”.

In claim 11, lines 2-3, “indicating the anatomical feature has been identified” should read “indicating the anatomical feature that has been identified”.

In claim 14, lines 4, 12, and 15, and claim 15, lines 5, 13, and 16, “each preset… a search preset… a target preset” should read “each preset of the plurality of presets… a search preset of the plurality of presets… a target preset of the plurality of presets”.

Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3-6, 10-12, and 14-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dickie (US 20210345993), hereinafter Dickie.

Regarding claim 1, Dickie teaches a processing device (30) (fig. 1) that communicates (24) with an ultrasound device (12) (“the communications module 22 wirelessly transmits signals to and receives signals from the display device 30 along wireless communication link 24.” [0028]), the processing device comprising: a display screen (32) (“The display device 30 may host a screen 32” [0029]) (“The display device 30 may be, for example, a laptop computer, a tablet computer, a desktop computer, a smart phone, a smart watch, spectacles with a built-in display, a television, a bespoke display or any other display device that is capable of being communicably connected to the scanner 12.” [0029]); a memory (36) that stores presets (“presets” [0027]), where each preset includes one or more modes (“the depth and density of the vertical scan lines” [0038]; “the settings” [0060]) used to control the ultrasound device (“a preset P2 (e.g., a cardiac preset) that is optimized for scanning a heart 66” [0038]) and one or more tools (“label icons” [0041]) to analyze ultrasound data from the ultrasound device (“various presets P1, P2, P3, P4 (shown in FIG. 2 with various label icons).” [0041]; “label the subsequently acquired ultrasound control data frames as ultrasound control data frames that correspond to the second preset” [0114]; “label the ultrasound control data frames as corresponding to the user-selected preset” [0115]); and a processor (34) coupled to the memory, wherein the processor is configured to: operate the ultrasound device using a first preset (“Such a preset may be selected… A preset may include numerous different parameters for the scanner 12” [0027]) (“In step 102, an image data frame 50 is acquired using whatever the current settings of the scanner 12 are. For example, the settings may be default settings, a particular preset, or settings that have been made manually.” [0060]) (“The display device 30 may host a screen 32 and may include a processor 34, which is connected to a non-transitory computer readable memory 36 storing computer readable instructions 38, which, when executed by the processor 34, cause the display device 30 to provide one or more of the functions of the system 10. Such functions may be, for example, the receiving of ultrasound data that may or may not be pre-processed; scan conversion of ultrasound data that is received into a ultrasound images; processing of ultrasound data in control data frames and/or image data frames; the display of an ultrasound image on the screen 32; the display of a user interface; the control of the scanner 12; and/or the storage, application, reinforcing and/or training of an AI model.” [0029]); generate ultrasound images using ultrasound data from the ultrasound device (“scan conversion of ultrasound data that is received into a ultrasound images” [0029]), where the ultrasound images include: a first portion of the ultrasound images that are imaging frames (50, 51, 52, 54, 55) acquired with the first preset (“processing of ultrasound data in … image data frames; the display of an ultrasound image on the screen 32” [0029]; “In FIG. 2, consecutive image data frames 50, 51, 52 are shown, …, which in turn is followed by two further image data frames 54, 55.” [0036]; Fig. 2); and a second portion of the ultrasound images that are search frames (53) acquired with a search preset (“reference parameters RP” [0037]) (“processing of ultrasound data in control data frames” [0029]; “In FIG. 2, consecutive image data frames 50, 51, 52 are shown, followed by a control data frame 53, which in turn is followed by two further image data frames 54, 55.” [0036]; “The control data frame 53 may use reference parameters RP for acquiring the ultrasound data in the control data frame 53. In general, the reference parameters RP may be configured to be consistent, regardless of whatever preset or settings the ultrasound scanner is set to. This may mean that the reference parameters are different from the parameters of the preset P1 and the other presets that the scanner 12 may be capable of using.” [0037]); display the imaging frames of the first portion on the display screen (106) (“the display of an ultrasound image on the screen 32” [0029]; “the image frame 60 may be displayed in step 106.” [0060]; Figs. 1-3); identify an anatomical feature (66) (“a heart” [0038]) in the search frames using a deep learning model (70) (“In the example of FIG. 2, processing the control data frame 53 with the AI model results in a prediction that the control data frame 53 corresponds to a scan of a heart… a heart 66” [0038]); select a target preset (P2) based on the identified anatomical feature (“As a result of this prediction, the AI model 70 may output an instruction to the scanner 12 to set a preset P2 (e.g., a cardiac preset) that is optimized for scanning a heart 66. It can be seen that the parameters for P2 (e.g. as illustrated, the depth and density of the vertical scan lines) are different from the parameters for preset P1. As illustrated, they are also different from reference parameters RP for the control data frame 53.” [0038]; Figs. 2-3); and modify a user interface of the processing device (“the user interface” [0067]) based on the target preset (“in the auto-preset mode, the user interface on the display device 30 may be configured to show the new preset each time the preset is changed, and display an option for the user to cancel the change for a few seconds after the change.” [0067]; Fig. 2), wherein the search frames are time-interleaved with the imaging frames (“Referring still to FIG. 2, the control data frames 53 that are collected are interspersed amongst the image data frames 50, 51, 52, 54, 55, and they may continue to be acquired in an interspersed fashion as the ultrasound scan proceeds.” [0053]; Fig. 2).

Regarding claim 3, Dickie teaches the processing device of claim 1, wherein the search preset is the same as the first preset (“control data frames that have been labeled as being associated with various presets P1, P2, P3, P4” [0041]), wherein the search frames and the imaging frames are two-dimensional ultrasound images (“polar (R-theta) coordinates to (X-Y) coordinates.” [0021]) generated using the first preset (“The term “scan convert”, “scan conversion”, or any of its grammatical forms refers to the construction of an ultrasound media, such as a still image or a video, from lines of ultrasound scan data representing echoes of ultrasound signals. Scan conversion may involve converting beams and/or vectors of acoustic scan data which are in polar (R-theta) coordinates to cartesian (X-Y) coordinates.” [0021]), wherein the deep learning model is a neural network classifier (70) (“The term “AI model” means a mathematical or statistical model that may be generated through artificial intelligence techniques such as machine learning. For example, the machine learning may involve inputting labeled or classified data into a neural network algorithm for training, so as to generate a model that can make predictions or decisions on new data without being explicitly programmed to do so. Different software tools (e.g., TensorFlow™, PyTorch™, Keras™) may be used to perform machine learning processes.” [0015]), wherein the neural network classifier is trained with ultrasound images generated using parameters of each of the presets (“P1, P2, P3, P4”) stored in the memory (“To predict the settings that would suitable for a new control data frame 53, the AI model 70 may previously be generated using machine learning methods. For example, this may involve training the AI model with one or more datasets containing different classes of control data frames that have been labeled as being associated with various presets P1, P2, P3, P4 (shown in FIG. 2 with various label icons). These various presets may generally correspond to different types of anatomical features 74, 78, 82, 86.” [0041]).
Regarding claim 4, Dickie teaches the processing device of claim 1, wherein the search preset includes a plurality of different pillar presets (any 3 out of (P1, P2, P3, P4) [0041]) that are different from the first preset (“The set of parameters for each preset is usually optimized for the particular body part to which the preset relates. There may be upwards of a hundred different parameters (including, for example, frequency, focal zones, line density, whether harmonic imaging is on, and the like) for each preset” [0003]; “It can be seen that the parameters for P2 (e.g. as illustrated, the depth and density of the vertical scan lines) are different from the parameters for preset P1.” [0038]; “Referring to FIG. 3, a flowchart shows an exemplary process undertaken by the system 10 (as shown in FIG. 1), in which the scanner settings are updated as a result of analysis of a control data frame 53. In discussing FIG. 3, reference will also be generally made to the sequence of various frames shown in FIG. 2. In step 100, an image data frame counter is set to zero (i=0). In step 102, an image data frame 50 is acquired using whatever the current settings of the scanner 12 are. For example, the settings may be default settings, a particular preset, or settings that have been made manually.” [0060]), wherein the search frames include two-dimensional ultrasound images for each of the different pillar presets (“polar (R-theta) coordinates to (X-Y) coordinates.” [0021]; “training the AI model with one or more datasets containing different classes of control data frames that have been labeled as being associated with various presets P1, P2, P3, P4 (shown in FIG. 2 with various label icons). These various presets may generally correspond to different types of anatomical features 74, 78, 82, 86.” [0041]), wherein the deep learning model is a neural network classifier (70) (“The term “AI model” means a mathematical or statistical model that may be generated through artificial intelligence techniques such as machine learning. For example, the machine learning may involve inputting labeled or classified data into a neural network algorithm for training, so as to generate a model that can make predictions or decisions on new data without being explicitly programmed to do so. Different software tools (e.g., TensorFlow™, PyTorch™, Keras™) may be used to perform machine learning processes.” [0015]) trained to identify the anatomical feature based on image recognition, wherein the neural network classifier is trained with ultrasound images generated using parameters of each of the different pillar presets (“As illustrated in FIG. 2, the AI model 70 may be trained with classes of control data frames 72, 76, 80, 84 that correspond to presets P1-P4 for scanning a single type of anatomy (e.g., for lungs 74, cardiac 78, or bladders 82), or multiple types of anatomy (e.g., an abdomen preset which may suitable for scanning kidneys 86 and livers 88).” [0050]; “the different classes of ultrasound control data frames used for the AI model may generally include ultrasound data acquired for one or more anatomical features; such anatomical features including a lung, a heart, a liver, a kidney, a bladder, an eye, a womb, a thyroid gland, a breast, a brain, an artery, a vein, a muscle, an embryo, a tendon, a bone, a fetus, a prostate, a uterus, an ovary, testes, a pancreas, or a gall bladder.” [0051]; Figs. 2-3).
Regarding claim 5, Dickie teaches the processing device of claim 4, wherein each of the plurality of pillar presets utilizes a different combination of frequency and imaging depth parameters (“The set of parameters for each preset is usually optimized for the particular body part to which the preset relates. There may be upwards of a hundred different parameters (including, … frequency, …) for each preset” [0003]; “It can be seen that the parameters for P2 (e.g. as illustrated, the depth …) are different from the parameters for preset P1.” [0038]).

Regarding claim 6, Dickie teaches the processing device of claim 4, wherein the plurality of different pillar presets include four pillar presets corresponding to a cardiac anatomical region (78), an abdominal anatomical region (86, 88), a musculoskeletal anatomical region (“musculoskeletal,” [0052]), and a lung region (74) (“the preset P1 may generally be for scanning lungs 74, preset P2 may generally be for scanning cardiac features 78,… and preset P4 may generally be for scanning abdominal features such as kidneys 86 or livers 88” [0041]; “the different presets for which there may be labeled training control data frames may generally include presets for at least two of abdomen, cardiac, bladder, lung, … musculoskeletal,” [0052]).

Regarding claim 10, Dickie teaches the processing device of claim 1, wherein modifying the user interface includes automatically switching from the first preset to the target preset without interaction from a user of the processing device (“during operation of the scanner 12, the user may be presented with a set of presets and an auto-preset option. While each preset is suited for a particular part of the anatomy, the auto-preset option, if selected, will automatically predict and select the optimum preset using the AI model 70 as described herein.” [0057]; “in the auto-preset mode, the user interface on the display device 30 may be configured to show the new preset each time the preset is changed” [0067]).

Regarding claim 11, Dickie teaches the processing device of claim 10, wherein modifying the user interface further includes indicating the anatomical feature has been identified (“pictorial representations” [0041]) on the display screen before automatically switching from the first preset to the target preset (“In FIG. 2, the anatomical features 74, 78, 82, 86, 88 that the various presets P1-P4 are respectively associated with are shown in dotted outline for illustrative purposes to provide a pictorial representation of the anatomical features; but such pictorial representations are not viewable ultrasound image frames.” [0041]; “in the auto-preset mode, the user interface on the display device 30 may be configured to show the new preset each time the preset is changed” [0067]).

Regarding claim 12, Dickie teaches the processing device of claim 1, wherein modifying the user interface includes creating a control element for the target preset on the display screen, wherein the first preset is switched to the target preset in response to a user of the processing device interacting with the control element (“When using some ultrasound scanners, whether mobile or not, users are traditionally expected to select a preset depending on the part of the anatomy that is to be scanned.” [0003]; “a user manually selects a different preset” [0067]; “the user manually selects the cardiac preset while control data frames for such an image is being acquired” [0068]; “operate according to a user-selected preset” [0115]).
Regarding claim 14, Dickie teaches a method of operating a processing device (30) (seen in fig. 1) that communicates (24) with an ultrasound device (12) (“the communications module 22 wirelessly transmits signals to and receives signals from the display device 30 along wireless communication link 24.” [0028]; “The display device 30 may be, for example, a laptop computer, a tablet computer, a desktop computer, a smart phone, a smart watch, spectacles with a built-in display, a television, a bespoke display or any other display device that is capable of being communicably connected to the scanner 12.” [0029]), the method comprising: operating the ultrasound device using a first preset (“Such a preset may be selected… A preset may include numerous different parameters for the scanner 12” [0027]) of a plurality of presets (“presets” [0027]) stored in a memory (36) of the processing device (“In step 102, an image data frame 50 is acquired using whatever the current settings of the scanner 12 are. For example, the settings may be default settings, a particular preset, or settings that have been made manually.” [0060]) (“The display device 30 may host a screen 32 and may include a processor 34, which is connected to a non-transitory computer readable memory 36 storing computer readable instructions 38, which, when executed by the processor 34, cause the display device 30 to provide one or more of the functions of the system 10. Such functions may be, for example, the receiving of ultrasound data that may or may not be pre-processed; scan conversion of ultrasound data that is received into a ultrasound images; processing of ultrasound data in control data frames and/or image data frames; the display of an ultrasound image on the screen 32; the display of a user interface; the control of the scanner 12” [0029]), where each preset includes one or more modes (“the depth and density of the vertical scan lines” [0038]; “the settings” [0060]) used to control the ultrasound device (“a preset P2 (e.g., a cardiac preset) that is optimized for scanning a heart 66” [0038]) and one or more tools (“label icons” [0041]) to analyze ultrasound data from the ultrasound device (“various presets P1, P2, P3, P4 (shown in FIG. 2 with various label icons).” [0041]; “label the subsequently acquired ultrasound control data frames as ultrasound control data frames that correspond to the second preset” [0114]; “label the ultrasound control data frames as corresponding to the user-selected preset” [0115]); generating ultrasound images using ultrasound data from the ultrasound device (“scan conversion of ultrasound data that is received into a ultrasound images” [0029]), where the ultrasound images include: a first portion of the ultrasound images that are imaging frames (50, 51, 52, 54, 55) acquired with the first preset (“processing of ultrasound data in … image data frames; the display of an ultrasound image on the screen 32” [0029]; “In FIG. 2, consecutive image data frames 50, 51, 52 are shown, …, which in turn is followed by two further image data frames 54, 55.” [0036]; Fig. 2); and a second portion of the ultrasound images that are search frames (53) acquired with a search preset (“reference parameters RP” [0037]) (“processing of ultrasound data in control data frames” [0029]; “In FIG. 2, consecutive image data frames 50, 51, 52 are shown, followed by a control data frame 53, which in turn is followed by two further image data frames 54, 55.” [0036]; “The control data frame 53 may use reference parameters RP for acquiring the ultrasound data in the control data frame 53. In general, the reference parameters RP may be configured to be consistent, regardless of whatever preset or settings the ultrasound scanner is set to. This may mean that the reference parameters are different from the parameters of the preset P1 and the other presets that the scanner 12 may be capable of using.” [0037]); displaying the imaging frames on a display screen (32) (106) (“the display of an ultrasound image on the screen 32” [0029]; “the image frame 60 may be displayed in step 106.” [0060]; Figs. 1-3); identifying an anatomical feature (66) (“a heart” [0038]) in the search frames using a deep learning model (70) (“In the example of FIG. 2, processing the control data frame 53 with the AI model results in a prediction that the control data frame 53 corresponds to a scan of a heart… a heart 66” [0038]); selecting, in response to identifying the anatomical feature, a target preset (P2) based on the identified anatomical feature (“As a result of this prediction, the AI model 70 may output an instruction to the scanner 12 to set a preset P2 (e.g., a cardiac preset) that is optimized for scanning a heart 66. It can be seen that the parameters for P2 (e.g. as illustrated, the depth and density of the vertical scan lines) are different from the parameters for preset P1. As illustrated, they are also different from reference parameters RP for the control data frame 53.” [0038]; Figs. 2-3); and modifying a user interface of the processing device (“the user interface” [0067]) based on the target preset (“in the auto-preset mode, the user interface on the display device 30 may be configured to show the new preset each time the preset is changed, and display an option for the user to cancel the change for a few seconds after the change.” [0067]; Fig. 2), wherein the search frames are time-interleaved with the imaging frames (“Referring still to FIG. 2, the control data frames 53 that are collected are interspersed amongst the image data frames 50, 51, 52, 54, 55, and they may continue to be acquired in an interspersed fashion as the ultrasound scan proceeds.” [0053]; Fig. 2).

Regarding claim 15, Dickie teaches a non-transitory computer readable medium (CRM) storing computer readable program code for operating a processing device (30) (seen in fig. 1) (“Different software tools (e.g., TensorFlow™, PyTorch™, Keras™) may be used to perform machine learning processes” [0015]; “The term “module” can refer to any component in this invention and to any or all of the features of the invention without limitation. A module may be a software, …and may be located, …, in … a display device” [0017]; “a processor 34, which is connected to a non-transitory computer readable memory 36 storing computer readable instructions 38, which, when executed by the processor 34, cause the display device 30 to provide one or more of the functions of the system 10” [0029]; “Also stored in the computer readable memory 36 may be computer readable data 40, which may be used by the processor 34 in conjunction with the computer readable instructions 38 to provide the functions of the system 10.” [0030]; Fig. 1) that communicates (24) with an ultrasound device (12) (“the communications module 22 wirelessly transmits signals to and receives signals from the display device 30 along wireless communication link 24.” [0028]; “The display device 30 may be, for example, a laptop computer, a tablet computer, a desktop computer, a smart phone, a smart watch, spectacles with a built-in display, a television, a bespoke display or any other display device that is capable of being communicably connected to the scanner 12.” [0029]), the computer readable program code causes the processing device to: operate the ultrasound device using a first preset (“Such a preset may be selected… A preset may include numerous different parameters for the scanner 12” [0027]) of a plurality of presets (“presets” [0027]) stored in a memory (36) of the processing device (“In step 102, an image data frame 50 is acquired using whatever the current settings of the scanner 12 are. For example, the settings may be default settings, a particular preset, or settings that have been made manually.” [0060]) (“The display device 30 may host a screen 32 and may include a processor 34, which is connected to a non-transitory computer readable memory 36 storing computer readable instructions 38, which, when executed by the processor 34, cause the display device 30 to provide one or more of the functions of the system 10. Such functions may be, for example, the receiving of ultrasound data that may or may not be pre-processed; scan conversion of ultrasound data that is received into a ultrasound images; processing of ultrasound data in control data frames and/or image data frames; the display of an ultrasound image on the screen 32; the display of a user interface; the control of the scanner 12” [0029]), where each preset includes one or more modes (“the depth and density of the vertical scan lines” [0038]; “the settings” [0060]) used to control the ultrasound device (“a preset P2 (e.g., a cardiac preset) that is optimized for scanning a heart 66” [0038]) and one or more tools (“label icons” [0041]) to analyze ultrasound data from the ultrasound device (“various presets P1, P2, P3, P4 (shown in FIG. 2 with various label icons).” [0041]; “label the subsequently acquired ultrasound control data frames as ultrasound control data frames that correspond to the second preset” [0114]; “label the ultrasound control data frames as corresponding to the user-selected preset” [0115]); generate ultrasound images using ultrasound data from the ultrasound device (“scan conversion of ultrasound data that is received into a ultrasound images” [0029]), where the ultrasound images include: a first portion of the ultrasound images that are imaging frames (50, 51, 52, 54, 55) acquired with the first preset (“processing of ultrasound data in … image data frames; the display of an ultrasound image on the screen 32” [0029]; “In FIG. 2, consecutive image data frames 50, 51, 52 are shown, …, which in turn is followed by two further image data frames 54, 55.” [0036]; Fig. 2); and a second portion of the ultrasound images that are search frames (53) acquired with a search preset (“reference parameters RP” [0037]) (“processing of ultrasound data in control data frames” [0029]; “In FIG. 2, consecutive image data frames 50, 51, 52 are shown, followed by a control data frame 53, which in turn is followed by two further image data frames 54, 55.” [0036]; “The control data frame 53 may use reference parameters RP for acquiring the ultrasound data in the control data frame 53. In general, the reference parameters RP may be configured to be consistent, regardless of whatever preset or settings the ultrasound scanner is set to. This may mean that the reference parameters are different from the parameters of the preset P1 and the other presets that the scanner 12 may be capable of using.” [0037]); display the imaging frames on a display screen (32) (106) (“the display of an ultrasound image on the screen 32” [0029]; “the image frame 60 may be displayed in step 106.” [0060]; Figs. 1-3); identify an anatomical feature (66) (“a heart” [0038]) in the search frames using a deep learning model (70) (“In the example of FIG. 2, processing the control data frame 53 with the AI model results in a prediction that the control data frame 53 corresponds to a scan of a heart… a heart 66” [0038]); select, in response to identifying the anatomical feature, a target preset (P2) based on the identified anatomical feature (“As a result of this prediction, the AI model 70 may output an instruction to the scanner 12 to set a preset P2 (e.g., a cardiac preset) that is optimized for scanning a heart 66. It can be seen that the parameters for P2 (e.g. as illustrated, the depth and density of the vertical scan lines) are different from the parameters for preset P1. As illustrated, they are also different from reference parameters RP for the control data frame 53.” [0038]; Figs. 2-3); and modifying a user interface of the processing device (“the user interface” [0067]) based on the target preset (“in the auto-preset mode, the user interface on the display device 30 may be configured to show the new preset each time the preset is changed, and display an option for the user to cancel the change for a few seconds after the change.” [0067]; Fig. 2), wherein the search frames are time-interleaved with the imaging frames (“Referring still to FIG. 2, the control data frames 53 that are collected are interspersed amongst the image data frames 50, 51, 52, 54, 55, and they may continue to be acquired in an interspersed fashion as the ultrasound scan proceeds.” [0053]; Fig. 2).

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Dickie as applied to claim 1, and further in view of Mehl et al (US 20070239001), hereinafter, Mehl.

Regarding claim 2, Dickie teaches the processing device of claim 1, wherein the search preset is different from the first preset (“there are fewer data lines (e.g., less line density, as shown via the sparser vertical scan lines) in the control data frame 53 than the parameters for preset P1 used in the image data frames 50, 51, 52” [0037]), wherein the search frames include two-dimensional ultrasound images for the search preset (“polar (R-theta) coordinates to (X-Y) coordinates.” [0021]), wherein the search preset utilizes no image processing such that the second portion of ultrasound images are generated faster than the first portion of images, wherein the deep learning model is a neural network classifier trained to identify the anatomical feature based on image recognition (70) (“The term “AI model” means a mathematical or statistical model that may be generated through artificial intelligence techniques such as machine learning. For example, the machine learning may involve inputting labeled or classified data into a neural network algorithm for training, so as to generate a model that can make predictions or decisions on new data without being explicitly programmed to do so. Different software tools (e.g., TensorFlow™, PyTorch™, Keras™) may be used to perform machine learning processes.” [0015]), wherein the neural network classifier is trained with ultrasound images generated using parameters (RP) of the search preset (“the reference parameters RP may be configured to be consistent, regardless of whatever preset or settings the ultrasound scanner is set to. This may mean that the reference parameters are different from the parameters of the preset P1 and the other presets that the scanner 12 may be capable of using.” [0037]; “consistent reference parameters RP are used for both the training control data frames 72, 76, 80, 84 and the new control data frame 53” [0047]; “As noted above, even though the scanners 12, 202, 204 may be different, the control data frames that are captured by them are all captured with consistent reference parameters RP, so that each control data frame acquired may be used by the AI model 70 for training, without any special pre-processing of the captured data.” [0075]).

Dickie does not teach that the search preset utilizes a Nyquist sampling rate. However, in the ultrasonic imaging field of endeavor, Mehl discloses high-frequency array ultrasound system, which is analogous art. Mehl teaches that the search preset utilizes a Nyquist sampling rate (“If Nyquist sampling is used, then no reconstruction is required since the RF signal is sampled directly.” [0154]; “Sampling the signal spectrum in FIG. 33 using normal Nyquist sampling requires a sample rate of 80 MHz or higher.” [0330]). Therefore, based on Mehl’s teachings, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Dickie to have the search preset that utilizes a Nyquist sampling rate, as taught by Mehl, in order to facilitate ultrasound imaging of objects of interest without reconstruction.

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Dickie as applied to claim 1, and further in view of Venkataramani et al (US 20220160334), hereinafter, Venkataramani.
Regarding claim 7, Dickie teaches the processing device of claim 1, wherein the search preset includes a line scan mode that is different from the first preset (“there are fewer data lines (e.g., less line density, as shown via the sparser vertical scan lines) in the control data frame 53 than the parameters for preset P1 used in the image data frames 50, 51, 52” [0037]), wherein the search frames include line scan images that are time-interleaved between every imaging frame (“Referring to FIG. 1, an exemplary system 10 is shown for controlling the settings of an ultrasound scanner 12 (hereinafter “scanner” for brevity) dependent on interspersed control data frames.” [0026]; “During acquisition of the ultrasound image feed, additional ultrasound control data frames may be acquired that are interspersed amongst the ultrasound data frames.” [0033]), wherein the deep learning model is a neural network classifier (70) trained to identify the anatomical feature (“the different classes of ultrasound control data frames used for the AI model may generally include ultrasound data acquired for one or more anatomical features; such anatomical features including a lung, a heart, a liver, a kidney, a bladder, an eye, a womb, a thyroid gland, a breast, a brain, an artery, a vein, a muscle, an embryo, a tendon, a bone, a fetus, a prostate, a uterus, an ovary, testes, a pancreas, or a gall bladder.” [0051]), wherein the neural network classifier is trained with line scan images (53) generated using parameters of the line scan mode (“line density” [0003]; “the reference parameters used to acquire the control data frame 53 have a shallower depth of scan than the parameters for preset P1 used in the data frames 50, 51, 52. Also, there are fewer data lines (e.g., less line density, as shown via the sparser vertical scan lines) in the control data frame 53” [0037]; Fig. 2; “consistent reference parameters RP are used for both the training control data frames 72, 76, 80, 84 and the new control data frame 53” [0047]; “As noted above, even though the scanners 12, 202, 204 may be different, the control data frames that are captured by them are all captured with consistent reference parameters RP, so that each control data frame acquired may be used by the AI model 70 for training, without any special pre-processing of the captured data.” [0075]).

Dickie does not teach one-dimensional line scan images, identifying the anatomical feature based on temporal dynamics in the line scan images, wherein the neural network classifier is trained with time-varying line scan images generated using parameters of the line scan mode. However, in the ultrasonic imaging field of endeavor, Venkataramani discloses a method and system for enhanced visualization of a pleural line by automatically detecting and marking the pleural line in images of a lung ultrasound scan, which is analogous art. Venkataramani teaches one-dimensional line scan images (310), identifying the anatomical feature based on temporal dynamics in the line scan images (“The M-mode images each correspond to one location (i.e., line) in the B-mode images over time.” [0031]), wherein the neural network classifier is trained with time-varying line scan images generated using parameters of the line scan mode (“aspects of the present disclosure have the technical effect of reducing computation time and resources by automatically marking a pleural line in B-mode images generated from an acquired cine loop based on identification of the pleural line in a limited number of M-mode images (e.g., 1-3 M-mode images)” [0010]; “the artificial intelligence model inferenced by the detection processor 160 may be trained to automatically identify anatomical structure (e.g., a pleural line 316) in second mode images (e.g., M-mode images 310)” [0041]; Fig. 2). Therefore, based on Venkataramani’s teachings, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Dickie to employ one-dimensional line scan images and identify the anatomical feature based on temporal dynamics in the line scan images, wherein the neural network classifier is trained with time-varying line scan images generated using parameters of the line scan mode, as taught by Venkataramani, in order to facilitate ultrasound imaging of objects of interest by reducing computation time.

Regarding claim 8, Dickie modified by Venkataramani teaches the processing device of claim 7, wherein Dickie teaches that the search preset further includes a plurality of different pillar presets (P1, P2, P3, P4 [0041]) that are different from the first preset (“The set of parameters for each preset is usually optimized for the particular body part to which the preset relates. There may be upwards of a hundred different parameters (including, for example, frequency, focal zones, line density, whether harmonic imaging is on, and the like) for each preset” [0003]; “control data frames 72, 76, 80, 84 that correspond to presets P1-P4 for scanning a single type of anatomy (e.g., for lungs 74, cardiac 78, or bladders 82), or multiple types of anatomy (e.g., an abdomen preset which may suitable for scanning kidneys 86 and livers 88).” [0050]; “the settings may be default settings, a particular preset, or settings that have been made manually.” [0060]. The first preset may be made manually while presets P1, P2, P3, P4 represent different anatomy and they are in the search preset in Fig. 2), wherein the search frames include two-dimensional ultrasound images for each of the different pillar presets (“polar (R-theta) coordinates to (X-Y) coordinates.” [0021]), wherein the neural network classifier is further trained to identify the anatomical feature based on image recognition of the two-dimensional ultrasound images (“the machine learning may involve inputting labeled or classified data into a neural network algorithm for training, so as to generate a model that can make predictions or decisions on new data without being explicitly programmed to do so. Different software tools (e.g., TensorFlow™, PyTorch™, Keras™) may be used to perform machine learning processes.” [0015]), wherein the neural network classifier is further trained with ultrasound images (“control data frames” [0041]) generated using parameters of each of the different pillar presets (“To predict the settings that would suitable for a new control data frame 53, the AI model 70 may previously be generated using machine learning methods. For example, this may involve training the AI model with one or more datasets containing different classes of control data frames that have been labeled as being associated with various presets P1, P2, P3, P4 (shown in FIG. 2 with various label icons). These various presets may generally correspond to different types of anatomical features 74, 78, 82, 86.” [0041]).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Dickie as applied to claim 1, and further in view of Rothberg et al (US 20190130554), hereinafter, Rothberg.

Regarding claim 9, Dickie teaches the processing device of claim 1. Dickie does not teach that the processor is further configured to determine an image quality of the search frames during analysis for the anatomical feature, wherein, in response to the image quality being greater than or equal to a predetermined threshold, the search frames are analyzed for the anatomical feature, wherein, in response to the image quality being less than the predetermined threshold, the search frames are deleted from the processing device. However, in the ultrasonic imaging field of endeavor, Rothberg discloses quality indicators for collection of and automated measurement on ultrasound images, which is analogous art. Rothberg teaches that the processor is further configured to determine an image quality (604) of the search frames during analysis for the anatomical feature (“for performing the particular automatic measurement” [0087]), wherein, in response to the image quality being greater than or equal to a predetermined threshold (“50%” [0087]; “the threshold” [0089]), the search frames are analyzed for the anatomical feature (“FIG. 7 illustrates an example of the GUI 600, where the live quality indicator 604 indicates a different quality than in FIG. 6, in accordance with certain embodiments described herein. FIG. 7 differs from FIG. 6 in that in FIG. 7, the slider 618 is located approximately 60% of the distance from the first end 608 to the second end 610 of the bar 606, indicating that the live quality may be approximately 60% on a scale of 0% to 100%. Because the quality is above the threshold indicated by the acceptability indicator, the slider 618 includes a checkmark symbol that further indicates that the sequence of images is considered acceptable for performing the particular automatic measurement, and the text 612 reads “Good Image.”” [0089]), wherein, in response to the image quality being less than the predetermined threshold, the search frames are deleted from the processing device (“In FIG. 6, the acceptability indicator 620 is a black bar located approximately 50% of the distance from the first end 608 to the second end 610 of the bar 606. The acceptability indicator 620 indicates a threshold quality above which a sequence of images may be considered acceptable or not acceptable for performing the particular automatic measurement. Thus, a quality below 50% on a scale of 0% to 100% may indicate that the sequence of images is considered unacceptable for performing the particular automatic measurement while a quality above 50% may indicate that the sequence of images is considered acceptable for performing the particular automatic measurement. In FIG. 6, the quality shown is approximately 0%, and thus the sequence of images may be considered unacceptable for performing the particular automatic measurement.” [0087]. It is reasonable to delete low quality images from the processing device to reduce memory and/or disk space requirements).
Therefore, based on Rothberg’s teachings, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Dickie to employ the processor that is further configured to determine an image quality of the search frames during analysis for the anatomical feature, wherein, in response to the image quality being greater than or equal to a predetermined threshold, the search frames are analyzed for the anatomical feature, wherein, in response to the image quality being less than the predetermined threshold, the search frames are deleted from the processing device, as taught by Rothberg, in order to facilitate ultrasound imaging of objects of interest.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Dickie as applied to claim 12, and further in view of Yoon (US 20160262726), hereinafter, Yoon.

Regarding claim 13, Dickie teaches the processing device of claim 12, wherein modifying the user interface includes: creating a first timer that limits an amount of time available for the user to interact with the control element (“for a few seconds” [0067]) (“in the auto-preset mode, the user interface on the display device 30 may be configured to show the new preset each time the preset is changed, and display an option for the user to cancel the change for a few seconds after the change.” [0067]). Dickie does not teach creating a second timer that disables the target preset from selection based on the identified anatomical feature for a predetermined amount of time. However, in the ultrasonic imaging field of endeavor, Yoon discloses a method and ultrasound apparatus for setting preset, which is analogous art. Yoon teaches creating a second timer that disables the target preset from selection based on the identified anatomical feature for a predetermined amount of time (“In operation S250, the ultrasound imaging apparatus 1000 may hide the selection window, in the case that the ultrasound imaging apparatus determines that the user input of selecting one of the presets has not been entered within a reference time.” [0080]; “Referring to FIG. 5A, the ultrasound imaging apparatus 1000 may display the selection window 100 on the screen and hide the selection window, in the case that the user input for selecting one of the presets is not entered within a reference time.” [0109]; “Referring to FIG. 5B, in the case that the user input has not been recognized for changing the imaging probe or the imaging preset within a reference time, the ultrasound imaging apparatus 1000 may hide the selection window 100.” [0114]). Therefore, based on Yoon’s teachings, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Dickie to create a second timer that disables the target preset from selection based on the identified anatomical feature for a predetermined amount of time, as taught by Yoon, in order to simplify the user interface.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXEI BYKHOVSKI whose telephone number is (571)270-1556. The examiner can normally be reached on Monday-Friday: 8:30am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui Pho, can be reached on 571-272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEXEI BYKHOVSKI/
Primary Examiner, Art Unit 3798
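
To make the claim-1 mapping above easier to follow, here is a minimal sketch of the claimed acquisition loop: imaging frames acquired and displayed under the current preset, with search frames time-interleaved among them, classified for anatomy, and used to pick a target preset. All names and the interleaving cadence are hypothetical illustrations of the claim language — this is neither the application's code nor Dickie's.

```python
# Hypothetical sketch of the claimed loop: search frames interleaved with
# imaging frames, a classifier identifying anatomy, and an auto-preset switch.
PRESET_FOR_ANATOMY = {"lung": "P1", "cardiac": "P2", "bladder": "P3", "abdomen": "P4"}

def acquire_frame(preset: str) -> bytes:
    """Stand-in for acquiring one frame from the ultrasound device."""
    return b"..."  # raw frame data (placeholder)

def classify_anatomy(frame: bytes) -> str:
    """Stand-in for the deep-learning classifier (cf. Dickie's 'AI model 70')."""
    return "cardiac"  # e.g., a heart is identified in the search frame

def display(frame: bytes) -> None:
    """Stand-in for rendering an imaging frame on the display screen."""

def imaging_loop(first_preset: str = "P1", search_every: int = 4, n_frames: int = 12):
    preset = first_preset
    for i in range(n_frames):
        if i % search_every == search_every - 1:
            # Search frame, time-interleaved with the imaging frames; acquired
            # with a distinct search preset (cf. Dickie's reference parameters RP).
            search_frame = acquire_frame("RP")
            target = PRESET_FOR_ANATOMY[classify_anatomy(search_frame)]
            if target != preset:
                preset = target  # auto-preset switch; the UI would surface this
        else:
            display(acquire_frame(preset))  # imaging frame shown on screen

imaging_loop()
```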

Prosecution Timeline

Dec 09, 2024
Application Filed
Nov 13, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599363
WEARABLE DEVICE FOR HANDS-FREE OPERATION OF AN ULTRASOUND PROBE
2y 5m to grant · Granted Apr 14, 2026
Patent 12594053
PATIENT-SPECIFIC NEUROMODULATION ALIGNMENT STRUCTURES
2y 5m to grant · Granted Apr 07, 2026
Patent 12593994
Blood Pressure Sensors
2y 5m to grant · Granted Apr 07, 2026
Patent 12594054
AN ULTRASOUND SCANNER THAT SUPPORTS HANDSET WIRELESS NETWORK CONNECTIVITY
2y 5m to grant · Granted Apr 07, 2026
Patent 12582384
ULTRASOUND IMAGING PROBE FOR HIFU RADIATION DEVICE, AND ULTRASOUND IMAGE DISPLAY DEVICE
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+28.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 346 resolved cases by this examiner. Grant probability derived from career allow rate.
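
The page does not state how the with-interview figure is composed; the simplest model consistent with the displayed numbers is the base grant probability plus the interview lift, capped at 99% (75% + 28.7% would otherwise exceed 100%). The sketch below encodes that assumption, which is an inference from the displayed figures rather than a documented formula.

```python
def with_interview_projection(base: float, lift: float, cap: float = 0.99) -> float:
    """Assumed model: base grant probability plus interview lift, capped.

    Inferred from the displayed figures (75% + 28.7% -> shown as 99%);
    not a documented formula.
    """
    return min(cap, base + lift)

print(f"{with_interview_projection(0.75, 0.287):.0%}")  # 99%
```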
