Prosecution Insights
Last updated: April 19, 2026
Application No. 18/850,097

METHOD AND SYSTEM FOR PREDICTING BUTTON PUSHING SEQUENCES DURING ULTRASOUND EXAMINATION

Final Rejection (§103, §112)
Filed: Sep 24, 2024
Examiner: ROBINSON, NICHOLAS A
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Koninklijke Philips N.V.
OA Round: 2 (Final)

Grant Probability: 49% (Moderate)
OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Grants 49% of resolved cases.
Career Allow Rate: 49% (64 granted / 131 resolved; -21.1% vs TC avg)
Interview Lift: +54.9% (strong; resolved cases with interview vs. without)
Typical Timeline: 3y 6m average prosecution; 51 applications currently pending
Career History: 182 total applications across all art units

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 30.6% (-9.4% vs TC avg)
Deltas are measured against Tech Center average estimates • Based on career data from 131 resolved cases
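
The headline percentages above are simple ratios and differences. The sketch below (Python) walks through the arithmetic; the Tech Center averages are not shown on this page, so the values backed out from the listed deltas are assumptions rather than reported figures.

```python
# Minimal sketch of how the displayed percentages appear to be derived.
# The granted/resolved counts come from the panel above; the Tech Center (TC)
# averages are assumptions backed out from the displayed deltas.

granted, resolved = 64, 131

career_allow_rate = granted / resolved              # ~0.489, shown as 49%
implied_tc_average = career_allow_rate + 0.211      # implied by "-21.1% vs TC avg"

print(f"Career allow rate:  {career_allow_rate:.1%}")   # 48.9%
print(f"Implied TC average: {implied_tc_average:.1%}")  # ~70.0%

# The statute-specific rows follow the same pattern, e.g. for the §101 row:
sec101_rate, sec101_delta = 0.119, -0.281
print(f"Implied TC §101 average: {sec101_rate - sec101_delta:.1%}")  # ~40.0%
```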

Office Action

§103 §112
DETAILED ACTION

This Office action is responsive to communications filed on 01/23/2026. Claims 1-5, 13, and 15-19 have been amended. Claims 6, 14, and 20 are canceled. Presently, Claims 1-5, 7-13, and 15-19 remain pending and are hereinafter examined on the merits.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Previous objections to the Abstract are withdrawn in view of the amendments filed on 01/23/2026. Previous rejections under 35 USC § 112(b) are withdrawn in view of the amendments filed on 01/23/2026. Previous rejections under 35 USC § 102(a)(1) are withdrawn in view of the amendments filed on 01/23/2026. Applicant's arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on Anand (US 2017/0347993 A1) in view of Boudier (US 2012/0179035 A1), as applied in the prior rejection under 35 USC § 103 of record, for any teaching or matter specifically challenged in the argument. The new ground of rejection now relies on Anand (US 2017/0347993 A1) in view of Boudier (US 2012/0179035 A1) and further in view of Holl et al. (US 2013/0053697 A1).

Claim Objections

The following claims are objected to because of the following informalities: Claim 13, line 22, should recite “the predicted strings of next button pushes”. Appropriate correction is required. Claim 19, line 18, should recite “the predicted strings of next button pushes”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention.

Claim 1: line 1, “relatively”. The term is a relative term which renders the claim indefinite. The term is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For examination purposes, the Examiner assumes a more stable probe position indicative of slower, less frequent movements over a small distance. Appropriate correction is required. The above rejections to claim 1 apply to claims 13 and 19 for substantially identical claim limitations recited in those claims. Accordingly, proper ordinal numbering and/or antecedent basis is required.

Claim 4: Line 5, “the converting step”. There is insufficient antecedent basis for this limitation in the claim, as required by MPEP 2173.05(e). For examination purposes, the Examiner interprets this as referring to the converting of each of the sequences of button pushes to the corresponding encoded vector. Accordingly, proper antecedent basis is required. Line 5, “encoded vectors”. It is unclear whether the phrase refers to, or is separate from, the “corresponding encoded vectors” in lines 3-4.
For examination purposes, the Examiner assumes the phrase refers to the corresponding encoded vectors. Consistent claim language is required when referring to the same term. Appropriate correction is required. The above rejections to claim 4 apply to claim 18 for substantially identical claim limitations recited in the claim. Accordingly, proper ordinal numbering and/or antecedent basis is required. The dependent claims of the above rejected claims are rejected due to their dependency.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 9, 11, 13, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Anand (US 2017/0347993 A1) in view of Boudier (US 2012/0179035 A1) and further in view of Holl et al. (US 2013/0053697 A1).

Claim 1: Anand discloses, A method of performing an ultrasound examination (¶Abstract) using an ultrasound imaging system (FIG. 2) comprising a transducer probe (transducer probe 26) and a control interface (ultrasound systems 10, ¶0005, ‘portable ultrasound systems 10 having a cart/base/support 12, a display or display/monitor 14, one or more input interface devices 16 (such as keyboard or mouse), and a generator 18.’) for controlling acquisition of ultrasound images during the ultrasound examination, the method comprising: (¶Abstract, ‘A method tracks a user's operation of an ultrasound system. Based on the tracked operations, the method uses a machine-learning module to generate a proposed custom system configuration for the ultrasound system for the user. The proposed custom system configuration is presented to the user. In response to a user instruction, the proposed custom system configuration is implemented on the ultrasound system.’; FIG.
2-3) obtaining sequences of button pushes performed by a user via the control interface during an exam workflow, each of the sequences of button pushes having a corresponding sequence length defined by a number of button pushes in the sequence of buttons; (FIG. 9B-10, S180, ¶0087, ‘a setup monitoring step S170 tracks and records any operator changes to ultrasound system setup during the interaction sequence. An instruction tracking step S180 begins the ongoing process of recording the interaction sequence of operator instructions that are entered during the course of the exam. A machine learning step S190 applies machine learning and pattern recognition logic to the recorded interaction sequence. At the conclusion of the exam, a generate changes step S200 can display text and graphical content to the operator for interface modifications based on the interaction sequence of operator setup and operation instructions. The generated changes that are suggested can relate to system setup commands or to the sequence of operator instructions. These results from the operator can then be compared with previous or other existing interaction sequences for the purpose of determining what types of changes would be useful for improving operator workflow for the particular type of exam noted. Generate changes step S200 can show proposed changes to the equipment setup using display screen 14 as shown schematically in FIG. 10. In the example shown herein, a display screen 70 shows a number of control buttons 72 often used by the operator, singly or in a defined interaction sequence, are displayed. The order, size, or other arrangement aspect of control buttons 72 can vary widely, depending on what best supports operator workflow. Where a known sequence of control button presses is typically used, the operator interface can highlight the next or most likely control button in sequence for standard operation. In a customization step S210, the operator is given an option to accept or to not use the projected custom system configuration changes, such as by pressing a control button 76.’; ¶0090, ‘Based on the patterns it observes in the user's actions (e.g. sequence of key strokes, equipment settings, and feature usage) during the interaction sequence the method described with reference to FIGS. 9A and 9B can propose new individualized layout(s) and can offer one-touch operation as part of the system configuration sequence for frequently used commands.’ see also ¶Abstract, ¶0014, ¶0057, ¶0087-0091, Claim 1, Claim 2) differentiating components of the exam workflow based on the sequences of button pushes, wherein the differentiated components depend on a clinical application of the ultrasound examination; (FIG. 9A; ¶0086, ‘FIG. 9A sequence, an exam type determination step S130 determines, from operator entries or from information provided from some other source, the exam type that will be performed. A setup parameters application step S140 applies the setup parameters from the existing custom system configuration or using new profile data. A begin examination step S150 then begins the ultrasound exam with the profile settings applied. 
An initiate machine learning step S160 initiates the machine learning software module for monitoring and recording operator instructions during the interaction sequence on the graphical user interface (GUI) for the current exam.’, see also ¶Abstract, Claim 1, Claim 2, ¶0046, ¶0047-0050, ¶0078-0079, ¶0087, ¶0088-0091) performing sequence pattern mining of the sequences of button pushes based on the differentiated components of the exam workflow to extract user specific workflow trends, wherein the user specific workflow trends comprise a plurality of most frequently used of the sequences of button pushes; (¶0079, ‘using machine learning in order to identify operator tendencies, patterns, and preferences from the interaction sequence.’; ¶0086, ‘an exam type determination step S130 determines, from operator entries or from information provided from some other source, the exam type that will be performed’; ¶0087, ‘An instruction tracking step S180 begins the ongoing process of recording the interaction sequence of operator instructions that are entered during the course of the exam. […] A machine learning step S190 applies machine learning and pattern recognition logic to the recorded interaction sequence.’, see also ¶Abstract, ¶0057, ¶0079, ¶0082-0083, ¶0086-0091) predicting strings of next button pushes using a predictive model based on at least one previous button push in the user specific workflow trends, respectively, (¶0076, ‘Examples of some of the ultrasound parameters that are typically adapted by the sonographer to task requirements on a per-exam basis during the operator interaction sequence include dynamic range, acoustic signal gain, transmit frequency, choice of harmonic imaging versus fundamental-only imaging, time gain compensation (TGC) setting, preference for triplex versus duplex Doppler modes, and other settings.’; ¶0087, ‘A machine learning step S190 applies machine learning and pattern recognition logic to the recorded interaction sequence. […] These results from the operator can then be compared with previous or other existing interaction sequences for the purpose of determining what types of changes would be useful for improving operator workflow for the particular type of exam noted. […] Where a known sequence of control button presses is typically used, the operator interface can highlight the next or most likely control button in sequence for standard operation.’; ¶0090, ‘Based on the patterns it observes in the user's actions (e.g. sequence of key strokes, equipment settings, and feature usage) during the interaction sequence the method described with reference to FIGS. 9A and 9B can propose new individualized layout(s) and can offer one-touch operation as part of the system configuration sequence for frequently used commands.’, see also ¶Abstract, ¶0046, ¶0079, ¶0082-0083, ¶0086-0091), wherein the predicting the strings of the next button pushes is triggered by the user specific workflow trends; and - Anand teaches a machine learning pattern recognition module that observes, tracks, and records operator interaction with the system in order to detect workflow patterns, ¶0089. The predictive model is based on the observed, tracked, and recorded button pushes in the user specific workflow trends. “The new individualized layout(s) and one-touch operation” as described with FIG. 9A & 9B refer to the predicted sequence of operator actions, including button pushes, ¶0087, ¶0090-0091.
The predictive capability applies to individual button presses within a sequence corresponding to commands that are indicative of the detected workflow patterns (i.e., user specific workflow trends), ¶0087, ¶0089, ¶0090-0091. Anand teaches that the commands correspond to adjusting the performance of the ultrasound system, specifically image acquisition and processing, as well as overall workflow efficiency, ¶0076-0077, ¶0078. Thus, Anand teaches that predicting strings of the next button pushes corresponding to commands is triggered by the user specific workflow trends. outputting at least one macro button corresponding to the predicted strings of next button pushes on the control interface, wherein selection of the at least one macro button by the user executes the at least one of the predicted strings of next button pushes. (¶0087, ‘The order, size, or other arrangement aspect of control buttons 72 can vary widely, depending on what best supports operator workflow. Where a known sequence of control button presses is typically used, the operator interface can highlight the next or most likely control button in sequence for standard operation.’) Anand fails to disclose: detecting probe motion of the transducer probe during the ultrasound examination; wherein predicting the strings of the next commands is triggered by the detected probe motion; However, Boudier, in the context of a method and system for receiving motion data of a probe and issuing commands based on the motion data, discloses detecting probe motion of the transducer probe during the ultrasound examination; (¶Abstract, ¶0020, ‘the motion detection feature 120 may be disposed within a housing 122 of the probe 104, coupled with the housing 122 of probe 104, or otherwise integrated with the probe 104. […] The motion detection feature 120 may include any of various features that facilitate motion detection. In one embodiment, the motion detection feature 120 includes an accelerometer, a light emitter (e.g., an infrared light emitter) that functions with a separate detector, or a combination of such features. In some embodiments, the motion detection feature may be separate from the probe 104 and may be a component of or interact directly with the main unit 102. For example, the motion detection feature 120 may include a device that includes a camera configured to track the motion of the probe 104. Further, in some embodiments, a combination of features may be utilized to track movement of the probe 104.’) wherein predicting the strings of the next commands is triggered by the detected probe motion; (¶0027, ‘one type of probe motion that may be detected and utilized to initiate a command in accordance with present embodiments. Specifically, the user 202 is maneuvering the probe 200 in sweeping patterns to essentially form a cross-like pattern 208. This series of motions may be detected by the motion detection feature or features 120 and the processor 130 may interpret the series of motions as corresponding to a delete command, a command to power down the system, a command to adjust performance, or any of various different control commands. If one or more of the buttons 206 are pressed during the motion associated with this pattern 208, a different or slightly modified command may be initiated. For example, pressing a button in addition to the motion of the pattern 208 may cause the related command to initiate more quickly.
Accordingly, the user 202 can employ the probe to provide commands without accessing an associated main unit. For example, when the area in which the procedure is being performed blocks ready access to the main unit but enables access to the probe.’) It would have been obvious to one of ordinary skilled in the art before the effective filing date of the claimed invention to modify the work flow trend trigger of Anand to include the probe motion trigger of Boudier (i.e, to also initiate the predictive process of Anand in response to detected probe motion of Boudier) The motivation to do this yields predictable results such as improving the utilization of an ultrasound probe by allowing a healthcare worker to access certain control features while handling the probe to move the probe to certain patterns to control operations thereof, ¶0031 of Boudier. The modified combination would disclose wherein predicting the strings of the next button pushes is triggered by the detected probe motion. Anand in view of Boudier fail to disclose: and wherein the detected probe motion is at least one of a relatively more stable probe motion and a stopped probe motion. However, Holl in the context of ultrasound probe imaging discloses the detected probe motion is at least one of a stopped probe motion. (¶Abstract, ¶0007-0009, Claim 1, Claim 7- in response to detecting no motion for a period of time and in response reduce power consumption of the probe). It would have been obvious to one of ordinary skilled in the art before the effective filing date of the claimed invention to modify the detection of the probe motion of modified Anand to be include at least one of a stopped probe motion as taught by Holl. The motivation to do this yield predictable results such as ensuring conservation of power for the probe, as suggested by Holl, ¶Abstract. Claim 9: Modified Anand discloses all the elements above in claim 1, Anand fails to disclose: wherein detecting the probe motion of the transducer probe comprises monitoring the transducer probe using at least one sensor on the transducer probe, and determining the probe motion from position data provided by the at least one sensor. However, Boudier is relied upon above discloses: wherein detecting the probe motion of the transducer probe comprises monitoring the transducer probe using at least one sensor on the transducer probe, and determining the probe motion from position data provided by the at least one sensor. (¶0025, ‘The motion detection feature 120 may include one or a combination of various different features that work to detect movement of the probe 104. In one embodiment, the motion detection feature 120 includes one or more accelerometers. In some embodiments, the motion detection feature 120 includes an infrared light emitter and an associated detection, and information related to light emitted by the infrared light source and detected by a component (e.g., an infrared light detector) of the main unit 102 may be transmitted from the main unit 102 to the probe 104. Further, in some embodiments, the main unit 104 may include or interface with a motion tracking feature, such as a camera configured to track movement of the probe, and data produced by such a feature may be utilized to produce command signals that are transmitted to the probe via the communication features 126, 128. 
Assembled information from the one or more motion detection features 120 may be utilized to identify a relative position or series of positions of the probe 104, and this information may correlate to programmed instructions for operation of the ultrasound system 100.’) It would have been obvious to one of ordinary skilled in the art before the effective filing date of the claimed invention to modify the detecting of probe motion of modified Anand to comprise monitoring the transducer probe using at least one sensor on the transducer probe, and determining the probe motion from position data provided by the at least one sensor as taught by Boudier. The motivation to do this yields predictable results such as improving the utilization of an ultrasound probe by allowing a healthcare worker to access certain control features while handling the probe to move the probe to certain patterns to control operations thereof, ¶0031 of Boudier. Claim 11: Modified Anand discloses all the elements above in claim 1, Anand discloses: wherein the control interface comprises a touch screen display. (¶0005, ‘The display/monitor 14 can be a touchscreen in order to function as an input device.’) Claim 13: Anand discloses, A system for performing an ultrasound examination, comprising: (¶Abstract) an ultrasound imaging system (FIG.2) comprising a transducer probe (transducer probe 26) and a control interface (Ultrasound systems 10, ¶0005, ‘portable ultrasound systems 10 having a cart/base/support 12, a display or display/monitor 14, one or more input interface devices 16 (such as keyboard or mouse), and a generator 18.’) for controlling acquisition of ultrasound images during the ultrasound examination; (¶Abstract, ‘A method tracks a user's operation of an ultrasound system. Based on the tracked operations, the method uses a machine-learning module to generate a proposed custom system configuration for the ultrasound system for the user. The proposed custom system configuration is presented to the user. In response to a user instruction, the proposed custom system configuration is implemented on the ultrasound system.’; FIG. 2-3) a display configured to display the ultrasound images; (¶0020, ‘FIG. 5 shows a displayed ultrasound image having a region of interest shown as a bounded rectangle, wherein features within the region of interest are highlighted in color.’; ¶0032, ‘an image displayed on the display. The user then selects a position of the cursor as indicating a point for a region of interest. The display is a CRT (cathode-ray tube), LCD (liquid crystal device), plasma screen, projector, combinations thereof, or other now known or later developed devices for displaying an image, a region of interest, region of interest information and/or user input information. The display can be a touch screen display that includes a keyboard having a programmable layout.’) at least one processor coupled to the ultrasound imaging system and the display; and a non-transitory memory for storing instructions that, when executed by the at least one processor, cause the at least one processor to: (¶0031-0032, ¶0095-96) obtain sequences of button pushes performed by a user via the control interface during an exam workflow, each of the sequences of button pushes having a corresponding sequence length defined by a number of button pushes in the sequence of buttons; (FIG. 9B-10, S180, ¶0087, ‘a setup monitoring step S170 tracks and records any operator changes to ultrasound system setup during the interaction sequence. 
An instruction tracking step S180 begins the ongoing process of recording the interaction sequence of operator instructions that are entered during the course of the exam. A machine learning step S190 applies machine learning and pattern recognition logic to the recorded interaction sequence. At the conclusion of the exam, a generate changes step S200 can display text and graphical content to the operator for interface modifications based on the interaction sequence of operator setup and operation instructions. The generated changes that are suggested can relate to system setup commands or to the sequence of operator instructions. These results from the operator can then be compared with previous or other existing interaction sequences for the purpose of determining what types of changes would be useful for improving operator workflow for the particular type of exam noted. Generate changes step S200 can show proposed changes to the equipment setup using display screen 14 as shown schematically in FIG. 10. In the example shown herein, a display screen 70 shows a number of control buttons 72 often used by the operator, singly or in a defined interaction sequence, are displayed. The order, size, or other arrangement aspect of control buttons 72 can vary widely, depending on what best supports operator workflow. Where a known sequence of control button presses is typically used, the operator interface can highlight the next or most likely control button in sequence for standard operation. In a customization step S210, the operator is given an option to accept or to not use the projected custom system configuration changes, such as by pressing a control button 76.’; ¶0090, ‘Based on the patterns it observes in the user's actions (e.g. sequence of key strokes, equipment settings, and feature usage) during the interaction sequence the method described with reference to FIGS. 9A and 9B can propose new individualized layout(s) and can offer one-touch operation as part of the system configuration sequence for frequently used commands.’ see also ¶Abstract, ¶0014, ¶0057, ¶0087-0091, Claim 1, Claim 2) differentiate components of the exam workflow based on the sequences of button pushes, wherein the differentiated components depend on a clinical application of the ultrasound examination; (FIG. 9A; ¶0086, ‘FIG. 9A sequence, an exam type determination step S130 determines, from operator entries or from information provided from some other source, the exam type that will be performed. A setup parameters application step S140 applies the setup parameters from the existing custom system configuration or using new profile data. A begin examination step S150 then begins the ultrasound exam with the profile settings applied. 
An initiate machine learning step S160 initiates the machine learning software module for monitoring and recording operator instructions during the interaction sequence on the graphical user interface (GUI) for the current exam.’, see also ¶Abstract, Claims 1, Claim 2, ¶0046, ¶0047-0050, ¶0078-0079, ¶0087, ¶0088-0091) perform sequence pattern mining of the sequences of button pushes based on the differentiated components of the exam workflow to extract user specific workflow trends, wherein the user specific workflow trends comprise a plurality of most frequently used of the sequences of button pushes; (¶0079, ‘using machine learning in order to identify operator tendencies, patterns, and preferences from the interaction sequence.’; ¶0086, ‘an exam type determination step S130 determines, from operator entries or from information provided from some other source, the exam type that will be performed’; ¶0087, ‘An instruction tracking step S180 begins the ongoing process of recording the interaction sequence of operator instructions that are entered during the course of the exam. […] A machine learning step S190 applies machine learning and pattern recognition logic to the recorded interaction sequence.’, see also ¶Abstract, ¶0057, ¶0079, ¶0082-0083 ¶0086-0091) predict strings of next button pushes using a predictive model based on at least one previous button push in the user specific workflow trends, respectively; and (¶0076, ‘Examples of some of the ultrasound parameters that are typically adapted by the sonographer to task requirements on a per-exam basis during the operator interaction sequence include dynamic range, acoustic signal gain, transmit frequency, choice of harmonic imaging versus fundamental-only imaging, time gain compensation (TGC) setting, preference for triplex versus duplex Doppler modes, and other settings.’; ¶0087, ‘A machine learning step S190 applies machine learning and pattern recognition logic to the recorded interaction sequence. […] These results from the operator can then be compared with previous or other existing interaction sequences for the purpose of determining what types of changes would be useful for improving operator workflow for the particular type of exam noted. […] Where a known sequence of control button presses is typically used, the operator interface can highlight the next or most likely control button in sequence for standard operation.’; ¶0090, ‘Based on the patterns it observes in the user's actions (e.g. sequence of key strokes, equipment settings, and feature usage) during the interaction sequence the method described with reference to FIGS. 9A and 9B can propose new individualized layout(s) and can offer one-touch operation as part of the system configuration sequence for frequently used commands.’, see also ¶Abstract, ¶0046, ¶0079, ¶0082-0083, ¶0086-0091) output at least one macro button on the display corresponding to the predicted strings of next button pushes on the control interface, wherein selection of the at least one macro button by the user executes the at least one of the predicted strings of next button pushes. (¶0087, ‘The order, size, or other arrangement aspect of control buttons 72 can vary widely, depending on what best supports operator workflow. 
Where a known sequence of control button presses is typically used, the operator interface can highlight the next or most likely control button in sequence for standard operation.’) trigger the predicting the strings of the next button pushes in response to the user specific workflow trends -Anand teaches a machine learning pattern recognition module that observe, track and record operator interaction with the system in order to detect workflow patterns, ¶0089. The predicative model is based on the observed, tracked and recorded button push in the user specific work flow trends. “The new individualized layout(s) and one-touch operation” as described with FIG. 9A & 9B refer to the predicted sequence of operator actions, including button pushes, ¶0087, ¶0090-0091. The predicative capability for individual button presses within a sequence corresponding to commands that are indicative of the detected workflow patterns (i.e., use specific workflow trends), ¶0087, ¶0089, ¶0090-0091. Anand teaches that the commands correspond to adjusting the performance of the ultrasound system, specifical image acquisition processing, as well as overall workflow efficiency, ¶0076-0077, ¶0078. And thus Anand teaches predicting strings of the next button pushes corresponding to commands is triggered by the user specific workflow trends. Anand fails to disclose: detect probe motion of the transducer probe during the ultrasound examination; and trigger a predicting of strings of the next commands in response to the detected probe motion; However, Boudier in the context of a method and system for receiving motion data of a probe and programmed to issue a commands based on the motion data, discloses, detect probe motion of the transducer probe during the ultrasound examination; (¶Abstract, ¶0020, ‘, the motion detection feature 120 may be disposed within a housing 122 of the probe 104, coupled with the housing 122 of probe 104, or otherwise integrated with the probe 104. […] The motion detection feature 120 may include any of various features that facilitate motion detection. In one embodiment, the motion detection feature 120 includes an accelerometer, a light emitter (e.g., an infrared light emitter) that functions with a separate detector, or a combination of such features. In some embodiments, the motion detection feature may be separate from the probe 104 and may be a component of or interact directly with the main unit 102. For example, the motion detection feature 120 may include a device that includes a camera configured to track the motion of the probe 104. Further, in some embodiments, a combination of features may be utilized to track movement of the probe 104.’) and trigger a predicting of strings of the next commands in response to the detected probe motion; (¶0027, ‘one type of probe motion that may be detected and utilized to initiate a command in accordance with present embodiments. Specifically, the user 202 is maneuvering the probe 200 in sweeping patterns to essentially form a cross-like pattern 208. This series of motions may be detected by the motion detection feature or features 120 and the processor 130 may interpret the series of motions as corresponding to a delete command, a command to power down the system, a command to adjust performance, or any of various different control commands. If one or more of the buttons 206 are pressed during the motion associated with this pattern 208, a different or slightly modified command may be initiated. 
For example, pressing a button in addition to the motion of the pattern 208 may cause the related command to initiate more quickly. Accordingly, the user 202 can employ the probe to provide commands without accessing an associated main unit. For example, when the area in which the procedure is being performed blocks ready access to the main unit but enables access to the probe.’) It would have been obvious to one of ordinary skilled in the art before the effective filing date of the claimed invention to modify the work flow trend trigger of Anand to include the probe motion trigger of Boudier (i.e, to also initiate the predictive process of Anand in response to detected probe motion of Boudier) The motivation to do this yields predictable results such as improving the utilization of an ultrasound probe by allowing a healthcare worker to access certain control features while handling the probe to move the probe to certain patterns to control operations thereof, ¶0031 of Boudier. The modified combination would disclose trigger the predicting of the strings of the next button in response to the detected probe motion. Anand in view of Boudier fail to disclose: and wherein the detected probe motion is at least one of a relatively more stable probe motion and a stopped probe motion. However, Holl in the context of ultrasound probe imaging discloses the detected probe motion is at least one of a stopped probe motion. (¶Abstract, ¶0007-0009, Claim 1, Claim 7- in response to detecting no motion for a period of time and in response reduce power consumption of the probe). It would have been obvious to one of ordinary skilled in the art before the effective filing date of the claimed invention to modify the detection of the probe motion of modified Anand to be include at least one of a stopped probe motion as taught by Holl. The motivation to do this yield predictable results such as ensuring conservation of power for the probe, as suggested by Holl, ¶Abstract. Claim 15: Modified Anand discloses all the elements above in claim 13, Anand fails to disclose: wherein the instructions cause the at least one processor to detect the probe motion of the transducer probe by monitoring the transducer probe using at least one sensor on the transducer probe, and determining the probe motion from position data provided by the at least one sensor. However, Boudier is relied upon above discloses: wherein the instructions cause the at least one processor to detect the probe motion of the transducer probe by monitoring the transducer probe using at least one sensor on the transducer probe, and determining the probe motion from position data provided by the at least one sensor. (¶0025, ‘The motion detection feature 120 may include one or a combination of various different features that work to detect movement of the probe 104. In one embodiment, the motion detection feature 120 includes one or more accelerometers. In some embodiments, the motion detection feature 120 includes an infrared light emitter and an associated detection, and information related to light emitted by the infrared light source and detected by a component (e.g., an infrared light detector) of the main unit 102 may be transmitted from the main unit 102 to the probe 104. 
Further, in some embodiments, the main unit 104 may include or interface with a motion tracking feature, such as a camera configured to track movement of the probe, and data produced by such a feature may be utilized to produce command signals that are transmitted to the probe via the communication features 126, 128. Assembled information from the one or more motion detection features 120 may be utilized to identify a relative position or series of positions of the probe 104, and this information may correlate to programmed instructions for operation of the ultrasound system 100.’) It would have been obvious to one of ordinary skilled in the art before the effective filing date of the claimed invention to modify the detecting of probe motion of modified Anand to comprise monitoring the transducer probe using at least one sensor on the transducer probe, and determining the probe motion from position data provided by the at least one sensor as taught by Boudier. The motivation to do this yields predictable results such as improving the utilization of an ultrasound probe by allowing a healthcare worker to access certain control features while handling the probe to move the probe to certain patterns to control operations thereof, ¶0031 of Boudier. Claim 19: Anand discloses, A non-transitory computer readable medium storing instructions for performing an ultrasound examination that, when executed by at least one processor, cause the at least one processor to: (¶0031-0032, ¶0095-96) obtain sequences of button pushes performed by a user during an exam workflow via a control interface, configured to interface with a transducer probe during the ultrasound examination, each of the sequences of button pushes having a corresponding sequence length defined by a number of button pushes in the sequence of buttons; (FIG. 9B-10, S180, ¶0087, ‘a setup monitoring step S170 tracks and records any operator changes to ultrasound system setup during the interaction sequence. An instruction tracking step S180 begins the ongoing process of recording the interaction sequence of operator instructions that are entered during the course of the exam. A machine learning step S190 applies machine learning and pattern recognition logic to the recorded interaction sequence. At the conclusion of the exam, a generate changes step S200 can display text and graphical content to the operator for interface modifications based on the interaction sequence of operator setup and operation instructions. The generated changes that are suggested can relate to system setup commands or to the sequence of operator instructions. These results from the operator can then be compared with previous or other existing interaction sequences for the purpose of determining what types of changes would be useful for improving operator workflow for the particular type of exam noted. Generate changes step S200 can show proposed changes to the equipment setup using display screen 14 as shown schematically in FIG. 10. In the example shown herein, a display screen 70 shows a number of control buttons 72 often used by the operator, singly or in a defined interaction sequence, are displayed. The order, size, or other arrangement aspect of control buttons 72 can vary widely, depending on what best supports operator workflow. Where a known sequence of control button presses is typically used, the operator interface can highlight the next or most likely control button in sequence for standard operation. 
In a customization step S210, the operator is given an option to accept or to not use the projected custom system configuration changes, such as by pressing a control button 76.’; ¶0090, ‘Based on the patterns it observes in the user's actions (e.g. sequence of key strokes, equipment settings, and feature usage) during the interaction sequence the method described with reference to FIGS. 9A and 9B can propose new individualized layout(s) and can offer one-touch operation as part of the system configuration sequence for frequently used commands.’ see also ¶Abstract, ¶0014, ¶0057, ¶0087-0091, Claim 1, Claim 2) differentiate components of the exam workflow based on the sequences of button pushes, wherein the differentiated components depend on a clinical application of the ultrasound examination; (FIG. 9A; ¶0086, ‘FIG. 9A sequence, an exam type determination step S130 determines, from operator entries or from information provided from some other source, the exam type that will be performed. A setup parameters application step S140 applies the setup parameters from the existing custom system configuration or using new profile data. A begin examination step S150 then begins the ultrasound exam with the profile settings applied. An initiate machine learning step S160 initiates the machine learning software module for monitoring and recording operator instructions during the interaction sequence on the graphical user interface (GUI) for the current exam.’, see also ¶Abstract, Claims 1, Claim 2, ¶0046, ¶0047-0050, ¶0078-0079, ¶0087, ¶0088-0091) perform sequence pattern mining of the sequences of button pushes based on the differentiated components of the exam workflow to extract user specific workflow trends, wherein the user specific workflow trends comprise a plurality of most frequently used of the sequences of button pushes; (¶0079, ‘using machine learning in order to identify operator tendencies, patterns, and preferences from the interaction sequence.’; ¶0086, ‘an exam type determination step S130 determines, from operator entries or from information provided from some other source, the exam type that will be performed’; ¶0087, ‘An instruction tracking step S180 begins the ongoing process of recording the interaction sequence of operator instructions that are entered during the course of the exam. […] A machine learning step S190 applies machine learning and pattern recognition logic to the recorded interaction sequence.’, see also ¶Abstract, ¶0057, ¶0079, ¶0082-0083 ¶0086-0091) predict strings of next button pushes using a predictive model based on at least one previous button push in the user specific workflow trends, respectively (¶0076, ‘Examples of some of the ultrasound parameters that are typically adapted by the sonographer to task requirements on a per-exam basis during the operator interaction sequence include dynamic range, acoustic signal gain, transmit frequency, choice of harmonic imaging versus fundamental-only imaging, time gain compensation (TGC) setting, preference for triplex versus duplex Doppler modes, and other settings.’; ¶0087, ‘A machine learning step S190 applies machine learning and pattern recognition logic to the recorded interaction sequence. […] These results from the operator can then be compared with previous or other existing interaction sequences for the purpose of determining what types of changes would be useful for improving operator workflow for the particular type of exam noted. 
[…] Where a known sequence of control button presses is typically used, the operator interface can highlight the next or most likely control button in sequence for standard operation.’; ¶0090, ‘Based on the patterns it observes in the user's actions (e.g. sequence of key strokes, equipment settings, and feature usage) during the interaction sequence the method described with reference to FIGS. 9A and 9B can propose new individualized layout(s) and can offer one-touch operation as part of the system configuration sequence for frequently used commands.’, see also ¶Abstract, ¶0046, ¶0079, ¶0082-0083, ¶0086-0091); and output at least one macro button on a display corresponding to the predicted strings of next button pushes on the control interface, wherein selection of the at least one macro button by the user executes the at least one of the predicted strings of next button pushes. (¶0087, ‘The order, size, or other arrangement aspect of control buttons 72 can vary widely, depending on what best supports operator workflow. Where a known sequence of control button presses is typically used, the operator interface can highlight the next or most likely control button in sequence for standard operation.’) trigger the predicting the strings of the next button pushes in response to the user specific workflow trends -Anand teaches a machine learning pattern recognition module that observe, track and record operator interaction with the system in order to detect workflow patterns, ¶0089. The predicative model is based on the observed, tracked and recorded button push in the user specific work flow trends. “The new individualized layout(s) and one-touch operation” as described with FIG. 9A & 9B refer to the predicted sequence of operator actions, including button pushes, ¶0087, ¶0090-0091. The predicative capability for individual button presses within a sequence corresponding to commands that are indicative of the detected workflow patterns (i.e., use specific workflow trends), ¶0087, ¶0089, ¶0090-0091. Anand teaches that the commands correspond to adjusting the performance of the ultrasound system, specifical image acquisition processing, as well as overall workflow efficiency, ¶0076-0077, ¶0078. And thus Anand teaches predicting strings of the next button pushes corresponding to commands is triggered by the user specific workflow trends. Anand fails to disclose: detect probe motion of the transducer probe during the ultrasound examination; and trigger a predicting of strings of the next commands in response to the detected probe motion; However, Boudier in the context of a method and system for receiving motion data of a probe and programmed to issue a commands based on the motion data, discloses, detect probe motion of the transducer probe during the ultrasound examination; (¶Abstract, ¶0020, ‘, the motion detection feature 120 may be disposed within a housing 122 of the probe 104, coupled with the housing 122 of probe 104, or otherwise integrated with the probe 104. […] The motion detection feature 120 may include any of various features that facilitate motion detection. In one embodiment, the motion detection feature 120 includes an accelerometer, a light emitter (e.g., an infrared light emitter) that functions with a separate detector, or a combination of such features. In some embodiments, the motion detection feature may be separate from the probe 104 and may be a component of or interact directly with the main unit 102. 
For example, the motion detection feature 120 may include a device that includes a camera configured to track the motion of the probe 104. Further, in some embodiments, a combination of features may be utilized to track movement of the probe 104.’) and trigger a predicting of strings of the next commands in response to the detected probe motion; (¶0027, ‘one type of probe motion that may be detected and utilized to initiate a command in accordance with present embodiments. Specifically, the user 202 is maneuvering the probe 200 in sweeping patterns to essentially form a cross-like pattern 208. This series of motions may be detected by the motion detection feature or features 120 and the processor 130 may interpret the series of motions as corresponding to a delete command, a command to power down the system, a command to adjust performance, or any of various different control commands. If one or more of the buttons 206 are pressed during the motion associated with this pattern 208, a different or slightly modified command may be initiated. For example, pressing a button in addition to the motion of the pattern 208 may cause the related command to initiate more quickly. Accordingly, the user 202 can employ the probe to provide commands without accessing an associated main unit. For example, when the area in which the procedure is being performed blocks ready access to the main unit but enables access to the probe.’) It would have been obvious to one of ordinary skilled in the art before the effective filing date of the claimed invention to modify the work flow trend trigger of Anand to include the probe motion trigger of Boudier (i.e, to also initiate the predictive process of Anand in response to detected probe motion of Boudier) The motivation to do this yields predictable results such as improving the utilization of an ultrasound probe by allowing a healthcare worker to access certain control features while handling the probe to move the probe to certain patterns to control operations thereof, ¶0031 of Boudier. The modified combination would disclose trigger the predicting of the strings of the next button in response to the detected probe motion. Anand in view of Boudier fail to disclose: and wherein the detected probe motion is at least one of a relatively more stable probe motion and a stopped probe motion. However, Holl in the context of ultrasound probe imaging discloses the detected probe motion is at least one of a stopped probe motion. (¶Abstract, ¶0007-0009, Claim 1, Claim 7- in response to detecting no motion for a period of time and in response reduce power consumption of the probe). It would have been obvious to one of ordinary skilled in the art before the effective filing date of the claimed invention to modify the detection of the probe motion of modified Anand to be include at least one of a stopped probe motion as taught by Holl. The motivation to do this yield predictable results such as ensuring conservation of power for the probe, as suggested by Holl, ¶Abstract. Claims 2, 4, 7, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Anand (US 2017/0347993 A1) in view of Boudier (US 2012/0179035 A1) in view of Holl et al (US20130053697A1), as applied to claim 1, above in further view of Joy (US 2020/0005088 A1). 
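
The analysis of claims 1, 13, and 19 above repeatedly characterizes Anand's machine learning step as predicting the next button push, and strings of next button pushes, from user specific workflow trends (¶0087, ¶0090). None of the cited references publishes an implementation, so the following is only a minimal illustrative sketch (Python) of one way such a predictor could be built, here as a simple bigram counter over logged button sequences. The class name, button labels, and example log are hypothetical.

```python
from collections import Counter, defaultdict

# Illustrative only: a bigram ("previous button -> next button") counter standing in
# for the claimed predictive model. Nothing here is taken from Anand, Boudier, or Holl.
class NextButtonPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def fit(self, sequences):
        """sequences: iterable of button-push sequences, e.g. [['2D', 'Gain', 'Freeze'], ...]"""
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.transitions[prev][nxt] += 1

    def predict_string(self, last_button, length=3):
        """Greedily extend from the last observed push to a string of likely next pushes,
        i.e. the kind of string a macro button could execute."""
        string = []
        current = last_button
        for _ in range(length):
            if not self.transitions[current]:
                break
            current = self.transitions[current].most_common(1)[0][0]
            string.append(current)
        return string

# Hypothetical usage: mined exam logs -> predicted string of next button pushes.
logs = [["2D", "Gain", "Freeze", "Measure"], ["2D", "Gain", "Freeze", "Annotate"]]
model = NextButtonPredictor()
model.fit(logs)
print(model.predict_string("2D"))  # e.g. ['Gain', 'Freeze', 'Measure']
```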
Claim 2: Modified Anand discloses all the elements above in claim 1, Anand fails to disclose: wherein obtaining the sequences of button pushes comprises: accessing log files of the ultrasound imaging system; and pairing the ultrasound images with the log files based on the exam workflow and transition states of the ultrasound imaging system. However, Joy in the context of medical image interfaces based on learning discloses, accessing log files of the ultrasound imaging system; and pairing the ultrasound images with the log files based on the exam workflow and transition states of the ultrasound imaging system. (¶0028, ‘the imaging modality 110 includes an X-ray imager, ultrasound scanner, magnetic resonance imager, or the like. In this example, image data representative of the image(s) is communicated between the imaging modality 110 and the acquisition workstation 120. The image data is communicated electronically over a wired or wireless connection, for example.’; ¶0046, ‘The machine learning engine 330 and/or the machine learning model 336 is used to analyze image similarities between different medical cases so that images and/or their associated data can be correlated to user inputs so that subsequently encountered similar images can be correlated or tied to the user inputs. By correlating images with user input, application settings and/or user interface configuration settings, the machine learning model 336 learns a context that matches and/or correlates the user input/commands with the content of the images, which can include associated data and/or metadata. In other words, the example machine learning model 336 determines patterns (e.g., contextual patterns) between a context/situation and resulting user interface application configurations (e.g., interface configuration changes) that have been applied by a user. As an example, instead of simply correlating a situation to a user interface configuration, the machine learning model 336 can determine similarities in situations (e.g., similarities in context and/or images, etc.) and/or a degree of similarity without necessarily requiring an exact situation, thereby resulting in great adaptability in learning and application of the modified user interface configurations. Additionally or alternatively, in some examples, the application is executed is modified or adjusted (e.g., a sequence of hierarchy of when different modules or executables are run, etc.) by the machine learning model 336.’, see also ¶0027-0028, ¶0038, ¶0052-0053) It would have been obvious to one of ordinary skilled in the art before the effective filing date of the claimed invention to modify the obtaining sequence of button pushes of modified Anand to comprise accessing log files of the ultrasound imaging system; and pairing the ultrasound images with the log files based on the exam workflow and transition states of the ultrasound imaging system as taught by Joy. The motivation to do this yields predictable results such as improving the user interface by continuously adjusting the user interface setup based on contextual learning of medical data, as suggested by Joy ¶0095. Claim 4: Modified Anand discloses all the elements above in claim 2, Anand fails to disclose: wherein differentiating the components of the exam workflow comprises: converting each of the sequence of button pushes to a corresponding encoded vector; clustering encoded vectors from the converting step; and separating the clustered encoded vectors into the transition states. 
However, Joy, as relied upon above, discloses: wherein differentiating the components of the exam workflow comprises: converting each of the sequence of button pushes to a corresponding encoded vector; clustering encoded vectors from the converting step; and separating the clustered encoded vectors into the transition states. - Joy discloses monitoring and recording user activity, which includes commands, sequences, inputs, and image arrangement, ¶0049-0050. These user actions are analyzed, ¶0043-0044, and stored as “user interactions” and “user input sequences”, ¶0055-0056. Commands and inputs constitute “button pushes”. Joy teaches that features are extracted from user data to define user vectors, ¶0072-0073, and from training vectors, ¶0070-0071. These “feature vectors” or “user vectors” are numerical representations of the user actions and their sequences that constitute “encoded vectors”, ¶0057, ¶0058. - Joy discloses that the machine learning engine uses a similarity algorithm, ¶0052-0053, which analyzes user commands in conjunction with medical content data and its associated metadata. The similarity algorithm utilizes a similarity-based classification model to estimate a class label based on the degree of similarity between incoming data and training data samples, or pairwise similarities between training samples, ¶0052-0053. It calculates similarity based on the aforementioned feature vectors, with a minimal distance value directly translating to higher similarity, ¶0053-0054. The description of performing similarity analysis on vectors to determine “nearest neighbors”, ¶0052-0053, and to identify “similarities in situations”, ¶0046, amounts to a clustering algorithm in which the encoded vectors are grouped based on similarity. - Joy further teaches that the learning network develops the model based on “user actions in relationship to a context of the medical content data”, with the model defining “contextual patterns” of user actions based on this content and medical content data, ¶Abstract, ¶0065, Claim 1, Claim 16. The “context” is defined as the contextual relationship between the medical content data and user inputs or actions, ¶0026. Meanwhile, the “radiology workflow” includes stages such as opening imaging data and configuration settings, ¶0038. The system of Joy continuously adjusts the user interface based on the contextual learning of medical data, ¶0095. This means that the learned contextual patterns, derived from similarities among the user actions and content, differentiate the different phases or “states” within the exam workflow. The system learns which user actions and configurations are appropriate for different stages or contexts of a medical case, effectively segmenting the workflow based on behavior and data context. These learned patterns adapt based on the medical workflow, which serves to identify and respond to “transition states”. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the differentiation of the components of the exam workflow of modified Anand to comprise the teachings of Joy. The motivation to do this yields predictable results such as improving the user interface by continuously adjusting the user interface setup based on contextual learning of medical data, as suggested by Joy, ¶0095.
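
The claim 4 limitations discussed above (converting button-push sequences to encoded vectors, clustering the encoded vectors, and separating the clusters into transition states) are characterized here only at the level of Joy's feature vectors and similarity analysis. A minimal sketch (Python) of one conventional way such a pipeline could look, using bag-of-buttons count vectors and k-means, is given below; the button vocabulary, example sequences, and the use of scikit-learn are illustrative assumptions, not taken from the application or the cited references.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative only: encode each button-push sequence as a count vector over a fixed
# button vocabulary, then cluster the encoded vectors. The cluster labels stand in
# for the "transition states" discussed above.
VOCAB = ["2D", "Gain", "Depth", "Freeze", "Measure", "Annotate", "Save"]
INDEX = {name: i for i, name in enumerate(VOCAB)}

def encode(sequence):
    """Convert one sequence of button pushes into a count ("encoded") vector."""
    vec = np.zeros(len(VOCAB))
    for button in sequence:
        vec[INDEX[button]] += 1
    return vec

sequences = [
    ["2D", "Gain", "Depth", "Freeze"],          # acquisition/optimization-like activity
    ["Gain", "Depth", "2D", "Freeze"],
    ["Freeze", "Measure", "Annotate", "Save"],  # review/measurement-like activity
    ["Measure", "Annotate", "Save"],
]

encoded = np.array([encode(seq) for seq in sequences])
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(encoded)
print(states)  # e.g. [0 0 1 1]: two clusters separating the two kinds of activity
```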
Claim 7: Modified Anand discloses all the elements above in claim 1, but Anand fails to disclose: wherein performing the sequence pattern mining comprises: assigning different weights to the user specific workflow trends based at least in part on the clinical application of the ultrasound examination.

However, Joy, in the context of medical image interfaces based on learning, discloses: wherein performing the sequence pattern mining comprises: assigning different weights to the user specific workflow trends based at least in part on the clinical application of the ultrasound examination. Joy teaches, when performing similarity analysis to develop predictions, that “For features with string values, comparisons done and appropriate numeric value weightage is associated for exact match and approximate match to be later used for a similarity calculation.”, ¶0054. The deep learning neural network includes “certain example connections 932, 952, 972 can be given added weight while other example connections 934, 954, 974 are given less weight in the neural network 900.”, ¶0078. This demonstrates that the model itself assigns weights to different aspects of the data. The model is based on the “user actions in relationship to a context of the medical content data”, Abstract. This context includes the clinical application, which “depend on a plurality of parameters, such as an imaging modality of the exam under review, existence of historical images and number of historical images, previous reports, and/or list of prescribed medications, etc”, ¶0033.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the sequence pattern mining of Anand to comprise assigning different weights to the user specific workflow trends based at least in part on the clinical application of the ultrasound examination as taught by Joy. The motivation is that doing so yields predictable results, such as improving the user interface by continuously adjusting the user interface setup based on contextual learning of medical data, as suggested by Joy ¶0095.
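As an editorial illustration of the claim 7 limitation, the sketch below applies per-application weights to mined workflow trends, so the same button-push pattern can score differently in, say, a cardiac exam than in an obstetric exam. The weight table, pattern counts, and application names are hypothetical and are not drawn from Anand, Joy, or the application.

```python
# Illustrative sketch only: weight user-specific workflow trends by the
# clinical application of the exam (claim 7). All weights, pattern names,
# and clinical applications below are hypothetical placeholders.

# Frequency of mined button-push patterns for one user (hypothetical counts).
user_trends = {
    ("freeze", "measure", "save"): 42,
    ("gain_up", "depth"): 17,
    ("color_doppler", "freeze"): 9,
}

# Per-application weights for selected patterns (hypothetical).
application_weights = {
    "cardiac": {("color_doppler", "freeze"): 2.0, ("freeze", "measure", "save"): 1.0},
    "obstetric": {("freeze", "measure", "save"): 1.5, ("gain_up", "depth"): 1.2},
}


def weighted_trends(trends, application):
    """Scale each trend's count by the weight assigned for the clinical application."""
    weights = application_weights.get(application, {})
    return {pattern: count * weights.get(pattern, 1.0) for pattern, count in trends.items()}


# The same mined patterns are re-ranked differently per clinical application.
print(weighted_trends(user_trends, "cardiac"))
print(weighted_trends(user_trends, "obstetric"))
```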
Claim 16: Modified Anand discloses all the elements above in claim 13, but Anand fails to disclose: wherein the instructions cause the at least one processor to obtain the sequences of button pushes by: accessing log files of the ultrasound imaging system; and pairing the ultrasound images with the log files based on the exam workflow and transition states of the ultrasound imaging system.

However, Joy, in the context of medical image interfaces based on learning, discloses accessing log files of the ultrasound imaging system; and pairing the ultrasound images with the log files based on the exam workflow and transition states of the ultrasound imaging system. (¶0028, ‘the imaging modality 110 includes an X-ray imager, ultrasound scanner, magnetic resonance imager, or the like. In this example, image data representative of the image(s) is communicated between the imaging modality 110 and the acquisition workstation 120. The image data is communicated electronically over a wired or wireless connection, for example.’; ¶0046, ‘The machine learning engine 330 and/or the machine learning model 336 is used to analyze image similarities between different medical cases so that images and/or their associated data can be correlated to user inputs so that subsequently encountered similar images can be correlated or tied to the user inputs. By correlating images with user input, application settings and/or user interface configuration settings, the machine learning model 336 learns a context that matches and/or correlates the user input/commands with the content of the images, which can include associated data and/or metadata. In other words, the example machine learning model 336 determines patterns (e.g., contextual patterns) between a context/situation and resulting user interface application configurations (e.g., interface configuration changes) that have been applied by a user. As an example, instead of simply correlating a situation to a user interface configuration, the machine learning model 336 can determine similarities in situations (e.g., similarities in context and/or images, etc.) and/or a degree of similarity without necessarily requiring an exact situation, thereby resulting in great adaptability in learning and application of the modified user interface configurations. Additionally or alternatively, in some examples, the application is executed is modified or adjusted (e.g., a sequence of hierarchy of when different modules or executables are run, etc.) by the machine learning model 336.’; see also ¶0027-0028, ¶0038, ¶0052-0053.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the obtaining of the sequences of button pushes of modified Anand to comprise accessing log files of the ultrasound imaging system, and pairing the ultrasound images with the log files based on the exam workflow and transition states of the ultrasound imaging system, as taught by Joy. The motivation is that doing so yields predictable results, such as improving the user interface by continuously adjusting the user interface setup based on contextual learning of medical data, as suggested by Joy ¶0095.
Claim 17: Modified Anand discloses all the elements above in claim 16, but Anand fails to disclose: wherein the transition states of the ultrasound imaging system comprise a live state for providing the ultrasound images in real-time and a frozen state for freezing an ultrasound image of the ultrasound images, the frozen state enabling the user to review and/or measure a portion of the ultrasound image.

However, Berger, in the context of ultrasound probe integration with a display, discloses wherein the transition states of the ultrasound imaging system comprise a live state for providing the ultrasound images in real-time and a frozen state for freezing an ultrasound image of the ultrasound images, the frozen state enabling the user to review and/or measure a portion of the ultrasound image. (¶0421, ‘The Live/Freeze buttons that are used during a scan to record the examination or save the image to a file are illustrated in FIGS. 41A and 41B in accordance with a preferred embodiment of the present invention. The live button provides a real-time image display, while the freeze button freezes the image during the scan to allow the user to print or save to a file.’; ¶0433, ‘During a scan, live images are recorded by frame. Depending upon the mode the user selects, a certain amount of frames are recorded. For example, the B-mode allows the capture of up to 60 frames in a Cine loop. When the user freezes a real-time image during a scan, all movement is suspended in the image display area. The freezed frame can be saved as a single image file or an entire image loop dependent upon the mode’; ¶0450, ‘The ultrasound images can be magnified and text annotation can be added to the image area. […] The user can create measurements for Distance, Ellipse, or Peak Systole/End Diastole depending upon the mode you are using.’)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the transition states of the ultrasound imaging system of modified Anand to include a live state for providing the ultrasound images in real-time and a frozen state for freezing an ultrasound image of the ultrasound images, the frozen state enabling the user to review and/or measure a portion of the ultrasound image, as taught by Berger. The motivation is that doing so yields predictable results, such as providing continuous views of anatomical structures during procedures and capturing a still frame of interest for documenting findings.
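As an editorial aside, the live and frozen transition states discussed for claims 3 and 17 can be read directly off a button-push log: freeze events toggle the system between the two states, and each logged action can then be paired with the state in effect at its timestamp. The sketch below is a minimal, hypothetical illustration of that segmentation; the event names and log format are assumptions, not taken from the references.

```python
# Illustrative sketch only: segment a button-push log into live/frozen
# transition states by toggling on "freeze"/"unfreeze" events, then pair each
# logged event with the state in effect. Event names and the log format are
# hypothetical.
from dataclasses import dataclass


@dataclass
class LogEvent:
    timestamp: float  # seconds since exam start
    button: str


def label_states(events: list[LogEvent]) -> list[tuple[LogEvent, str]]:
    """Pair every log event with the transition state ('live' or 'frozen') it occurred in."""
    state = "live"  # assume scanning starts in the live state
    labeled = []
    for event in events:
        labeled.append((event, state))
        if event.button == "freeze":
            state = "frozen"
        elif event.button == "unfreeze":
            state = "live"
    return labeled


log = [
    LogEvent(1.2, "gain_up"),
    LogEvent(3.5, "freeze"),
    LogEvent(4.0, "measure"),   # occurs in the frozen state
    LogEvent(6.1, "unfreeze"),
    LogEvent(7.3, "depth"),     # back in the live state
]
for event, state in label_states(log):
    print(f"{event.timestamp:5.1f}s  {event.button:<10s} {state}")
```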
Claim 18: Modified Anand discloses all the elements above in claim 16, but Anand fails to disclose: wherein the instructions cause the at least one processor to differentiate the components of the exam workflow by: converting each of the sequence of button pushes to a corresponding encoded vector; clustering encoded vectors from the converting step; and separating the clustered encoded vectors into the transition states.

However, Joy, as relied upon above, discloses: differentiating the components of the exam workflow by: converting each of the sequence of button pushes to a corresponding encoded vector; clustering encoded vectors from the converting step; and separating the clustered encoded vectors into the transition states.

Joy discloses monitoring and recording user activity, which includes commands, sequences, inputs, and image arrangement, ¶0049-0050. These user actions are analyzed, ¶0043-0044, and stored as “user interactions” and “user input sequences”, ¶0055-0056. Commands and inputs constitute “button pushes”. Joy teaches that features are extracted from user data to define user vectors, ¶0072-0073, and from training vectors, ¶0070-0071. These “feature vectors” or “user vectors” are numerical representations of the user actions and their sequences that constitute “encoded vectors”, ¶0057, ¶0058.

Joy discloses that the machine learning engine uses a similarity algorithm, ¶0052-0053, which analyzes user commands in conjunction with medical content data and its associated metadata. The similarity algorithm utilizes a similarity-based classification model to estimate a class label based on the degree of similarity between incoming data and training data samples, or pairwise similarities between training samples, ¶0052-0053. It calculates similarity based on the aforementioned feature vectors, with a minimal distance value directly translating to higher similarity, ¶0053-0054. The description of performing similarity analysis on vectors to determine “nearest neighbors”, ¶0052-0053, and to identify “similarities in situations”, ¶0046, amounts to a clustering algorithm in which the encoded vectors are grouped based on similarity.

Joy further teaches that the learning network develops the model based on “user actions in relationship to a context of the medical content data”, with the model defining “contextual patterns” of user actions based on this content and medical content data, Abstract, ¶0065, Claim 1, Claim 16. The “context” is defined as the contextual relationship between the medical content data and user inputs or actions, ¶0026, while the “radiology workflow” includes stages such as opening imaging data and configuration settings, ¶0038. The system of Joy continuously adjusts the user interface based on the contextual learning of medical data, ¶0095. This means that the learned contextual patterns, derived from similarities among the user actions and content, differentiate the different phases or “states” within the exam workflow. The system learns which user actions and configurations are appropriate for different stages or contexts of a medical case, effectively segmenting the workflow based on the behavior and data context. These learned patterns adapt based on the medical workflow, which serves to identify and respond to “transition states”.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the differentiation of the components of the exam workflow of modified Anand to comprise the teachings of Joy. The motivation is that doing so yields predictable results, such as improving the user interface by continuously adjusting the user interface setup based on contextual learning of medical data, as suggested by Joy ¶0095.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Anand (US 2017/0347993 A1) in view of Boudier (US 2012/0179035 A1) in view of Holl et al (US 2013/0053697 A1) in view of Joy (US 2020/0005088 A1), as applied to claim 2 above, and in further view of Berger et al (US 2016/0338676 A1).

Claim 3: Modified Anand discloses all the elements above in claim 2, but Anand fails to disclose: wherein the transition states of the ultrasound imaging system comprise a live state for providing the ultrasound images in real-time and a frozen state for freezing an ultrasound image of the ultrasound images, the frozen state enabling the user to review and/or measure a portion of the ultrasound image.

However, Berger, in the context of ultrasound probe integration with a display, discloses wherein the transition states of the ultrasound imaging system comprise a live state for providing the ultrasound images in real-time and a frozen state for freezing an ultrasound image of the ultrasound images, the frozen state enabling the user to review and/or measure a portion of the ultrasound image. (¶0421, ‘The Live/Freeze buttons that are used during a scan to record the examination or save the image to a file are illustrated in FIGS. 41A and 41B in accordance with a preferred embodiment of the present invention. The live button provides a real-time image display, while the freeze button freezes the image during the scan to allow the user to print or save to a file.’; ¶0433, ‘During a scan, live images are recorded by frame. Depending upon the mode the user selects, a certain amount of frames are recorded. For example, the B-mode allows the capture of up to 60 frames in a Cine loop. When the user freezes a real-time image during a scan, all movement is suspended in the image display area. The freezed frame can be saved as a single image file or an entire image loop dependent upon the mode’; ¶0450, ‘The ultrasound images can be magnified and text annotation can be added to the image area. […] The user can create measurements for Distance, Ellipse, or Peak Systole/End Diastole depending upon the mode you are using.’)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the transition states of the ultrasound imaging system of modified Anand to include a live state for providing the ultrasound images in real-time and a frozen state for freezing an ultrasound image of the ultrasound images, the frozen state enabling the user to review and/or measure a portion of the ultrasound image, as taught by Berger. The motivation is that doing so yields predictable results, such as providing continuous views of anatomical structures during procedures and capturing a still frame of interest for documenting findings.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Anand (US 2017/0347993 A1) in view of Boudier (US 2012/0179035 A1) in view of Holl et al (US 2013/0053697 A1), as applied to claim 1, and in further view of Hurtig et al (US 2016/0125705 A1).

Claim 5: Modified Anand discloses all the elements above in claim 1, but Anand fails to disclose: wherein performing the sequence pattern mining comprises: applying a gap constraint to the extracted user specific workflow trends to skip sequences of button pushes that contain contiguously multiple same button pushes.

However, Hurtig, in the context of classifying sensor inputs of push buttons, discloses applying a gap constraint to the extracted user specific workflow trends to skip sequences of button pushes that contain contiguously multiple same button pushes. (¶0135, ‘At step 1904, one or more of the first plurality of candidate inputs can be classified as intentional inputs based on the first timing threshold. For example, a difference in time can be determined between a first time of receiving a first candidate input and a second time of receiving a candidate input prior (e.g., the immediately prior) to the first candidate input. If the difference is less than the first timing threshold, then the first candidate signal can be classified as unintentional. If the difference is greater than the first timing threshold, then the first candidate signal can be classified as intentional.’; ¶0054, ‘rather than responding each time a user touches a switch plate, a signal processing switch can effectively filter out the unintentional touches and only respond to an intentional touch.’)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the sequence pattern mining of modified Anand to comprise applying a gap constraint to the extracted user specific workflow trends to skip sequences of button pushes that contain contiguously multiple same button pushes, as taught by Hurtig. The motivation is that doing so yields predictable results, such as not including unintentional responses in the sequence of intentional responses, ¶0135 of Hurtig.
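For orientation, claim 5's gap constraint can be read as a pre-filter on the mined sequences: any sequence containing the same button pushed multiple times in a row is skipped before pattern mining. The sketch below is a hypothetical illustration of that filter (the timing-threshold approach quoted from Hurtig is a related but distinct mechanism); the helper names and example sequences are assumptions.

```python
# Illustrative sketch only: a gap constraint that skips button-push sequences
# containing contiguous repeats of the same button (claim 5). Sequence data
# and helper names are hypothetical.


def has_contiguous_repeat(sequence: list[str]) -> bool:
    """Return True if any button appears two or more times in a row."""
    return any(a == b for a, b in zip(sequence, sequence[1:]))


def apply_gap_constraint(sequences: list[list[str]]) -> list[list[str]]:
    """Keep only sequences without contiguous duplicate pushes for pattern mining."""
    return [seq for seq in sequences if not has_contiguous_repeat(seq)]


trends = [
    ["freeze", "measure", "save"],             # kept
    ["freeze", "measure", "measure", "save"],  # skipped: "measure" repeated contiguously
    ["gain_up", "gain_up", "depth"],           # skipped: "gain_up" repeated contiguously
]
print(apply_gap_constraint(trends))  # [['freeze', 'measure', 'save']]
```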
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Anand (US 2017/0347993 A1) in view of Boudier (US 2012/0179035 A1) in view of Holl et al (US 2013/0053697 A1), as applied to claim 1, and in further view of Chae (US 2016/0155227 A1).

Claim 8: Modified Anand discloses all the elements above in claim 1, but Anand fails to disclose: wherein detecting the probe motion of the transducer probe comprises monitoring the transducer probe using an external camera, and determining the probe motion from images provided by the external camera.

However, Chae, in the context of probe motion determination, discloses: wherein detecting the probe motion of the transducer probe comprises monitoring the transducer probe using an external camera, and determining the probe motion from images provided by the external camera. (¶0060, ‘an image capturing device, such as a motion camera or a depth camera, is installed at a location inside the CAD apparatus 200 to capture the probe motion, and the probe motion determiner 240 may determine the probe motion by analyzing input images from the image capturing device.’)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the detection of probe motion of modified Anand to include monitoring the transducer probe using an external camera, and determining the probe motion from images provided by the external camera, as taught by Chae. The motivation is that doing so yields predictable results, such as providing a cost-effective means to track probe motion.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Anand (US 2017/0347993 A1) in view of Boudier (US 2012/0179035 A1) in view of Holl et al (US 2013/0053697 A1), as applied to claim 9, and in further view of Mine et al (US 2017/0252002 A1).

Claim 10: Modified Anand discloses all the elements above in claim 9, but Anand fails to disclose: wherein the at least one sensor comprises at least one of an electromagnetic (EM) sensor or an inertial measurement unit (IMU) sensor.

However, Mine, in the context of an ultrasonic apparatus and support, discloses: wherein the at least one sensor comprises at least one of an electromagnetic (EM) sensor or an inertial measurement unit (IMU) sensor. (¶0051, ‘The magnetic sensor 121 installed on the ultrasonic probe 120 provides information on a position and rotation of the ultrasonic probe 120 which is more accurate than positional information of the ultrasonic probe 120 obtained by the camera 130.’)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the at least one sensor of modified Anand to be an electromagnetic (EM) sensor as taught by Mine. The motivation is that doing so yields predictable results, such as providing more accurate positional information than positional information obtained by an external camera, ¶0051 of Mine.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Anand (US 2017/0347993 A1) in view of Boudier (US 2012/0179035 A1) in view of Holl et al (US 2013/0053697 A1), as applied to claim 11, and in further view of Romatier et al (US 2018/0081344 A1).

Claim 12: Modified Anand discloses all the elements above in claim 11, but Anand fails to disclose: wherein the at least one macro button is displayed on the touch screen in a color of a plurality of different colors corresponding to a plurality of confidence levels associated with the at least one macro button.

However, Romatier, in the context of diagnostic systems and interactions between operators and touchscreens, discloses: wherein the at least one macro button is displayed on the touch screen in a color of a plurality of different colors corresponding to a plurality of confidence levels associated with the at least one macro button.
(Claim 8, ‘change a color of the interactive warning button based on a severity level of an associated warning.’; ¶0052, ‘The diagnostic system 10 manages interactions between the operators and the present system by way of the HMI, such as a keyboard, a touch sensitive pad, a touchscreen, a mouse, a trackball, a voice recognition system, and/or the like.’; ¶0089, ‘a color of the warning button 36 may indicate or be associated with a severity level of the associated warning’; ¶0090, ‘a first color (e.g., RED) may be used for critical warnings associated with the parameter values of sensors operating out of a predetermined range (e.g., minimum or maximum values). A second color (e.g., YELLOW or AMBER) may be used for cautionary warnings associated with the parameter values of sensors operating within the predetermined range but out of a normal operative range. A third color (e.g., GREEN) may be used to indicate that the parameter values are operating within a normal operative range. A fourth color may be used to indicate another status (e.g., the parameter values operating within another range)’)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the at least one macro button of modified Anand to be displayed on the touch screen in a color of a plurality of different colors corresponding to a plurality of confidence levels associated with the at least one macro button, as taught by Romatier. The motivation is that doing so yields predictable results, such as improving management of the analysis of the diagnostic system, ¶0007 of Romatier.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nicholas Robinson, whose telephone number is (571) 272-9019. The examiner can normally be reached M-F 9:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pascal Bui-Pho, can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.A.R./ Examiner, Art Unit 3798

/PASCAL M BUI PHO/ Supervisory Patent Examiner, Art Unit 3798

Prosecution Timeline

Sep 24, 2024
Application Filed
Aug 19, 2025
Non-Final Rejection — §103, §112
Jan 23, 2026
Response Filed
Feb 10, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594024
METHOD FOR PREDICTING SURVIVAL OF NON SMALL CELL LUNG CANCER PATIENTS WITH BRAIN METASTASIS
2y 5m to grant · Granted Apr 07, 2026
Patent 12569219
METHODS AND SYSTEMS FOR VALVE REGURGITATION ASSESSMENT
2y 5m to grant · Granted Mar 10, 2026
Patent 12569142
Method And System For Context-Aware Photoacoustic Imaging
2y 5m to grant · Granted Mar 10, 2026
Patent 12569154
PATHLENGTH RESOLVED CW-LIGHT SOURCE BASED DIFFUSE CORRELATION SPECTROSCOPY
2y 5m to grant · Granted Mar 10, 2026
Patent 12564381
SYSTEMS AND METHODS FOR CONTRAST ENHANCED IMAGING
2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
49%
Grant Probability
99%
With Interview (+54.9%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 131 resolved cases by this examiner. Grant probability derived from career allow rate.
