DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/13/2026 has been entered.
Response to Amendment
This action is in response to the remarks filed on 2/13/2026. The amendments filed on 2/13/2026 are entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas of “mental processes” or “concepts relating to data comparisons that can be performed mentally or are analogous to human mental work” without significantly more.
Analyses of the subject matter eligibility tests are performed for each of the independent claims and associated dependent claims below.
Regarding independent claim 1, the claim recites:
The limitation of “generate, before an ultrasound imaging session and for one or more ultrasound imaging session contexts of the ultrasound imaging session, context signals based on measurements of light captured by one or more light sensors, measurements of sound captured by one or more microphones, measurements of time captured by one or more clocks, or measurements of visual signals captured by one or more cameras” is considered to be an abstract idea of a mental process and concept relating to data comparisons that can be performed mentally or are analogous to human mental work as a user may merely think and receive within the mind about context signals based upon a visually observed ultrasound imaging session or mental measurements of time before an ultrasound imaging session occurs. The limitation of “execute the CIE to determine one or more context parameters of the ultrasound imaging session based on the context signals from the CDC, wherein the one or more context parameters include one or more information on ambient light at a display location for the ultrasound imaging session (imaging session display location), sound information at a location for the ultrasound imaging session (imaging session location), or visual information at the ultrasound imaging session location;” is considered to be an abstract idea of a mental process and concept relating to data comparisons that can be performed mentally or are analogous to human mental work as a user may merely think and receive within the mind about context signals and make determinations of parameters based upon the received information. These may include ambient light, sound, or visual information determinations, all of which are capable of being determined within the human mind based upon intended ultrasound system outputs.
The limitation of “access, before the ultrasound imaging session, the one or more context parameters” is considered to be an abstract idea of a mental process and concept relating to data comparisons that can be performed mentally or are analogous to human mental work as a user may merely think and receive within the mind about accessing context parameters before ultrasound imaging. The limitation of “determine, before the ultrasound imaging session, one or more ultrasound imaging settings of the ultrasound imaging session based on the one or more context parameters, the one or more ultrasound imaging settings corresponding to an imaging session preset stored in the memory” is considered to be an abstract idea of a mental process and concept relating to data comparisons that can be performed mentally or are analogous to human mental work as a user may merely think and receive within the mind about settings of an ultrasound imaging session based upon mentally observed context parameters and occurring before an ultrasound imaging session. The limitation of “select, from the memory and before the ultrasound imaging session, the imaging session preset, and” is considered to be an abstract idea of a mental process and concept relating to data comparisons that can be performed mentally or are analogous to human mental work as a user may merely think and receive within the mind about an imaging session preset to select from a memorized set of presets and before an ultrasound imaging session. The limitation of “cause the imaging session preset to be implemented as a preset for the ultrasound imaging session” is considered to be an abstract idea of a mental process and concept relating to data comparisons that can be performed mentally or are analogous to human mental work as a user may merely think about user input of a preset on the ultrasound imaging session. Therefore, the claim is directed to an abstract idea and a judicial exception.
Step 2A Prong 2 Analysis (Claim 1): This judicial exception is not integrated into a practical application because it does not recite any elements that integrate the abstract idea into a practical application such as improving the operation of the diagnostic device, or effecting a particular treatment or prophylaxis for a disease or medical condition. The claims do not recite any features or components that integrate the judicial exception into a practical application because the additional recited elements of “context determination circuitry”, “a memory storing a context information engine”, “circuitry”, and “processing circuitry” form generic computing elements in which the abstract idea is merely implemented or performed on a generic computer device. While a computer is used to implement the abstract idea, the recited judicial exception does not improve how the functioning of a computer occurs. The obtaining of context signals including from “signals corresponding to at least one of measurements of light captured by one or more light sensors, measurements of sound captured by one or more microphones, measurements of time captured by one or more clocks, or measurements of visual signals captured by one or more cameras” and output of settings on an ultrasound imaging session form an extra-solution activity of mere data gathering and data outputting steps. Therefore, all of these claimed elements are not sufficient to improve the functioning of a diagnostic device or form of technology. Furthermore, while directed to activity for medical diagnostics, the claimed steps do not effect a particular treatment or prophylaxis for a disease or medical condition.
Step 2B Analysis (Claim 1): The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional recited elements of “context determination circuitry”, “a memory storing a context information engine”, “circuitry”, and “processing circuitry” form generic computing elements in which the abstract idea is merely implemented or performed on a generic computer device. While a computer is used to implement the abstract idea, the recited judicial exception does not improve how the functioning of a computer occurs. The obtaining of context signals including from “signals corresponding to at least one of measurements of light captured by one or more light sensors, measurements of sound captured by one or more microphones, measurements of time captured by one or more clocks, or measurements of visual signals captured by one or more cameras” and output of settings on an ultrasound imaging session form an extra-solution activity of mere data gathering and data outputting steps. The limitations do not include improvements to the functioning of a computer or to any other technology or technical field, and the elements of the claim further do not effect a particular treatment or prophylaxis for a disease or medical condition. Furthermore, there are no claimed features that provide elements to identify improvements to these general computing technologies based on the claimed features. As discussed above, any limitations form insignificant extra-solution activity, and link the judicial exception to generic computing elements within the art of medical diagnostics.
Independent claim 15 includes similar subject matter to independent claim 1 and is similarly rejected under 35 U.S.C. 101. Each element of claim 15 that is recited as a judicial exception abstract idea can be directly mapped to the analogous elements of claim 1 above in the rejection. Step 2A Prong 2 and Step 2B analyses for different elements within claim 15 are detailed below.
Step 2A Prong 2 Analysis (Claim 15): This judicial exception is not integrated into a practical application because it does not recite any elements that integrate the abstract idea into a practical application such as improving the operation of the diagnostic device, or effecting a particular treatment or prophylaxis for a disease or medical condition. The claims do not recite any features or components that integrate the judicial exception into a practical application because the additional recited elements of “context determination circuitry”, “a context information engine”, “computer-readable non-transitory storage media”, and “one or more processors” form generic computing elements in which the abstract idea is merely implemented or performed on a generic computer device. While a computer is used to implement the abstract idea, the recited judicial exception does not improve how the functioning of a computer occurs. The obtaining of context signals including from “signals corresponding to at least one of measurements of light captured by one or more light sensors, measurements of sound captured by one or more microphones, measurements of time captured by one or more clocks, or measurements of visual signals captured by one or more cameras” and output of settings on an ultrasound imaging session form an extra-solution activity of mere data gathering and data outputting steps. The limitation of “wherein the one or more settings include one or more of gain, depth, frequency, time gain compensation, dynamic range, focus, harmonics, mode, focal zone, persistence, automatic gain control, spatial compounding, frequency compounding, sine functions or line density” merely limits the type of settings that can be output after the mental processing abstract idea steps, which further falls into the category of extra-solution activity of mere data outputting.
Therefore, all of these claimed elements are not sufficient to improve the functioning of a diagnostic device or form of technology. Furthermore, while directed to activity for medical diagnostics, the claimed steps do not effect a particular treatment or prophylaxis for a disease or medical condition.
Step 2B Analysis (Claim 15): The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional recited elements of “context determination circuitry”, “a context information engine”, “computer-readable non-transitory storage media”, and “one or more processors” form generic computing elements in which the abstract idea is merely implemented or performed on a generic computer device. While a computer is used to implement the abstract idea, the recited judicial exception does not improve how the functioning of a computer occurs. The obtaining of context signals including from “signals corresponding to at least one of measurements of light captured by one or more light sensors, measurements of sound captured by one or more microphones, measurements of time captured by one or more clocks, or measurements of visual signals captured by one or more cameras” and output of settings on an ultrasound imaging session form an extra-solution activity of mere data gathering and data outputting steps. The limitation of “wherein the one or more settings include one or more of gain, depth, frequency, time gain compensation, dynamic range, focus, harmonics, mode, focal zone, persistence, automatic gain control, spatial compounding, frequency compounding, sine functions or line density” merely limits the type of settings that can be output after the mental processing abstract idea steps, which further falls into the category of extra-solution activity of mere data outputting. The limitations do not include improvements to the functioning of a computer or to any other technology or technical field, and the elements of the claim further do not effect a particular treatment or prophylaxis for a disease or medical condition. Furthermore, there are no claimed features that provide elements to identify improvements to these general computing technologies based on the claimed features. 
As discussed above, any limitations form insignificant extra-solution activity, and link the judicial exception to generic computing elements within the art of medical diagnostics.
Independent claim 21 includes similar subject matter to independent claim 1 and is similarly rejected under 35 U.S.C. 101. Each element of claim 21 can be directly mapped to the analogous elements of claim 1 above in the rejection.
Dependent claims 2, 16, and 22 include limitations that are directed to narrowing the types of context signals of independent claims which narrows how the mental processing abstract idea of independent claims is performed as a user can mentally analyze the particular claimed signals. Furthermore, it narrows the extra-solution activity of mere data gathering and does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claims 3, 17, and 23 include limitations that are directed to narrowing the types of context parameters of independent claims which narrows how the mental processing abstract idea of independent claims is performed as a user can mentally analyze the particular claimed parameters. Therefore, it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claims 4, 18, and 24 include limitations that are directed to narrowing the types of context parameters of independent claims which narrows how the mental processing abstract idea of independent claims is performed as a user can mentally analyze the particular claimed parameters. Therefore, it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claims 5, 19, and 25 include limitations that are directed to narrowing the types of context parameters and context signals of independent claims which narrows how the mental processing abstract idea of independent claims is performed as a user can mentally analyze the particular claimed parameters. Therefore, it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claim 6 includes limitations that are directed to narrowing the types of settings of independent claim 1 which narrows how the mental processing abstract idea of independent claim 1 is performed as a user can mentally analyze the particular claimed settings to be a real-time second context parameter setting change. It also forms extra-solution activity of mere data outputting. Therefore, it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claims 7, 20, and 26 include limitations that are directed to narrowing the types of settings of independent claims which narrows how the mental processing abstract idea of independent claims is performed as a user can mentally analyze the particular claimed settings. It also forms extra-solution activity of mere data outputting. Therefore, it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claim 8 includes limitations that are directed to narrowing the types of preset settings of independent claim 1 which narrows how the mental processing abstract idea of independent claim 1 is performed as a user can mentally analyze the particular claimed settings. It also forms extra-solution activity of mere data outputting. Therefore, it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claim 9 includes limitations that are directed to narrowing the types of context parameters and context signals of independent claims which narrows how the mental processing abstract idea of independent claims is performed as a user can mentally analyze the particular claimed parameters. Therefore, it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claim 10 includes limitations that are directed to narrowing the types of context parameters and context signals to include a mapping which narrows how the mental processing abstract idea of independent claims is performed as a user can mentally analyze the particular claimed parameters. Therefore, it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claim 11 includes limitations that are directed to narrowing the generic computer elements of independent claim 1 and therefore it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claim 12 includes limitations that are directed to narrowing the types of context parameters to include access of historical information which narrows how the mental processing abstract idea of independent claims is performed as a user can mentally analyze the particular claimed parameters. Therefore, it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claim 13 includes limitations that are directed to narrowing the generic computer elements of independent claim 1 and therefore it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Dependent claim 14 includes limitations that are directed to narrowing the generic computer elements of independent claim 1 and therefore it does not integrate the judicial exception of the independent claim into a practical application or amount to significantly more.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-7 and 9-26 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hoppmann et al. (U.S. Pub. No. 20230111601) hereinafter Hoppmann.
Regarding claim 1, Hoppmann teaches:
An ultrasound imaging system (abstract) including:
context determination circuitry (CDC) to generate, before an ultrasound imaging session and for one or more ultrasound imaging session contexts of the ultrasound imaging session, context signals based on measurements of light captured by one or more light sensors, measurements of sound captured by one or more microphones, measurements of time captured by one or more clocks, or measurements of visual signals captured by one or more cameras ([0008]-[0009], patient scanning difficulty level determination is made based upon context signals based upon the patient and other contexts of a particular ultrasound imaging session; [0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context generating; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context outputs used before or during an ultrasound imaging of a patient target region of interest; [0051], “change the settings on the ultrasound device to “override” the poor user input and/or improve same.”, changing of settings would occur for a context signal change prior to an ultrasound imaging session, as the change in imaging settings would not be relevant to an ultrasound imaging session that has already occurred. Whether this new session with adjusted settings occurs immediately following the change in settings, or the next time that the ultrasound device is used, would both be sufficient to teach the current broadest reasonable interpretation of the claim; [0053]-[0056]; [0057], inputs to the input unit 320 include a microphone for an audio signal or a camera for a video signal; [0058]-[0063], describe the use of various sensors which include both a microphone and/or camera sensors (RGB or IR) as inputs to the AI apparatus for providing an output that includes the changes or corrections to ultrasound device settings. With a change in imaging settings, this change would occur to the next subsequent ultrasound imaging session, which would teach to generating the context signals “before” the ultrasound imaging session; [0064]-[0068]; [0070]-[0073]; [0074], “new ultrasound examinations can be “personalized” with specific machine settings” forms a teaching to adjustment and access of context signals before an imaging session; [0075]-[0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context signal. Because the settings are configured to change properties of the ultrasound imaging system for image acquisition, this necessarily must occur before an imaging session for the changes to impact imaging. These changes cannot occur retroactively and therefore, after changes are made, would be applied to the subsequent imaging session in which the settings are changed; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent);
a memory storing a context information engine (CIE) ([0034]; [0053], memory 370 storing the AI neural network machine learning system; see also [0054]-[0067]);
circuitry coupled to the memory and adapted to execute the CIE to determine one or more context parameters of the ultrasound imaging session based on the context signals from the CDC, wherein the one or more context parameters include one or more information on ambient light at a display location for the ultrasound imaging session (imaging session display location), sound information at a location for the ultrasound imaging session (imaging session location), or visual information at the ultrasound imaging session location; ([0008]-[0009]; [0011]-[0012], SDL scanning difficulty level and output from the AI neural network machine learning algorithm forms the context parameters of the ultrasound imaging session based upon the measured characteristics such as patient characteristics of an ultrasound imaging session; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context parameter outputs; [0053]-[0067]; [0070]-[0077], patient’s SDL for ultrasound imaging forms the context parameter. Patient’s SDL includes contexts based upon visually observable information (visual information at the ultrasound imaging session location) of a patient at the ultrasound imaging location, which teaches to the broadest reasonable interpretation of the “visual information” as claimed. As provided in [0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation and therefore include visual information at the ultrasound imaging session location about the patient and how the procedure may require adjustment based upon this visual information.); and
processing circuitry to:
access, before the ultrasound imaging session, the one or more context parameters ([0008]-[0009]; [0011]-[0012], SDL scanning difficulty level and output from the AI neural network machine learning algorithm forms the context parameters of the ultrasound imaging session based upon the measured characteristics such as patient characteristics of an ultrasound imaging session; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context parameter outputs used before or during an ultrasound imaging of a patient target region of interest; [0051], “change the settings on the ultrasound device to “override” the poor user input and/or improve same.”, changing of settings would occur for a context parameter change prior to an ultrasound imaging session, as the change in imaging settings would not be relevant to an ultrasound imaging session that has already occurred. Whether this new session with adjusted settings occurs immediately following the change in settings, or the next time that the ultrasound device is used, would both be sufficient to teach the current broadest reasonable interpretation of the claim; [0053]-[0056]; [0057], inputs to the input unit 320 include a microphone for an audio signal or a camera for a video signal; [0058]-[0063], describe the use of various sensors which include both a microphone and/or camera sensors (RGB or IR) as inputs to the AI apparatus for providing an output that includes the changes or corrections to ultrasound device settings. With a change in imaging settings, this change would occur to the next subsequent ultrasound imaging session, which would teach to accessing the context parameters “before” the ultrasound imaging session; [0064]-[0068]; [0070]-[0073]; [0074], “new ultrasound examinations can be “personalized” with specific machine settings” forms a teaching to adjustment and access of parameters before an imaging session; [0075]-[0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter. Because the settings are configured to change properties of the ultrasound imaging system for image acquisition, this necessarily must occur before an imaging session for the changes to impact imaging. These changes cannot occur retroactively and therefore, after changes are made, would be applied to the subsequent imaging session in which the settings are changed; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent); and
determine, before the ultrasound imaging session, one or more ultrasound imaging settings of the ultrasound imaging session based on the one or more context parameters, the one or more ultrasound imaging settings corresponding to an imaging session preset stored in the memory ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent as a “preset” as claimed);
select, from the memory and before the ultrasound imaging session, the imaging session preset ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output and include at least one preset function setting for subsequent ultrasound examinations; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent as a “preset” as claimed), and
cause the imaging session preset to be implemented as a preset for the ultrasound imaging session ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output and include at least one preset function setting for subsequent ultrasound examinations; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent as a “preset” as claimed).
Regarding claim 2, Hoppmann teaches all of the limitations of claim 1. Hoppmann further teaches:
wherein the context signals include at least one of signals corresponding to a measurement of radio waves or signals transmitted to the ultrasound imaging system by another system ([0054]-[0055], teaches to radio communication systems for acquiring data for the input unit 320 and provides context signals based upon measurements of radio waves or another system transmitting data to the ultrasound imaging system; [0061]-[0062], sensors transmitting to the ultrasound device forms another sensing system and includes radar which is a form of radio wave; [0063], changes to ultrasound settings based upon the acquired input data; see also [0054]-[0067]).
Regarding claim 3, Hoppmann teaches all of the limitations of claim 1. Hoppmann further teaches:
wherein the one or more context parameters include one or more of: patient information, clinician information, imaging session location, imaging session time information, information on ambient light at a display location for the ultrasound imaging session (imaging session display location), display device type information, sound information at a location for the ultrasound imaging session (imaging session location), or visual information at the ultrasound imaging session location ([0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context parameters and therefore form patient information, imaging session location information, or visual information at the session location; [0042], further elaborates upon the factors that are used in the SDL-AI measurement for determining an SDL output (context parameter); [0043]-[0050]; [0053]-[0067]; [0070]-[0077]).
Regarding claim 4, Hoppmann teaches all of the limitations of claim 3. Hoppmann further teaches:
wherein: patient information includes at least one of patient identity, history of prior patient imaging sessions, history of prior patient diagnoses, patient’s bodily characteristics, patient age or patient gender ([0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context parameters and therefore form patient information, imaging session location information, or visual information at the session location; [0042], further elaborates upon the factors that are used in the SDL-AI measurement for determining an SDL output (context parameter); [0043]-[0050]; [0053]-[0067]; [0070]-[0077]);
clinician information includes at least one of clinician identity, clinician specialty, or history of clinician settings; imaging session location information includes at least one of an identification of geographic location or an identification of indoor location; information on ambient light at a location where images of the ultrasound imaging session are to be or are being displayed includes at least one of an identification of sunlight or an identification of artificial light; sound information includes at least one of information relating to communication between a clinician and patient, or information spoken by a clinician regarding an imaging session to be performed; and visual information includes an image or a video of at least one of a patient, a clinician, or an imaging session location (as each limitation is only claimed in the alternative in claim 3, narrowing each of these types of information merely further limits the alternative and is not required within the claim. In the current rejections, Hoppmann teaches to the patient information in claims 3 and 4, which is sufficient to teach to the broadest reasonable interpretation of the claims).
Regarding claim 5, Hoppmann teaches all of the limitations of claim 1. Hoppmann further teaches:
wherein the circuitry is to execute the CIE to determine at least one of the one or more context parameters based on context signals corresponding to respective contexts of the at least one of the one or more context parameters ([0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context parameters and therefore form patient information, imaging session location information, or visual information at the session location; [0042], further elaborates upon the factors that are used in the SDL-AI measurement for determining an SDL output (context parameter); [0043]-[0050]; [0053]-[0067]; [0070]-[0077]).
Regarding claim 6, Hoppmann teaches all of the limitations of claim 1. Hoppmann further teaches:
wherein the one or more settings include one or more of gain, depth, frequency, time gain compensation, dynamic range, focus, harmonics, mode, focal zone, persistence, automatic gain control, spatial compounding, frequency compounding, sine functions or line density ([0009], gain or depth; [0011]-[0012]; [0077], gain, time gain compensation, depth, frequency focusing, etc).
Regarding claim 7, Hoppmann teaches all of the limitations of claim 1. Hoppmann further teaches:
wherein the one or more settings correspond to one or more first settings and the one or more context parameters correspond to one or more first context parameters, the processing circuitry further configured to:
access one or more second context parameters during the ultrasound imaging session ([0008]-[0009]; [0011]-[0012], SDL scanning difficulty level and output from the AI neural network machine learning algorithm forms the context parameters of the ultrasound imaging session based upon the measured characteristics such as patient characteristics of an ultrasound imaging session; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context parameter outputs used before or during an ultrasound imaging of a patient target region of interest; [0053]-[0067]; [0070]-[0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; The cited portions include a number of context parameters (input data), wherein the use of an additional category of input data forms one or more second context parameters which can be utilized in real-time to adjust the imaging settings of the ultrasound imaging system);
determine one or more second ultrasound imaging settings of the ultrasound imaging session based on the one or more second context parameters ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; The cited portions include a number of context parameters (input data), wherein the use of an additional category of input data forms one or more second context parameters which can be utilized in real-time to adjust the imaging settings of the ultrasound imaging system); and
cause the one or more second settings to be dynamically implemented for the ultrasound imaging session ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; The cited portions include a number of context parameters (input data), wherein the use of an additional category of input data forms one or more second context parameters which can be utilized in real-time to adjust the imaging settings of the ultrasound imaging system).
Regarding claim 9, Hoppmann teaches all of the limitations of claim 1. Hoppmann further teaches:
wherein the context signals are first context signals, the one or more ultrasound imaging session contexts are one or more first ultrasound imaging session contexts, the one or more context parameters are one or more first context parameters; and the one or more settings are one or more first settings, wherein: the CDC is to generate, during the ultrasound imaging session and after an implementation of the first settings, second context signals based on one or more second ultrasound imaging session contexts of the ultrasound imaging session; the circuitry coupled to the memory is adapted to execute the CIE to determine one or more second context parameters of the ultrasound imaging session based on the second context signals from the CDC; and the processing circuitry to: access the one or more second context parameters during the ultrasound imaging session; determine one or more second ultrasound imaging settings of the ultrasound imaging session based on the one or more second context parameters; and cause the one or more second settings to be implemented for the ultrasound imaging session (In the following cited portions, multiple patient related characteristics are incorporated into the neural network machine learning model to determine SDL outputs that automatically change multiple ultrasound imaging settings. In light of the multiple inputs, multiple configurable settings, and real-time automatic adjustment of settings, the totality of the cited portions are considered to teach to second context signals, second context parameters, and second ultrasound imaging settings when the device is used across an imaging session; [0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter).
Regarding claim 10, Hoppmann teaches all of the limitations of claim 1. Hoppmann further teaches:
wherein the memory is a first memory, and the processing circuitry is to cause to store, in a at least one of the first memory or a second memory, a mapping of the one or more settings to the one or more context parameters ([0053], memory 370 is part of the AI apparatus 300 and provides a mapping of the AI neural network machine learning algorithm for controlling of the settings as in [0077]; see also [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; An AI neural network machine learning algorithm maps inputs to outputs based upon the training process. Certain input parameters are processed in a way that maps to a particular output parameter that is contextual to the input parameter. This teaches to the broadest reasonable interpretation of “mapping”).
Regarding claim 11, Hoppmann teaches all of the limitations of claim 10. Hoppmann further teaches:
wherein the second memory is in a device different from the ultrasound imaging system, the processing circuitry to send the mapping for transmission to the second memory ([0053], memory 370 is part of the AI apparatus 300 and provides a mapping of the AI neural network machine learning algorithm for controlling of the settings as in [0077]; see also [0042]-[0050]; [0053]-[0067]; [0070]-[0076]).
Regarding claim 12, Hoppmann teaches all of the limitations of claim 11. Hoppmann further teaches:
wherein the processing circuitry is to access historical information regarding the one or more context parameters to determine the one or more settings ([0077], Integration of the AI neural network machine learning model to automatically determine the one or more settings forms an accessing of the historical information related to the context parameters of the AI for determining the settings; see also [0042]-[0050]; [0053]-[0067]; [0070]-[0076], SDL-AI stores the historical information of the model for processing settings and imaging context parameters).
Regarding claim 13, Hoppmann teaches all of the limitations of claim 12. Hoppmann further teaches:
wherein the historical information is in a memory of a computing node of a cloud computing system ([0068]-[0069], AI server 400 forms a cloud computing system in which the historical information of the neural network machine learning algorithm is stored within the memory of the cloud computing system; [0075]-[0076]).
Regarding claim 14, Hoppmann teaches all of the limitations of claim 13. Hoppmann further teaches:
including one of an ultrasound imaging probe or a computing system to be paired to an ultrasound imaging probe ([0039], probe 100; [0040], probe).
Regarding claim 15, Hoppmann teaches:
A product comprising one or more tangible computer-readable non-transitory storage media comprising computer-executable instructions operable to, when executed by one or more processors of an ultrasound imaging system, cause the one or more processors to implement operations at the ultrasound imaging system (abstract), the operations comprising:
generating, using a context determination circuitry (CDC), before an ultrasound imaging session and for one or more ultrasound imaging session contexts of the ultrasound imaging session, context signals based on measurements of light captured by one or more light sensors, measurements of sound captured by one or more microphones, measurements of time captured by one or more clocks, or measurements of visual signals captured by one or more cameras ([0008]-[0009], patient scanning difficulty level determination is made based upon context signals based upon the patient and other contexts of a particular ultrasound imaging session; [0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context generating; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context outputs used before or during an ultrasound imaging of a patient target region of interest; [0051], “change the settings on the ultrasound device to “override” the poor user input and/or improve same.”, changing of settings would occur for a context signal change prior to an ultrasound imaging session, as the change in imaging settings would not be relevant to an ultrasound imaging session that has already occurred. Whether this new session with adjusted settings occurs immediately following the change in settings or the next time the ultrasound device is used, either would be sufficient to teach the current broadest reasonable interpretation of the claim; [0053]-[0056]; [0057], inputs to the input unit 320 include a microphone for an audio signal or a camera for a video signal; [0058]-[0063], describe the use of various sensors which include both a microphone and/or camera sensors (RGB or IR) as inputs to the AI apparatus for providing an output that includes the changes or corrections to ultrasound device settings. With a change in imaging settings, this change would apply to the next subsequent ultrasound imaging session, which would teach to generating the context signals “before” the ultrasound imaging session; [0064]-[0068]; [0070]-[0073]; [0074], “new ultrasound examinations can be “personalized” with specific machine settings” forms a teaching to adjustment and access of context signals before an imaging session; [0075]-[0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context signal. Because the settings are configured to change properties of the ultrasound imaging system for image acquisition, this necessarily must occur before an imaging session for the changes to impact imaging. These changes cannot occur retroactively and therefore, after changes are made, would be applied to the subsequent imaging session in which the settings are changed; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent);
executing a context information engine (CIE) to determine one or more context parameters of the ultrasound imaging session based on the context signals from the CDC, wherein the one or more context parameters include one or more information on ambient light at a display location for the ultrasound imaging session (imaging session display location), sound information at a location for the ultrasound imaging session (imaging session location), or visual information at the ultrasound imaging session location; ([0008]-[0009]; [0011]-[0012], SDL scanning difficulty level and output from the AI neural network machine learning algorithm forms the context parameters of the ultrasound imaging session based upon the measured characteristics such as patient characteristics of an ultrasound imaging session; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context parameter outputs; [0053]-[0067]; [0070]-[0077], patient’s SDL for ultrasound imaging forms the context parameter. Patient’s SDL includes contexts based upon visually observable information (visual information at the ultrasound imaging session location) of a patient at the ultrasound imaging location, which teaches to the broadest reasonable interpretation of the “visual information” as claimed. As provided in [0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation and therefore include visual information at the ultrasound imaging session location about the patient and how the procedure may require adjustment based upon this visual information.); and
determining, before the ultrasound imaging session, one or more ultrasound imaging settings of the ultrasound imaging session based on the one or more context parameters ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent as a “preset” as claimed);
wherein the one or more settings include one or more of gain, depth, frequency, time gain compensation, dynamic range, focus, harmonics, mode, focal zone, persistence, automatic gain control, spatial compounding, frequency compounding, sine functions and line density ([0009], gain or depth; [0011]-[0012]; [0077], gain, time gain compensation, depth, frequency focusing, etc.); and
causing the one or more settings to be implemented for the ultrasound imaging session, by selecting, from a memory and before the ultrasound imaging session, the imaging session preset and causing the imaging session preset to be implemented as a preset for the ultrasound imaging session ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output and include at least one preset function setting for subsequent ultrasound examinations; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent as a “preset” as claimed).
Regarding claim 16, Hoppmann teaches all of the limitations of claim 15. Hoppmann further teaches:
wherein the context signals further include at least one of signals corresponding to a measurement of radio waves or signals transmitted to the ultrasound imaging system by another system ([0054]-[0055], teaches to radio communication systems for acquiring data for the input unit 320 and provides context signals based upon measurements of radio waves or another system transmitting data to the ultrasound imaging system; [0061]-[0062], sensors transmitting to the ultrasound device forms another sensing system and includes radar which is a form of radio wave; [0063], changes to ultrasound settings based upon the acquired input data; see also [0054]-[0067]).
Regarding claim 17, Hoppmann teaches all of the limitations of claim 15. Hoppmann further teaches:
wherein the one or more context parameters include one or more of: patient information, clinician information, imaging session location, imaging session time information, information on ambient light at a display location for the ultrasound imaging session, display device type information, sound information at a location for the ultrasound imaging session (imaging session location), or visual information at the ultrasound imaging session location ([0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context parameters and therefore form patient information, imaging session location information, or visual information at the session location; [0042], further elaborates upon the factors that are used in the SDL-AI measurement for determining an SDL output (context parameter); [0043]-[0050]; [0053]-[0067]; [0070]-[0077]).
Regarding claim 18, Hoppmann teaches all of the limitations of claim 17. Hoppmann further teaches:
wherein: patient information includes at least one of patient identity, history of prior patient imaging sessions, history of prior patient diagnoses, patient’s bodily characteristics, patient age or patient gender ([0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context parameters and therefore form patient information, imaging session location information, or visual information at the session location; [0042], further elaborates upon the factors that are used in the SDL-AI measurement for determining an SDL output (context parameter); [0043]-[0050]; [0053]-[0067]; [0070]-[0077]); clinician information includes at least one of clinician identity, clinician specialty, or history of clinician settings; imaging session location information includes at least one of an identification of geographic location or an identification of indoor location; information on ambient light at a location where images of the ultrasound imaging session are to be or are being displayed includes at least one of an identification of sunlight or an identification of artificial light; sound information includes at least one of information relating to communication between a clinician and patient, or information spoken by a clinician regarding an imaging session to be performed; and visual information includes an image or a video of at least one of a patient, a clinician, or an imaging session location (as each limitation is only claimed in the alternative in claim 17, narrowing each of these types of information merely further limits the alternative and is not required within the claim. In the current rejections, Hoppmann teaches to the patient information in claims 17 and 18, which is sufficient to teach to the broadest reasonable interpretation of the claims).
Regarding claim 19, Hoppmann teaches all of the limitations of claim 18. Hoppmann further teaches:
wherein executing the CIE to determine at least one of the one or more context parameters is based on context signals corresponding to respective contexts of the at least one of the one or more context parameters ([0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context parameters and therefore form patient information, imaging session location information, or visual information at the session location; [0042], further elaborates upon the factors that are used in the SDL-AI measurement for determining an SDL output (context parameter); [0043]-[0050]; [0053]-[0067]; [0070]-[0077]).
Regarding claim 20, Hoppmann teaches all of the limitations of claim 15. Hoppmann further teaches:
wherein the one or more settings correspond to one or more first settings and the one or more context parameters correspond to one or more first context parameters, the operations further comprising:
access one or more second context parameters during the ultrasound imaging session ([0008]-[0009]; [0011]-[0012], SDL scanning difficulty level and output from the AI neural network machine learning algorithm forms the context parameters of the ultrasound imaging session based upon the measured characteristics such as patient characteristics of an ultrasound imaging session; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context parameter outputs used before or during an ultrasound imaging of a patient target region of interest; [0053]-[0067]; [0070]-[0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; The cited portions include a number of context parameters (input data), wherein the use of an additional category of input data forms one or more second context parameters which can be utilized in real-time to adjust the imaging settings of the ultrasound imaging system);
determine one or more second ultrasound imaging settings of the ultrasound imaging session based on the one or more second context parameters ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; The cited portions include a number of context parameters (input data), wherein the use of an additional category of input data forms one or more second context parameters which can be utilized in real-time to adjust the imaging settings of the ultrasound imaging system); and
cause the one or more second settings to be dynamically implemented for the ultrasound imaging session ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; The cited portions include a number of context parameters (input data), wherein the use of an additional category of input data forms one or more second context parameters which can be utilized in real-time to adjust the imaging settings of the ultrasound imaging system).
Regarding claim 21, Hoppmann teaches:
A method to be performed at an ultrasound imaging system including:
generating, using a context determination circuitry (CDC), before an ultrasound imaging session and for one or more ultrasound imaging session contexts of the ultrasound imaging session, context signals based on measurements of light captured by one or more light sensors, measurements of sound captured by one or more microphones, measurements of time captured by one or more clocks, or measurements of visual signals captured by one or more cameras ([0008]-[0009], patient scanning difficulty level determination is made based upon context signals based upon the patient and other contexts of a particular ultrasound imaging session; [0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context generating; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context outputs used before or during an ultrasound imaging of a patient target region of interest; [0051], “change the settings on the ultrasound device to “override” the poor user input and/or improve same.”, changing of settings would occur for a context signal change prior to an ultrasound imaging session, as the change in imaging settings would not be relevant to an ultrasound imaging session that has already occurred. Whether this new session with adjusted settings occurs immediately following the change in settings or the next time the ultrasound device is used, either would be sufficient to teach the current broadest reasonable interpretation of the claim; [0053]-[0056]; [0057], inputs to the input unit 320 include a microphone for an audio signal or a camera for a video signal; [0058]-[0063], describe the use of various sensors which include both a microphone and/or camera sensors (RGB or IR) as inputs to the AI apparatus for providing an output that includes the changes or corrections to ultrasound device settings. With a change in imaging settings, this change would apply to the next subsequent ultrasound imaging session, which would teach to generating the context signals “before” the ultrasound imaging session; [0064]-[0068]; [0070]-[0073]; [0074], “new ultrasound examinations can be “personalized” with specific machine settings” forms a teaching to adjustment and access of context signals before an imaging session; [0075]-[0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context signal. Because the settings are configured to change properties of the ultrasound imaging system for image acquisition, this necessarily must occur before an imaging session for the changes to impact imaging. These changes cannot occur retroactively and therefore, after changes are made, would be applied to the subsequent imaging session in which the settings are changed; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent);
executing a context information engine (CIE) to determine one or more context parameters of the ultrasound imaging session based on the context signals from the CDC, wherein the one or more context parameters include one or more information on ambient light at a display location for the ultrasound imaging session (imaging session display location), sound information at a location for the ultrasound imaging session (imaging session location), or visual information at the ultrasound imaging session location; ([0008]-[0009]; [0011]-[0012], SDL scanning difficulty level and output from the AI neural network machine learning algorithm forms the context parameters of the ultrasound imaging session based upon the measured characteristics such as patient characteristics of an ultrasound imaging session; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context parameter outputs; [0053]-[0067]; [0070]-[0077], patient’s SDL for ultrasound imaging forms the context parameter. Patient’s SDL includes contexts based upon visually observable information (visual information at the ultrasound imaging session location) of a patient at the ultrasound imaging location, which teaches to the broadest reasonable interpretation of the “visual information” as claimed. As provided in [0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation and therefore include visual information at the ultrasound imaging session location about the patient and how the procedure may require adjustment based upon this visual information.);
determining, before the ultrasound imaging session, one or more ultrasound imaging settings of the ultrasound imaging session based on the one or more context parameters, the one or more ultrasound imaging settings corresponding to an imaging session preset stored in the memory ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], the patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; claim 10, changes in settings for a subsequent examination form an accessing of the context parameters before the ultrasound imaging session that is subsequent as a “preset” as claimed); and
causing the one or more settings to be implemented as a preset for the ultrasound imaging session by selecting, from the memory and before the ultrasound imaging session, the imaging session preset and causing the imaging session preset to be implemented as a preset for the ultrasound imaging session ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output and include at least one preset function setting for subsequent ultrasound examinations; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; claim 10, changes in settings for a subsequent examination forms an accessing of the context parameters before the ultrasound imaging session that is subsequent as a “preset” as claimed).
Regarding claim 22, Hoppmann teaches all of the limitations of claim 21. Hoppmann further teaches:
wherein the context signals further include at least one of signals corresponding to a measurement of radio waves or signals transmitted to the ultrasound imaging system by another system ([0054]-[0055], teaches to radio communication systems for acquiring data for the input unit 320 and provides context signals based upon measurements of radio waves or another system transmitting data to the ultrasound imaging system; [0061]-[0062], sensors transmitting to the ultrasound device forms another sensing system and includes radar which is a form of radio wave; [0063], changes to ultrasound settings based upon the acquired input data; see also [0054]-[0067]).
Regarding claim 23, Hoppmann teaches all of the limitations of claim 21. Hoppmann further teaches:
wherein the one or more context parameters include one or more of: patient information, clinician information, imaging session location, imaging session time information, information on ambient light at a display location for the ultrasound imaging session, display device type information, sound information at a location for the ultrasound imaging session (imaging session location), or visual information at the ultrasound imaging session location ([0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context parameters and therefore form patient information, imaging session location information, or visual information at the session location; [0042], further elaborates upon the factors that are used in the SDL-AI measurement for determining an SDL output (context parameter); [0043]-[0050]; [0053]-[0067]; [0070]-[0077]).
Regarding claim 24, Hoppmann teaches all of the limitations of claim 23. Hoppmann further teaches:
wherein: patient information includes at least one of patient identity, history of prior patient imaging sessions, history of prior patient diagnoses, patient’s bodily characteristics, patient age or patient gender ([0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context parameters and therefore form patient information, imaging session location information, or visual information at the session location; [0042], further elaborates upon the factors that are used in the SDL-AI measurement for determining an SDL output (context parameter); [0043]-[0050]; [0053]-[0067]; [0070]-[0077]); clinician information includes at least one of clinician identity, clinician specialty, or history of clinician settings; imaging session location information includes at least one of an identification of geographic location or an identification of indoor location; information on ambient light at a location where images of the ultrasound imaging session are to be or are being displayed includes at least one of an identification of sunlight or an identification of artificial light; sound information includes at least one of information relating to communication between a clinician and patient, or information spoken by a clinician regarding an imaging session to be performed; and visual information includes an image or a video of at least one of a patient, a clinician, or an imaging session location (as each limitation is only claimed in the alternative in claim 23, narrowing each of these types of patient information merely further limits the alternative and is not required within the claim. In the current rejections, Hoppmann teaches to the patient information in claims 23 and 24 and is sufficient to teach to the broadest reasonable interpretation of the claims).
Regarding claim 25, Hoppmann teaches all of the limitations of claim 24. Hoppmann further teaches:
executing the CIE to determine at least one of the one or more context parameters based on context signals corresponding to respective contexts of the at least one of the one or more context parameters ([0011]-[0012], patient physical characteristics such as “patient size, degree of body fat on the patient, tissue interfaces within the patient, disease processes, tissue calcification within the patient, quantity of gas within the patient, quantity of urine within the patient, presence of ultrasound artifacts obtained from examining the patient, target organ size, and/or abnormality of a target organ” are incorporated into the SDL calculation as context signals as claimed for further context parameters and therefore form patient information, imaging session location information, or visual information at the session location; [0042], further elaborates upon the factors that are used in the SDL-AI measurement for determining an SDL output (context parameter); [0043]-[0050]; [0053]-[0067]; [0070]-[0077]).
Regarding claim 26, Hoppmann teaches all of the limitations of claim 21. Hoppmann further teaches:
wherein the one or more settings correspond to one or more first settings and the one or more context parameters correspond to one or more first context parameters, the method further including:
accessing one or more second context parameters during the ultrasound imaging session ([0008]-[0009]; [0011]-[0012], SDL scanning difficulty level and output from the AI neural network machine learning algorithm forms the context parameters of the ultrasound imaging session based upon the measured characteristics such as patient characteristics of an ultrasound imaging session; [0042]-[0050], SDL neural network machine learning algorithm forms the CIE for context parameter outputs used before or during an ultrasound imaging of a patient target region of interest; [0053]-[0067]; [0070]-[0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; The cited portions include a number of context parameters (input data), wherein the use of an additional category of input data forms one or more second context parameters which can be utilized in real-time to adjust the imaging settings of the ultrasound imaging system);
determining one or more second ultrasound imaging settings of the ultrasound imaging session based on the one or more second context parameters ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; The cited portions include a number of context parameters (input data), wherein the use of an additional category of input data forms one or more second context parameters which can be utilized in real-time to adjust the imaging settings of the ultrasound imaging system); and
causing the one or more second settings to be dynamically implemented for the ultrasound imaging session ([0008]-[0009]; [0011]-[0012], settings of the ultrasound imaging are determined from the SDL context parameter output; [0042]-[0050]; [0053]-[0067]; [0070]-[0076]; [0077], patient’s SDL for ultrasound imaging forms the context parameter, which updates settings automatically before imaging after accessing the context parameter; The cited portions include a number of context parameters (input data), wherein the use of an additional category of input data forms one or more second context parameters which can be utilized in real-time to adjust the imaging settings of the ultrasound imaging system).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Hoppmann as applied to claim 7 above, and further in view of Srinivasa Naidu et al. (U.S. Pub. No. 20210353260) hereinafter Srinivasa Naidu.
Regarding claim 8, primary reference Hoppmann teaches all of the limitations of claim 7. However, primary reference Hoppmann fails to teach:
wherein the preset includes one of: an abdominal preset, a renal preset, a cardiac preset, an obstetrics preset, a musculoskeletal preset, a breast preset, a neonatal preset, an interventional preset, a pelvis preset, or a thyroid preset.
However, the analogous art of Srinivasa Naidu, directed to an ultrasound imaging settings and parameter determination system (abstract), teaches:
wherein the preset includes one of: an abdominal preset, a renal preset, a cardiac preset, an obstetrics preset, a musculoskeletal preset, a breast preset, a neonatal preset, an interventional preset, a pelvis preset, or a thyroid preset ([0025], abdominal renal preset or cardiac preset settings for imaging organ to organ).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the ultrasound context parameter and settings determination system of Hoppmann to include the cardiac and abdominal presets as taught by Srinivasa Naidu, because selecting organ-specific preset settings improves image quality and leads to more accurate diagnostics of target regions of interest (Srinivasa Naidu, [0025]).
Response to Arguments
Applicant's arguments filed 2/13/2026 have been fully considered but they are not persuasive. Responses to each of the applicant’s arguments are detailed below.
Regarding the applicant’s arguments on pages 13-19 of the remarks, the applicant argues that the present independent claims should not be rejected under 35 U.S.C. 101. The applicant argues that the ultrasound system with particular hardware is not reasonably capable of being performed within the mind. Examiner notes that in the current rejections above, each of the interpreted processing steps is capable of being performed in the human mind, because each limitation is directed to a scope that does not differentiate from the processing capabilities of the human mind. Given the same received extra solution data gathering information, the claimed processing features performed by circuitry or by an analogous human mind would be capable of performing the same steps. It is the interpretation of the current rejections that choosing a setting based upon mental observations is readily performed within a human mind, and the additional hardware is either directed to generic and routine computer elements or tangentially refers to elements related to extra solution activity of mere data gathering. The applicant argues that context signals generated from an imaging environment include electrical and digital signals internal to the system and that the system must process machine-formatted outputs of the sensors. This interpretation mirrors the interpretation that the generic and routine computer elements are only included as a means to implement or perform the abstract idea on a generic computer device. Whether the device processes electrical signals, or a human analyzes the same claimed elements mentally, does not change the 101 analysis, which interprets the recited limitations as mental processing abstract ideas.
The applicant further argues on page 15 of the remarks that accessing context parameters is not the same as mentally recalling a fact. As stated above, memory and processing components form generic and routine computer elements that are utilized to merely perform or implement the abstract idea. While mental processing of a context parameter in a prior art reference would not anticipate the claims in the context of a prior art rejection, in the current 101 analysis it is sufficient that the claimed elements are directed to an abstract idea of mental processing. There are no narrowly recited limitations within the claim that actually improve how the computer system processes data and that would preclude the broad scope of the recited limitations from being interpreted as a mental processing abstract idea. The applicant further argues that settings must be in a machine-readable form and therefore cannot be configured by a user merely determining settings mentally. As discussed above, these limitations are similarly interpreted as capable of being performed mentally, and the programming of the decisions on a generic computer device is not sufficient to integrate the abstract idea into a practical application or amount to significantly more. The applicant argues that the claims cause an imaging setting preset to be implemented on an ultrasound device and that a human mind therefore cannot control such an electronic output. In the current rejections, the decision made by a user to control the ultrasound system is a mental processing abstract idea that, when interfacing with an ultrasound system, amounts to extra solution activity of mere data outputting. The setting is not further modified once output to the generic computer elements of a standard ultrasound device, and therefore it is a mere data output of a selection to the device and does not incorporate the abstract idea into a practical application or amount to significantly more.
While the applicant argues that a human cannot perform the machine-level processing, the current interpretation of extra solution activity combined with processing decisions readily capable of being performed mentally renders the applicant’s argument not persuasive.
Regarding the applicant’s arguments on pages 16-17 of the remarks, the applicant argues that the extra solution activity in the eligibility guidance refers to generic data gathering before analysis, or to displaying or storing the result afterwards, where the underlying idea would be the same even if those operations were omitted. The applicant argues that sensors are incorporated into the imaging system, but in the current claims these sensors are only tangentially referred to in relation to data gathering, and only the circuitry is positively recited into the ultrasound system of independent claim 1. The applicant argues that there would be no operative control scheme left if the use of sensors were removed, but every element of the claim is operative without sensors, as only a data input is required within the claim. A human processing the same data would come to the same conclusions as the recited circuitry, and therefore the underlying idea of generic circuitry configured to perform the abstract idea would be the same if the extra solution activity were omitted. The applicant argues that the environmental context loop is a claimed practical application, but the broadly claimed processing of ultrasound setting decisions based upon context is no different from a human operator walking into an ultrasound room, noticing environmental attributes, and deciding to select a particular setting based upon those mental decisions. At no point in the claimed processing features is there any machine-specific or data-driven analysis that would render a human mind incapable of performing the mentally equivalent processing step. This means that the generic computer elements are not used to improve the technology of an ultrasound system by performing tasks beyond those capable of being performed within the mind, but rather merely to implement mental processing steps on the generic computer hardware located on all ultrasound device systems.
On page 17 of the applicant’s remarks, the applicant argues that technicians could not acquire the same measurements from a light sensor, camera, or microphone and make context-influenced decisions on the data. The current interpretation in the rejections above is that the extra solution data gathering by such devices does not narrow the data beyond anything that a human user would be capable of analyzing within the mind. Ambient light, acoustic noise, or camera images are all readily interpreted by a human mind, and similar improvements to a subsequent ultrasound scan could be made from any human mental processing decision based upon such data. For example, noticing a bright room and increasing the screen brightness could be readily performed by a human mind. It is the examiner’s interpretation in the current rejections above that the claims perform the same decisions that humans make and that the claims only execute those decisions on a generic computer device. The sensors do not provide any data (such as analyzing specific frequency ranges of an audio signal) that would necessarily require a machine to process it in a way that humans could not.
On page 18 of the applicant’s remarks, the applicant argues that CardioNet supports eligibility for the claims because it recites a particular way of processing cardiac signals and provides improved processing over a mental process. The current claims recite different limitations than the claims associated with the cardiac signals, and it is the current interpretation that the broadly recited settings parameter processing of the current claimed invention is capable of being performed mentally. While device-level analysis may occur within the claimed ultrasound system, it is the interpretation in the present 101 rejections that this is merely the abstract idea of mental processing being performed on a generic computer device. Faster implementation of settings, as argued by the applicant, does not change the interpretation that these are mental processing abstract ideas implemented on a generic computer device. There must be additional recited processing steps (such as a particularly recited computer algorithm) that change how the actual processing is achieved in order to amount to more than the mental processing abstract idea. In the current form, every recited processing step is either extra solution activity or readily capable of being performed mentally.
The applicant argues that dependent claims include additional classes of parameters and mapping of context parameters and amount to more than just narrowing the abstract ideas of the independent claims. In the current rejections, all of these recited associations can also be performed mentally by a human mind and therefore just form a more narrowly claimed version of the independent claim abstract ideas. Therefore, each of the analyzed dependent claims also do not incorporate the recited abstract ideas into a practical application or amount to significantly more.
Regarding the applicant’s arguments on pages 22-24 of the remarks, the applicant argues that the prior art reference of Hoppmann, with its teachings to changing settings for subsequent examinations, is not anticipatory of the present invention. The applicant argues that Hoppmann’s SDL engine stores presets for future exams but does not disclose, prior to any imaging, using sensors to derive context parameters for conditioning settings prior to imaging. It is not the current interpretation of the applicant’s claimed invention that “prior to any imaging” requires the device to have never performed any imaging on the patient prior to that imaging session. Therefore, when Hoppmann teaches incorporating analogous context signals and parameters into changing the ultrasound imaging settings for a subsequent examination, the period of time between acquiring the context signals, changing the settings, and performing the subsequent ultrasound exam forms a complete teaching to all of the applicant’s claimed processing steps. Whether ultrasound has been performed prior to those steps does not preclude the Hoppmann system from also anticipating the applicant’s claimed invention.
The applicant further argues that the Hoppmann reference’s teaching of acquiring data from a microphone or camera to feed as inputs to the AI engine fails to teach sound information at an imaging session or ambient light information. In the current rejections these limitations are claimed as alternatives, and therefore the “visual information,” which is broadly interpreted to correspond to any visual information and could also encompass a patient, is currently taught by the Hoppmann reference in the above rejections. Patient-centric factors are also at the imaging session location, as the patient is present during imaging. Visual information is not given a special definition in the applicant’s specification and therefore is not interpreted in any manner narrower than the broadest reasonable interpretation.
The applicant further argues that the two-stage function of the CDC and CIE cannot be interpreted as being taught by the single AI engine as provided in the Hoppmann reference. It is the current interpretation of these limitations that the circuitry and engine refer to computer-implemented limitations, and therefore any artificial intelligence system or computer code capable of performing the claimed function is sufficient to teach the claims. The applicant has not provided any additional, positively recited claim elements that would narrow the interpretation of the circuitry and engine to a particular form of electronic hardware or computer code that would differentiate from the Hoppmann reference. Therefore, these arguments, which require the Hoppmann reference to disclose named elements analogous to these claim elements rather than only the related function and hardware capable of performing the claimed functions, are not persuasive. The applicant argues that Hoppmann does not disclose these dedicated systems, but the functional teachings of Hoppmann with analogous computer-implemented hardware structures teach all of the claimed processing elements and anticipate the claimed invention.
Regarding the applicant’s arguments to claim 10, the applicant argues that the claim requires storing of a mapping of settings to the one or more context parameters. One of ordinary skill in the art would readily understand that an AI-based analysis tool maps input elements to output elements within the neural network structure. Therefore, the recited portions of the Hoppmann reference form a specific trained neural network mapping between elements and are sufficient to teach the claim. The claim recites this “mapping” in a generically broad manner such that one of ordinary skill in the art would understand that the stored model parameters of Hoppmann form a mapping of settings outputs to context parameter inputs. The applicant argues that “historical values of the claimed context parameters” are provided in claim 10, but this feature is not required to be stored in the memory. Only the claimed “mapping” is required to be stored, and the AI-based analysis tool is stored on the Hoppmann-disclosed systems.
For these reasons, the applicant’s arguments have been considered but are not persuasive.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN A FRITH whose telephone number is (571)272-1292. The examiner can normally be reached M-Th 8:00-5:30, Second Fri 8:00-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond can be reached at 571-270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEAN A FRITH/Primary Examiner, Art Unit 3798