DETAILED ACTION
This Non-Final Office action is in response to Application No. 18/648,741, filed 4/29/2024, which claims priority to Provisional Application No. 63/463,709, filed 5/3/2023. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 have been examined.
Priority
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed application, Provisional Application No. 63/463,709 (“Provisional”), fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph for one or more claims of this application.
Specifically, at least the following limitation of independent claim 1 (and similarly in independent claims 14 and 18) is not fully supported by the Provisional:
“a prompt string generator (116) configured (i) to receive the user input from the GUI, (ii) to extract at least a type of examination and the data set from the at least one processor, (iii) to translate the user input, the type of examination, and the data set into a natural language prompt string in response to selection of the report selector, and (iv) to output the prompt string to an application programming interface (API) (117), wherein the prompt string is suitable for the type of examination and readable by the AI driven large language model…”
The Provisional describes translating patient reporting data to a prompt string and sending it to an AI model for processing via an API (see Provisional, para [0007]). However, the Provisional does not describe either (1) the extracting of a type of examination, or (2) that the patient reporting data, or the prompt string more generally, necessarily includes all of “the user input, the type of examination, and the data set” as in claims 1, 14, and 18. There is no description of any user input, or type of examination, being incorporated into the prompt string. Additionally, the example prompt string is not described as requiring, or even defining, a type of examination (see Provisional, para [0009]). Further, claim 1 of the Provisional recites:
“a prompt generator configured to identify from the image and data set a type of examination and to automatically translate the data set into a prompt string suitable for the type of examination and readable by a remote AI chatbot program…”
Here, again, “user input” and “type of examination” are not recited as incorporated within the prompt string.
Therefore, the effective filing date for the claims is the filing date of this application: 4/29/2024.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 and 10-18 are rejected under 35 U.S.C. 103 as being unpatentable over Solomon et al. (US 2024/0298989 A1; citations are to Provisional Application No. 63/489,220, filed 3/9/2023; hereinafter “Solomon”) in view of Paulett et al. (US 2024/0347156 A1, filed 4/17/2024, hereinafter “Paulett”).
Regarding claim 1, Solomon teaches a system for generating a report from ultrasound imaging, the system comprising:
an ultrasound imaging system (100) configured to provide ultrasound image data from an examination of a subject. More specifically, Solomon discloses a medical diagnostic system that includes a diagnostic testing instrument 110, which can be an ultrasound device producing medical images (Solomon, abstract, [0020], [0022], [0040]-[0042], [0060], [0061]).
a server (101) configured to execute one or more trained, artificial intelligence (AI) driven … models configured to generate … responses... More specifically, the system includes a server 102 with machine learning logic/models 144/146 that generate responses to input (Solomon, [0019], [0045], [0048]).
wherein the ultrasound imaging system comprises:
at least one processor (115, 510) configured to receive the ultrasound image data acquired by an ultrasound imaging device during the examination of the subject, including corresponding metadata, to generate an ultrasound image based on the ultrasound image data, and to generate a data set indicative of a plurality of measurements in the ultrasound image. More specifically, an ultrasound medical image is received and a first ML logic identifies features/labels, construed as metadata, and determines measurement/dimension data from the image (Solomon, [0052]-[0053]).
a display (113, 530) configured to display the ultrasound image and the data set; More specifically, the process of Figure 8 includes the previously mentioned steps of receiving a medical image (step 202) and determining measurement data (step 204). The measurements, with the images, are displayed in a graphical user interface at step 206 (Solomon, [0072]-[0075]).
a graphical user interface (GUI) (114) provided on the display, the GUI including a report selector selectable by a user to request a report regarding the examination of the subject and one or more configurable elements for receiving user input from the user. More specifically, a graphical user interface provides a way to receive input data associated with the subject for sending to the server (Solomon, [0027]). The next step is to receive additional information from a user on a graphical user interface in step 208, construable as requesting a report, which results in the performing of step 210, where a second ML logic takes as input the medical information and the measurement/dimension data and outputs a report 170 (Solomon, [0075]-[0076]).
wherein the AI driven … model … automatically generates the report suitable for the type of examination and the user input, wherein the report includes a summary of the data set, and wherein the display is further configured to receive and display the report from the AI driven large language model. More specifically, step 212 is the display of the resultant report, which includes a summary of measurement/dimension data (Solomon, [0056], [0077], Figures 3 and 8).
However, Solomon may not explicitly teach every aspect of
[the one or more trained AI driven models are] large language models;
[the one or more trained AI driven models respond] to natural language input;
a prompt string generator (116) configured (i) to receive the user input from the GUI, (ii) to extract at least a type of examination and the data set from the at least one processor, (iii) to translate the user input, the type of examination, and the data set into a natural language prompt string in response to selection of the report selector, and (iv) to output the prompt string to an application programming interface (API) (117), wherein the prompt string is suitable for the type of examination and readable by the AI driven large language model,
wherein the AI driven large language model receives the prompt string via the API [for generating the report].
Paulett discloses a method for radiology reporting that includes any or all of: determining a set of inputs, determining a template, and generating a radiology report (Paulett, abstract). A user interface can be used to set report generation parameters, and generating the report can be initiated by at least a button click (Paulett, [0025]). Report generating models can include large language models (Paulett, [0035]-[0036]). The set of inputs to the model can include ultrasound information, study (examination) type, radiologist/physician information/inputs, and findings/measurements (Paulett, [0046]-[0048]). Generating the report includes constructing a prompt string with the set of inputs to be sent to an LLM (Paulett, [0078], [0096]-[0099], Figure 4A). Several features are construable as the claimed API, such as the “reporting platform” 140 (Paulett, Figure 14, [0038]-[0039]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention given the teachings of Solomon and Paulett that a system for generating an ultrasound report using artificial intelligence would include the artificial intelligence consisting of a large language model that receives a generated natural language prompt string including user input, type of examination, and a measurement data set, via an API, and generates a report including a summary of the data set for display. With Solomon and Paulett disclosing generating reports based on ultrasound images by sending ultrasound information, including user entered information and measurements, to an artificial intelligence that generates a report in response, and with Paulett additionally disclosing the artificial intelligence consisting of a large language model that receives a generated prompt string including user input, type of examination, and a measurement data set, via an API, and generates a report including a summary of the data set for display, one of ordinary skill in the art of implementing a system for generating an ultrasound report using artificial intelligence would include the artificial intelligence consisting of a large language model that receives a generated natural language prompt string including user input, type of examination, and a measurement data set, via an API, and generates a report including a summary of the data set for display in order to utilize AI that has the best ability to output responses closest to the natural language writing style of intended recipients as needed in medical field reporting (Paulett, [0095]). One would therefore be motivated to combine these teachings, as doing so would create this system for generating an ultrasound report using artificial intelligence.
Regarding claim 2, Solomon and Paulett teach the system of claim 1, wherein the one or more configurable elements of the GUI include at least a first element for receiving indication of a role of the AI driven large language model, and a second element for receiving indication of a profession of the user, wherein the AI driven large language model determines a type of report based at least in part on the role of the AI driven large language model and the profession of the user, and automatically generates the report according to the determined type of report. More specifically, inputs to the large language model, and thus part of the prompt, can include “clinical indications/reasons for the study”, “study”, “study information”, or “study type”, any of which is construable as the claimed role of the AI driven large language model (Paulett, [0052], [0076]). Inputs to the large language model, and thus part of the prompt, can also include (1) writing style catered to a particular audience for the report designated via the user interface, including patients, other medical professionals, or insurance groups, (2) style preferences of the radiologist or of another medical professional, or (3) language input customized for a specific radiologist, the radiology group, or the healthcare facility, any of which is construable as the claimed indication of a profession of the user. These inputs go into the generation of the report (Paulett, [0095], [0101]-[0104], [0113]).
Regarding claim 3, Solomon and Paulett teach the system of claim 2, wherein the one or more configurable elements of the GUI further include a third element for receiving indication of a target recipient of the report, including a language of the target recipient, wherein the AI driven large language model determines the type of report further based on the target recipient. More specifically, inputs to the large language model, and thus part of the prompt, can also include a language designation, or language style, to customize for a specific recipient of the report. These inputs go into the generation of the report (Paulett, [0095], [0101]-[0104], [0113]).
Regarding claim 4, Solomon and Paulett teach the system of claim 1, wherein the one or more configurable elements of the GUI include a fourth element for receiving indication of a type of report, wherein the AI driven large language model automatically generates the report according to the received type of report. More specifically, inputs to the large language model, and thus part of the prompt, can include “study type”, “a field of the report”, “specific report type”, “imaging modality (e.g., CT)”, “procedural type (e.g., x-ray)”, “body part (e.g., chest)”, “clinical indication”, or the selection of a report template, any of which is construable as the claimed receiving indication of a type of report (Paulett, [0027]-[0028], [0049], [0052], [0076], [0103]).
Regarding claim 5, Solomon and Paulett teach the system of claim 1, wherein the report is customized in a predetermined user format provided by the user input. More specifically, users can customize the format of the generated report (Paulett, [0095], [0106]).
Regarding claim 10, Solomon and Paulett teach the system of claim 1, wherein the data set is formatted as a draft report, and the report suitable for the type of examination generated by the AI driven large language model is formatted as a final report. More specifically, the set of findings is construable as a draft report, which is included in the input prompt with a selected report template and sent to the report generation LLM model for generating a final report (Paulett, [0093]-[0094]). Alternatively, the post-processing of the report implies that the first report is a draft (that undergoes proof-reading, improving report language, etc.) and the post-processing models output the final report (Paulett, [0111]-[0114]).
Regarding claim 11, Solomon and Paulett teach the system of claim 1, wherein the server is integrated with the ultrasound imaging system. More specifically, in an embodiment, the functions of the server, including the AI, can be performed by computing device 106 which is integrated with the diagnostic equipment (ultrasound) 110 at clinic 108 (Solomon, Figure 1, [0026], [0042]).
Regarding claim 12, Solomon and Paulett teach the system of claim 1, wherein the server comprises a remote computer accessible by the ultrasound imaging system via a communications network. More specifically, in the main embodiment, the server, including the AI, is remotely accessed by computing device which is part of the ultrasound system (Solomon, Figure 1, [0026], [0041]).
Regarding claim 13, Solomon and Paulett teach the system of claim 1, wherein each of the configurable elements comprises a text field, a button, or a drop-down list. More specifically, inputs to the prompt for generating the report can use text boxes, pick lists, and drop-downs (Paulett, [0050], [0060], [0061]).
Regarding claims 14-17, these claims recite a non-transitory computer readable medium storing instructions that substantially perform the steps performed by the system of claims 1-4, respectively; therefore, the same rationale of rejection is applicable.
Regarding claim 18, this claim recites a method that substantially performs the steps performed by the system of claim 1; therefore, the same rationale of rejection is applicable.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Solomon and Paulett, and further in view of Bricker et al. (US 2022/0133279 A1, hereinafter “Bricker”).
Regarding claim 6, Solomon and Paulett teach the system of claim 1, including a button in a user interface for generating the report (Paulett, [0025], [0056], [0106], [0107]); however, Solomon and Paulett may not explicitly teach every aspect of wherein the report button is a touchscreen button on the display.
Bricker discloses an ultrasound system including a touchscreen with a touch button selectable to simultaneously finish a scan and send data to a cloud infrastructure for generating the ultrasound report (Bricker, [0009], [0066], [0100], [0109], [0134]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention given the teachings of Solomon and Paulett with Bricker that a system for generating an ultrasound report using artificial intelligence would include the system including a touchscreen having the touch button for generating the report. With Solomon, Paulett, and Bricker disclosing generating reports based on ultrasound images by sending ultrasound information to a server that generates a report, with Paulett and Bricker disclosing the generation of the reports can be done with user interface buttons, and with Bricker additionally suggesting the user interface can be a touch screen with a report generating touch button, one of ordinary skill in the art of implementing a system for generating an ultrasound report using artificial intelligence would include the system including a touchscreen having the touch button for generating the report in order to utilize typical user interfaces for inputting commands. One would therefore be motivated to combine these teachings, as doing so would create this system for generating an ultrasound report using artificial intelligence.
Claims 7, 8, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Solomon and Paulett, and further in view of Hahn et al. (US 2012/0185292 A1, hereinafter “Hahn”).
Regarding claim 7, Solomon and Paulett teach the system of claim 1, wherein the one or more configurable elements of the GUI include a data set selection element for receiving a selection of the data set. More specifically, measurements or findings can be selected as part of the set of inputs (Paulett, [0046]-[0050]).
However, Solomon and Paulett may not explicitly teach every aspect of
wherein the report button is enabled after the selection of the data set.
Hahn discloses medical imaging devices, including ultrasound, that can generate imaging reports (Hahn, [0004], [0065], [0123]). In response to selection of some criteria, sections of the interface are displayed that include a Run Report button. After selecting the criteria, the report can be generated by pressing the Run Report button (Hahn, [0281], [0282]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention given the teachings of Solomon and Paulett with Hahn that a system for generating an ultrasound report using artificial intelligence based on a selected data set would include that the report button is enabled after the selection of the data set. With Solomon, Paulett, and Hahn disclosing generating reports based on ultrasound images by sending ultrasound information to a server that generates a report, with Paulett and Hahn disclosing the generation of the reports can be done with user interface buttons, and with Hahn additionally suggesting that the report button is enabled after the selection of the data set, one of ordinary skill in the art of implementing a system for generating an ultrasound report using artificial intelligence based on a selected data set would include that the report button is enabled after the selection of the data set in order to ensure the proper criteria are met for generating a complete report. One would therefore be motivated to combine these teachings, as doing so would create this system for generating an ultrasound report using artificial intelligence.
Regarding claim 8, Solomon and Paulett with Hahn teach the system of claim 7, wherein the GUI is configured to display the selected data set on the display, and to enable the user to edit the selected data set prior to the selected data set being provided to the prompt string generator. More specifically, users can manually edit/tune the prompt data prior to submitting it for report generation (Paulett, [0098], [0100], [0105]).
Regarding claim 20, this claim recites a method that substantially performs the steps performed by the system of claim 7; therefore, the same rationale of rejection is applicable.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Solomon and Paulett, and further in view of Iyer et al. (US 2025/0315428 A1; citations below are to Provisional Application No. 63/575,457, filed 4/5/2024; hereinafter “Iyer”).
Regarding claim 9, Solomon and Paulett teach the system of claim 1, including being able to enter unstructured text via text boxes to be input into a template and sent to a report generating model, and even a prompt editor model (Paulett, [0034], [0050], [0059], Figure 11); however, Paulett does not necessarily describe a user interface for editing an LLM prompt string directly and, therefore, may not explicitly teach every aspect of wherein the GUI is configured to display the prompt string on the display, and to enable the user to edit the prompt string prior to the prompt string generator outputting the prompt string to the API.
Iyer discloses computer-implemented systems and methods for machine-learned collaboration for prompt editing (Iyer, abstract). The model interface system may utilize one or more application programming interfaces (APIs) to pass prompts to and receive responses from the generative model system (Iyer, [0048]). Machine-learned sequence processing models such as large language models, for instance, can be configured to receive prompts including instructions, tasks, examples, and/or other data indicative of desired actions or outputs from the model (Iyer, [0052]). Figures 4A-4D depict a user interface for prompt string editing, or fine tuning the prompt, to be sent to the LLM for generating a response (Iyer, Figures 4A-4D, [0059]-[0063]). The prompt editing interface can be utilized to generate prompts and responses for medical data (Iyer, [0081]-[0082]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention given the teachings of Solomon and Paulett with Iyer that a system for generating an ultrasound report by sending a prompt string to an LLM would include a prompt string editing user interface for editing the prompt prior to sending it to the LLM. With Solomon, Paulett, and Iyer disclosing AI generating responses based on medical information, with Paulett and Iyer disclosing the submission of prompt strings for medical information can be done with user interface buttons, and with Iyer additionally suggesting the user interface for editing prompts before sending them through an API to an LLM, one of ordinary skill in the art of implementing a system for generating an ultrasound report by sending a prompt string to an LLM would include a prompt string editing user interface for editing the prompt prior to sending it to the LLM in order to make adjustments to the prompt string that further customize and fine tune the output from the LLM. One would therefore be motivated to combine these teachings, as doing so would create this system for generating an ultrasound report using artificial intelligence.
Regarding claim 19, this claim recites a method that substantially performs the steps performed by the system of claim 9; therefore, the same rationale of rejection is applicable.
Pertinent Prior Art
The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action.
Errico (US 2023/0346339 A1) – an ultrasound system with AI report generation.
Sorenson (US 2018/0137244 A1) – AI medical report generation, including from ultrasound images.
Bernard (US 10,140,421 B1) – AI medical report generation, including from ultrasound images.
Hare (US 2021/0264238 A1) – an ultrasound system with AI report generation.
Lyman (US 2022/0051771 A1) – an ultrasound system with AI report generation.
Paik (US 2024/0177836 A1) – AI medical report generation, including from ultrasound images.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK F RIEGLER whose telephone number is (571)270-3625. The examiner can normally be reached M-F 9:30am-6:00pm, ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PATRICK F RIEGLER/ Primary Examiner, Art Unit 2171