DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 15 is objected to because of the following informalities: the claim recites "A method and system for…". The claim language is incorrectly directed toward both a method and a system. Based on the language of the dependent claims, it appears that the claim should be directed only toward a method, and should be amended to make the correction. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 7-10 and 12-15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Barnard et al. (U.S. 2024/0021280).
With regard to claim 1, Barnard teaches a method for electronic structured medical report generation recorded on non-transitory computer-readable medium and capable of execution by a computer ([abstract] Systems and methods are provided for managing patient data. The system integrates medical data from multiple sources to a unified patient database…The data in the unified patient database is used to display patient data in user-friendly interface views), the method comprising the steps of:
displaying a website or other electronic interface by a computer for electronic structured medical report generation ([0024] the plurality of sources comprise two or more of: an EMR (electronic medical record) system, a PACS (picture archiving and communication system), a Digital Pathology (DP) system, an LIS (laboratory information system), a RIS (radiology information system), patient reported outcomes, a wearable device, or a social media website);
starting a new report by accessing a new/edit report section of the website ([0076] The analysis result can be updated whenever new data (e.g., new diagnosis results, new biopsy results, etc.) is added for the patient; [0093] referring to operation 326 of FIG. 3C, upon detecting that an add icon 325 is activated, data entry interface 300 can display an additional sets of fields for the user to enter information about a new diagnostic procedure);
selecting a category from a categories menu for assigning a category to the new report (Figs. 3A-3C; [0007] displaying, via the GUI, the medical record and a menu configured to receive user input selecting one or more primary cancers; [0093] Moreover, a pull-down menu 332 is provided to select the site of the tumor mass found in the new diagnostic procedure for fields 334. The candidates listed in pull-down menu 332 can be provided as standardized terminologies by enrichment module 234 so that only standardized terminologies are input into fields 334);
selecting one or more pre-structured/planned words which populates a body of the new report fields (Figs. 3A-3C; [0093] a pull-down menu 332 is provided to select the site of the tumor mass found in the new diagnostic procedure for fields 334. The candidates listed in pull-down menu 332 can be provided as standardized terminologies by enrichment module 234 so that only standardized terminologies are input into fields 334);
displaying the new report as selections are made and added to the new report body ([0094] FIG. 3D, FIG. 3E, and FIG. 3F illustrate examples of operations to create a new page for a second primary tumor after page 311 (for the primary tumor at right upper lobe of the lung) is populated with data);
creating a template (Fig. 3C, 325; [0093] referring to operation 326 of FIG. 3C, upon detecting that an add icon 325 is activated, data entry interface 300 can display an additional sets of fields for the user to enter information about a new diagnostic procedure);
offering one or more options for formatting and content for selection from a drop down menu (Fig. 3C-3F; [0093] a pull-down menu 332 is provided to select the site of the tumor mass found in the new diagnostic procedure for fields 334. The candidates listed in pull-down menu 332 can be provided as standardized terminologies by enrichment module 234 so that only standardized terminologies are input into fields 334); and
generating a final report output with pre-determined formatting and any selected options (Fig. 3G; [0097] FIG. 3G illustrates a patient summary view 370 of the portal 220. The patient summary view 370 is a view of a graphical user interface for viewing and modifying data for a patient).
With regard to claim 2, the limitations are addressed above and Barnard teaches further comprising the steps of
displaying a series of drop down and check box menus ([0130] The fields include an interface element for adding information about an anatomic site (e.g., right upper lobe of lung, which is selected from a drop-down menu when the “select existing” radio button is selected). The fields further include a histologic type and histologic grade. A user can fill in the diagnostic information. The interface element 662 further includes a user-selectable check box 663 that can be checked to set the diagnosed primary tumor as patient's condition for discussion); and
selecting a sub-category for the template (Fig. 6D).
With regard to claim 3, the limitations are addressed above and Barnard teaches further comprising the steps of
once the template is created, presenting the user the option to create a shortcut by selecting one or more words or criteria, which are unique to a dropdown selection from the dropdown menu (Fig. 6D; [0130]-[0132] The edit drawer 674 is an interface element such as a modal that opens on detecting user interaction such as a click. The edit drawer 674 includes fields for accepting user input to edit the information previously input (e.g., via interface element 662). The components of the drawer 674 include data entry fields that can be used to edit fields such as date, diagnosis, pending diagnosis 676, anatomic site 680, and histologic type 682. The edit drawer 674 further includes radio buttons 678 to select existing or create new anatomic sites. These interface elements can be used to retrieve data to update the data stored to unified patient database 204).
With regard to claim 4, the limitations are addressed above and Barnard teaches further comprising the steps of
during the process of creating the template, the user has used an auto-impression feature defined as the pre-selection of language for particular options/fields ([0061] automatically and/or via user input, fields are filled or updated; [0066] the extracted medical data can then be used to automatically populate various fields of the patient summary; [0087] Data entry interface 300 can guide a user to enter data manually and/or approve or edit automatically extracted data; [0107] FIG. 5A, FIG. 5B, FIG. 5C, and FIG. 5D illustrate example operations of document abstraction interface 400 on a pathology report. Document abstraction interface 400 can be used to guide a user to confirm data types for data to be integrated into the unified patient database, such as in fields automatically populated using machine learning); and
one button selection populates the auto-impression feature in a findings section with the pre-selection of language for particular options/fields associated with the auto-impression feature, which generates pre-structured language in the report body ([0121] In the example interface view 600 depicted in FIG. 6A, the fields are configured to accept user input via interface elements including drop-down menus 606-616, a text entry field 618, and radio buttons 620; [0122] the save button 622 may be activated; [0126]-[0127] The user can select the radio button 642 for either select existing or create new).
With regard to claim 5, the limitations are addressed above and Barnard teaches further comprising the steps of
providing a template with predefined and preset and user defined semantics to generate reports ([abstract] The system integrates medical data from multiple sources to a unified patient database...The data in the unified patient database is used to display patient data in user-friendly interface views, including a patient journey view that displays patient data in a chronological fashion organized by data types);
creating and editing templates for users to use (Fig. 3C, 325; [0093] referring to operation 326 of FIG. 3C, upon detecting that an add icon 325 is activated, data entry interface 300 can display an additional sets of fields for the user to enter information about a new diagnostic procedure);
using a combination of dropdown, checkbox, and radio button features with pre-structured language from previously created templates ([0130] The fields include an interface element for adding information about an anatomic site (e.g., right upper lobe of lung, which is selected from a drop-down menu when the “select existing” radio button is selected). The fields further include a histologic type and histologic grade. A user can fill in the diagnostic information. The interface element 662 further includes a user-selectable check box 663 that can be checked to set the diagnosed primary tumor as patient's condition for discussion); and
creating templates that correspond to studies done of diagnostic imaging and radiology reports ([abstract] The data retrieved from the disparate sources is stored to data elements in the unified patient database in a network of connected objects including data about tumor masses, treatments, reports, medical history, and diagnoses).
With regard to claim 7, the limitations are addressed above and Barnard teaches wherein
a title of each feature (dropdown or checkbox) is shown to the right of each "gear" icon within a button (Fig. 3B, flag 321; [0092] “pending diagnosis” flag 321 is asserted…In operation 322, in primary tumor field 302, “pending diagnosis” flag 321 is de-asserted to confirm that the mass in the right upper lobe of the lung is a primary tumor); and
to the right of the gear icon is a button containing the "default" selection for each feature (dropdown or checkbox) (Figs. 3B-3F, flag 321; [0092] “pending diagnosis” flag 321 is asserted…In operation 322, in primary tumor field 302, “pending diagnosis” flag 321 is de-asserted to confirm that the mass in the right upper lobe of the lung is a primary tumor).
With regard to claim 8, the limitations are addressed above and Barnard teaches wherein
the user had previously used abbreviations called "shortcuts" to be displayed in the final four options (Figs. 3B-3F, flag 321; Fig. 3F, options 366; [0092] “pending diagnosis” flag 321 is asserted…In operation 322, in primary tumor field 302, “pending diagnosis” flag 321 is de-asserted to confirm that the mass in the right upper lobe of the lung is a primary tumor);
the user has selected the pre-structured/planned words for this dropdown ([0099] These fields can include both drop-down menus, from which a type of treatment, primary cancer, status, or outcome can be selected, and fields configured to accept typed user input such as a number of cycles, start date, end date, responsible party, and additional notes);
it is populated to the right of the title of the feature so that the user can see the report as they make selections (Figs. 3B-3F, flag 321; Figs. 5D-6D); and
this language will populate the official report ([0120] Based on the fields, the data provided can be stored to corresponding data objects in a data graph. This can also include a data object for the report itself).
With regard to claim 9, the limitations are addressed above and Barnard teaches wherein
one of the formatting and content options for selection from the drop down menu includes the selection of specific phrases ([0121] the fields are configured to accept user input via interface elements including drop-down menus 606-616, a text entry field 618, and radio buttons 620…the drop-down 606 may be populated with each possible type of report which has been previously configured for the system (e.g., radiology reports, pathology reports, etc.). The user can click on the drop-down 606, view the possible types of reports, and select surgical pathology report, which will then be used to populate a corresponding object on the back-end); and
one or more pre-entered options corresponding to phrases representing actual text that will populate the body of the report are displayed as a dropdown menu so the user can see what options of prepared phrases are available to populate the body of the report (Figs. 3B-3F, flag 321; Figs. 5D-6D; [0120] these fields are accessible via a drop-down 604 labeled report information. Other selection mechanisms can be used besides drop-down lists).
With regard to claim 10, the limitations are addressed above and Barnard teaches wherein the user will see here the actual text that will populate the report (Figs. 3B-3F, flag 321; Figs. 5D-6D; [0120] these fields are accessible via a drop-down 604 labeled report information. Other selection mechanisms can be used besides drop-down lists).
With regard to claim 12, the limitations are addressed above and Barnard teaches wherein
the final output is formatted, displaying the options the user selected (Fig. 3G; [0097] FIG. 3G illustrates a patient summary view 370 of the portal 220. The patient summary view 370 is a view of a graphical user interface for viewing and modifying data for a patient); and
these selections are placed to the right of the title of the feature (Figs. 3B-3F, flag 321; [0092] “pending diagnosis” flag 321 is asserted…In operation 322, in primary tumor field 302, “pending diagnosis” flag 321 is de-asserted to confirm that the mass in the right upper lobe of the lung is a primary tumor).
With regard to claim 13, the limitations are addressed above and Barnard teaches further comprising
a feature called "auto-impression" that has been used by the user ([0061] automatically and/or via user input, fields are filled or updated; [0066] the extracted medical data can then be used to automatically populate various fields of the patient summary; [0087] Data entry interface 300 can guide a user to enter data manually and/or approve or edit automatically extracted data; [0107] FIG. 5A, FIG. 5B, FIG. 5C, and FIG. 5D illustrate example operations of document abstraction interface 400 on a pathology report. Document abstraction interface 400 can be used to guide a user to confirm data types for data to be integrated into the unified patient database, such as in fields automatically populated using machine learning);
the user has preselected language in the process of creating this template to populate the impression when that particular option/field is selected ([0061] automatically and/or via user input, fields are filled or updated; [0066] the extracted medical data can then be used to automatically populate various fields of the patient summary; [0087] Data entry interface 300 can guide a user to enter data manually and/or approve or edit automatically extracted data; [0107] FIG. 5A, FIG. 5B, FIG. 5C, and FIG. 5D illustrate example operations of document abstraction interface 400 on a pathology report. Document abstraction interface 400 can be used to guide a user to confirm data types for data to be integrated into the unified patient database, such as in fields automatically populated using machine learning); and
a user has the option of using an auto-impression field ([0121] In the example interface view 600 depicted in FIG. 6A, the fields are configured to accept user input via interface elements including drop-down menus 606-616, a text entry field 618, and radio buttons 620; [0122] the save button 622 may be activated; [0126]-[0127] The user can select the radio button 642 for either select existing or create new).
With regard to claim 14, the limitations are addressed above and Barnard teaches further comprising the steps of
creating and saving one or more medical conclusions/impressions for later ([0099] Responsive to detecting user interaction with a save button 384, the system saves the data input to the fields. For example, the data element input into each field can be saved to the unified patient database 204, organized based on a data type corresponding to that field; [0122] Once a user has entered information, the save button 622 may be activated, and, responsive to detecting user interaction with the save button 622, the entered data is saved to the unified patient database 204); and
selecting one or more medical conclusions/impressions for inclusion at the end of the report body ([0024] the plurality of sources comprise two or more of: an EMR (electronic medical record) system, a PACS (picture archiving and communication system), a Digital Pathology (DP) system, an LIS (laboratory information system), a RIS (radiology information system), patient reported outcomes, a wearable device, or a social media website).
With regard to claim 15, Barnard teaches a method and system for electronic structured medical report generation from templates ([abstract] Systems and methods are provided for managing patient data. The system integrates medical data from multiple sources to a unified patient database…The data in the unified patient database is used to display patient data in user-friendly interface views) using auto-impression that interfaces with an artificial intelligence (AI) interface to analyze images from radiology imaging studies, recorded on non-transitory computer-readable medium and capable of execution by a computer ([0061] automatically and/or via user input, fields are filled or updated; [0066] the extracted medical data can then be used to automatically populate various fields of the patient summary; [0087] Data entry interface 300 can guide a user to enter data manually and/or approve or edit automatically extracted data), the method comprising the steps of:
providing a Web App/software displaying a website or other electronic interface by a computer for electronic structured medical report generation ([0303] Aspects of embodiments can be implemented in the form of control logic using hardware (e.g. an application specific integrated circuit or field programmable gate array) and/or using computer software with a generally programmable processor in a modular or integrated manner);
starting a new report by accessing a new/edit report section of the website ([0076] The analysis result can be updated whenever new data (e.g., new diagnosis results, new biopsy results, etc.) is added for the patient; [0093] referring to operation 326 of FIG. 3C, upon detecting that an add icon 325 is activated, data entry interface 300 can display an additional sets of fields for the user to enter information about a new diagnostic procedure);
selecting a category from a categories menu for assigning a category to the new report (Figs. 3A-3C; [0007] displaying, via the GUI, the medical record and a menu configured to receive user input selecting one or more primary cancers; [0093] Moreover, a pull-down menu 332 is provided to select the site of the tumor mass found in the new diagnostic procedure for fields 334. The candidates listed in pull-down menu 332 can be provided as standardized terminologies by enrichment module 234 so that only standardized terminologies are input into fields 334);
selecting one or more pre-structured/planned words which populates a body of the new report fields (Figs. 3A-3C; [0093] a pull-down menu 332 is provided to select the site of the tumor mass found in the new diagnostic procedure for fields 334. The candidates listed in pull-down menu 332 can be provided as standardized terminologies by enrichment module 234 so that only standardized terminologies are input into fields 334);
displaying the new report as selections are made and added to the new report body ([0094] FIG. 3D, FIG. 3E, and FIG. 3F illustrate examples of operations to create a new page for a second primary tumor after page 311 (for the primary tumor at right upper lobe of the lung) is populated with data);
creating a template (Fig. 3C, 325; [0093] referring to operation 326 of FIG. 3C, upon detecting that an add icon 325 is activated, data entry interface 300 can display an additional sets of fields for the user to enter information about a new diagnostic procedure);
offering one or more options for formatting and content for selection from a drop down menu (Fig. 3C-3F; [0093] a pull-down menu 332 is provided to select the site of the tumor mass found in the new diagnostic procedure for fields 334. The candidates listed in pull-down menu 332 can be provided as standardized terminologies by enrichment module 234 so that only standardized terminologies are input into fields 334); and
generating a final report output with pre-determined formatting and any selected options (Fig. 3G; [0097] FIG. 3G illustrates a patient summary view 370 of the portal 220. The patient summary view 370 is a view of a graphical user interface for viewing and modifying data for a patient);
displaying a series of drop down and check box menus ([0130] The fields include an interface element for adding information about an anatomic site (e.g., right upper lobe of lung, which is selected from a drop-down menu when the “select existing” radio button is selected). The fields further include a histologic type and histologic grade. A user can fill in the diagnostic information. The interface element 662 further includes a user-selectable check box 663 that can be checked to set the diagnosed primary tumor as patient's condition for discussion);
selecting a sub-category for the template (Fig. 6D);
once the template is created, presenting the user the option to create a shortcut by selecting one or more words or criteria, which are unique to a dropdown selection from the dropdown menu (Fig. 6D; [0130]-[0132] The edit drawer 674 is an interface element such as a modal that opens on detecting user interaction such as a click. The edit drawer 674 includes fields for accepting user input to edit the information previously input (e.g., via interface element 662). The components of the drawer 674 include data entry fields that can be used to edit fields such as date, diagnosis, pending diagnosis 676, anatomic site 680, and histologic type 682. The edit drawer 674 further includes radio buttons 678 to select existing or create new anatomic sites. These interface elements can be used to retrieve data to update the data stored to unified patient database 204);
during the process of creating the template, the user has used an auto-impression feature defined as the pre-selection of language for particular options/fields ([0061] automatically and/or via user input, fields are filled or updated; [0066] the extracted medical data can then be used to automatically populate various fields of the patient summary; [0087] Data entry interface 300 can guide a user to enter data manually and/or approve or edit automatically extracted data; [0107] FIG. 5A, FIG. 5B, FIG. 5C, and FIG. 5D illustrate example operations of document abstraction interface 400 on a pathology report. Document abstraction interface 400 can be used to guide a user to confirm data types for data to be integrated into the unified patient database, such as in fields automatically populated using machine learning);
one button selection populates the auto-impression feature in a findings section with the pre-selection of language for particular options/fields associated with the auto-impression feature, which generates pre-structured language in the report body ([0121] In the example interface view 600 depicted in FIG. 6A, the fields are configured to accept user input via interface elements including drop-down menus 606-616, a text entry field 618, and radio buttons 620; [0122] the save button 622 may be activated; [0126]-[0127] The user can select the radio button 642 for either select existing or create new); and
using a combination of dropdown, checkbox, and radio button features with pre-structured language from previously created templates ([0130] The fields include an interface element for adding information about an anatomic site (e.g., right upper lobe of lung, which is selected from a drop-down menu when the “select existing” radio button is selected). The fields further include a histologic type and histologic grade. A user can fill in the diagnostic information. The interface element 662 further includes a user-selectable check box 663 that can be checked to set the diagnosed primary tumor as patient's condition for discussion).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Barnard et al. (U.S. 2024/0021280) in view of Rao et al. (U.S. 2022/0253592).
With regard to claim 6, the limitations are addressed above. Barnard teaches that each paragraph has the potential to have one or more vertical tabs that allow the user to select the appropriate option for a patient (Fig. 3B; [0092] Referring to operation 320 of FIG. 3B, primary tumor field 302 can receive input text “right upper lobe of the lung” (e.g., a location), but the diagnosis is not yet confirmed and is still pending, and “pending diagnosis” flag 321 is asserted. The title of patient summary page 311 remains “Unnamed Primary.”);
a checkbox associated with one or more headers or feature titles corresponding to the body contents of the report is generated and displayed (Figs. 3B-3F; [0130] The interface element 662 further includes a user-selectable check box 663 that can be checked to set the diagnosed primary tumor as patient's condition for discussion);
the checkbox associated with each corresponding header or feature title is used to either allow or suppress these specific words to be included in the generated final report ([0130] The fields include an interface element for adding information about an anatomic site (e.g., right upper lobe of lung, which is selected from a drop-down menu when the “select existing” radio button is selected). The fields further include a histologic type and histologic grade. A user can fill in the diagnostic information. The interface element 662 further includes a user-selectable check box 663 that can be checked to set the diagnosed primary tumor as patient's condition for discussion). However, Barnard does not specifically teach:
- wherein feature paragraph titles include:
clinical indication/history,
technique,
findings, and
impression;
Rao teaches receiving a medical report created by a medical professional at creation time [abstract]. Rao also teaches wherein feature paragraph titles include clinical indication/history, technique, findings, and impression ([0508] A set of sections of the medical report can include: a Procedure section; an Accession section; a Date of Exam section; a Referring Physician section; an Exam section; a Technique section; a Contrast section; a Comparisons section; a Findings section; an Impression section; and/or other sections included in radiology reports and/or other types of medical reports. Some or all of these sections can be denoted with corresponding headers and/or labels in the text of the medical report, such as: “PROCEDURE(S)”; “ACCESSION(S)”; “DATE OF EXAM”; “REFERRING PHYSICIAN”; “EXAM”; “TECHNIQUE”; “CONTRAST”; “COMPARISONS”; “FINDINGS”; “IMPRESSION”; and/or other corresponding labels). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the oncology workflow for clinical decision support as taught by Barnard, with the medical report with set sections as taught by Rao, to have achieved medical imaging devices and knowledge-based systems used in conjunction with client/server network architectures.
With regard to claim 16, the limitations are addressed above and Barnard teaches further comprising analyzing images from radiology imaging studies ([0076] The analysis result can be updated whenever new data (e.g., new diagnosis results, new biopsy results, etc.) is added for the patient; [0093] referring to operation 326 of FIG. 3C, upon detecting that an add icon 325 is activated, data entry interface 300 can display an additional sets of fields for the user to enter information about a new diagnostic procedure). However, Barnard does not specifically teach:
- interfacing with an artificial intelligence interface to analyze images
Rao teaches receiving a medical report created by a medical professional at creation time [abstract]. Rao also teaches interfacing with an artificial intelligence interface to analyze images ([0052] Some or all of the medical scan subsystems 101 described herein can perform some of all of their respective functionality by utilizing at least one artificial intelligence algorithm and/or technique...utilizing a computer vision techniques and/or natural language processing techniques in accordance with artificial intelligence and/or machine learning; [0067] The medical report natural language model can be generated in accordance with natural language processing techniques, at least one artificial intelligence algorithm and/or technique, and/or at least one machine learning algorithm and/or technique). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the oncology workflow for clinical decision support as taught by Barnard, with the artificial intelligence to analyze images as taught by Rao, to have achieved medical imaging devices and knowledge-based systems used in conjunction with client/server network architectures.
With regard to claim 17, the limitations are addressed above and Barnard teaches wherein published radiology criteria and images are used for defining normal anatomy or variations and abnormal pathology and differentiating them from each other ([0010] an EMR (electronic medical record) system, a PACS (picture archiving and communication system), a Digital Pathology (DP) system, an LIS (laboratory information system), and a RIS (radiology information system); [0063] This makes it easy to visualize patient cancer milestones and cancer progression (as it metastasizes, relapses, or recurs, for example). The patient journey includes a set of objects in a timeline. The objects can correspond to categories such as pathology, diagnostics, and treatments; [0089] Each of patient summary page 311, patients report 312, oncology treatment information 314, current medications information 316, and patient history information 318 further includes a publish button. For example, patient summary page 311 includes a publish button 319). However, Barnard does not specifically teach:
- the AI uses radiology criteria
Rao teaches receiving a medical report created by a medical professional at creation time [abstract]. Rao also teaches interfacing with an artificial intelligence to use radiology criteria ([0052] Some or all of the medical scan subsystems 101 described herein can perform some of all of their respective functionality by utilizing at least one artificial intelligence algorithm and/or technique...utilizing a computer vision techniques and/or natural language processing techniques in accordance with artificial intelligence and/or machine learning; [0067] The medical report natural language model can be generated in accordance with natural language processing techniques, at least one artificial intelligence algorithm and/or technique, and/or at least one machine learning algorithm and/or technique). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the oncology workflow for clinical decision support as taught by Barnard, with the artificial intelligence to use radiology criteria as taught by Rao, to have achieved medical imaging devices and knowledge-based systems used in conjunction with client/server network architectures.
With regard to claim 18, the limitations are addressed above and Barnard teaches wherein once the system has analyzed the images of the study, the interface will display the images with the pathology outlined with a bright color, so the physician can identify it ([0126] A particular color may be used to highlight the fields and indicate that the system has retrieved the data populating these fields from an EMR or another external database; [0128] The highlighting may be in a different color than used to highlight the fields shown in FIG. 6C, to indicate a different status for the data populating these fields; [0132] The highlight remains until editing is complete. On click, the color goes to a focused state and an edit drawer 674 is opened; [0165] For each category, the associated information may be color-coded (e.g., events in orange, pathology in red, etc.)). However, Barnard does not specifically teach:
- the AI has analyzed the images
Rao teaches receiving a medical report created by a medical professional at creation time [abstract]. Rao also teaches interfacing with an artificial intelligence to analyze the images ([0052] Some or all of the medical scan subsystems 101 described herein can perform some of all of their respective functionality by utilizing at least one artificial intelligence algorithm and/or technique...utilizing a computer vision techniques and/or natural language processing techniques in accordance with artificial intelligence and/or machine learning; [0067] The medical report natural language model can be generated in accordance with natural language processing techniques, at least one artificial intelligence algorithm and/or technique, and/or at least one machine learning algorithm and/or technique). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the oncology workflow for clinical decision support as taught by Barnard, with the artificial intelligence to analyze images as taught by Rao, to have achieved medical imaging devices and knowledge-based systems used in conjunction with client/server network architectures.
With regard to claim 19, the limitations are addressed above and Barnard teaches wherein the web app generates a medical/radiology report from its analysis ([0005] identifying the primary cancer by analyzing the data elements and the data types; displaying the GUI comprising a prompt for a user to confirm the primary cancer identification; and receiving user confirmation of the primary cancer identification via the GUI.).
With regard to claim 20, the limitations are addressed above and Barnard teaches wherein a radiologist will review the report and sign or edit the report for the final report ([0066] Moreover, the portal also allows a user to import a document file (e.g., a pathology report, a doctor note, etc.) from the aforementioned data sources; [0171] determining treatment effectiveness is a complex judgement based on elements of clinical response, radiologic response; [0204] data object 1306 is populated with data which originates from a radiology report PDF 1312 that was obtained on a given date). However, Barnard does not specifically teach:
- once the report is signed the referring physician can go on the WA to review a video created by the AI showing the pathology and can review the video with the patient; and
- the web app allows the patient to sign a consent and acknowledgement that the patient reviewed it
Rao teaches that once the report is signed the referring physician can go on the WA to review a video created by the AI ([0067] The medical report natural language model can be generated in accordance with natural language processing techniques, at least one artificial intelligence algorithm and/or technique, and/or at least one machine learning algorithm and/or technique) showing the pathology and can review the video with the patient ([0564] describe digital information whose content corresponds to any of a number of desired types (e.g., data, video, speech, text, graphics, audio, etc. any of which may generally be referred to as ‘data’)); and that the web app allows the patient to sign a consent and acknowledgement that the patient reviewed it ([0408] creation of a medical report corresponds to completion of a medical report by a medical professional, signing of a completed medical report by a medical professional, and/or receiving of a completed medical report from a client device associated with the medical professional…the creation time of a medical report is established based on a time that a digital signature and/or indication that the medical report be submitted received from a client device 120, for example, in accordance with a medical professional submitting a drafted medical report that is generated via interaction with their client device 120). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the oncology workflow for clinical decision support as taught by Barnard, with the AI-generated video review and patient consent features as taught by Rao, to have achieved medical imaging devices and knowledge-based systems used in conjunction with client/server network architectures.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Barnard et al. (U.S. 2024/0021280) in view of Lyman et al. (U.S. 2024/0161035).
With regard to claim 11, the limitations are addressed above and Barnard teaches wherein when creating templates the user has an option to create a shortcut ([0004] creating a patient record for a patient in a unified patient database, the patient record comprising an identifier of the patient and one or more data objects related to medical data associated with the patient; [0126] the user can select the create new button and a text entry field will be displayed for entering a name for the new anatomic site); and a user can select, instead of the shortcut, to generate a report or part of a report ([0076] The analysis result can be updated whenever new data (e.g., new diagnosis results, new biopsy results, etc.) is added for the patient; [0093] referring to operation 326 of FIG. 3C, upon detecting that an add icon 325 is activated, data entry interface 300 can display an additional sets of fields for the user to enter information about a new diagnostic). However, Barnard does not specifically teach:
- a thumbnail feature, which displays numerous thumbnail photos or diagrams, that a user can select instead of the shortcut, to generate a report or part of a report
Lyman teaches a medical scan viewing system to generate inference data [abstract]. Lyman also teaches a thumbnail feature, which displays numerous thumbnail photos or diagrams, that a user can select instead of the shortcut, to generate a report or part of a report (Figs. 13P-13Q; Fig. 26B; Fig. 27A; [0625] the ROC adjustment tool displays thumbnails of the validated set. The user can interact with the ROC adjustment tool to switch between and/or compare different operating points of the ROC curve; [0626] abnormal scan thumbnails appear in one column and normal scan thumbnails appear in the other. As operating point is toggled, the thumbnails move between the two columns. For example, two colors (i.e., red for “abnormal” and green for “normal”) overlay the thumbnails). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the oncology workflow for clinical decision support as taught by Barnard, with the medical scan system showing thumbnail images as taught by Lyman, to have achieved medical imaging devices and knowledge-based systems used in conjunction with client/server network architectures.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREA C. LEGGETT whose telephone number is (571)270-7700. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREA C LEGGETT/Primary Examiner, Art Unit 2171