DETAILED ACTION
Claims 21-45 are currently pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/26/2026 has been entered.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 21-38 & 41-45 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 21 recites selecting or inferring a diagnosis based partially on “association of the selected image icons determined from the stored description text associated with the selected image icon”. The Examiner asserts that it is unclear how an association of selected image icons is determined from stored description text associated with the selected image icons. Furthermore, the claim recites both “the selected image icons” and “the selected image icon”, and the Examiner is unable to determine which image icons are being referred to. Appropriate clarification and correction are required. Dependent claims 22-38 & 41-45 are also rejected due to their dependency from claim 21.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 21-24, 26-30, & 39-41 are rejected under 35 U.S.C. 103 as being unpatentable over Chernilo (US20120290957) in view of Gustafson (US20110137132), Collins (US20060274928), & Papier (US20140037162).
As per claim 21, Chernilo teaches a method comprising:
generating a diagnostic graphical user interface that is related to a patient test result presented to a user via a third-party interface (para. 67: interface used for diagnosing abnormalities associated with patient), wherein the graphical user interface comprises a number of image icons, each icon including at least a portion of a medical photographic image of an example characteristic (Fig. 4: user interface including plurality of representations in the form of illustrations);
receiving an anatomical location within the patient test result via the diagnostic graphical user interface, wherein the anatomical location identifies at least a portion of a patient presented in the patient test result (Figs. 2, 4: test result including x-ray image; anatomical location can be selected for specific portion of patient);
receiving a selection of the image icon to associate with at least one of the received locations, wherein the image icon is related to diagnosing patients (Fig. 4; para. 81-83: user can select representation image to associate with specific patient location for diagnosing specific condition);
wherein diagnosis is selected or inferred based at least partially on the selected image icons and the anatomical location (Figs. 7-8; para. 81, 84-85: diagnosis user interface includes fracture identified by selected fracture illustration representation and fracture classification placed on specific anatomical location).
Chernilo does not expressly teach automatically generating a diagnostic report that includes at least a portion of a diagnosis for the patient.
Gustafson teaches narrative report generation and the cataloging of findings describing each ROI, where the generated reports are based on abnormalities selected by a user (para. 0017; Fig. 10).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Chernilo and Gustafson, that a method for using selectable diagnoses for patient test results would include automatically generating a report. With Chernilo disclosing the use of selectable diagnoses for patient test results and Gustafson additionally disclosing automatically generating a report, one of ordinary skill in the art would have been motivated to combine these teachings in order to expedite report generation.
Chernilo and Gustafson do not expressly teach automatically generating a structured diagnostic report.
Collins, however, teaches generating a DICOM-structured report based on user-selected inputs (para. 7376).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the aforementioned features of Collins with Chernilo and Gustafson based on the motivation of facilitating more accurate and improved diagnoses (Collins, para. 0066).
Collins, Chernilo, and Gustafson do not expressly teach image icons from a previously diagnosed case of a different patient and depicting example characteristics selected as a typical presentation of the example characteristic, wherein each image icon is associated with stored descriptive text identifying the example characteristics; and wherein diagnosis is selected or inferred based at least partially on a semantic meaning of the stored descriptive text associated with the selected image icons, an association of the selected image icons determined from the stored descriptive text associated with the selected image icons, and the anatomical location.
Papier, however, teaches using cross-referenced knowledge and image databases to assist a user in making a diagnosis (abstract). Papier further teaches image icons representing a diagnosis of a specific condition that have been previously associated with a finding by an author or editor (para. 104). Papier also teaches that the images are tagged with textual labels identifying specific findings for the image (Fig. 7; para. 109). Papier also teaches a user selecting various image icons, where the associated text is used as a diagnosis for the patient (Figs. 7 & 8; para. 109-113). Papier further teaches using a selected location as input to generate a diagnosis (Figs. 11 & 12; para. 111-113).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the aforementioned features of Papier with Chernilo, Gustafson, and Collins based on the motivation of assisting users in identifying possible diagnoses based upon a set or constellation of patient findings (Papier, para. 4).
As per claim 22, Chernilo, Gustafson, Collins, & Papier teach the method of claim 21. Chernilo does not expressly teach wherein the diagnostic graphical user interface includes a breast imaging diagnostic graphical user interface, and the report may include at least a portion of the diagnosis of the patient based on breast imaging.
Chernilo, however, teaches diagnosing breast images using a graphical user interface and generating a report using the breast diagnosis data (Fig. 2; para. 17).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 21 and are incorporated herein.
As per claim 23, Chernilo, Gustafson, Collins, & Papier teach the method of claim 22. Chernilo teaches wherein the image icon(s) include at least a portion of at least one of one or more CT scans, one or more mammograms, one or more radiographs, one or more MRI scans, one or more PET scans, one or more ultrasounds, and/or one or more other medical imaging exams (Fig. 4: image illustrations represent a portion of the x-ray image).
As per claim 24, Chernilo, Gustafson, Collins, & Papier teach the method of claim 21. Chernilo teaches further comprising:
restricting the selection of one or more images when at least one anatomic location has not been received (Figs. 2-4: image illustrations not displayed to user for selection until patient location has been selected).
As per claim 26, Chernilo, Gustafson, Collins, & Papier teach the method of claim 21. Chernilo does not expressly teach further comprising:
generating one or more indicia based on a previously generated report for a patient, wherein each indicia indicates at least one of an anatomic location in the patient or one or more image icons.
Gustafson, however, teaches retrieving previous examination data, including region-of-interest data present in the database (para. 12, 71, 118).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 21 and are incorporated herein.
As per claim 27, Chernilo, Gustafson, Collins, & Papier teach the method of claim 21. Chernilo does not expressly teach further comprising:
receiving one or more follow-up test results for a plurality of patients; and
determining outcome(s) based at least partially on received image icon selections for the plurality of patients and the follow-up test results.
Gustafson, however, teaches receiving updated results, where the system can update a diagnostic prediction based on the updated data and previously selected ROIs (para. 79, 87, 95).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 21 and are incorporated herein.
As per claim 28, Chernilo, Gustafson, Collins, & Papier teach the method of claim 21. Chernilo teaches further comprising:
receiving a selection of image icon(s) for a new patient (Fig. 4; para. 81-83: user can select representation image to associate with specific patient location for diagnosing specific condition).
Chernilo does not expressly teach determining a probability of an outcome based on one or more of the received selections of image icons for the new patient and the outcome.
Gustafson, however, teaches determining a probability of an outcome based on abnormalities selected by a user (Fig. 7; para. 87-88).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 21 and are incorporated herein.
As per claim 29, Chernilo, Gustafson, Collins, & Papier teach the method of claim 21, but do not expressly teach wherein the graphical user interface further comprises association icon(s), wherein at least one of the association icons indicates a relationship between two or more selected image icons.
Collins, however, teaches an association button allowing a user to save specific selected features for a particular image and lesion (Fig. 6; para. 67-69).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 21 and are incorporated herein.
As per claim 30, Chernilo, Gustafson, Collins, & Papier teach the method of claim 29. Chernilo does not expressly teach wherein automatically generating a report further comprises:
retrieving template(s) that include words comprising a diagnosis based on the selected image icon(s) and/or the association icon(s).
Gustafson, however, teaches generating a diagnostic template report prefilled with words based on user selection of various objects, including ROIs, in the user interface (para. 113).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 21 and are incorporated herein.
Claims 39-40 recite substantially similar limitations as those already addressed in claim 21, and, as such, are rejected for similar reasons as given above.
As per claim 41, Chernilo, Gustafson, Collins, & Papier teach the method of claim 21. Chernilo does not expressly teach wherein the medical photographic image of each image icon is selected by an administrative user from a set of previously diagnosed patient test results and stored in association with the stored descriptive text identifying the example characteristic.
Papier, however, teaches using cross-referenced knowledge and image databases to assist a user in making a diagnosis (abstract). Papier further teaches image icons representing a diagnosis of a specific condition that have been previously associated with a finding by an author or editor (para. 104). Papier also teaches that the images are tagged with textual labels identifying specific findings for the image (Fig. 7; para. 109).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 21 and are incorporated herein.
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Chernilo, Gustafson, Collins, & Papier, as applied to claim 21 above, and in further view of Dutta (US6983424).
As per claim 25, Chernilo, Gustafson, Collins, & Papier teach the method of claim 21, but do not expressly teach further comprising:
automatically adjusting the number of image icons on the generated graphical user interface based at least partially on a screen dimension of a user device, where the graphical user interface is generated for presentation on the user device.
Dutta, however, teaches adjusting the number of icons displayed on a graphical user interface based on a user’s device screen dimension parameters (col. 9, lines 6-16).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the aforementioned features of Dutta with Chernilo, Gustafson, Collins, & Papier based on the motivation of providing an improved method of displaying icons on a video screen by scaling the icons within a minimum and maximum size to fit the available area of the video screen (Dutta, col. 2, lines 1-4).
Prior Art Rejection
All of the cited references fail to expressly teach or suggest, either alone or in combination, the features found within dependent claims 31-38 & 42-45.
The most relevant prior art of record includes:
Chernilo (US20120290957) teaches displaying an image of a part of a patient's body on a display device of the apparatus, the image including an abnormality associated with that part of the patient's body; a library of representations accessible by a user, the representations depicting abnormalities associated with that part of the body, at least some of the representations being able to be displayed on the display arrangement; and a selection means operable by the user for selecting one of the representations and for overlaying the selected representation on the image to enable the user to identify the abnormality.
Gustafson (US20110137132) teaches a method and system for analyzing and retrieving tissue abnormality tracking data, providing a tool for a radiologist that includes a report summarizing the statistical frequency of diagnosed patients.
Collins (US20060274928) teaches a system and method for computer-aided detection (“CAD”), relating to computer-aided automatic detection of abnormalities in, and analysis of, medical images. Medical images are analyzed to extract and identify a set of features in the image relevant to a diagnosis.
Papier (US20140037162) teaches utilizing a relational database to dynamically respond to textual and visual findings as an aid to assist a user in reaching a reasoned conclusion based upon information available by direct observation and comparison with stored image and textual data.
Response to Arguments
Applicant’s arguments with respect to the 35 U.S.C. § 103(a) rejection on pages 11-16 regarding claims 21-38 & 41-45 have been considered but are moot in view of the new grounds of rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Weiner (US20060264749) teaches a method of operating a plurality of user interfaces coupled to a plurality of medical devices through a communication network that includes performing medical diagnostics on a patient using at least two of the plurality of medical devices, wherein the user interface is configured to control the at least two of the plurality of medical devices, and displaying a result of the medical diagnostics on at least one of the plurality of user interfaces.
Morita (US20080208631) teaches comprehensive clinical documentation of a patient's lifetime via a unified interface. Certain embodiments provide a user interface system displaying an electronic patient record. The system includes a timeline representation of a patient record. The timeline includes a plurality of data points related to a patient over time. The plurality of data points provides patient data aggregated from a plurality of information sources.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jonathan K Ng whose telephone number is (571)270-7941. The examiner can normally be reached M-F 8 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe can be reached on 571-270-7949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Jonathan Ng/Primary Examiner, Art Unit 3619