Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claim(s)
Claims 1-4, 8-9, 11, 15-20 have been examined. Claims 1-4, 8-9, 11, 15 have been amended. Claims 5-7, 10, 12-14 have previously been canceled. Claims 18-20 have been added.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8-9, 11, 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Sadeghi et al. (US 20140278448 A1, hereinafter Sadeghi) in view of Tinsley (US 20100324927 A1), further in view of Delaney et al. (US 20140280353 A1, hereinafter Delaney), and further in view of Chang et al. (US 20110082863 A1, hereinafter Chang).
With respect to claim 1, Sadeghi teaches a computer-implemented method for correcting a medical examination report, the method comprising:
extracting examination data from the examination report, the examination data relating to findings of an examination (‘448; Abstract: medical report; Para 0173: either an original or a re-formatted text narrative may be received by fact extraction component 104, which may perform processing to extract one or more medical facts (e.g., clinical facts) from the text narrative. The text narrative may be received from ASR engine 102, from medical transcriptionist 130, directly from clinician 120 via user interface 110, or in any other suitable way. Any suitable technique(s) for extracting facts from the text narrative may be used, as aspects of the present disclosure are not limited in this respect; Para 0033: a medical professional preparing a note regarding an examination or study of a patient's left leg may, somewhere in the note, refer instead to the patient's lower right extremity. Such errors are referred to herein as “laterality” errors.);
extracting semantic data from the examination report, the semantic data relating to semantic meanings of linguistic structures in the examination report (‘448; Para 0100: an ontology linked to medical terms may be used by a CLU engine in some embodiments and may facilitate identifying errors and/or critical results in a medical report. For instance, with respect to the example shown in FIG. 5, an ontology may indicate that the two terms “total hip arthroplasty” and “THA” refer to the same concept in the ontology. Accordingly, both terms may be annotated by the CLU engine with the same entity type label “Procedure” (not shown), indicating that each of them is a mention of an entity of the type “Procedure”; Para 0135: automatic extraction of clinical facts from a textual representation of a clinician's free-form narration (e.g., from a text narrative) of a patient encounter may be enhanced by re-formatting the text narrative to facilitate the automatic extraction of the clinical facts. For example, in some embodiments a fact extraction component that performs the automatic fact extraction may make use of linguistic knowledge that has some dependency on accurate placement of sentence boundaries in the text narrative. Accordingly, in some embodiments, the fact extraction may be enhanced by adding, removing and/or correcting sentence boundaries in the text narrative to comply with the linguistic structure expected by the fact extraction component; Para 0145: when medical facts are extracted from a free-form narration, a fact extraction component may encounter situations in which disambiguation is desired between multiple facts that could potentially be extracted from the same portion of the free-form narration.
In one example, a term in the free-form narration might be linked to two different concepts in a formal ontology (described below) used by the fact extraction component, and it might not be likely that both of those concepts were intended to coexist in the free-form narration. In another example, the fact extraction component may apply a statistical model (examples of which are described below) to identify facts to be extracted from a certain portion of text, and the statistical model may come up with multiple alternative hypotheses for a single fact to be extracted. In some embodiments, the statistical model may be used to score the alternative hypotheses based on probability, confidence, or any other suitable measure of an estimated likelihood that each alternative accurately represents an intended semantic meaning of the portion of text from which it is to be extracted; Para 0177: automatic extraction of medical facts from a clinician's free-form narration may involve parsing the free-form narration to identify medical terms that are represented in the lexicon of the fact extraction component. Concepts in the formal ontology linked to the medical terms that appear in the free-form narration may then be identified, and concept relationships in the formal ontology may be traced to identify further relevant concepts. Through these relationships, as well as the linguistic knowledge represented in the formal ontology, one or more medical facts may be extracted. For example, if the free-form narration includes the medical term “hypertension” and the linguistic context relates to the patient's past, the fact extraction component may automatically extract a fact indicating that the patient has a history of hypertension.
On the other hand, if the free-form narration includes the medical term “hypertension” in a sentence about the patient's mother, the fact extraction component may automatically extract a fact indicating that the patient has a family history of hypertension. In some embodiments, relationships between concepts in the formal ontology may also allow the fact extraction component to automatically extract facts containing medical terms that were not explicitly included in the free-form narration. For example, the medical term “meningitis” can also be described as inflammation in the brain. If the free-form narration includes the terms “inflammation” and “brain” in proximity to each other, then relationships in the formal ontology between concepts linked to the terms “inflammation”, “brain” and “meningitis” may allow the fact extraction component to automatically extract a fact corresponding to “meningitis”, despite the fact that the term “meningitis” was not stated in the free-form narration.);
Tinsley teaches
identifying a discrepancy in the examination report between the extracted examination data and the extracted semantic data using a neural network machine learning model trained using training examination data and training semantic data to identify relationships between extracted examination data and extracted semantic data (‘927; Para 0012: a system for utilizing and analyzing information to provide a desired outcome of the present disclosure, the at least one item of evidence within the evidence repository was extracted and provided to the evidence repository by establishing patterns for translation from text from at least one of the at least two evidentiary sources to at least one medical ontology by observing regularities in the text and mapping the irregularities to control structures in the at least one medical ontology. In an additional embodiment, the at least one reasoning approach is selected from the group consisting of rule-based reasoning, a Semantic Web inference engine, a Bayesian network model, a neural network, and case-based reasoning. In another embodiment, the at least one reasoning approach comprises two reasoning approaches comprising rule-based reasoning and a Bayesian network model. In yet another embodiment, the at least one outcome is selected from the group consisting of a member outcome, a case manager outcome, a cost/utility return-on-investment data, and a documented report.);
receiving a resolution strategy regarding how to resolve the identified discrepancy between the extracted examination data and the extracted semantic data (‘927; Para 0072: Knowledge Agents 304 exploit structure to extract useful entities and relationships for populating the domain/application ontology 302 automatically. Once created, they can be scheduled to automatically keep the domain/application ontology 302 updated with respect to changes in the knowledge sources. Semantic ambiguity resolution is an exemplary useful capability associated with this activity, as well as with the metadata extraction. The domain/application ontology 302 can be exported in RDF/RDFS barring some constraints that cannot be presented in RDF/RDFS; Para 0175: When the associations are extracted, an expert is subjected to a structured interview to resolve the biases in the causal maps or given an adjacency matrix representation of the associations to specify the relations); and
resolving the identified discrepancy between the extracted examination data and the extracted semantic data wherein resolving comprises automatically adjusting, based on the received resolution strategy, one or more of the examination data from the examination report and the semantic data from the examination report (‘927; Para 0175: When the associations are extracted, an expert is subjected to a structured interview to resolve the biases in the causal maps or given an adjacency matrix representation of the associations to specify the relations.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Sadeghi with the technique of senior care navigation systems and methods for using the same as taught by Tinsley, the motivation being to resolve the discrepancy between the extracted examination data and the extracted semantic data in the examination report.
Delaney teaches
wherein identifying the discrepancy comprises consulting an ontology to determine whether a relationship exists between the extracted examination data and the extracted semantic data, wherein the discrepancy is identified upon determining that a relationship does not exist in the ontology between the extracted examination data and the extracted semantic data (‘353; Para 0043: Delaney describes that a fact extraction component may make use of one or more ontologies linked to one or more lexicons of medical terms. An ontology may be implemented as a relational database, or in any other suitable form, and may represent semantic concepts relevant to the medical domain. In some embodiments, such an ontology may also represent linguistic concepts related to ways the semantic concepts may be expressed in natural language; Para 0138: Method 700 begins at act 710, at which the current token (i.e., the token currently to be processed) in a text portion being considered for entity labeling may be identified. At act 720, the current token may be matched with a matching concept in an ontology. As discussed above, the matching concept may represent a semantic meaning of the current token, and the current token may be one of a set of possible terms for the matching concept. At act 730, a number of concepts hierarchically related to the matching concept may be identified in the ontology. These hierarchically related concepts may be included in the current token's feature set at act 740. Method 700 ends at act 750, at which the feature set may be used to determine a measure related to a likelihood that the text portion including the current token corresponds to a particular entity type. (The examiner interprets that the current token may be matched with a matching concept in an ontology. As discussed above, the matching concept may represent a semantic meaning of the current token, and the current token may be one of a set of possible terms for the matching concept, such that a discrepancy is identified upon determining that a relationship does not exist between the extracted examination data and the extracted semantic data.));
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Sadeghi/Tinsley with the technique of entity detection as taught by Delaney, the motivation being to resolve the discrepancy between the extracted examination data and the extracted semantic data in the examination report.
Chang discloses
wherein resolving comprises automatically adjusting, based on the received resolution strategy, one or more of the examination data from the examination report and the semantic data from the examination report (‘863; Para 0020: a semantic analyzer is configured to provide a ranked list of semantic terms that reflect the theme and topics of a document. Such ranked semantic terms can be selected by a user to be keywords for the document. Specifically, the text and the document can have no relationship to any pre-selected keywords before the semantic analyzer performs text extraction. The semantic analyzer extracts text from the document and performs semantic analysis on the extracted text. The semantic analyzer provides a plurality of ranked semantic terms as a result of the semantic analysis and associates semantic terms with the document as semantic keywords. The semantic terms define content to be presented with the document where the content is an advertisement, a link to a remote information resource or a second document.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of Sadeghi/Tinsley/Delaney with the technique of semantic analysis of documents to rank terms as taught by Chang, the motivation being to resolve the discrepancy between the extracted examination data and the extracted semantic data in the examination report.
Claims 8 and 15 are rejected for the same reasons as Claim 1.
With respect to claim 2, the combined art teaches the computer-implemented method of claim 1 wherein the examination report relates to a radiology examination (‘448; Para 0032).
Claim 9 is rejected for the same reasons as Claim 2.
With respect to claim 3, the combined art teaches the computer-implemented method of claim 1 further comprising presenting, using a user interface, the identified discrepancy to a user, wherein receiving the resolution strategy includes receiving, using the user interface, user feedback regarding how to resolve the identified discrepancy between the extracted examination data and the extracted semantic data (‘448; Paras 0057-0058).
With respect to claim 4, the combined art teaches the computer-implemented method of claim 1 wherein a user provides the examination data to the examination report and includes at least one staging identification related to an examination (‘448; Para 0034: Examples of gender errors include, but are not limited to, pronoun mismatches (e.g., “he” vs. “she”), anatomy mismatches (e.g., the existence of a prostate in a female study), and pathology mismatches (e.g., ovarian cancer for a male patient). These errors may occur for various reasons. For example, a radiologist may simply be looking at images for a male patient X while dictating into a female patient Y's medical report).
Claim 11 is rejected for the same reasons as Claim 4.
With respect to claim 16, the combined art teaches the method of claim 1 further comprising: receiving the examination report via a microphone of a user interface, the receiving including performing speech-to-text transcription (‘448; Para 0046, Para 0059).
With respect to claim 17, the combined art teaches the non-transitory computer-readable medium of claim 15 further comprising: computer-executable instructions for receiving the examination report via a microphone and speech-to-text transcription (‘448; Para 0046, Para 0059).
With respect to claim 18, the combined art teaches the computer-implemented method of claim 1, wherein identifying a discrepancy in the examination report further comprises consulting one or more medical guidelines (‘927; Para 0057: An ontology-driven extraction of linguistic patterns may then automatically reconstruct the knowledge captured from the online evidence based resources, facilitating a more effective modeling and authoring of evidence based practice guidelines.).
With respect to claim 19, the combined art teaches the computer-implemented method of claim 1, wherein extracting semantic data from the examination report comprises recognizing a semantic meaning of one or more medical words or phrases in the examination report (‘927; Para 0169: here NP1 and NP2 are noun phrases, can be extracted.).
With respect to claim 20, the combined art teaches the computer-implemented method of claim 1, wherein extracting examination data, extracting semantic data, and identifying a discrepancy are performed in real-time as a user generates the examination report, and further comprising: alerting the user in real-time of an identified discrepancy in the examination report (‘863; Para 0057).
With respect to claim 21, the combined art teaches the method of claim 20, wherein the alert comprises a prompt for a resolution strategy (‘927; Para 0072).
With respect to claim 22, the combined art teaches the system of claim 8, wherein identifying a discrepancy in the examination report further comprises consulting one or more medical guidelines (‘448; Para 0193).
With respect to claim 23, the combined art teaches the system of claim 8, wherein extracting examination data, extracting semantic data, and identifying a discrepancy are performed in real-time as a user generates the examination report, and further comprising: alerting the user in real-time of an identified discrepancy in the examination report (‘863; Para 0057).
With respect to claim 24, the combined art teaches the system of claim 23, wherein the alert comprises a prompt for a resolution strategy (‘927; Para 0072).
With respect to claim 25, the combined art teaches the non-transitory computer-readable medium of claim 15, wherein identifying a discrepancy in the examination report further comprises consulting one or more medical guidelines (‘448; Para 0193).
With respect to claim 26, the combined art teaches the non-transitory computer-readable medium of claim 15, wherein extracting examination data, extracting semantic data, and identifying a discrepancy are performed in real-time as a user generates the examination report, and further comprising: alerting the user in real-time of an identified discrepancy in the examination report (‘863; Para 0057).
With respect to claim 27, the combined art teaches the non-transitory computer-readable medium of claim 26, wherein the alert comprises a prompt for a resolution strategy (‘448; Para 0015: FIG. 3 shows the illustrative user interface A200 of FIG. 2, with a popup window 300 to notify a user that one or more alerts have been triggered during a quality assurance check).
Response to Arguments
Applicant’s arguments filed 12/23/2025 have been considered but are moot because the arguments do not apply to the reference Chang as used in the current rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
EP 3511941 A1 (Ionasec, Razvan), published July 17, 2019: Method and system for evaluating medical examination results of a patient, computer program and electronically readable storage medium.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HIEP VAN NGUYEN whose telephone number is (571) 270-5211. The examiner can normally be reached Monday through Friday between 8:00 AM and 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason B. Dunham, can be reached at 571-272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HIEP V NGUYEN/Primary Examiner, Art Unit 3686