DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to amendments filed on 09/29/2025. Claims 1, 3, 5, 6, 8-10, 12, 14-16, 18-20, and 26 were amended. Claims 2, 4, 7, 11, 13, and 17 were cancelled. No claims were added. Therefore, claims 1, 3, 5, 6, 8-10, 12, 14-16, and 18-26 are currently pending and have been examined.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3, 5, 6, 8-10, 12, 14-16, and 18-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) and does not include additional elements that either: (1) integrate the abstract idea into a practical application, or (2) provide an inventive concept, i.e., elements that amount to significantly more than the abstract idea. The claims are directed to an abstract idea because, when considered as a whole, the plain focus of the claims is on an abstract idea.
STEP 1
The claims are directed to a method and system which are included in the statutory categories of invention.
STEP 2A PRONG ONE
The claims recite the abstract idea (based on claim 1) of:
A method to generate real-time practitioner guidance and a personalized medical summary (PMS) from a practitioner-patient conversation, the method comprising: capturing an ongoing conversation between a practitioner and a patient; transcribing the conversation into a textual transcription; streaming the transcription, in combination with patient data, to process the transcription and the patient data; generating at least one follow up query or conversational prompts for the practitioner guidance, in real-time, based on the processed transcription and patient data; and providing a structured personalized medical summary (PMS) incorporating the practitioner-patient conversation.
Independent claims 10 and 26 recite similar limitations and, therefore, recite a similar abstract idea.
The claims, as illustrated by the limitations of Claim 1 above, recite an abstract idea within the “certain methods of organizing human activity” grouping — managing personal behavior or relationships or interactions between people including social activities, teaching, and following rules or instructions.
The claims recite providing a personalized medical summary from a transcription of a conversation between a practitioner and a patient. Generating a personalized medical summary from such a transcription is a process that merely organizes human activity, as it involves following rules and instructions to capture the conversation, transcribe the conversation, and generate a summary. It also involves an interaction between a person and a computer, and such interaction qualifies as interaction under the certain methods of organizing human activity grouping. See MPEP 2106.04(a)(2)(II). As such, the claims recite an abstract idea within the category of certain methods of organizing human activity.
The dependent claims 20-25 recite further abstract concepts of organizing human activity because they recite following rules and instructions. For example, claim 20 suggests relevant information to the practitioner related to at least one of potential diagnosis, treatment, planning, follow-up and communication with the patient; claim 21 records and saves at least one of patient data, previously generated PMS and past practitioner-patient conversations; claim 22 recites that the patient data is at least one of current patient condition, patient dental/medical disease history, physical and mental health, past dental/medical treatments, X-rays/scans, medical complaints and list of medication; claim 23 ensures the privacy and confidentiality of patient data; claim 24 retrieves and displays relevant patient information during the practitioner-patient conversation; and claim 25 synchronizes and updates patient data.
STEP 2A PRONG TWO
The claims recite additional elements, beyond those that encompass the abstract idea above, including:
Independent claim 1:
via a recording device
using automated speech recognition
a diagnosis-AI module (DAIM) comprising at least one large language model (LLM) configured to
Dependent claim 3:
the recording device is at least one of voice recorders, smart phones, smart & digital devices, microphones, cameras, audio or video recorder, PC, and digital transcription devices
Dependent claim 5:
the ASR is at least one of, off-the shelf, custom-built or a third-party service
Dependent claim 6:
the at least one LLM comprises a general-purpose large language model (LLM), a fine-tuned LLM trained for medical conversation or a custom-built LLM
Dependent claim 8:
integrating the diagnosis AI-module (DAIM) with a at least one of a conversational interface, chatbot or a voice-based AI-assistant
Dependent claim 9:
the DAIM integration is via by at least one of third-party API integration, file-based integration, screen scraping, and direct database integration
Independent claim 10:
a processor;
a recording device operably coupled to the processor, configured to
an automated speech recognition (ASR) module in communication with the processor, configured to
a non-transitory storage element coupled to the processor, stores encoded instructions that when implemented by the processor, configure the system to
to a diagnosis-AI module (DAIM) comprising at least one large language model (LLM) configured to
Dependent claim 12:
the recording device is at least one of voice recorders, smart phones, smart & digital devices, microphones, cameras, audio or video recorder, and digital transcription devices
Dependent claim 14:
the automated speech recognition is at least one of, an off-the shelf, custom-built or a third-party service
Dependent claim 15:
automated speech recognition is performed by at least one of acoustic modeling-based ASR or neural network-based ASR
Dependent claim 16:
the at least one LLM comprises a general-purpose large language model (LLM), a fine-tuned LLM trained for medical conversation or a custom-built LLM
Dependent claim 18:
integrating the DAIM with a at least one of a conversational interface, chatbot or a voice-based AI-assistant
Dependent claim 19:
the DAIM integration is via by at least one of third-party API integration, file-based integration, screen scraping, and direct database integration
Dependent claim 20:
the DAIM
Dependent claim 21:
a medical record storage module (MRSM) to
Dependent claim 23:
MRSM is securely encrypted to
Dependent claim 24:
a search functionality within the MRSM to
Dependent claim 25:
the MRSM is integrated with electronic health record (EHR) systems to
Independent claim 26:
via a recording device
a diagnosis-AI module (DAIM) comprising at least one large language model (LLM) selected from a general-purpose large language model (LLM), a fine-tuned LLM trained for medical conversation, or a custom-built LLM
by the DAIM
However, these additional elements do not integrate the abstract idea into a practical application of that idea in accordance with considerations laid out by the Supreme Court or the Federal Circuit (see MPEP 2106.05(a)-(c) and (e)). Additional elements integrate the abstract idea into a practical application when they: improve the functioning of a computer or improve any other technology; apply or use a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; apply the judicial exception with, or by use of, a particular machine; effect a transformation or reduction of a particular article to a different state or thing; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. The additional limitations do not integrate the abstract idea into a practical application when they merely serve to link the use of the abstract idea to a particular technological environment or field of use (i.e., merely using the computer as a tool to perform the abstract idea), or when they recite insignificant extra-solution activity (see MPEP 2106.05(f)-(h)).
The recording device, automated speech recognition, AI module, large language model, integration component, processor, non-transitory storage element, medical record storage module, encryption, and search functionality are recited at a high level of generality such that they amount to no more than instructions to apply the abstract idea using generic computer components. These elements merely add instructions to implement the abstract idea on a computer and generally link the abstract idea to a particular technological environment. Nothing in the claims recites specific limitations directed to an improved recording device, automated speech recognition, AI module, large language model, integration component, processor, non-transitory storage element, medical record storage module, encryption, or search functionality. Similarly, the specification is silent with respect to these kinds of improvements. A general purpose computer that applies a judicial exception to computer functions, as is the case here, does not qualify as a particular machine, nor does the recitation of a basic computer impose meaningful limits on the claimed process (see Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 716-17 (Fed. Cir. 2014)). As such, the additional elements recited in the claims do not integrate the abstract medical summary generation process into a practical application of that process.
STEP 2B
The additional elements identified above do not amount to significantly more than the abstract medical summary generation process. The additional structural elements, or combination of elements, in the claims, other than the abstract idea per se, amount to no more than a recitation of generic computer structure. Because the specification describes these additional elements in general terms, without describing particulars, the Examiner concludes that the claim limitations may be broadly but reasonably construed as reciting basic computer components and techniques. The specification describes the elements in a manner indicating that they are sufficiently straightforward that the specification need not describe their particulars in order to satisfy 35 U.S.C. 112. Considered as an ordered combination, the limitations recited in the claims add nothing that is not already present when the steps are considered individually.
The limitations recited in the dependent claims, in combination with those recited in the independent claims, add nothing that integrates the abstract idea into a practical application, or that amounts to significantly more. For example, the limitations of claim 20 (suggests relevant information to the practitioner related to at least one of, potential diagnosis, treatment, planning, follow-up and communication with the patient); claim 21 (record and save at least one of, patient data, previously generated PMS and past practitioner-patient conversations); claim 23 (ensure the privacy and confidentiality of patient data); claim 24 (retrieve and display relevant patient information during the practitioner-patient conversation); and claim 25 (synchronize and update patient data) are directed to the abstract idea of organizing human activity without integrating it into a practical application or amounting to significantly more. The limitations of claim 3 (the recording device is at least one of voice recorders, smart phones, smart & digital devices, microphones, cameras, audio or video recorder, PC, and digital transcription devices); claim 5 (the ASR is at least one of, off-the shelf, custom-built or a third-party service); claim 6 (the at least one LLM comprises a general-purpose large language model (LLM), a fine-tuned LLM trained for medical conversation, or a custom-built LLM and rendering the PMS); claim 8 (integrating the diagnosis AI-module (DAIM) with a at least one of a conversational interface, chatbot or a voice-based AI-assistant); claim 9 (the DAIM integration is via by at least one of third-party API integration, file-based integration, screen scraping, and direct database integration); claim 12 (the recording device is at least one of voice recorders, smart phones, smart & digital devices, microphones, cameras, audio or video recorder, and digital transcription devices); claim 14 (the automated speech recognition is at least one of, an off-the shelf, custom-built or a third-party service); claim 15 (automated speech recognition is performed by at least one of acoustic modeling-based ASR or neural network-based ASR); claim 16 (the at least one LLM comprises a general-purpose large language model (LLM), a fine-tuned LLM trained for medical conversation, or a custom-built LLM to render the PMS); claim 18 (integrating the DAIM with a at least one of a conversational interface, chatbot or a voice-based AI-assistant); claim 19 (the DAIM integration is via by at least one of third-party API integration, file-based integration, screen scraping, and direct database integration); and claim 22 (the patient data is at least one of current patient condition, patient dental/medical disease history, physical and mental health, past dental/medical treatments, X-rays/scans, medical complaints and list of medication) merely serve to further narrow the abstract idea above. As such, the additional elements do not integrate the abstract idea into a practical application, or provide an inventive concept that transforms the claims into a patent eligible invention. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5, 6, 8-10, 12, 14-16, 18-22, 25 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Sivan, et al. (US 2023/0352127 A1) in view of Agassi, et al. (US 2022/0335942 A1).
With regards to claim 1, Sivan teaches a method to generate real-time practitioner guidance and a personalized medical summary (PMS) from a practitioner-patient conversation (see at least ¶ 0015, 0030, 0083, a knowledge engineered auto scribe, that is completely automatic and cloud enabled, listens to the conversation when a doctor see a patient and enters the data into the desired patient electronic health record), the method comprising: capturing, via a recording device, an ongoing conversation between a practitioner and a patient (see at least ¶ 0015, 0030, 0083, a knowledge engineered auto scribe, that is completely automatic and cloud enabled, listens to the conversation when a doctor see a patient and enters the data into the desired patient electronic health record); transcribing the conversation into a textual transcription, using automated speech recognition (ASR) (see at least ¶ 0048, 0092, transcribes the conversation between the patient and doctor using Automatic Speech Recognition (ASR)); streaming the transcription, in combination with patient data, to a diagnosis-AI module (DAIM) comprising at least one large language model (LLM) configured to process the transcription and the patient data (see at least ¶ 0029, artificial intelligence to create auto machine learning component which will be in the physician's computer that listens to the conversation between doctor and the patient, understands the context, creates the document in SOAP (Subjective, Objective, Assessment and Plan) format and completes the charting process in the EHR software without the doctor entering any data; ¶ 0032, training the Artificial Intelligence (AI) module to execute the unique workflow of each Physician. 
It includes deciphering what the doctor and patient is saying in a conversation, breaking down the passage into chunks and extracting medically relevant text from the passage; ¶ 0095, automated system identifies the subject and the intent (the patients need/chief complaint) of the subject and extracts medical concepts associated with intent. Based on the intent the appropriate clinical pathway model that have been trained to look for specific words in the chucked concepts and groups the broken-down chunks i.e., bucket to medically relevant ontologies and sub-sections. Specialty based NLP medical concepts algorithm looks at the un categorized data and maps to medical concepts from using a specialty NLP ML where the data is chunked under the concepts of chief complaints, past history, HPI, diagnosis, treatment, medication and diagnostics are auto identified); and generating a structured personalized medical summary (PMS) incorporating the practitioner-patient conversation (see at least ¶ 0015, entering the data into the desired electronic health record; ¶ 0017, generating a structured response; and sectioning the structured response to the medical text for the various type of clinical comments and documenting it into a Soap Note format which is signed by the medical practitioner after review).
Claims 10 and 26 recite similar limitations and are rejected for the same reasons.
Sivan also teaches, from claims 10 and 26: a processor (see at least ¶ 0029, physician’s computer [processor]); a non-transitory storage element coupled to the processor (see at least ¶ 0016, non-transitory storage medium; a set of executable software instructions); stores encoded instructions that when implemented by the processor (see at least ¶ 0016, non-transitory storage medium; a set of executable software instructions); prompting a diagnosis-AI module (DAIM) comprising at least one large language model (LLM) selected from a general-purpose LLM, a fine-tuned LLM trained for medical conversations, or a custom-built LLM, with the textual transcription in combination with patient data (see at least ¶ 0029, artificial intelligence to create auto machine learning component which will be in the physician's computer that listens to the conversation between doctor and the patient [prompting], understands the context [fine-tuned LLM trained for medical conversation], creates the document in SOAP (Subjective, Objective, Assessment and Plan) format and completes the charting process in the EHR software without the doctor entering any data; ¶ 0032, training the Artificial Intelligence (AI) module to execute the unique workflow of each Physician. It includes deciphering what the doctor and patient is saying in a conversation, breaking down the passage into chunks and extracting medically relevant text from the passage; ¶ 0095, automated system identifies the subject and the intent (the patients need/chief complaint) of the subject and extracts medical concepts associated with intent. Based on the intent the appropriate clinical pathway model that have been trained to look for specific words in the chucked concepts and groups the broken-down chunks i.e., bucket to medically relevant ontologies and sub-sections. 
Specialty based NLP medical concepts algorithm looks at the un categorized data and maps to medical concepts from using a specialty NLP ML where the data is chunked under the concepts of chief complaints, past history, HPI, diagnosis, treatment, medication and diagnostics are auto identified); analyzing, by the DAIM, the transcription and patient data to … and (ii) generate a report comprising a personalized medical summary (PMS) as a summary of the practitioner-patient conversation, the PMS being rendered in multiple structured formats selected from a template-filling format, a do-by-example format, or a free-form summary; and providing the PMS report for use in presentation to the practitioner or patient for clinical decision support, treatment planning, or follow-up (see at least ¶ 0015, entering the data into the desired electronic health record; ¶ 0017, generating a structured response; and sectioning the structured response to the medical text for the various type of clinical comments and documenting it into a Soap Note format [template-filling format] which is signed by the medical practitioner after review; ¶ 0093, unstructured medical text [free-form summary]).
Sivan does not explicitly teach generating at least one follow-up query or conversational prompts for the practitioner guidance, in real-time, based on the processed transcription and patient data and from claim 26 (i) generate at least one follow-up query or conversational prompt for the practitioner in real-time during the consultation, based on context extracted from the transcription and patient data to elicit additional clinically relevant information. Agassi teaches generating at least one follow-up query or conversational prompts for the practitioner guidance, in real-time, based on the processed transcription and patient data and from claim 26 (i) generate at least one follow-up query or conversational prompt for the practitioner in real-time during the consultation, based on context extracted from the transcription and patient data to elicit additional clinically relevant information (see at least ¶ 0060, validation sources, such as a patient's EHR, are used to verify that the conversation captured and output generated are complete and accurate. The one or more clinical concepts may be utilized with the patient's EHR to identify whether the scribe output is valid. By way of example, when asking a patient if they're taking any medications and they reply with “Yes, I'm taking Tylenol once daily”, the medication section of the patient's EHR is analyzed to identify whether Tylenol is listed as a medication. If no, a notification that Tylenol is not currently listed may be provided. An indicator to add Tylenol to the patient's EHR may be provided in the notification). It would have been obvious to one of ordinary skill in the art to combine the natural language conversation understanding system of Agassi with the electronic health record documentation of Sivan with the motivation of clinician efficiency (Agassi, ¶ 0002).
With regards to claim 3, Sivan teaches the method of claim 1, wherein the recording device is at least one of voice recorders, smart phones, smart & digital devices, microphones, cameras, audio or video recorder, PC, and digital transcription devices (see at least ¶ 0029, physician's computer that listens to the conversation between doctor and the patient, understands the context, creates the document in SOAP (Subjective, Objective, Assessment and Plan) format).
Claim 12 recites similar limitations and is rejected for the same reasons.
With regards to claim 5, Sivan teaches the method of claim 1, wherein the ASR is at least one of, off-the shelf, custom-built or a third-party service (see at least ¶ 0048, cross-language speaker diarisation engine is directly integrated into the Automatic Speech Recognition (ASR) pipeline; ¶ 0050, Tightly integrated speaker diarisation and ASR auto machine learning, improve the accuracy of speaker diarisation. The joint modelling approach leverages the inter-dependency between speaker diarisation and ASR to better perform both tasks; ¶ 0051, Diarisation and ASR use a medical featurization machine learning model in parallel that is trained on medical data to provide excellent result due to domain match; ¶ 0052, CLIE (Cross Lingual Inference engine) can handle cross language conversations of up to 15 leading languages of the world including English, Latin, Arabic, French, Germany Spanish, Malaysian Bahasa, Indonesian Bahasa, Hindi, Tamil, Telugu, Kannada and Malayalam [custom-built]).
Claim 14 recites similar limitations and is rejected for the same reasons.
With regards to claim 6, Sivan teaches the method of claim 1, wherein the at least one LLM comprises a general-purpose large language model (LLM), a fine-tuned LLM trained for medical conversation or a custom-built LLM and rendering the PMS (see at least ¶ 0015, entering the data into the desired electronic health record; ¶ 0017, generating a structured response; and sectioning the structured response to the medical text for the various type of clinical comments and documenting it into a Soap Note format which is signed by the medical practitioner after review; ¶ 0029, artificial intelligence to create auto machine learning component which will be in the physician's computer that listens to the conversation between doctor and the patient, understands the context [fine-tuned LLM trained for medical conversation], creates the document in SOAP (Subjective, Objective, Assessment and Plan) format and completes the charting process in the EHR software without the doctor entering any data; ¶ 0032, training the Artificial Intelligence (AI) module to execute the unique workflow of each Physician. It includes deciphering what the doctor and patient is saying in a conversation, breaking down the passage into chunks and extracting medically relevant text from the passage; ¶ 0092, transcribes the conversation between the patient and doctor; ¶ 0095, automated system identifies the subject and the intent (the patients need/chief complaint) of the subject and extracts medical concepts associated with intent. Based on the intent the appropriate clinical pathway model that have been trained to look for specific words in the chucked concepts and groups the broken-down chunks i.e., bucket to medically relevant ontologies and sub-sections. 
Specialty based NLP medical concepts algorithm looks at the un categorized data and maps to medical concepts from using a specialty NLP ML where the data is chunked under the concepts of chief complaints, past history, HPI, diagnosis, treatment, medication and diagnostics are auto identified).
Claim 16 recites similar limitations and is rejected for the same reasons.
With regards to claim 8, Sivan teaches the method of claim 1, further comprising integrating the diagnosis AI-module (DAIM) with a at least one of a conversational interface, chatbot or a voice-based AI-assistant (see at least ¶ 0029, artificial intelligence to create auto machine learning component which will be in the physician's computer that listens to the conversation between doctor and the patient, understands the context [conversational interface], creates the document in SOAP (Subjective, Objective, Assessment and Plan) format and completes the charting process in the EHR software without the doctor entering any data; ¶ 0032, training the Artificial Intelligence (AI) module to execute the unique workflow of each Physician. It includes deciphering what the doctor and patient is saying in a conversation, breaking down the passage into chunks and extracting medically relevant text from the passage [voice-based AI-assistant]; ¶ 0048, cross-language speaker diarisation engine is directly integrated into the Automatic Speech Recognition (ASR) pipeline; ¶ 0050, Tightly integrated speaker diarisation and ASR auto machine learning, improve the accuracy of speaker diarisation. The joint modelling approach leverages the inter-dependency between speaker diarisation and ASR to better perform both tasks; ¶ 0051, Diarisation and ASR use a medical featurization machine learning model in parallel that is trained on medical data to provide excellent result due to domain match; ¶ 0052, CLIE (Cross Lingual Inference engine) can handle cross language conversations of up to 15 leading languages of the world including English, Latin, Arabic, French, Germany Spanish, Malaysian Bahasa, Indonesian Bahasa, Hindi, Tamil, Telugu, Kannada and Malayalam).
Claim 18 recites similar limitations and is rejected for the same reasons.
With regards to claim 9, Sivan teaches the method of claim 8, wherein the DAIM integration is via by at least one of third-party API integration, file-based integration, screen scraping, and direct database integration (see at least ¶ 0093, After transcribing the conversation, the present invention infers medically relevant data from the unstructured text passage by passing through a plurality of databases [direct database integration]. The unstructured voice text passes through ML's that validate medical text patterns, spell checking, split and merge corrector models for formatting data accurately in the language spoken).
Claim 19 recites similar limitations and is rejected for the same reasons.
With regards to claim 15, Sivan teaches the system of claim 10, wherein automated speech recognition is performed by at least one of acoustic modeling-based ASR or neural network-based ASR (¶ 0048, cross-language speaker diarisation engine is directly integrated into the Automatic Speech Recognition (ASR) pipeline; ¶ 0049, Online speaker cross language diarisation machine learning uses neural network-based cross language diarisation).
With regards to claim 20, Sivan teaches the system of claim 10, wherein the DAIM suggests relevant information to the practitioner related to at least one of, potential diagnosis, treatment, planning, follow-up and communication with the patient (see at least ¶ 0095, automated system identifies the subject and the intent (the patients need/chief complaint) of the subject and extracts medical concepts associated with intent. Based on the intent the appropriate clinical pathway model that have been trained to look for specific words in the chucked concepts and groups the broken-down chunks i.e., bucket to medically relevant ontologies and sub-sections. Specialty based NLP medical concepts algorithm looks at the un categorized data and maps to medical concepts from using a specialty NLP ML where the data is chunked under the concepts of chief complaints, past history, HPI, diagnosis, treatment, medication and diagnostics are auto identified; ¶ 0098, the response is sectioned to the medical text for the various type of clinical comments like discharge summary, soap notes, progress notes as per the physician's way of doing and is put into a documented format (SOAP NOTE) [suggests relevant information]. After reviewing the response in the SOAP format, the doctor signs the document).
With regards to claim 21, Sivan teaches the system of claim 10, further comprising a medical record storage module (MRSM) to record and save at least one of, patient data, previously generated PMS and past practitioner-patient conversations (see at least ¶ 0033, the medically relevant text is mapped to different ontologies and grouped into a document under different subsections with respective codes (ICD, CPT, SNOMED, RXNORM) [patient data] in a matter of minutes. The document after physician's approval is entered into the physician's EHR software without integration).
With regards to claim 22, Sivan teaches the system of claim 21, wherein the patient data is at least one of current patient condition, patient dental/medical disease history, physical and mental health, past dental/medical treatments, X-rays/scans, medical complaints and list of medication (see at least ¶ 0095, automated system identifies the subject and the intent (the patients need/chief complaint) of the subject and extracts medical concepts associated with intent. Based on the intent the appropriate clinical pathway model that have been trained to look for specific words in the chucked concepts and groups the broken-down chunks i.e., bucket to medically relevant ontologies and sub-sections. Specialty based NLP medical concepts algorithm looks at the un categorized data and maps to medical concepts from using a specialty NLP ML where the data is chunked under the concepts of chief complaints, past history, HPI, diagnosis, treatment, medication and diagnostics are auto identified).
With regards to claim 25, Sivan teaches the system of claim 21, wherein the MRSM is integrated with electronic health record (EHR) systems to synchronize and update patient data (see at least ¶ 0033, the medically relevant text is mapped to different ontologies and grouped into a document under different subsections with respective codes (ICD, CPT, SNOMED, RXNORM) [patient data] in a matter of minutes. The document, after the physician's approval, is entered into the physician's EHR software without integration [synchronize]; see at least ¶ 0075, follow-up documentation [update]).
Claims 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Sivan, et al. (US 2023/0352127 A1) in view of Agassi, et al. (US 2022/0335942 A1) in further view of McEwing (US 2020/0043579 A1).
With regards to claim 23, Sivan fails to teach the system of claim 21, wherein the MRSM is securely encrypted to ensure the privacy and confidentiality of patient data. McEwing teaches the system of claim 21, wherein the MRSM is securely encrypted to ensure the privacy and confidentiality of patient data (see at least ¶ 0377, the current health care provider can access the patient's EHR by several methods. The health care provider can create a search request, identifying the requestor, the patient name, DOB, social security number, and, if applicable, a hospital-assigned ID number or similar. There should be included some type of security, such as encryption, to assure patient confidentiality is not breached). It would have been obvious to one of ordinary skill in the art to combine the patient history search of McEwing with the electronic health record documentation of Sivan with the motivation of facilitating accurate diagnosis and initiation of treatment and testing (McEwing, ¶ 0002).
With regards to claim 24, Sivan fails to teach the system of claim 21, further comprising a search functionality within the MRSM to retrieve and display relevant patient information during the practitioner-patient conversation. McEwing teaches the system of claim 21, further comprising a search functionality within the MRSM to retrieve and display relevant patient information during the practitioner-patient conversation (see at least ¶ 0041, the current health care provider can receive the information of past (and alphanumeric-code-indexed) medical examination, etc., in real time during, for instance, the current initial examination of the patient; ¶ 0379, minimize the time and effort required of a current health care provider to obtain the past diagnosis/treatment/test EHR records for a patient relative to the current diagnosis (or for the process of establishing or confirming a diagnosis or treatment plan). This may be accomplished in several ways, including but not limited to the current health care provider being provided an option to search within a patient's EHR of past medical events during the diagnosing process. It will be appreciated that the patient may not be able to identify past health care providers or provide reliable dates as to when the past treatment was furnished. Also, the current health care provider may search within the identified EHR for a past record of symptoms or conditions now observed in the patient. The health care provider may enter text of observed symptoms/conditions). It would have been obvious to one of ordinary skill in the art to combine the patient history search of McEwing with the electronic health record documentation of Sivan with the motivation of facilitating accurate diagnosis and initiation of treatment and testing (McEwing, ¶ 0002).
Response to Arguments
Applicant's arguments with respect to the 35 USC § 101 rejections set forth in the previous office action have been considered, but are not persuasive. In an effort to advance prosecution, the Examiner has provided a response to applicant's arguments. Applicant argues:
Applicant argues the limitations integrate any exception into a practical application of that exception and are significantly more because it provides an improvement to the technology.
Applicant argues the limitations are subject matter eligible for similar reasons as USPTO Examples 38 and 42.
Applicant argues the limitations are subject matter eligible for similar reasons to McRO.
In response to Applicant’s argument that the limitations integrate any exception into a practical application of that exception and are significantly more because they provide an improvement to the technology, the Examiner respectfully disagrees. The application discloses the recording device, automated speech recognition, AI module, large language model, integration component, processor, non-transitory storage element, medical record storage module, encryption, and search functionality, which are recited at a high level of generality such that they amount to no more than instructions to apply the abstract idea using generic computer components. These elements merely add instructions to implement the abstract idea on a computer, and generally link the abstract idea to a particular technological environment. Nothing in the claim recites specific limitations directed to an improved recording device, automated speech recognition, AI module, large language model, integration component, processor, non-transitory storage element, medical record storage module, encryption, or search functionality. Similarly, the specification is silent with respect to these kinds of improvements. Furthermore, the specification discloses “[p]oor communication can lead to a medical error when a patient does not report their allergies or health history to a clinician, or when a clinician does not correctly or thoroughly record a medical history or medication list in patient's case …[p]roper medical record documentation …can be a time-consuming and tedious process.” See as-filed specification, ¶ 0008, 0014. Therefore, it appears the Applicant is applying generic computer components to the tasks of patient communication and patient record documentation to make a clinician more efficient.
“As we have explained, ‘the fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter.’ Bancorp Servs., 687 F.3d at 1278.” FairWarning IP, LLC v. Iatric Systems, _ F.3d _, 120 U.S.P.Q.2d 1293 (Fed. Cir. 2016).
In response to Applicant’s argument that the limitations are subject matter eligible for similar reasons as USPTO Examples 38 and 42, the Examiner respectfully disagrees. First, Example 38 involves simulating an analog audio mixer and does not have any nexus to the instant claim limitations. Therefore, Example 38 is considered to be irrelevant to the claims presented in this application. Second, Example 42 involves converting non-standardized patient information into a standardized format, automatically generating a message containing the updated information about the patient’s condition, and transmitting the message to all of the users over the computer network in real time so that each user has immediate access to the up-to-date information. The claim as a whole integrates the method of organizing human activity into a practical application by allowing remote users to share information in real time in a standardized format regardless of the format in which the original information was input by the user. This was considered an improvement in the technology because it ensures that each of a group of health care providers is always given immediate notice of and access to changes so they can readily adapt their own medical diagnostic and treatment strategies in accordance with other providers’ actions. In contrast, the instant invention merely considers the problems to be that “[p]oor communication can lead to a medical error when a patient does not report their allergies or health history to a clinician, or when a clinician does not correctly or thoroughly record a medical history or medication list in patient's case …[p]roper medical record documentation …can be a time-consuming and tedious process” (See as-filed specification, ¶ 0008, 0014). And it simply uses conventional computer elements to make a clinician less error-prone and more efficient. This does not equate to the same reasons Example 42 was considered to be subject matter eligible.
With regards to Applicant’s argument that the limitations are subject matter eligible for similar reasons to McRO, the Examiner respectfully disagrees. McRO automated a process that could not have formerly been automated because there was “no evidence the [manual] process previously used by the animators is the same as the process required by the claims.” McRO, Inc. v. Bandai Namco Games America Inc., 837 F.3d 1299, 120 U.S.P.Q.2d 1091 (Fed. Cir. 2016). In contrast, the instant invention is trying to automate a process that it clearly states is typically performed by clinicians, but is fraught with errors and inefficiency. See specification, ¶ 0008, 0014. The instant invention is merely applying conventional computer components to automate a process that a user can perform manually, which is dissimilar to the new process defined by McRO.
Applicant's arguments with respect to the 35 USC § 102 and § 103 rejections set forth in the previous office action have been considered, but are moot in view of the new grounds of rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lipton, et al. (US 2022/0375605 A1) which discloses systems and methods for automatic generation, by a data processing system, of formatted annotations of conversations, such as conversations between patients and doctors. The annotations are formatted to satisfy requirements of SOAP documentation that is present in EHR. The annotations describe the conversation and include specific information summarizing the conversation. The data processing system is configured to receive a dataset including conversation transcripts, post-visit summaries, corresponding supporting evidence (in the transcript), and structured labels. The data processing system is configured to recognize relevant diagnoses and abnormalities in the review of organ systems (RoS).
Gifford, et al. (US 2015/0379200 A1) which discloses methods, systems, and computer-readable media for facilitating the voice-assisted creation of a shorthand clinical note on a mobile or tablet device. A microphone on the device is used to capture a conversation between a clinician and a patient. Clinically relevant concepts in the conversation are identified, extracted, and temporarily presented on the device's touch screen interface. The concepts are selectable, and upon selection, the selected concept is populated into a clinical note display area of the touch screen interface. The shorthand clinical note may be used as a memory prompt for the later creation of a more comprehensive clinical note.
Klann JG, Szolovits P. An intelligent listening framework for capturing encounter notes from a doctor-patient dialog. BMC Med Inform Decis Mak. 2009 Nov 3;9 Suppl 1(Suppl 1):S3. doi: 10.1186/1472-6947-9-S1-S3. PMID: 19891797; PMCID: PMC2773918 which discloses capturing accurate and machine-interpretable primary data from clinical encounters is a challenging task, yet critical to the integrity of the practice of medicine. We explore the intriguing possibility that technology can help accurately capture structured data from the clinical encounter using a combination of automated speech recognition (ASR) systems and tools for extraction of clinical meaning from narrative medical text. Our goal is to produce a displayed evolving encounter note, visible and editable (using speech) during the encounter.
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Joey Burgess whose telephone number is (571)270-5547. The examiner can normally be reached Monday through Friday 9-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marc Jimenez, can be reached at 571-272-4320. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH D BURGESS/ Primary Examiner, Art Unit 3681