DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 65 - 84 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
When considering subject matter eligibility under 35 USC 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.
If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea).
Specifically, claims 65 - 84 are directed to a method/system and thus fall within at least one of the four statutory categories of invention.
Claims 65 - 84 recite steps of observation, evaluation, and judgment that can be practically performed by a human, either mentally or with the use of pen and paper.
The limitation of “detect a state of an electronic record associated with the subject; apply the domain and the audio data as input to at least one machine learning model to cause the at least one machine learning model to generate speech data representative of the audio data that comprises (1) a command for navigation of the electronic record and (2) a value for entry in the electronic record” in claims 65 - 84 is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, but for the recitation of generic computer components. That is, other than reciting the “one or more processors” and “machine learning model,” nothing in the claim element precludes the steps from practically being performed in the human mind.
The mere nominal recitation of one or more generic processors and a machine learning model does not take the claim limitations out of the mental processes grouping.
If a claim limitation, under its broadest reasonable interpretation, covers mental processes but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind including an observation, evaluation, judgment, and opinion). Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements “detect a domain of a procedure being performed on a subject; receive audio data during the procedure; select a field of the electronic record based at least on the state and the command; assign the value to the field.”
The limitation “detect a domain of a procedure being performed on a subject; receive audio data during the procedure” amounts to data-gathering steps, which are considered insignificant extra-solution activity (see MPEP 2106.05(g)).
The limitation “select a field of the electronic record based at least on the state and the command; assign the value to the field” represents extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to generic transmission and presentation of collected and analyzed data (see MPEP 2106.05(g)).
The claimed “one or more generic processors, machine learning model” are recited at a high level of generality and are merely invoked as tools to perform operations on an existing electronic health record.
Accordingly, these additional elements do not impose any meaningful limits on practicing the abstract idea and therefore fail to integrate the abstract idea into a practical application. See MPEP 2106.05(g).
The insignificant extra-solution activities identified above, which include the data-gathering (receiving and selecting) and assigning steps, are recognized by the courts as well-understood, routine, and conventional activities when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See MPEP 2106.05(d)(II): (i) receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); (v) presenting (displaying) offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93. The claims are not patent eligible.
Claims 65 – 84 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using one or more generic processors and a machine learning model to perform the detecting, applying, selecting, and assigning steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
Even when considered in combination, these additional elements (one or more generic processors and a machine learning model) represent mere instructions to apply an exception and insignificant extra-solution activity, which do not provide an inventive concept.
Claims 65 - 84, as a whole, do not amount to significantly more than the abstract idea itself. This is because the claims do not effect an improvement to the functioning of a computer itself, and the claims do not move beyond generally linking the use of the abstract idea to a particular technological environment.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 65 - 84 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Strader et al. (US PAP 2022/0115134).
As per claim 65, Strader et al. teach a method of operating a voice assistant, comprising:
detecting, by one or more processors, a domain of a procedure being performed on a subject (“The medical topics relating to the patient could be such things as symptoms and attributes thereof such as onset, tempo, severity, location, etc., medications, complaints, etc.”; paragraphs 5 – 10);
receiving, by the one or more processors, audio data during the procedure (“audio recording of a patient-healthcare provider conversation”; paragraphs 5 – 10);
detecting, by the one or more processors, a state of an electronic record associated with the subject (“the electronic health record of a patient, including a multitude of tabs for showing various component parts of the record (medications, family history, vital signs, prior procedures, prior notes, etc.)”; paragraph 54);
applying, by the one or more processors, the domain and the audio data as input to at least one machine learning model to cause the at least one machine learning model to generate speech data representative of the audio data that comprises (1) a command for navigation of the electronic record and (2) a value for entry in the electronic record (“Once the healthcare provider has completed the process of reviewing and editing the transcript and note it can be downloaded and stored locally, e.g., in the electronic health record for the patient in the data store 212 shown in FIG. 2 or on the hard drive of the workstation 210.”; paragraphs 54 – 56);
selecting, by the one or more processors, a field of the electronic record based at least on the state and the command; and assigning, by the one or more processors, the value to the field (“The Note region 314 includes a history of present illness, which is generated from data in the electronic health record and/or from speech generated in the visit with the provider. The note includes current physical examination data, such as blood pressure as indicated at 316. The transcript area also includes a listing of current examination data, such as pulse and weight. The pulse and weight data (from recent vital signs in the patient's electronic health record) is generated in response to the highlighted passage at the top of the transcript where the doctor states “I'd like to take a look at the swelling.”; paragraph 42).
As per claim 75, Strader et al. teach a system, comprising:
one or more processors to (paragraph 38):
detect a domain of a procedure being performed on a subject (“The medical topics relating to the patient could be such things as symptoms and attributes thereof such as onset, tempo, severity, location, etc., medications, complaints, etc.”; paragraphs 5 – 10);
receive audio data during the procedure (“audio recording of a patient-healthcare provider conversation”; paragraphs 5 – 10);
detect a state of an electronic record associated with the subject (“the electronic health record of a patient, including a multitude of tabs for showing various component parts of the record (medications, family history, vital signs, prior procedures, prior notes, etc.)”; paragraph 54);
apply the domain and the audio data as input to at least one machine learning model to cause the at least one machine learning model to generate speech data representative of the audio data that comprises (1) a command for navigation of the electronic record and (2) a value for entry in the electronic record (“Once the healthcare provider has completed the process of reviewing and editing the transcript and note it can be downloaded and stored locally, e.g., in the electronic health record for the patient in the data store 212 shown in FIG. 2 or on the hard drive of the workstation 210.”; paragraphs 54 – 56);
select a field of the electronic record based at least on the state and the command; and assign the value to the field (“The Note region 314 includes a history of present illness, which is generated from data in the electronic health record and/or from speech generated in the visit with the provider. The note includes current physical examination data, such as blood pressure as indicated at 316. The transcript area also includes a listing of current examination data, such as pulse and weight. The pulse and weight data (from recent vital signs in the patient's electronic health record) is generated in response to the highlighted passage at the top of the transcript where the doctor states “I'd like to take a look at the swelling.”; paragraph 42).
As per claims 66, 76, Strader et al. further disclose detecting the domain comprises: identifying, by the one or more processors, an identifier of at least one of the electronic record or the subject; and selecting, by the one or more processors, the domain based at least on the identifier (“allow access to the audio recording and the transcript and at the same time navigating to other patient data. In this example, the workstation shows the electronic health record of a patient, including a multitude of tabs for showing various component parts of the record (medications, family history, vital signs, prior procedures, prior notes, etc.).”; paragraph 54).
As per claims 67, 77, Strader et al. further disclose the state comprises a location in the electronic record, the method further comprising applying the state as input to the at least one speech model to cause the at least one speech model to generate the speech data (“The result of the application of the named entity recognition model 112 as applied to the text generated by the speech to text conversion model 110…for note generation and classification of highlighted words or phrases into different regions or fields of a note, as indicated at 114.”; paragraphs 33 – 35).
As per claims 68, 78, Strader et al. further disclose the at least one speech model comprises one or more neural networks (Abstract).
As per claims 69, 79, Strader et al. further disclose the audio data comprises a duration of audio of at least five seconds (“patient-healthcare provider conversation” should last more than five seconds; paragraphs 33, 34).
As per claims 70, 80, Strader et al. further disclose the at least one speech model is configured using training data that includes at least one of noise or a speed change (“The machine learning models are trained from labeled training data such that as patients discuss their issues or symptoms, a suggested problem list is generated and displayed on the workstation to support doctors in decision making”; paragraph 52).
As per claims 71, 81, Strader et al. further disclose the at least one speech model is configured using training data that includes a first subset of training data having context information corresponding to the domain and a second subset of training data not having context information (“transcripts are annotated (i.e., highlighted) with contextual information to promote trustworthiness and credibility.”; paragraph 62).
As per claims 72, 82, Strader et al. further disclose the domain comprises at least one of a dental, restorative, surgical, medical, cardiological, or gastrointestinal domain (“The medical topics relating to the patient could be such things as symptoms and attributes thereof such as onset, tempo, severity, location, etc., medications, complaints, etc…the electronic health record of a patient, including a multitude of tabs for showing various component parts of the record (medications, family history, vital signs, prior procedures, prior notes, etc.)”; paragraphs 5 – 10, 54).
As per claims 73, 83, Strader et al. further disclose selecting the field of the electronic record comprising generating one or more HTML commands corresponding to the command to navigate to the field, the one or more HTML commands representative of at least one of a keystroke or a mouseclick to apply to an interface of a client device on which the electronic record is accessible (“the workstation 210 (which may be present at the location, e.g., in the physician's office during the visit with the patient) may take the form of a desktop computer which includes an interface in the form of a display, a keyboard 214 and a mouse 216…the doctor can use the equivalent quick keys (“smart phrases” or “dot phrases”)”; paragraphs 36, 51).
As per claims 74, 84, Strader et al. further disclose assigning the value to the field comprises generating one or more HTML commands corresponding to the value and representative of at least one of a keystroke or a mouseclick to apply to an interface of a client device on which the electronic record is accessible (“the workstation 210 (which may be present at the location, e.g., in the physician's office during the visit with the patient) may take the form of a desktop computer which includes an interface in the form of a display, a keyboard 214 and a mouse 216…the doctor can use the equivalent quick keys (“smart phrases” or “dot phrases”)”; paragraphs 36, 45, 51).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wolf et al. teach intraoperative surgical event summary. Palakodety et al. teach automated generation of transcripts, and automatic information extraction. D’Agostino et al. teach generation and transmission of operative notes. Sivan et al. teach method for automatic electronic health record documentation.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEONARD SAINT-CYR whose telephone number is (571) 272-4247. The examiner can normally be reached Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil can be reached at (571)272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LEONARD SAINT-CYR/ Primary Examiner, Art Unit 2658