Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This is in response to Applicant’s amendment which was filed on 08/21/2025 and has been entered. Claims 1-8 and 20-22 are pending.
Response to Arguments
Applicant’s arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-8 and 20-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1 and 20 recite the limitation "the gathered information". There is insufficient antecedent basis for this limitation in the claim. It is unclear what "the gathered information" refers to, since no information has been previously defined in the claim. Claims 2-8, 21 and 22 depend from claim 1 or claim 20 and are rejected for being dependent upon a rejected base claim.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 2, 3, 4, 5, 8, 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over US Publication No. 2005/0261942 (“Wheeler”) in view of US Publication No. 20190251417 (“Bennett”).
Regarding claim 1, Wheeler discloses a medical practice management system operating in a memory, incorporated for and within a single professional practice, the medical practice management system comprising:
a central management unit (server 20);
a kiosk communicatively coupled to the central management unit, the kiosk configured to be located at a physical location of the single professional practice (fig. 1, check-in kiosk 10a-n); and
an interactive voice unit (claim 1, an interactive voice response system) communicatively coupled to the central management unit ([0040]); and
wherein the medical practice management system is incorporated within and for a single professional practice, and, when the patient is at the physical location of the single professional practice, the kiosk is configured to interface locally with the patient ([0031] Kiosks 10a-10n are located, for example, in the reception area of a medical facility, such as a hospital, clinic, doctor's office, etc. When a patient enters a facility that is equipped with the system, the patient is greeted by a number of individual self-service kiosks).
Wheeler does not specify the interactive voice unit performing natural language processing, wherein: the interactive voice unit accepts phone calls from a patient and generates natural conversations, with each patient response being analyzed using an artificial intelligence unit trained on a plurality of conversations in a specific medical discipline to generate at least one new question to present to the patient based on a complete spoken response of the patient to a prior question and a determined intent of that complete spoken response, and the central management unit performs at least one task based on the gathered information.
In the same field of endeavor, Bennett et al. discloses techniques for enabling an artificial intelligence system to infer grounded intent from user input, wherein a user engages in a voice conversation with a digital assistant running on a device, and to automatically suggest and/or execute actions associated with the predicted intent. For example, in Fig. 4, following User A's input 120, the content 410 of dialog box 405 shows that the device has inferred various parameters of User A's intent to purchase movie tickets based on block 120, e.g., the identity of the movie, possible desired showing times, a preferred movie theater, etc. Based on the inferred intent, the device may have proceeded to query the Internet for local movie showings, e.g., "would you like me to book two tickets to the 345PM showing?" Note that the text may correspond to the content of a speech exchange ([0024]). From conversation 300, it will be appreciated that an intent inference system may desirably supplement and customize any identified actionable task with implicit contextual details, e.g., as may be available from the user's cumulative interactions with the device, parameters of the user's digital profile, parameters of a digital profile of another user with whom the user is currently communicating, and/or parameters of one or more cohort models ([0027]). While Bennett uses keywords to identify action items, the context of the customer's entire communication is used for a comprehensive analysis of the intent.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the voice response system described in Bennett to receive calls from the user when the user is accessing the system while not physically at a kiosk. Additionally, it would have been obvious to incorporate the voice response system's ability to analyze the patient's full spoken response to generate further questions to present to the patient, using Bennett's AI dialog method of intent inference, wherein a device may infer certain types of user intent by analyzing the content of user communications and further take relevant and timely actions responsive to the inferred intent without requiring the user to issue any explicit commands.
Regarding claim 2, Wheeler in view of Bennett discloses the medical practice management system of claim 1, wherein the at least one task is scheduling an appointment with the patient and a doctor (Bennett, [0026] To execute the task of making an appointment, DA 304 is further able to retrieve and perform the specific actions required. For example, DA 304 may automatically launch an appointment scheduling application).
Regarding claim 3, Wheeler in view of Bennett discloses the medical practice management system of claim 2, wherein the central management unit sends appointment reminders and receives appointment confirmations from the patient (Wheeler, [0037] the system permits patients to retrieve data regarding future pending appointments during the check-in procedure as a reminder to patients).
Regarding claim 4, Wheeler in view of Bennett discloses the medical practice management system of claim 2, wherein the central management unit allows the patient to reschedule appointments with the doctor (Bennett, rescheduling an appointment falling under the same procedure as scheduling, [0026] To execute the task of making an appointment, DA 304 is further able to retrieve and perform the specific actions required. For example, DA 304 may automatically launch an appointment scheduling application).
Regarding claim 5, Wheeler in view of Bennett discloses the medical practice management system of claim 1, wherein records related to the patient are stored, updated and distributed from the central management unit (Wheeler, [0036] each kiosk, 10a-10n, is connected via bi-directional communications links to various medical records networks, DB1-DBn).
Regarding claim 8, Wheeler in view of Bennett discloses the medical practice management system of claim 1, wherein the central management unit analyzes information gathered from the patient to customize and personalize the responses given to the patient (Bennett, [0027] From conversation 300, it will be appreciated that an intent inference system may desirably supplement and customize any identified actionable task with implicit contextual details, e.g., as may be available from the user's cumulative interactions with the device).
Regarding claims 21 and 22, Wheeler in view of Bennett discloses the medical practice management system of claim 1, wherein the medical practice management system is configured to interface with at least one additional software system utilized by the single professional practice (Wheeler, [0005-0006] various medical databases and legacy systems are interfaced to provide medical data. A brief description of a few exemplary existing databases and systems is provided below. Various other databases and systems can be interfaced in accordance with the present invention to provide additional access to other data when necessary, such as the described Composite Health Care System (CHCS)).
Claims 6, 7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US Publication No. 2005/0261942 (“Wheeler”) in view of US Publication No. 20190251417 (“Bennett”) and further in view of WO/2020113506 (“Zhang et al.”).
Regarding claim 6, Wheeler does not specify the medical practice management system of claim 5, wherein the central management unit is configured to allow a doctor to view, add, amend, and delete patient records.
In a similar field of endeavor, Zhang et al. discloses that doctors and other users can access the electronic medical record data stored on the hospital local server 120 of the affiliated hospital through the doctor terminal 130, and can browse, add, modify and delete the electronic medical record data stored on the hospital local server 120 of the affiliated hospital, and upload the medical record change record generated by the operation to the hospital local server 120 ([0027]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Wheeler such that the patient records accessed by the patient through the kiosks, as well as the voice recognition information, can be viewed by doctors, and also added to, modified and deleted, as disclosed by Zhang et al., in order to dynamically maintain the patient database.
Regarding claim 7, Wheeler discloses the medical practice management system of claim 6, wherein the kiosk gathers information from the patient that is used to update the patient's record.
Regarding claim 20, Wheeler discloses a medical practice management system operating in a memory, incorporated for and within a single professional practice, the medical practice management system comprising:
a central management unit (server 20);
a kiosk communicatively coupled to the central management unit, the kiosk configured to be located at a physical location of the single professional practice (fig. 1, check-in kiosk 10a-n); and
an interactive voice unit (claim 1, an interactive voice response system) communicatively coupled to the central management unit ([0040]); and
wherein the medical practice management system is incorporated within and for a single professional practice, and, when the patient is at the physical location of the single professional practice, the kiosk is configured to interface locally with the patient ([0031] Kiosks 10a-10n are located, for example, in the reception area of a medical facility, such as a hospital, clinic, doctor's office, etc. When a patient enters a facility that is equipped with the system, the patient is greeted by a number of individual self-service kiosks).
Wheeler does not specify the interactive voice unit performing natural language processing, wherein: the interactive voice unit accepts phone calls from a patient and generates natural conversations, with each patient response being analyzed using an artificial intelligence unit trained on a plurality of conversations in a specific medical discipline to generate at least one new question to present to the patient based on a complete spoken response of the patient to a prior question and a determined intent of that complete spoken response, the central management unit performs at least one task based on the gathered information, and records related to the patient are stored, updated and distributed from the central management unit.
In the same field of endeavor, Bennett et al. discloses techniques for enabling an artificial intelligence system to infer grounded intent from user input and to automatically suggest and/or execute actions associated with the predicted intent. For example, in Fig. 4, following User A's input 120, the content 410 of dialog box 405 shows that the device has inferred various parameters of User A's intent to purchase movie tickets based on block 120, e.g., the identity of the movie, possible desired showing times, a preferred movie theater, etc. Based on the inferred intent, the device may have proceeded to query the Internet for local movie showings, e.g., "would you like me to book two tickets to the 345PM showing?" Note that the text may correspond to the content of a speech exchange ([0024]). From conversation 300, it will be appreciated that an intent inference system may desirably supplement and customize any identified actionable task with implicit contextual details, e.g., as may be available from the user's cumulative interactions with the device, parameters of the user's digital profile, parameters of a digital profile of another user with whom the user is currently communicating, and/or parameters of one or more cohort models ([0027]). While Bennett uses keywords to identify action items, the context of the customer's entire communication is used for a comprehensive analysis of the intent.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the voice response system described in Bennett to receive calls from the user when the user is accessing the system while not physically at a kiosk. Additionally, it would have been obvious to incorporate the voice response system's ability to analyze the patient's full spoken response to generate further questions to present to the patient, using Bennett's AI dialog method of intent inference, wherein a device may infer certain types of user intent by analyzing the content of user communications and further take relevant and timely actions responsive to the inferred intent without requiring the user to issue any explicit commands.
Wheeler does not specify the central management unit is configured to allow a doctor to view, add, amend, and delete patient records.
In a similar field of endeavor, Zhang et al. discloses that doctors and other users can access the electronic medical record data stored on the hospital local server 120 of the affiliated hospital through the doctor terminal 130, and can browse, add, modify and delete the electronic medical record data stored on the hospital local server 120 of the affiliated hospital, and upload the medical record change record generated by the operation to the hospital local server 120 ([0027]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Wheeler such that the patient records accessed by the patient through the kiosks, as well as the voice recognition information, can be viewed by doctors, and also added to, modified and deleted, as disclosed by Zhang et al., in order to dynamically maintain the patient database.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIRAPON TULOP whose telephone number is (571)270-7491. The examiner can normally be reached Monday to Friday, 10:00AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIRAPON TULOP/Examiner, Art Unit 2693