Prosecution Insights
Last updated: April 19, 2026

Application No. 17/874,693
Handsfree Communication System and Method
Status: Final Rejection (§103)

Filed: Jul 27, 2022
Examiner: RASNIC, HUNTER J
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Edera L3C
OA Round: 4 (Final)

Grant Probability: 11% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 4y 7m
Grant Probability With Interview: 32%
Examiner Intelligence

Grants only 11% of cases
Career Allow Rate: 11% (9 granted / 81 resolved; -40.9% vs TC avg)

Strong +20% interview lift
Interview Lift: +20.5% (allowance rate on resolved cases with an interview vs. without)

Typical timeline
Avg Prosecution: 4y 7m (41 currently pending)

Career history
Total Applications: 122 (across all art units)

Statute-Specific Performance

§101: 39.1% (-0.9% vs TC avg)
§103: 37.3% (-2.7% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)

Deltas are measured against a Tech Center average estimate • Based on career data from 81 resolved cases
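The dashboard's headline figures are simple arithmetic over the examiner's career counts. A minimal sketch that reconstructs them (the reported deltas are taken at face value; rounding in the dashboard display accounts for the small differences):

```python
# Reconstruct the headline dashboard figures from the raw career counts.
granted, resolved = 9, 81                      # examiner career totals

allow_rate = granted / resolved                # ~0.111 -> the "11% Career Allow Rate"
tc_gap = -0.409                                # reported "-40.9% vs TC avg"
implied_tc_avg = allow_rate - tc_gap           # ~0.520 -> implied TC 3600 average

interview_lift = 0.205                         # reported "+20.5% Interview Lift"
with_interview = allow_rate + interview_lift   # ~0.316 -> the "32% With Interview" figure

print(f"{allow_rate:.1%} {implied_tc_avg:.1%} {with_interview:.1%}")
```

The 32% "With Interview" figure is the career allow rate plus the interview lift, which is why the dashboard pairs the two numbers.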

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 15 September 2025 and 06 January 2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDSs are being considered by the Examiner in this Office Action.

Response to Amendment

Claims 1-9, 11-19, & 21-29 were previously pending in this application. The amendment filed 17 November 2025 has been entered and the following has occurred: Claims 1, 11, & 21 have been amended. No claims have been added or cancelled. Claims 1-9, 11-19, & 21-29 remain pending in the application.

Claim Analysis - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The claims recite subject matter within a statutory category as a process (claims 1-9), machine (claims 21-29), and manufacture (claims 11-19) (Subject Matter Eligibility (SME) Test Step 1: Yes), which recite steps of: interfacing a generic virtual assistant with a medical management system; monitoring the diction of a medical specialist using the generic virtual assistant; processing at least a portion of the diction to identify at least one task to be performed within a medical management system; and in response to detecting at least one task, effectuating the at least one task on the medical management system, wherein effectuating the at least one task on the medical management system includes commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface to effectuate the at least one task on the medical management system; wherein commandeering the local user interface of the medical management system includes processing the at least one task on the local user interface and rendering remote manipulation of the local user interface by the virtual assistant during the effectuating of the task on the medical management system; wherein the at least one task includes a command for controlling a hospital bed, wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed.

These steps of interfacing a generic virtual assistant with a medical management system, monitoring the diction of a medical specialist using the generic virtual assistant, effectuating the at least one task on the medical management system, and commandeering a local user interface through remote manipulation of the local user interface to effectuate the at least one task, as drafted, under the broadest reasonable interpretation (BRI), cover performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting steps as performed by the generic computer components, nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the "interfacing a generic virtual assistant with a medical management system" language, interfacing a generic virtual assistant in the context of this claim encompasses a process of the user physically programming a computer to include a virtual assistant or downloading software onto a medical management system, which amounts to use of a generic computer as a tool for performing the step recited. While the process may be recited for a computer functionality, performing a mental process, such as performing and/or processing a task, or programming an interface at a local environment in response to the performing of a task/programming from a remote environment on a generic computer and allowing manipulation of said generic interface, and/or using a computer as a tool to perform said tasks, still constitutes a mental process under BRI; see MPEP 2106.04(a)(2)(III)(C). Similarly, the limitation of monitoring the diction of a medical specialist, as drafted, is a process that, under its BRI, covers performance of the limitation in the mind but for the recitation of generic computer components.

If a claim limitation, under its BRI, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

These steps of interfacing a generic virtual assistant with a medical management system and monitoring the diction of a medical specialist using the generic virtual assistant, as drafted, under the BRI, also constitute methods of organizing human activity. MPEP 2106.04(a)(2)(II) defines various methods of organizing human activity, including fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The limitations found in the independent claims include managing personal behavior or relationships or interactions between people under the broadest reasonable interpretation. For instance, the inventive concept of the Specification and Claims relates to providing a system for monitoring diction/speech of a medical specialist and performing/processing actions, i.e. tasks, and/or providing outputs/manipulation of a generic user interface based on said diction/speech of the medical specialist. Under the broadest reasonable interpretation, this effectively manages personal behavior or relationships or interactions between people by performing tasks and/or providing outputs based on analyzed, i.e. organized, human activity, i.e. medical specialist speech or control/actions of a user interface by said specialist.

If a claim limitation, under its broadest reasonable interpretation, includes managing personal behavior or relationships or interactions between people, then it falls within the “Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims (such as claims 2-9, 12-19, & 22-29, reciting particular aspects of how monitoring the diction of varying medical specialists, processing the diction, parsing a list of tasks and subtasks, and/or completing varying tasks and subtasks based on said diction analysis/processing may be performed in the mind but for recitation of generic computer components) (SME Test Step 2A, Prong 1: Yes).

The judicial exception(s) found in the independent claims is integrated into a practical application. In particular, the additional elements of the independent claims integrate the abstract idea into a practical application because the additional elements amount to applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, at least by the following limitation found in independent claims 1, 11, & 21: “the at least one task includes a command for controlling a hospital bed, wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed”. As such, independent claims 1, 11, & 21 are integrated into a practical application and therefore are not directed to the abstract idea(s) recited therein, constituting patent-eligible subject matter under 35 U.S.C. 101.
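The eligibility analysis walked through above tracks the MPEP 2106 flowchart (Step 1, Step 2A Prongs 1 and 2, Step 2B). Purely as an illustrative sketch of that decision sequence, not an official USPTO tool, the flow can be written as:

```python
def sme_eligible(statutory_category: bool,
                 recites_judicial_exception: bool,
                 practical_application: bool,
                 inventive_concept: bool = False) -> bool:
    """Illustrative sketch of the MPEP 2106 subject-matter-eligibility flow."""
    if not statutory_category:            # Step 1: process/machine/manufacture/composition?
        return False
    if not recites_judicial_exception:    # Step 2A, Prong 1: judicial exception recited?
        return True
    if practical_application:             # Step 2A, Prong 2: integrated into a practical application?
        return True
    return inventive_concept              # Step 2B: significantly more than the exception?

# This Office Action's findings: Step 1 Yes, Prong 1 Yes (mental processes /
# organizing human activity), Prong 2 Yes (the hospital-bed control limitation).
print(sme_eligible(True, True, True))  # True: claims are patent-eligible
```

Note that a "Yes" at Step 2A Prong 2 ends the inquiry before Step 2B, which is why the Office Action never reaches the inventive-concept analysis here.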
Furthermore, by virtue of dependency from independent claims 1, 11, & 21, dependent claims 2-9, 12-19, & 22-29 are also directed towards patent-eligible subject matter (SME Test Step 2A, Prong 2: Yes).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9, 11-19, & 21-29 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (U.S. Patent No. 11,024,304), hereinafter “Smith”, in view of Gallopyn et al. (U.S. Patent Publication No. 2020/0243186), hereinafter “Gallopyn”, further in view of Judy et al. (U.S. Patent Publication No. 2020/0005783), hereinafter “Judy”.

Claim 1 – Regarding Claim 1, Smith discloses the computer-implemented method, executed on a computing device, comprising: interfacing a generic virtual assistant with a medical management system (See Smith Col. 3, ll. 35-65 which discloses the use of a virtual assistant which interprets natural language input in spoken and/or textual form to deduce user intent and perform actions/tasks based on the deduced user intent; See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and monitoring the diction of a medical specialist using the generic virtual assistant (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); processing at least a portion of the diction to identify at least one task to be performed within a medical management system (See Smith Col. 5, ll. 62 – Col. 6, ll. 6 which discloses the use of natural language processing that recognizes one or more of the predetermined speech inputs that a virtual assistant companion can speak; See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and in response to detecting at least one task, effectuating the at least one task on the medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a local virtual assistant can perform a variety of tasks in response to a command received by speech or other input), wherein effectuating the at least one task on the medical management system includes commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface to effectuate the at least one task on the medical management system (See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. application program interface), or the like; See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a local virtual assistant can perform a variety of tasks in response to a command received by speech or other input), wherein commandeering the local user interface of the medical management system includes processing the at least one task on the local user interface and rendering remote manipulation of the local user interface by the virtual assistant during the effectuating of the task on the medical management system (See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. application program interface), or the like; See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a local virtual assistant can perform a variety of tasks, i.e. performing/processing at least one task, in response to a command received by speech or other input).
While Smith discloses the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist, Smith does not seem to explicitly mention allowing the user to commandeer a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface.

However, Gallopyn discloses commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface (See Gallopyn Par [0023]-[0026] which discloses a configuration including one or more network accessible computers that are accessed using telephony, wherein the one or more network computers hosts the virtual medical assistant, i.e. tasks effectuated, and the functionality of the assistant is accessed via a separate interface device, such that a medical professional may call into a system capable of accessing one or more host devices using any suitable telephony device and conduct interactions with the virtual medical assistant using speech, such that the interface device (e.g., the telephony component used by the medical professional) interacts with, but does not itself necessarily host, the virtual medical assistant, effectively constituting commandeering a local user interface through remote manipulation of the local user interface; See Gallopyn Par [0033] which discloses by implementing at least some functionality of the virtual medical assistant on a virtual medical assistant server, i.e. that is remote from the local medical system, the computer footprint of the virtual medical assistant may be reduced to some extent), wherein commandeering the local user interface of the medical management system includes processing the at least one task on the local user interface and rendering remote manipulation of the local user interface by the virtual assistant during the effectuating of the task on the medical management system (See Gallopyn Par [0033] which discloses by implementing at least some functionality of the virtual medical assistant on a virtual medical assistant server, i.e. that is remote from the local medical system, the computer footprint of the virtual medical assistant may be reduced to some extent; See Gallopyn Par [0023]-[0026] which discloses a configuration including one or more network accessible computers that are accessed using telephony, wherein the one or more network computers hosts the virtual medical assistant, i.e. tasks effectuated, and the functionality of the assistant is accessed via a separate interface device, such that a medical professional may call into a system capable of accessing one or more host devices using any suitable telephony device and conduct interactions with the virtual medical assistant using speech, such that the interface device (e.g., the telephony component used by the medical professional) interacts with, but does not itself necessarily host, the virtual medical assistant, effectively constituting commandeering a local user interface by processing at least one task on the local user interface and rendering remote manipulation of the local user interface).

The disclosure of Gallopyn is directly applicable to the disclosure of Smith because both disclosures share limitations and capabilities, such as being directed towards virtual medical assistants and operation thereof for interaction by a medical professional.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Smith, which already discloses the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist, to further include commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface, as disclosed by Gallopyn, because this allows the system to be implemented using a mobile device and/or server implementation for reduction of the computer footprint of the virtual medical assistant in the local computing environment (See Gallopyn Par [0033]).

While Smith and Gallopyn generally disclose the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist, Smith and Gallopyn are generally silent on the tasks including commands for controlling a hospital bed, i.e. the following limitations: the at least one task includes a command for controlling a hospital bed, wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed.
However, Judy discloses the at least one task includes a command for controlling a hospital bed (See Judy Par [0035]-[0036] which discloses a patient room equipped with a patient support apparatus, including a stretcher, a chair, a wheelchair, a bench, a hospital bed, etc.; See Judy Par [0039] which discloses the voice command server controlling various functions within the patient’s room, including bed controls; See Judy Par [0052]-[0055] which discloses voice recognition software determining which controllable device is related to the voice command and converting the voice command into a control signal, such as for a hospital bed when the patient or caregiver may voice a command to the electronic controller, e.g. “raise bed”), wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed (See Judy Par [0035]-[0036] which discloses a patient room equipped with a patient support apparatus, including a stretcher, a chair, a wheelchair, a bench, a hospital bed, etc.; See Judy Par [0039] which discloses the voice command server controlling various functions within the patient’s room, including bed controls; See Judy Par [0052]-[0055] which discloses voice recognition software determining which controllable device is related to the voice command and converting the voice command into a control signal, such as for a hospital bed when the patient or caregiver may voice a command to the electronic controller, e.g. “raise bed”). The disclosure of Judy is directly applicable to the combined disclosure of Smith and Gallopyn because the disclosures share limitations and capabilities, such as being directed towards systems and programs for providing assistance in a medical setting by interacting with a medical professional. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Smith and Gallopyn, which already discloses the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist, to further include the dictated task including a command for controlling a hospital bed via a voice-based control operation to physically adjust one or more portions of the hospital bed, as disclosed by Judy, because this allows for medical providers in a predetermined distance to remotely control a controllable device, such as a patient support apparatus, without physical interaction with a hospital bed interface (See Judy Par [0052]-[0055]).

Claim 2 – Regarding Claim 2, Smith, Gallopyn, and Judy disclose the computer-implemented method of claim 1 in its entirety. Smith further discloses a method, wherein: interfacing a generic virtual assistant with a medical management system includes: enabling functionality on the generic virtual assistant to effectuate cloud-based communication between the generic virtual assistant and the medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input).

Claim 3 – Regarding Claim 3, Smith, Gallopyn, and Judy disclose the computer-implemented method of claim 1 in its entirety. Smith further discloses a method, wherein: monitoring the diction of the medical specialist using the generic virtual assistant includes: monitoring the diction of the medical specialist using the generic virtual assistant to listen for the utterance of a wake-up word (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input).

Claim 4 – Regarding Claim 4, Smith, Gallopyn, and Judy disclose the computer-implemented method of claim 1 in its entirety. Smith further discloses a method, wherein: monitoring the diction of a medical specialist using the generic virtual assistant includes one or more of (according to the “one or more of” language found above, the BRI of the claim only requires one of the enumerated limitations below to be met): monitoring the diction of a claim processing specialist using the generic virtual assistant (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation); monitoring the diction of a billing specialist using the generic virtual assistant (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation); monitoring the diction of a data processing specialist using the generic virtual assistant (the broadest reasonable interpretation of a “data processing specialist” includes a health care provider that collects and processes health data, therefore see Smith Col. 16, ll. 42-60 which discloses the use of the system in the health care setting and collection of health data, i.e. by a health care provider that collects and processes health data); and monitoring the diction of an ordering specialist using the generic virtual assistant (the broadest reasonable interpretation of an “ordering specialist” includes a health care provider that orders medical tests, therefore see Smith Col. 16, ll. 42-60 which discloses health data being collected at a command of a health care provider that orders a certain type of test).
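The Claim 4 treatment above turns on how "one or more of" reads under BRI: the limitation is met if the art teaches any single alternative in the list. A small illustrative restatement of the examiner's mapping (the dict simply encodes the findings stated above):

```python
# Under BRI, "one or more of A; B; C; D" is satisfied if the prior art
# teaches ANY single alternative. Values restate the Claim 4 findings above.
alternatives_taught = {
    "claim processing specialist": False,  # Smith need not read on this one
    "billing specialist": False,           # Smith need not read on this one
    "data processing specialist": True,    # Smith Col. 16, ll. 42-60
    "ordering specialist": True,           # Smith Col. 16, ll. 42-60
}
limitation_met = any(alternatives_taught.values())
print(limitation_met)  # True: the "one or more of" limitation is met
```

This is why the examiner only maps Smith onto two of the four specialists and expressly declines to address the other two.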
Claim 5 – Regarding Claim 5, Smith, Gallopyn, and Judy disclose the computer-implemented method of claim 1 in its entirety. Smith further discloses a method, further comprising: processing at least a portion of the diction to identify at least one task to be performed within a medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and if at least one task is detected, effectuating the at least one task on the medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input; See Smith Col. 13, ll. 49 – Col. 14, ll. 8 which discloses the command causes the device or machine to perform a specified task or tasks).

Claim 6 – Regarding Claim 6, Smith, Gallopyn, and Judy disclose the computer-implemented method of claim 1 in its entirety. Smith further discloses a method, wherein: processing at least a portion of the diction to identify at least one task to be performed within a medical management system includes one or more of (according to the “one or more of” language found above, the BRI of the claim only requires one of the enumerated limitations below to be met): processing at least a portion of the diction using natural language processing (See Smith Col. 5, ll. 62 – Col. 6, ll. 6 which discloses the use of natural language processing that recognizes one or more of the predetermined speech inputs that a virtual assistant companion can speak; See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); processing at least a portion of the diction to identify one or more task-indicative trigger words (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and processing at least a portion of the diction to identify one or more task-indicative conversational structures (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input).

Claim 7 – Regarding Claim 7, Smith, Gallopyn, and Judy disclose the computer-implemented method of claim 1 in its entirety. Smith further discloses a method, wherein: processing at least a portion of the diction to identify at least one task to be performed within a medical management system includes one or more of: processing at least a portion of the diction on a cloud-based computing resource to identify at least one task to be performed within a medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input).

Claim 8 – Regarding Claim 8, Smith, Gallopyn, and Judy disclose the computer-implemented method of claim 1 in its entirety.
Smith further discloses a method, wherein: the medical management system includes one or more of (according to the “one or more of” language found above, the BRI of the claim only requires one of the enumerated limitations below to be met): a medical office management system (“medical office management” under broadest reasonable interpretation can include a system for obtaining medical data in a medical office/institution unless otherwise specified; See Smith Col. 10, ll. 44 – Col. 12, ll. 28 which discloses receiving and management of healthcare/medical data; See Smith Col. 16, ll. 42-47 which discloses the implementation of the system within a healthcare institution); a medical office billing system (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation); and a pharmacy management system (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation).

Claim 9 – Regarding Claim 9, Smith, Gallopyn, and Judy disclose the computer-implemented method of claim 1 in its entirety. Smith further discloses a method, wherein: effectuating the at least one task on the medical management system includes: parsing the at least one task into a plurality of subtasks (While not a “subtask” per se, See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. application program interface), or the like, and the task flow has a plurality of “steps” and/or “parameters” to accomplish the deduced user intent); and effectuating the plurality of subtasks on the medical management system (While not a “subtask” per se, See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. application program interface), or the like, and the task flow has a plurality of “steps” and/or “parameters” to accomplish the deduced user intent).

Claim 11 – Regarding Claim 11, Smith discloses a computer program product on a non-transitory computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations (See Smith Col. 15, ll. 27-49 which discloses a computer readable medium, computer, processor, and performing the operations) comprising: interfacing a generic virtual assistant with a medical management system (See Smith Col. 3, ll. 35-65 which discloses the use of a virtual assistant which interprets natural language input in spoken and/or textual form to deduce user intent and perform actions/tasks based on the deduced user intent; See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and monitoring the diction of a medical specialist using the generic virtual assistant (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); processing at least a portion of the diction to identify at least one task to be performed within a medical management system (See Smith Col. 5, ll. 62 – Col. 6, ll. 6 which discloses the use of natural language processing that recognizes one or more of the predetermined speech inputs that a virtual assistant companion can speak; See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and in response to detecting at least one task, effectuating the at least one task on the medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a local virtual assistant can perform a variety of tasks in response to a command received by speech or other input), wherein effectuating the at least one task on the medical management system includes commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface to effectuate the at least one task on the medical management system (See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. application program interface), or the like; See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a local virtual assistant can perform a variety of tasks in response to a command received by speech or other input), wherein commandeering the local user interface of the medical management system includes processing the at least one task on the local user interface and rendering remote manipulation of the local user interface by the virtual assistant during the effectuating of the task on the medical management system (See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. application program interface), or the like; See Smith Col. 9, ll. 35 – Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a local virtual assistant can perform a variety of tasks, i.e. performing/processing at least one task, in response to a command received by speech or other input), wherein the at least one task includes a command for controlling a hospital bed (See), wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed (See).

While Smith discloses the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist, Smith does not seem to explicitly mention allowing the user to commandeer a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface. However, Gallopyn discloses commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface (See Gallopyn Par [0023]-[0026] which discloses a configuration including one or more network accessible computers that are accessed using telephony, wherein the one or more network computers hosts the virtual medical assistant, i.e.
tasks effectuated, and the functionality of the assistant is accessed via a separate interface device, such that a medical professional may call into a system capable of accessing one or more host devices using any suitable telephony device and conduct interactions with the virtual medical assistant using speech, such that the interface device (e.g., the telephony component used by the medical professional) interacts with, but does not itself necessarily host, the virtual medical assistant, effectively constituting commandeering a local user interface through remote manipulation of the local user interface; See Gallopyn Par [0033] which discloses by implementing at least some functionality of the virtual medical assistant on a virtual medical assistant server, i.e. that is remote from the local medical system, the computer footprint of the virtual medical assistant may be reduced to some extent) commandeering the local user interface of the medical management system includes processing the at least one task on the local user interface and rendering remote manipulation of the local user interface by the virtual assistant during the effectuating of the task on the medical management system (See Gallopyn Par [0033] which discloses by implementing at least some functionality of the virtual medical assistant on a virtual medical assistant server, i.e. that is remote from the local medical system, the computer footprint of the virtual medical assistant may be reduced to some extent; See Gallopyn Par [0023]-[0026] which discloses a configuration including one or more network accessible computers that are accessed using telephony, wherein the one or more network computers hosts the virtual medical assistant, i.e. 
tasks effectuated, and the functionality of the assistant is accessed via a separate interface device, such that a medical professional may call into a system capable of accessing one or more host devices using any suitable telephony device and conduct interactions with the virtual medical assistant using speech, such that the interface device (e.g., the telephony component used by the medical professional) interacts with, but does not itself necessarily host, the virtual medical assistant, effectively constituting commandeering a local user interface by processing at least one task on the local user interface and rendering remote manipulation of the local user interface). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Smith, which already discloses the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist to further include commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface, as disclosed by Gallopyn, because this allows the system to be implemented using a mobile device and/or server implementation for reduction of the computer footprint of the virtual medical assistant in the local computing environment (See Gallopyn Par [0033]). 
While Smith and Gallopyn generally disclose the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist, Smith and Gallopyn are generally silent on the tasks including commands for controlling a hospital bed the following limitations: the at least one task includes a command for controlling a hospital bed, wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed. However, Judy discloses the at least one task includes a command for controlling a hospital bed (See Judy Par [0035]-[0036] which discloses a patient room equipped with a patient support apparatus, including a stretcher, a chair, a wheelchair, a bench, a hospital bed, etc.; See Judy Par [0039] which discloses the voice command server controlling various functions within the patient’s room, including bed controls; See Judy Par [0052]-[0055] which discloses voice recognition software determining which controllable device is related to the voice command and converting the voice command into a control signal, such as for a hospital bed when the patient or caregiver may voice a command to the electronic controller, e.g. 
“raise bed”), wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed (See Judy Par [0035]-[0036] which discloses a patient room equipped with a patient support apparatus, including a stretcher, a chair, a wheelchair, a bench, a hospital bed, etc.; See Judy Par [0039] which discloses the voice command server controlling various functions within the patient’s room, including bed controls; See Judy Par [0052]-[0055] which discloses voice recognition software determining which controllable device is related to the voice command and converting the voice command into a control signal, such as for a hospital bed when the patient or caregiver may voice a command to the electronic controller, e.g. “raise bed”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Smith and Gallopyn which already discloses the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist to further include the dictated task including a command for controlling a hospital bed via a voice-based control operation to physically adjust one or more portions of the hospital bed, as disclosed by Judy, because this allows for medical providers in a predetermined distance to remotely control a controllable device, such as a patient support apparatus, without physical interaction with a hospital bed interface (See Judy Par [0052]-[0055]). Claim 12 – Regarding Claim 12, Smith, Gallopyn, and Judy disclose the computer program product of claim 11 in its entirety. 
Smith further discloses a computer program product, wherein: interfacing a generic virtual assistant with a medical management system includes: enabling functionality on the generic virtual assistant to effectuate cloud- based communication between the generic virtual assistant and the medical management system (See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input). Claim 13 – Regarding Claim 13, Smith, Gallopyn, and Judy disclose the computer program product of claim 11 in its entirety. Smith further discloses a computer program product, wherein: monitoring the diction of the medical specialist using the generic virtual assistant includes: monitoring the diction of the medical specialist using the generic virtual assistant to listen for the utterance of a wake-up word (See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input). Claim 14 – Regarding Claim 14, Smith, Gallopyn, and Judy disclose the computer program product of claim 11 in its entirety. 
Smith further discloses a computer program product, wherein: monitoring the diction of a medical specialist using the generic virtual assistant includes one or more of (according to the “one or more of” language found above the BRI of the claim only requires one of the enumerated limitations below to be met): monitoring the diction of a claim processing specialist using the generic virtual assistant (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation); monitoring the diction of a billing specialist using the generic virtual assistant (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation); monitoring the diction of a data processing specialist using the generic virtual assistant (the broadest reasonable interpretation of a “data processing specialist” includes a health care provider that collects and processes health data, therefore see Smith Col 16, ll. 42-60 which discloses the use of the system in the health care setting and collection of health data, i.e. by a health care provider that collects and processes health data); and monitoring the diction of an ordering specialist using the generic virtual assistant (the broadest reasonable interpretation of a “ordering specialist” includes a health care provider that orders medical tests, therefore see Smith Col 16, ll. 42-60 which discloses health data being collected at a command of a health care provider that orders a certain type of test). Claim 15 – Regarding Claim 15, Smith, Gallopyn, and Judy disclose the computer program product of claim 11 in its entirety. Smith further discloses a computer program product, further comprising: processing at least a portion of the diction to identify at least one task to be performed within a medical management system (See Smith Col. 9, ll. 35- Col. 10, ll. 
42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and if at least one task is detected, effectuating the at least one task on the medical management system (See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input; See Smith Col 13, ll. 49 – Col. 14, ll. 8 which discloses the command causes the device or machine to perform a specified task or tasks). Claim 16 – Regarding Claim 16, Smith, Gallopyn, and Judy disclose the computer program product of claim 11 in its entirety. Smith further discloses a computer program product, wherein: processing at least a portion of the diction to identify at least one task to be performed within a medical management system includes one or more of (according to the “one or more of” language found above the BRI of the claim only requires one of the enumerated limitations below to be met): processing at least a portion of the diction using natural language processing (See Smith Col. 5, ll. 62 – Col. 6, ll. 6 which discloses the use of natural language processing that recognizes one or more of the predetermined speech inputs that a virtual assistant companion can speak See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); processing at least a portion of the diction to identify one or more task-indicative trigger words (See Smith Col. 9, ll. 35- Col. 10, ll. 
42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and processing at least a portion of the diction to identify one or more task-indicative conversational structures (See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input). Claim 17 – Regarding Claim 17, Smith, Gallopyn, and Judy disclose the computer program product of claim 11 in its entirety. Smith further discloses a computer program product, wherein: processing at least a portion of the diction to identify at least one task to be performed within a medical management system includes one or more of: processing at least a portion of the diction on a cloud-based computing resource to identify at least one task to be performed within a medical management system (See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input). Claim 18 – Regarding Claim 18, Smith, Gallopyn, and Judy disclose the computer program product of claim 11 in its entirety. 
Smith further discloses a computer program product, wherein: the medical management system includes one or more of (according to the “one or more of” language found above the BRI of the claim only requires one of the enumerated limitations below to be met): a medical office management system (“medical office management” under broadest reasonable interpretation can include a system for obtaining medical data in a medical office/institution unless otherwise specified; See Smith Col. 10, ll. 44 – Col. 12, ll. 28 which discloses receiving and management of healthcare/medical data; See Smith Col. 16, ll. 42-47 which discloses the implementation of the system within a healthcare institution); a medical office billing system (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation); and a pharmacy management system (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation). Claim 19 – Regarding Claim 19, Smith, Gallopyn, and Judy disclose the computer program product of claim 11 in its entirety. Smith further discloses a computer program product, wherein: effectuating the at least one task on the medical management system includes: parsing the at least one task into a plurality of subtasks (While not “subtask” per se, See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. application program interface), or the like, and the task flow has a plurality of “steps” and/or “parameters to accomplish the deduced user intent); and effectuating the plurality of subtasks on the medical management system (While not “subtask” per se, See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. 
application program interface), or the like, and the task flow has a plurality of “steps” and/or “parameters to accomplish the deduced user intent). Claim 21 – Regarding Claim 21, Smith discloses a computing system including a processor and memory configured to perform operations (See Smith Col. 15, ll. 27-49 which discloses a computer readable medium, computer, processor, and performing the operations) comprising: interfacing a generic virtual assistant with a medical management system (See Smith Col. 3, ll. 35-65 which discloses the use of a virtual assistant which interprets natural language input in spoken and/or textual form to deduce user intent and perform actions/tasks based on the deduced user intent; See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and monitoring the diction of a medical specialist using the generic virtual assistant (See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); processing at least a portion of the diction to identify at least one task to be performed within a medical management system (See Smith Col. 5, ll. 62 – Col. 6, ll. 6 which discloses the use of natural language processing that recognizes one or more of the predetermined speech inputs that a virtual assistant companion can speak; See Smith Col. 9, ll. 35- Col. 10, ll. 
42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and in response to detecting at least one task, effectuating the at least one task on the medical management system (See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a local virtual assistant can perform a variety of tasks in response to a command received by speech or other input), wherein effectuating the at least one task on the medical management system includes commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface to effectuate the at least one task on the medical management system (See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. application program interface), or the like; See Smith Col. 9, ll. 35- Col. 10, ll. 
42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a local virtual assistant can perform a variety of tasks in response to a command received by speech or other input), wherein commandeering the local user interface of the medical management system includes processing the at least task on the local user interface and rendering remote manipulation of the local user interface by the virtual assistant during the effectuating of the task on the medical management system (commandeering the local user interface of the medical management system includes processing the at least one task on the local user interface and rendering remote manipulation of the local user interface by the virtual assistant during the effectuating of the task on the medical management system (See Smith Col. 3, ll. 35-52 which discloses identifying a task flow and executing a task flow by invoking programs (software applications), methods, services, APIs (i.e. application program interface), or the like; See Smith Col. 9, ll. 35- Col. 10, ll. 42 which discloses upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant and a local virtual assistant can perform a variety of tasks, i.e. performing processing at least one task in response to a command received by speech or other input) wherein the at least one task includes a command for controlling a hospital bed (See), wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed (See). 
While Smith discloses the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist, Smith does not seem to explicitly mention allowing the user to commandeer a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface. However, Gallopyn discloses commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface (See Gallopyn Par [0023]-[0026] which discloses a configuration including one or more network accessible computers that are accessed using telephony, wherein the one or more network computers hosts the virtual medical assistant, i.e. tasks effectuated, and the functionality of the assistant is accessed via a separate interface device, such that a medical professional may call into a system capable of accessing one or more host devices using any suitable telephony device and conduct interactions with the virtual medical assistant using speech, such that the interface device (e.g., the telephony component used by the medical professional) interacts with, but does not itself necessarily host, the virtual medical assistant, effectively constituting commandeering a local user interface through remote manipulation of the local user interface; See Gallopyn Par [0033] which discloses by implementing at least some functionality of the virtual medical assistant on a virtual medical assistant server, i.e. 
that is remote from the local medical system, the computer footprint of the virtual medical assistant may be reduced to some extent) commandeering the local user interface of the medical management system includes processing the at least one task on the local user interface and rendering remote manipulation of the local user interface by the virtual assistant during the effectuating of the task on the medical management system (See Gallopyn Par [0033] which discloses by implementing at least some functionality of the virtual medical assistant on a virtual medical assistant server, i.e. that is remote from the local medical system, the computer footprint of the virtual medical assistant may be reduced to some extent; See Gallopyn Par [0023]-[0026] which discloses a configuration including one or more network accessible computers that are accessed using telephony, wherein the one or more network computers hosts the virtual medical assistant, i.e. tasks effectuated, and the functionality of the assistant is accessed via a separate interface device, such that a medical professional may call into a system capable of accessing one or more host devices using any suitable telephony device and conduct interactions with the virtual medical assistant using speech, such that the interface device (e.g., the telephony component used by the medical professional) interacts with, but does not itself necessarily host, the virtual medical assistant, effectively constituting commandeering a local user interface by processing at least one task on the local user interface and rendering remote manipulation of the local user interface). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Smith, which already discloses the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist to further include commandeering a local user interface, normally used by the medical specialist, of the medical management system through remote manipulation of the local user interface, as disclosed by Gallopyn, because this allows the system to be implemented using a mobile device and/or server implementation for reduction of the computer footprint of the virtual medical assistant in the local computing environment (See Gallopyn Par [0033]). While Smith and Gallopyn generally disclose the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist, Smith and Gallopyn are generally silent on the tasks including commands for controlling a hospital bed the following limitations: the at least one task includes a command for controlling a hospital bed, wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed. 
However, Judy discloses the at least one task includes a command for controlling a hospital bed (See Judy Par [0035]-[0036] which discloses a patient room equipped with a patient support apparatus, including a stretcher, a chair, a wheelchair, a bench, a hospital bed, etc.; See Judy Par [0039] which discloses the voice command server controlling various functions within the patient’s room, including bed controls; See Judy Par [0052]-[0055] which discloses voice recognition software determining which controllable device is related to the voice command and converting the voice command into a control signal, such as for a hospital bed when the patient or caregiver may voice a command to the electronic controller, e.g. “raise bed”), wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed (See Judy Par [0035]-[0036] which discloses a patient room equipped with a patient support apparatus, including a stretcher, a chair, a wheelchair, a bench, a hospital bed, etc.; See Judy Par [0039] which discloses the voice command server controlling various functions within the patient’s room, including bed controls; See Judy Par [0052]-[0055] which discloses voice recognition software determining which controllable device is related to the voice command and converting the voice command into a control signal, such as for a hospital bed when the patient or caregiver may voice a command to the electronic controller, e.g. “raise bed”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined disclosure of Smith and Gallopyn, which already discloses the use of a generic virtual assistant that effectuates at least one task on a medical management system in response to identified tasks given to the system via diction from a medical specialist, to further include the dictated task including a command for controlling a hospital bed via a voice-based control operation to physically adjust one or more portions of the hospital bed, as disclosed by Judy, because this allows medical providers within a predetermined distance to remotely control a controllable device, such as a patient support apparatus, without physical interaction with a hospital bed interface (See Judy Par. [0052]-[0055]).

Claim 22 – Regarding Claim 22, Smith, Gallopyn, and Judy disclose the computing system of claim 21 in its entirety. Smith further discloses a system, wherein: interfacing a generic virtual assistant with a medical management system includes: enabling functionality on the generic virtual assistant to effectuate cloud-based communication between the generic virtual assistant and the medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42, which discloses that upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant, and that a virtual assistant can perform a variety of tasks in response to a command received by speech or other input).

Claim 23 – Regarding Claim 23, Smith, Gallopyn, and Judy disclose the computing system of claim 21 in its entirety. Smith further discloses a system, wherein: monitoring the diction of the medical specialist using the generic virtual assistant includes: monitoring the diction of the medical specialist using the generic virtual assistant to listen for the utterance of a wake-up word (See Smith Col. 9, ll. 35 – Col. 10, ll. 42, which discloses that upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant, and that a virtual assistant can perform a variety of tasks in response to a command received by speech or other input).

Claim 24 – Regarding Claim 24, Smith, Gallopyn, and Judy disclose the computing system of claim 21 in its entirety. Smith further discloses a system, wherein: monitoring the diction of a medical specialist using the generic virtual assistant includes one or more of (according to the “one or more of” language found above, the BRI of the claim only requires one of the enumerated limitations below to be met): monitoring the diction of a claim processing specialist using the generic virtual assistant (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation); monitoring the diction of a billing specialist using the generic virtual assistant (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation); monitoring the diction of a data processing specialist using the generic virtual assistant (the broadest reasonable interpretation of a “data processing specialist” includes a health care provider that collects and processes health data; therefore, see Smith Col. 16, ll. 42-60, which discloses the use of the system in the health care setting and the collection of health data, i.e., by a health care provider that collects and processes health data); and monitoring the diction of an ordering specialist using the generic virtual assistant (the broadest reasonable interpretation of an “ordering specialist” includes a health care provider that orders medical tests; therefore, see Smith Col. 16, ll. 42-60, which discloses health data being collected at a command of a health care provider that orders a certain type of test).

Claim 25 – Regarding Claim 25, Smith, Gallopyn, and Judy disclose the computing system of claim 21 in its entirety. Smith further discloses a system, further comprising: processing at least a portion of the diction to identify at least one task to be performed within a medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42, which discloses that upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant, and that a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and if at least one task is detected, effectuating the at least one task on the medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42, which discloses that upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant, and that a virtual assistant can perform a variety of tasks in response to a command received by speech or other input; See Smith Col. 13, ll. 49 – Col. 14, ll. 8, which discloses that the command causes the device or machine to perform a specified task or tasks).

Claim 26 – Regarding Claim 26, Smith, Gallopyn, and Judy disclose the computing system of claim 21 in its entirety. Smith further discloses a system, wherein: processing at least a portion of the diction to identify at least one task to be performed within a medical management system includes one or more of (according to the “one or more of” language found above, the BRI of the claim only requires one of the enumerated limitations below to be met): processing at least a portion of the diction using natural language processing (See Smith Col. 5, ll. 62 – Col. 6, ll. 6, which discloses the use of natural language processing that recognizes one or more of the predetermined speech inputs that a virtual assistant companion can speak; See Smith Col. 9, ll. 35 – Col. 10, ll. 42, which discloses that upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant, and that a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); processing at least a portion of the diction to identify one or more task-indicative trigger words (See Smith Col. 9, ll. 35 – Col. 10, ll. 42, which discloses that upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant, and that a virtual assistant can perform a variety of tasks in response to a command received by speech or other input); and processing at least a portion of the diction to identify one or more task-indicative conversational structures (See Smith Col. 9, ll. 35 – Col. 10, ll. 42, which discloses that upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant, and that a virtual assistant can perform a variety of tasks in response to a command received by speech or other input).

Claim 27 – Regarding Claim 27, Smith, Gallopyn, and Judy disclose the computing system of claim 21 in its entirety. Smith further discloses a system, wherein: processing at least a portion of the diction to identify at least one task to be performed within a medical management system includes one or more of: processing at least a portion of the diction on a cloud-based computing resource to identify at least one task to be performed within a medical management system (See Smith Col. 9, ll. 35 – Col. 10, ll. 42, which discloses that upon recognition of a wake word, subsequent words are streamed to the cloud and analyzed by the virtual assistant, and that a virtual assistant can perform a variety of tasks in response to a command received by speech or other input).

Claim 28 – Regarding Claim 28, Smith, Gallopyn, and Judy disclose the computing system of claim 21 in its entirety. Smith further discloses a system, wherein: the medical management system includes one or more of (according to the “one or more of” language found above, the BRI of the claim only requires one of the enumerated limitations below to be met): a medical office management system (“medical office management” under broadest reasonable interpretation can include a system for obtaining medical data in a medical office/institution unless otherwise specified; See Smith Col. 10, ll. 44 – Col. 12, ll. 28, which discloses the receiving and management of healthcare/medical data; See Smith Col. 16, ll. 42-47, which discloses the implementation of the system within a healthcare institution); a medical office billing system (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation); and a pharmacy management system (in view of the above “one or more of” language, under BRI, Smith does not have to read on this limitation).

Claim 29 – Regarding Claim 29, Smith, Gallopyn, and Judy disclose the computing system of claim 21 in its entirety. Smith further discloses a system, wherein: parsing the at least one task into a plurality of subtasks (While not a “subtask” per se, see Smith Col. 3, ll. 35-52, which discloses identifying a task flow and executing the task flow by invoking programs (software applications), methods, services, APIs (i.e., application program interfaces), or the like, where the task flow has a plurality of “steps” and/or “parameters” to accomplish the deduced user intent); and effectuating the plurality of subtasks on the medical management system (While not a “subtask” per se, see Smith Col. 3, ll. 35-52, which discloses identifying a task flow and executing the task flow by invoking programs (software applications), methods, services, APIs (i.e., application program interfaces), or the like, where the task flow has a plurality of “steps” and/or “parameters” to accomplish the deduced user intent).
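For readers less familiar with this class of claims, the pipeline the rejection maps across the references (wake-word monitoring, task identification via trigger words, and parsing a task into subtasks that are then effectuated) can be sketched in a few lines of Python. This is a hypothetical illustration only: the wake word, trigger phrases, and subtask names below are invented for the sketch and are not drawn from Smith, Gallopyn, or Judy.

```python
# Illustrative sketch of the claimed control flow: listen for a wake
# word, identify a task from trigger words, parse it into subtasks,
# and return those subtasks for effectuation. All names are invented.

WAKE_WORD = "assistant"

# Hypothetical task-indicative trigger phrases mapped to subtask lists
# (cf. claims 26 and 29).
TASK_FLOWS = {
    "raise the bed": ["unlock_actuator", "raise_head_section", "lock_actuator"],
    "order a test": ["create_order", "route_to_lab"],
}

def process_utterance(utterance: str):
    """Return subtasks to effectuate; [] if no known task follows the
    wake word; None if the wake word was never spoken (keep listening)."""
    text = utterance.lower()
    if not text.startswith(WAKE_WORD):
        return None  # no wake word: the assistant keeps monitoring (claim 23)
    command = text[len(WAKE_WORD):].strip(" ,")
    for trigger, subtasks in TASK_FLOWS.items():
        if trigger in command:
            return subtasks  # task parsed into a plurality of subtasks (claim 29)
    return []

print(process_utterance("Assistant, raise the bed"))
# -> ['unlock_actuator', 'raise_head_section', 'lock_actuator']
```

A production system would stream audio to a cloud-based speech and NLP service rather than match strings locally, but the control flow (wake word, task identification, subtask decomposition, effectuation) is the shape the claims recite.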
Response to Arguments

Applicant's arguments filed 17 November 2025 have been fully considered, but they are not persuasive.

Regarding the 35 U.S.C. 101 rejections of claims 1-9, 11-19, & 21-29, Applicant argues on pp. 9-11 of Arguments/Remarks that the newly amended limitations found in the independent claims overcome the previous 35 U.S.C. 101 rejections, at least by amounting to a practical application of any alleged abstract idea under Step 2A, Prong 2. As such, Applicant argues that the 35 U.S.C. 101 rejections of claims 1-9, 11-19, & 21-29 should be withdrawn. Examiner agrees with Applicant's arguments. More specifically, the newly amended additional elements of the independent claims integrate the abstract idea into a practical application, other than the abstract idea per se, because these elements amount to applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, at least by the following limitation found in independent claims 1, 11, & 21: “the at least one task includes a command for controlling a hospital bed, wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed”. As such, independent claims 1, 11, & 21 are integrated into a practical application and therefore are not directed to the abstract ideas recited therein, constituting patent-eligible subject matter under 35 U.S.C. 101. Furthermore, by virtue of their dependency from independent claims 1, 11, & 21, dependent claims 2-9, 12-19, & 22-29 are also directed towards patent-eligible subject matter. As such, the 35 U.S.C. 101 rejections of claims 1-9, 11-19, & 21-29 have been withdrawn.

Regarding the 35 U.S.C. 103 rejections of claims 1-9, 11-19, & 21-29, Applicant argues on pp. 12-16 of Arguments/Remarks that Smith and Gallopyn do not disclose the newly amended limitations found in independent claims 1, 11, & 21 regarding “wherein the at least one task includes a command for controlling a hospital bed, wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed”. Applicant further argues that the 35 U.S.C. 103 rejections of independent claims 1, 11, & 21 and the claims dependent therefrom should be withdrawn. Examiner respectfully disagrees with Applicant's arguments. In particular, Examiner agrees that Smith and Gallopyn are relatively silent on “the at least one task includes a command for controlling a hospital bed, wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed”; therefore, the previous 35 U.S.C. 103 rejections have been withdrawn. However, upon further consideration, a new ground of rejection has been made under 35 U.S.C. 103 over Smith, Gallopyn, and Judy. This new ground of rejection relies on newly cited Judy to read on the newly amended limitation “wherein the at least one task includes a command for controlling a hospital bed, wherein effectuating the command for controlling the hospital bed includes effectuating a voice-based control operation to physically adjust one or more portions of the hospital bed”. As such, pending independent claims 1, 11, & 21 and the claims dependent from claims 1, 11, & 21 (i.e., claims 2-9, 12-19, & 22-29) remain rejected under 35 U.S.C. 103 over Smith in view of Gallopyn, further in view of Judy.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure: Receveur et al. (U.S. Patent Publication No.
2022/0101847) discloses a system for voice control of medical devices in a healthcare facility, such as hospital beds; Moreno et al. (U.S. Patent Publication No. 2019/0008708) discloses a voice recognition system for controlling a patient support apparatus via an input command signal; and Bhimavarapu et al. (U.S. Patent Publication No. 2018/0369038) discloses a patient support system providing improved guidance tools for operating said support system, including receiving voice-controlled operations thereof.

Applicant's amendment necessitated the new ground of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNTER J RASNIC, whose telephone number is 571-270-5801. The examiner can normally be reached M-F, 8am-5:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shahid Merchant, can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H.R./
Examiner, Art Unit 3684

/KENNETH BARTLEY/
Primary Examiner, Art Unit 3684
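The reply-deadline arithmetic stated in the Conclusion (a three-month shortened statutory period, extendable under 37 CFR 1.136(a) to an absolute six-month statutory maximum) can be sketched as follows. This is a rough illustration only, not legal advice: the mailing date is taken from the prosecution timeline on this page, and real due dates that land on a weekend or federal holiday roll forward under 37 CFR 1.7, which this sketch ignores.

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    # Advance a date by whole calendar months, clamping the day to the
    # end of the target month (e.g. Jan 31 + 1 month -> Feb 28/29).
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

MAILING_DATE = date(2026, 2, 18)                 # final action mailed (per timeline)
shortened_period = add_months(MAILING_DATE, 3)   # 3-month shortened statutory period
statutory_maximum = add_months(MAILING_DATE, 6)  # absolute 6-month statutory cap

print(shortened_period)   # 2026-05-18
print(statutory_maximum)  # 2026-08-18
```

Replies filed between the two dates require an extension-of-time fee under 37 CFR 1.17(a), calculated per month (or fraction thereof) past the shortened period, subject to the advisory-action wrinkle the action describes for replies filed within two months.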

Prosecution Timeline

Jul 27, 2022
Application Filed
May 15, 2024
Non-Final Rejection — §103
Nov 25, 2024
Response Filed
Feb 11, 2025
Final Rejection — §103
May 20, 2025
Request for Continued Examination
May 21, 2025
Response after Non-Final Action
Jun 13, 2025
Non-Final Rejection — §103
Nov 17, 2025
Response Filed
Feb 18, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12142364
SYSTEMS AND METHODS THAT PROVIDE A POSITIVE EXPERIENCE DURING WEIGHT MANAGEMENT
2y 5m to grant; granted Nov 12, 2024
Patent 11961606
Systems and Methods for Processing Medical Images For In-Progress Studies
2y 5m to grant; granted Apr 16, 2024
Patent 11908558
PROSPECTIVE MEDICATION FILLINGS MANAGEMENT
2y 5m to grant; granted Feb 20, 2024
Patent 11875904
IDENTIFICATION OF EPIDEMIOLOGY TRANSMISSION HOT SPOTS IN A MEDICAL FACILITY
2y 5m to grant; granted Jan 16, 2024
Patent 11862314
METHODS AND SYSTEMS FOR PATIENT CONTROL OF AN ELECTRONIC PRESCRIPTION
2y 5m to grant; granted Jan 02, 2024
Based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
11%
Grant Probability
32%
With Interview (+20.5%)
4y 7m
Median Time to Grant
High
PTA Risk
Based on 81 resolved cases by this examiner. Grant probability derived from career allow rate.
