DETAILED ACTION
Claims 1 – 20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on 23 January 2023 has been entered.
Response to Amendment
In response to the Final Office Action mailed 25 July 2025, the Applicant filed a response on 23 January 2026.
Response to Arguments
The Examiner rejected independent claim 1 under 35 U.S.C. 103, and the Applicant argues against the Examiner's reliance on Carlson et al. as applied to the currently amended limitation:
initiate a confirmation request as a continuation of the first assistance action implemented by presenting the audio for the first input request to the user, the confirmation request including at least one voice query for additional information associated with execution of the voice command protocol and capture of confirmation data that confirms any determination for the negative response event, the positive response event, and the non-response event.
The Applicant indicates (Remarks: page 9, par. 2) that the limitation of a 'confirmation request including at least one voice query for additional information associated with execution of the voice command protocol' is not taught by the applied prior art. The Applicant further indicates (Remarks: page 9, par. 3) that the limitation of a 'confirmation request as a continuation of the first assistance action implemented by presenting the audio for the first input request to the user' is not taught by the applied reference of Carlson et al. The Examiner now refers to [0045] of Carlson et al., which shows that at least one natural language prompt can be generated for the user/patient, with 'at least one' being understood to allow for more than one, and which also teaches that the output prompt may be a spoken question in a language of the user, thereby showing the presentation of a voice query. Paragraph [0059] of the same reference shows a series of prompts presented to the user one after the other, demonstrating that one question can be presented to the user following the presentation of a previous question. The presentation of a series of questions indicates obtaining further information beyond an initial query in order to confirm a particular situation.
Accordingly, the Examiner maintains the applied prior art of record, which is further presented in the following section in a manner that appropriately addresses the claim limitations.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 6, 7, 8, 10, 11, 12, 13, 16, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Geman et al. (US 2016/0106627 A1; hereafter Geman) in view of Carlson et al. (US 2016/0004831 A1; hereafter Carlson), further in view of Mihailidis et al. (US 2013/0100268 A1; hereafter Mihailidis), further in view of Hiroe et al. (US 2020/0066254 A1; hereafter Hiroe), and further in view of Hopkins et al. (US 2015/0269827 A1; hereafter Hopkins).
For claim 1, Geman discloses a system for providing assistance to a user (Geman: Abstract — the embodiments are to provide healthcare schedules, home health care, assisted living (forms of providing assistance to a user)), the system comprising:
at least one server computer comprising at least one processor and at least one memory (Geman: [0072] — a webserver which can store information (memory) and a database server which runs applications (possessing a processor)),
an assistance device in a caregiving environment of the user, the assistance device comprising a processor, a memory, a speaker, and a microphone (Geman: [0088] — a microphone; [0009] — a patient answering a telephone (a telephone has a speaker); [0066] — memory and processor).
The reference of Geman fails to teach the further limitations of this claim, for which Carlson is now introduced to teach as:
wherein the system is configured to:
implement an assistance action associated with a voice routine for providing assistance to the user, the assistance action conforming to a voice command protocol for presenting input requests to the user and processing voice responses of the user (Carlson: [0034] — an action for requesting assistance on behalf of a patient; [0083] — obtaining a patient’s verbal response to queries as health survey questions (this being the presentation of input requests to the user and processing the user’s responses)),
in response to receiving the first action instruction, initiate the first assistance action by generating and communicating a first invocation command from an assistance device, implement a first assistance action by instructing the assistance device to present audio for a first input request to the user, processing any of a first voice input of the user from an audio capture, and determining that the audio capture or any of the first voice input is at least one of: a negative response event, a positive response event, and a non-response event (Carlson: [0140] — prompts being delivered to a user; [0143] — the system detects a response to the prompt within a predetermined time period; [0144] — determining the response type (which could be indicative of positive or negative response events); [0083] — performing speech recognition (showing that user can provide speech responses); [0085] — various user responses (as an indication of positive or negative response events)),
selectively notifying a caregiver by causing a notification to be sent to a device of the caregiver [[based on a safety profile]] and a determination that the first voice input corresponds to a negative response event, a positive response event, or a non-response event (Carlson: [0058] — notifying a target recipient, with [0039] showing that a target recipient could be a caregiver; [0145] — if the response is addressable, a third party (such as a caregiver) may be prompted to take action regarding the patient; [0034] — contacting an external entity on behalf of the patient when the patient requires assistance; [0058] — the target recipient includes a device associated with a healthcare provider or caretaker); and
wherein the assistance device is configured to:
receive information about the first input request and present audio of the first input request to the user based on executing the instructions (Carlson: [0140] — synthesised speech as prompts for the user; [0045] — prompts can take the form of questions (an input request) which get presented to the user), and
receive the first voice input of the user [[and transmit the first voice input to the at least one server computer]] (Carlson: [0045] — the user responds to the prompts through the use of a microphone; [0083] — performing speech recognition); and
presenting the audio of at least one query for additional information based on communicating instructions for presenting the audio of at least one query (Carlson: [0138] — the system is able to issue one or more prompts, based on the triggered target event, to be delivered to the target recipients (teaching the further presentation of a query for additional information based on the communication of the instructions upon detection of the target event)); and
wherein the system is further configured to:
initiate a confirmation request as a continuation of the first assistance action implemented by presenting the audio for the first input request to the user, the confirmation request including at least one voice query for additional information associated with execution of the voice command protocol and capture of confirmation data that confirms any determination for the negative response event, the positive response event, and the non-response event (Carlson: [0138] — the system is able to issue one or more prompts; [0140] — synthesised speech as prompts for the user (to show the presentation of audio as a request to the user); [0045] — presenting at least one prompt (indicating the possibility of a series of prompts, so more than one, with one being a follow-up to the other) to the user which may be a spoken question in the language which the user understands (indicating presenting audio for a request to the user); [0059], [0124] — presenting a series of prompts as survey questions to a user (indicating the presence of one question after another, such that one question follows the other so as to confirm a particular situation regarding the user); [0118] — the presentation of a series of questions; [0145] — an initial response by the user may lead to further prompting the user to take an action (as a request for confirmation); FIG. 10 Step 1016 — detecting a user response (indicative of a determination of a negative, positive, or non-response event)).
The reference of Geman provides a system able to provide assistance to a user at a server device and an assistance device able to store information about a schedule of assistance actions. This reference differs from the claimed invention in that the claimed invention further provides the presentation of audio for an input request to the user in order to derive a positive, negative, or non-response event from the user and effectively notify the caregiver about it. This is not new to the art, as the reference of Carlson is seen to provide such teaching as shown above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Carlson, which requests an audible response from a user to notify a caregiver, with the assistance device of Geman, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of automatically monitoring the health status of a user through speech, which is convenient and easily accessible compared with having the user physically input medical readings to transmit to a caregiver. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
The combination of Geman in view of Carlson fails to teach the further limitation of this claim, for which Mihailidis is now introduced to teach as:
wherein the system is configured to:
transmit, by the at least one server computer, to the assistance device, a first action instruction for implementing a first assistance action (Mihailidis: [0109] — a central server may tell the active emergency detection and response (EDR) unit to initiate a dialogue with the user (teaching of the server transmitting a first action instruction that initiates a dialogue, to the assistance device));
in response to receiving the first action instruction, initiate the first assistance action by generating and communicating a first invocation command from an assistance device, implement a first assistance action by instructing the assistance device to present audio for a first input request to the user, processing any of a first voice input of the user from an audio capture, [[and determining that the audio capture or any of the first voice input is at least one of: a negative response event, a positive response event, and a non-response event]] (Mihailidis: [0109] — initiating a dialogue with the user that is adaptable and appropriate for the particular situation to determine if assistance is required, such as asking the user if everything is ok, making use of TTS and ASR for recognising the user’s speech (the server passes the information to the assistance device, the EDR unit, telling the assistance device to present an adaptable message in audio form); [0058] — ‘the EDR unit 14 can communicate with the subject via the microphone 34 and loudspeaker 48 and initiate a dialog using speech recognition software’ (indicating that the user can provide a response to the EDR’s request));
wherein the assistance device is configured to:
transmit the first invocation command (Mihailidis: [0108] — when the EDR unit determines that the user does require assistance, this information may be relayed to the central server (the information being relayed from the user device to the server is an indication of the invocation command requesting assistance for the user));
receive the first voice input of the user and transmit the first voice input to the at least one server computer (Mihailidis: [0110] — received user speech can be transmitted to be processed by a central server).
The combination of Geman in view of Carlson provides teaching for a system environment that assists a user, such that a response is obtained from a user which is then used to notify a caregiver. It differs from the claimed invention in that the claimed invention further provides teaching for a server sending an action instruction to a device for instructing the assistance device to present audio for an input request to the user, and capturing audio from the user. This is not new to the art as it is seen to be taught by Mihailidis above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Mihailidis, which teaches a server sending an action instruction for instructing the assistance device to present audio for an input request to the user and capturing audio from the user, with the teaching of the combination of Geman in view of Carlson, which provides a system environment that assists a user such that a response obtained from the user is used to notify a caregiver, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of having a system that proactively checks on the user so as to be able to quickly offer assistance if needed at the point of checking. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
The combination of Geman in view of Carlson further in view of Mihailidis fails to teach the further limitations of this claim, for which the reference of Hiroe is now introduced to teach as:
generate, by the at least one server computer a request and transmit the request to a speech processing server to obtain information for presenting audio of the first assistance action (Hiroe: [0096] — a dialog control server 12 which may present a response text to a speech synthesising server (the speech synthesising server being understood here to be able to present audio to the client or terminal));
communicate from the at least one server computer or the speech processing server instructions for presenting the audio of the first assistance action (Hiroe: [0118] — the robot (which is the terminal client in this case) is able to receive synthesised speech information from a speech synthesising server; [0071] — a speech conversation between the human and the robot (the presence of such a system that performs speech dialogue indicates the inherent presence of instructions to present the received audio to the user, for the user to be able to listen to)).
The combination of Geman in view of Carlson further in view of Mihailidis provides teaching for implementing an assistance action for providing assistance, by presenting input requests to a user. It differs from the claimed invention in that the claimed invention further provides teaching for generating the request and transmitting to a speech processing server for communication. This is however seen to be taught by the Hiroe reference as presented above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Hiroe, which generates a response, presents it to a speech processing server, and communicates it from the server, with the teaching of the combination of Geman in view of Carlson further in view of Mihailidis, which delivers assistance, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of generating speech requests in natural language that are directly applicable to the needed assistance at hand. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
The combination of Geman in view of Carlson further in view of Mihailidis and further in view of Hiroe fails to teach the further limitation of this claim, for which Hopkins is now introduced to teach as:
selectively notifying a caregiver by causing a notification to be sent to a device of the caregiver based on a safety profile and a determination that the first voice input corresponds to a negative response event, a positive response event, or a non-response event (Hopkins: [0009] — a database to store information; [0025] — configuring a caregiver to be able to receive an alert on the caregiver’s device based on rules set by stored profile information of a client (thereby teaching of a safety profile capable of notifying the caregiver of an adverse negative or no-response situation)).
The combination of Geman in view of Carlson, further in view of Mihailidis, and further in view of Hiroe provides teaching for causing a notification to be sent to the device of a caregiver. This combination, however, fails to teach the presence of a safety profile for a user to be used in notifying a caregiver. This is not new to the art, as the Hopkins reference shows above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Hopkins, which maintains a user safety profile, with the teaching of the combination of Geman in view of Carlson, further in view of Mihailidis, and further in view of Hiroe, which causes a notification to be sent to the device of a caregiver, to thereby arrive at the claimed invention. The combination of both prior art elements would have provided the predictable result of providing a caregiver with adequate tools to assist in keeping track of a patient's or client's health condition based on the situations that arise. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
For claim 3, claim 1 is incorporated and the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins discloses the system, wherein the system comprises one or more sensing pods in the caregiving environment of the user, wherein each sensing pod comprises a microphone and a speaker (Carlson: [0071] — system interface components for sensing, including audio sensors; [0078] — speakers and microphones being deployed as components available to the user).
For claim 6, claim 1 is incorporated and the combination of Geman in view of Carlson, further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins discloses the system, wherein the assistance device transmits the first invocation command in response to an occurrence of a first time (Mihailidis: [0108] — when the EDR unit (assistance device) determines that the user does require assistance at a particular time, this information may be relayed to the central server (the information being relayed from the user device to the server is an indication of the invocation command requesting assistance for the user)).
For claim 7, claim 1 is incorporated and the combination of Geman in view of Carlson, further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins discloses the system, wherein the assistance device initiates the first assistance action by transmitting the first invocation command at a first time (Mihailidis: [0108] — when the EDR unit (assistance device) determines that the user does require assistance at a particular time, this information may be relayed to the central server (the information being relayed from the user device to the server is an indication of the invocation command requesting assistance for the user)).
For claim 8, claim 1 is incorporated and the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins discloses the system, wherein the at least one server computer comprises a scheduling server, an emergency server, and an automated assistance server (Geman: [0072] — medication adherence schedule being stored in a database server; [0060] — a database including emergency contact information (functions of an emergency server); [0047] — conducting automated reminders and monitoring compliance (functions of an automated assistance server)).
For claim 10, Geman discloses a computer-implemented method for providing assistance to a user, the method comprising:
storing, by at least one processor, information about a plurality of assistance actions for providing assistance to the user, [[wherein each assistance action conforms to a voice command protocol for presenting input requests to the user and processing voice responses of the user]] (Geman: FIG. 3 Part 320 — a processor; [0072] — a caregiver can enter information regarding a medication adherence schedule (being an assistance action) that gets stored in a database server, which performs tasks such as generating reminders for patients by calling specified phone numbers);
The reference of Geman fails to disclose the further limitations of this claim, for which Carlson is now introduced to teach as:
storing, by at least one processor, information about a plurality of assistance actions for providing assistance to the user, wherein each assistance action conforms to a voice command protocol for presenting input requests to the user and processing voice responses of the user (Carlson: [0034] — an action for requesting assistance on behalf of a patient; [0083] — obtaining a patient’s verbal response to queries as health survey questions (this being the presentation of input requests to the user and processing the user’s responses));
in response to receiving the first action instruction, initiate the first assistance action by generating and communicating a first invocation command from an assistance device in a caregiving environment of the user, implementing a first assistance action by instructing the assistance device to present audio based on executing instruction, including audio for a first input request to the user, processing any of a first voice input of the user, and determining that the first voice input corresponds to at least one of: a negative response event, positive response event, and a non-response event (Carlson: [0140] — prompts being delivered to a user; [0143] — the system detects a response to the prompt within a predetermined time period; [0144] — determining the response type (which could be indicative of positive or negative response events); [0083] — performing speech recognition (showing that user can provide speech responses); [0085] — various user responses (as an indication of positive or negative response events));
selectively notifying, by the at least one processor, a caregiver by causing a notification to be sent to a device of the caregiver [[based on a safety profile]] and at least one of the negative response event, the positive response event, or the non-response event (Carlson: [0058] — notifying a target recipient, with [0039] showing that a target recipient could be a caregiver; [0145] — if the response is addressable, a third party (such as a caregiver) may be prompted to take action regarding the patient; [0034] — contacting an external entity on behalf of the patient when the patient requires assistance; [0058] — the target recipient includes a device associated with a healthcare provider or caretaker); and
communicating from the at least one server computer or the speech processing server instructions for presenting audio of a confirmation request as a continuation of the first assistance action implemented by presenting the audio for the first input request to the user, the confirmation request including at least one voice query for additional information associated with execution of the voice command protocol and capture of confirmation data, wherein the confirmation request confirms any determination for the negative response event, the positive response event, and the non-response event (Carlson: [0117] — communicating customised prompts to a medical device of a user (indicating the transmission of the selected survey questions from a server to the user's device); [0118] — the presentation of a series of questions; [0140] — the prompts may be delivered as speech; [0138] — the system is able to issue one or more prompts; [0140] — synthesised speech as prompts for the user (to show the presentation of audio as a request to the user); [0045] — presenting at least one prompt (indicating the possibility of a series of prompts, so more than one, with one being a follow-up to the other) to the user which may be a spoken question in the language which the user understands (indicating presenting audio for a request to the user); [0059], [0124] — presenting a series of prompts as survey questions to a user (indicating the presence of one question after another, such that one question follows the other so as to confirm a particular situation regarding the user); [0145] — an initial response by the user may lead to further prompting the user to take an action (as a request for confirmation); FIG. 10 Step 1016 — detecting a user response (indicative of a determination of a negative, positive, or non-response event)).
The motivation for the combination, as applied to the incorporation of Carlson into the reference of Geman above with respect to claim 1, remains applicable here.
The combination of Geman in view of Carlson fails to disclose the further limitations of this claim, for which Mihailidis is now introduced to teach as:
transmitting, by the at least one server computer, to the assistance device, a first action instruction for implementing a first assistance action (Mihailidis: [0109] — a central server may tell the active emergency detection and response (EDR) unit to initiate a dialogue with the user (teaching of the server transmitting a first action instruction that initiates a dialogue, to the assistance device));
in response to receiving the first action instruction, initiate the first assistance action by generating and communicating a first invocation command from an assistance device in a caregiving environment of the user, implementing a first assistance action by instructing the assistance device to present audio based on executing instruction, including audio for a first input request to the user, processing any of a first voice input of the user, [[and determining that the first voice input corresponds to at least one of: a negative response event, positive response event, and a non-response event]] (Mihailidis: [0109] — initiating a dialogue with the user that is adaptable and appropriate for the particular situation to determine if assistance is required, such as asking the user if everything is ok, making use of TTS and ASR for recognising the user's speech (the server passes the information to the assistance device, the EDR unit, telling the assistance device to present an adaptable message in audio form); [0058] — 'the EDR unit 14 can communicate with the subject via the microphone 34 and loudspeaker 48 and initiate a dialog using speech recognition software' (indicating that the user can provide a response to the EDR's request)).
The motivation for the combination, as applied to the incorporation of Mihailidis into the combination of Geman in view of Carlson above with respect to claim 1, remains applicable here.
The combination of Geman in view of Carlson further in view of Mihailidis fails to disclose the further limitations of this claim, for which Hiroe is now introduced to teach as:
generating, by the at least one processor, a request and transmit the request to a speech processing server to obtain information for presenting audio of the first assistance action (Hiroe: [0096] — a dialog control server 12 which may present a response text to a speech synthesising server (the speech synthesising server being understood here to be able to present audio to the client or terminal));
communicating from the at least one server computer or the speech processing server the instructions for presenting the audio of the first assistance action (Hiroe: [0118] — the robot (which is the terminal client in this case) is able to receive synthesised speech information from a speech synthesising server; [0071] — a speech conversation between the human and the robot (the presence of such a system that performs speech dialogue indicates the inherent presence of instructions to present the received audio to the user, for the user to be able to listen to)).
The motivation for the combination, as applied to the incorporation of Hiroe into the combination of Geman in view of Carlson further in view of Mihailidis above with respect to claim 1, remains applicable here.
The combination of Geman in view of Carlson, further in view of Mihailidis, and further in view of Hiroe provides teaching for causing a notification to be sent to the device of a caregiver. This combination, however, fails to teach the presence of a safety profile for the user.
This teaching is not new to the art, as the reference of Hopkins is introduced to teach the storing of a safety profile related to a user (Hopkins: [0009] — a database to store information; [0025] — configuring a caregiver to be able to receive an alert on the caregiver's device based on rules set by stored profile information of a client (thereby teaching a safety profile capable of notifying the caregiver of an adverse negative or non-response situation)).
The motivation for the combination, as applied to the incorporation of Hopkins into the combination of Geman in view of Carlson, further in view of Mihailidis, and further in view of Hiroe above with respect to claim 1, remains applicable here.
For claim 11, claim 10 is incorporated and the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins discloses the method, wherein the safety profile of the user is updated by the caregiver (Hopkins: [0065], [0009] — a database which stores information related to the functionality of an alert system, also containing profile information).
For claim 12, claim 10 is incorporated and the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins discloses the method, wherein at least one of the invocation commands comprises speech (Mihailidis: [0108] — when the EDR unit (assistance device) determines that the user does require assistance at a particular time, this information may be relayed to the central server (the information being relayed from the user device to the server is an indication of the invocation command requesting assistance for the user); [0116] — the system may be designed so the occupant may activate the system using the keyword ‘Help!’, which is speech).
For claim 13, claim 10 is incorporated and the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins discloses the method, wherein determining that a non-response event has occurred comprises setting a timer after the third input request is presented to the user (Carlson: [0143] — attempting to detect a response within a predetermined time period (so as to detect that a non-response event has occurred)).
For claim 16, claim 10 is incorporated and the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins discloses the method, wherein the safety profile of the user indicates that a notification is to be sent to the caregiver after at least one of the following: a specified number of negative response events; or non-response events within a time period (Geman: [0082] — contacting a caregiver associated with a patient when the patient does not respond due to a lack of acknowledgement within a period of time).
As for claim 17, computer program product claim 17 and method claim 10 are related as a computer program product storing the executable instructions required for performing the claimed method steps on a computer. Geman in [0010] provides computer-readable media that read upon the limitations of this claim. Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to method claim 10.
For claim 18, claim 17 is incorporated and the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe and further in view of Hopkins discloses the one or more non-transitory computer-readable media, wherein the assistance device transmits the first invocation command in response to an occurrence of a first time (Mihailidis: [0108] — when the EDR unit (assistance device) determines that the user does require assistance at a particular time, this information may be relayed to the central server (the information being relayed from the user device to the server is an indication of the invocation command requesting assistance for the user)).
For claim 20, claim 17 is incorporated and the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe and further in view of Hopkins discloses the one or more non-transitory computer-readable media, wherein the invocation command comprises speech (Mihailidis: [0108] — when the EDR unit (assistance device) determines that the user does require assistance at a particular time, this information may be relayed to the central server (the information being relayed from the user device to the server is an indication of the invocation command requesting assistance for the user); [0116] — the system may be designed so the occupant may activate the system using the keyword ‘Help!’, which is speech).
Claim 2 is rejected under 35 U.S.C. 103 as being obvious over Geman (US 2016/0106627 A1) in view of Carlson (US 2016/0004831 A1) further in view of Mihailidis (US 2013/0100268 A1), further in view of Hiroe (US 2020/0066254 A1), and further in view of Hopkins (US 2015/0269827 A1) as applied to claim 1, and further in view of Zerhusen et al. (US 2003/0052787 A1: hereafter — Zerhusen).
For claim 2, claim 1 is incorporated, but the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins fails to disclose the limitation of this claim, for which Zerhusen is now introduced to teach the system, wherein the first assistance action assists the user in ordering dinner (Zerhusen: [0127], [0191], [0192] — a meal service option giving a patient the opportunity to order dinner).
The combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins provides for a system that is able to assist a user. It differs from the claimed invention in that the claimed invention now provides for a system able to provide the assistance action of ordering dinner to the user. This is, however, not new to the art, as the reference of Zerhusen is seen to provide a system capable of this function.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Zerhusen which provides a system capable of ordering dinner for the user, with the assistance device of the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of serving as an extra care service in order to make a patient feel more comfortable. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
Claims 5 and 19 are rejected under 35 U.S.C. 103 as being obvious over Geman (US 2016/0106627 A1) in view of Carlson (US 2016/0004831 A1) further in view of Mihailidis (US 2013/0100268 A1), further in view of Hiroe (US 2020/0066254 A1), and further in view of Hopkins (US 2015/0269827 A1) as applied to claim 1, and further in view of Parundekar et al. (US 2016/0041811 A1: hereafter — Parundekar).
For claim 5, claim 1 is incorporated, but the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins fails to disclose the limitation of this claim, for which Parundekar is now introduced to teach the system, wherein the assistance device stores the first invocation command (Parundekar: [0065] — storing the received speech dialog data that is used to invoke a function of a device).
The combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins provides for an assistance system which receives an invocation command. It differs from the claimed invention in that the claimed invention further provides storing the invocation command. This is however not new to the art as Parundekar is seen to provide teaching for storing such a command.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Parundekar which provides storing the invocation command, with the presence of an assistance system which receives an invocation command as taught by the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of being able to properly log the interactions which a user has with the assistance device. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
As for claim 19, computer program product claim 19 and method claim 5 are related as a computer program product storing the executable instructions required for performing the claimed method steps on a computer. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to method claim 5.
Claim 9 is rejected under 35 U.S.C. 103 as being obvious over Geman (US 2016/0106627 A1) in view of Carlson (US 2016/0004831 A1) further in view of Mihailidis (US 2013/0100268 A1), further in view of Hiroe (US 2020/0066254 A1), and further in view of Hopkins (US 2015/0269827 A1) as applied to claim 1, and further in view of Letzt et al. (U.S. 5,612,869: hereafter — Letzt).
For claim 9, claim 1 is incorporated, but the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins fails to disclose the limitations of this claim, for which Letzt is now introduced to teach the system, wherein the assistance device is configured to, in response to not receiving a voice input of the user in response to the first speech within a time period:
present the audio of the first input request a second time with increased volume (Letzt: Col 23 lines 7-11 — if the user fails to answer an initial prompt within a time period, the device increases the volume and replays the audio prompt); or
present the audio of the first input request from another speaker.
The combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins provides that speech is obtained from a user in response to audio played by the user device. It differs from the claimed invention in that the claimed invention further provides that if the user does not respond to speech communication within a time period, the volume of the system prompt is increased, with the prompt played back to the user. This is, however, not new to the art, as the reference of Letzt provides such teaching.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Letzt, which provides that if the user does not respond to speech communication within a time period, the volume of the system prompt is increased, with the system in which speech is obtained from a user in response to audio played by the user device, as taught by the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of adjusting for a user of the device who might be hard of hearing, might have moved slightly out of hearing range, or whose mind might have wandered away from the user device, the increase in volume being an attempt to allow the user to continue an interaction with the device. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
Claim 14 is rejected under 35 U.S.C. 103 as being obvious over Geman (US 2016/0106627 A1) in view of Carlson (US 2016/0004831 A1) further in view of Mihailidis (US 2013/0100268 A1), further in view of Hiroe (US 2020/0066254 A1), and further in view of Hopkins (US 2015/0269827 A1) as applied to claim 10, and further in view of Jung et al. (US 2006/0288225 A1: hereafter — Jung).
For claim 14, claim 10 is incorporated, but the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins fails to disclose the limitations of this claim, for which Jung is now introduced to teach the method, wherein determining that a negative response event has occurred comprises:
determining that the first voice input of the user is an incorrect answer to a question (Jung: [0037] — a situation whereby one or more answers include at least one incorrect answer, causing the system to disallow authentication (indicating a negative event));
determining that the first voice input of the user includes a word associated with a negative state; or
analyzing a tone or cadence of the second voice input of the user.
The combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins provides the occurrence of a negative response but fails to disclose that the user provides an incorrect answer to a question. This is however not new to the art as the reference of Jung provides teaching for a situation whereby one or more provided answers includes at least one incorrect answer.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Jung which provides for a situation whereby a user provides an incorrect answer that results in a negative response, with the occurrence of a user simply providing a negative response as taught by the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of classifying such responses for the purpose of not providing improper access to confidential information to unauthorised users. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
Claim 15 is rejected under 35 U.S.C. 103 as being obvious over Geman (US 2016/0106627 A1) in view of Carlson (US 2016/0004831 A1) further in view of Mihailidis (US 2013/0100268 A1), further in view of Hiroe (US 2020/0066254 A1), and further in view of Hopkins (US 2015/0269827 A1) as applied to claim 10, and further in view of Walsh et al. (US 2016/0052391 A1: hereafter — Walsh).
For claim 15, claim 10 is incorporated, but the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins fails to disclose the limitations of this claim, for which Walsh is now introduced to teach the method, wherein the first assistance action presents a puzzle, a quiz, or a trivia question to the user (Walsh: [0049] — prompting a user by voice to answer one or more questions or asking to complete a puzzle).
The combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins provides for having an assistance action. It differs from the claimed invention in that the assistance action is one of presenting a puzzle to the user. This is, however, not new to the art, as the reference of Walsh is seen to provide such an assistance action of presenting a user with a puzzle.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the known teaching of Walsh, which provides an assistance action of presenting a user with a puzzle, with the system which provides an assistance action as taught by the combination of Geman in view of Carlson further in view of Mihailidis, further in view of Hiroe, and further in view of Hopkins, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of ensuring that the user maintains activity with the assistance device, for example, as a sleep prevention technique. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure.
See PTO-892.
Any inquiry concerning this communication or earlier communications from the Examiner should be directed to OLUWADAMILOLA M. OGUNBIYI whose telephone number is (571)272-4708. The Examiner can normally be reached Monday - Thursday (8:00 AM - 5:30 PM Eastern Standard Time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s Supervisor, PARAS D. SHAH can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUWADAMILOLA M OGUNBIYI/Examiner, Art Unit 2653
/Paras D Shah/Supervisory Patent Examiner, Art Unit 2653
02/17/2026