DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the communications filed on 07/28/2025, claims 1, 9, 13, and 18 are amended, claim 24 is cancelled, and claim 26 is newly added. Claims 1-23 and 25-26 are pending in this examination.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Arguments
Applicant’s arguments with respect to claims 1, 13, and 18 regarding the newly added limitations have been considered but are moot because the arguments do not apply to any of the references being used in the current rejection.
Applicant's arguments filed 07/28/2025 have been fully considered but they are not persuasive:
Applicant submits on pages 9-10 of the remarks filed on 07/28/2025 regarding claim 18 that independent claim 18 recites, in part, "upon receipt of the command, determine whether programmatic access to the personal data is restricted on the personal electronic device," "in response to a determination that programmatic access to the personal data is restricted: display an authorization request to access the personal data via a display or play an audio request for authorization to access the personal data via a speaker device, and in response to receipt of authorization to access the personal data, process at least a portion of the command on behalf of the communal electronic device, the portion being for the programmatic access to the personal data of the user," and "in response to a determination that the programmatic access to the personal data is not restricted, forgo the display of the authorization request and the play of the audio request and process at least a portion of the command on behalf of the communal electronic device, the portion being for the programmatic access to the personal data of the user." Applicant submits that the proposed combination of the cited portions of the applied references does not disclose or suggest at least these features of independent claim 18.
Examiner respectfully disagrees with Applicant's argument for claim 18 on pages 9-10 of the remarks filed on 07/28/2025.
The combination of Prakash and Cheyer discloses:
Prakash discloses the programmatic access to personal data as: [¶21, a portable electronic device (also referred to as a "portable device") generally refers to a handheld device that is capable of storing and/or accessing information and providing services related to information. Examples of such services can include the storage of personal data such as calendar, contacts, and notes; Internet access; mobile telephony and videoconferencing…].
Furthermore, Cheyer discloses the programmatic access to personal data as: [¶31, In some implementations, when an unauthenticated user (e.g., a user that has not been authenticated yet) attempts to access features of or provide input to device 100, authentication of the user can be performed. For example, when a user attempts to place a telephone call, access an e-mail application, address book or calendar on a password locked device, the user interface of FIG. 3 can be presented to the user to allow the user to enter a password, code, or other user authenticating input. In some implementations, if the user enters a password or code that is known to device 100, the user can be authenticated and the device 100 and/or features of device 100 can be unlocked. If the user enters a password or code that is unknown to the device 100, the user cannot be authenticated and device 100 and/or features of device 100 can remain locked. In some implementations, device 100 can be configured to perform voice authentication of a user, as described with reference to FIG. 4].
Cheyer discloses upon receipt of the command, determine whether programmatic access to the personal data is restricted on the personal electronic device, in response to a determination that programmatic access to the personal data is restricted:
[Abstract, the speech input can include a command for accessing a restricted feature of the device], and [¶2, Many of today's computers and other electronic devices include a feature that allows a user to lock the computer or device from access by others. Some of the devices provide a mechanism for unlocking a locked device through a graphical user interface of the device. For example, the graphical user interface can provide a mechanism that allows a user to input authentication information, such as a password or code], and [¶6, A device can include a more user-friendly authentication process for accessing a locked device], and [¶10, see FIG. 3 and corresponding text , which illustrates an example locked device that can be configured for voice authentication].
Cheyer discloses display an authorization request to access the personal data via a display or play an audio request for authorization to access the personal data via a speaker device, and in response to receipt of authorization to access the personal data, process at least a portion of the command on behalf of the communal electronic device, the portion being for the programmatic access to the personal data of the user," and "in response to a determination that the programmatic access to the personal data is not restricted, forgo the display of the authorization request and the play of the audio request and process at least a portion of the command on behalf of the communal electronic device, the portion being for the programmatic access to the personal data of the user." [Abstract, A device can be configured to receive speech input from a user. The speech input can include a command for accessing a restricted feature of the device. The speech input can be compared to a voiceprint (e.g., text-independent voiceprint) of the user's voice to authenticate the user to the device. Responsive to successful authentication of the user to the device, the user is allowed access to the restricted feature without the user having to perform additional authentication steps or speaking the command again. If the user is not successfully authenticated to the device, additional authentication steps can be request by the device (e.g., request a password).], and [¶10, see FIG. 3, and corresponding text, which illustrates an example locked device that can be configured for voice authentication], and [¶20, In some implementations, generating a voiceprint can be performed only when device 100 is in an unlocked state. 
For example, generating a voiceprint can be performed only when the user providing the speech input has been authenticated to device 100 as the owner or an authorized user of device 100 to prevent generating a voiceprint based on an unauthorized user's or intruder's voice], and [¶35, At step 404, the speech input is used to perform user authentication. In some implementations, the speech input can be used to authenticate a user to device 100 using speaker recognition analysis. For example, if device 100 is locked, the voice of the speech input can be analyzed using speaker recognition analysis to determine if the user issuing the speech input is an authorized user of device 100. For example, the voice characteristics of the voice in the speech input can be compared to voice characteristics of a voiceprint of an authorized user stored on device 100 or by a network service. If the voice can be matched to the voiceprint, the user can be authenticated as an authorized user of device 100. If the voice cannot be matched to the voiceprint, the user will not be authenticated as an authorized user of device 100. If a user cannot be authenticated to device 100 based on the speech input, an error message can be presented (e.g., audibly and/or visually, vibration) to the user. For example, if the user cannot be authenticated based on the speech input, device 100 can notify the user of the authentication error with sound (e.g., alarm or synthesized voice message) presented through speaker 110 or loudspeaker 112 or a vibration provided by a vibrating source. Device 100 can present a visual error by presenting on touch interface 104 a prompt to the user to provide additional authentication information (e.g., password, code, touch pattern, etc.)], and [¶37, At step 408, the command can be executed when the voice is authenticated. 
In some implementations, if the user's voice in the speech input can be matched to a voiceprint of an authorized user, the user's voice can be authenticated, and the device can execute the determined command. In some implementations, device 100 can execute the determined command while device 100 is locked. For example, device 100 can remain locked while device 100 executes the command such that additional voice (or non-voice) input received by device 100 will require authentication of the user providing such input. In some implementations, locked device 100 can be unlocked in response to authenticating a user to locked device 100 using voice authentication processes described above. For example, locked device 100 can be unlocked when a user's voice is authenticated as belonging to an authorized user of device 100 such that subsequent input or commands do not require additional authentication], and [ ¶¶14-15, 22, 30].
Examiner respectfully disagrees with Applicant's argument for claims 1 and 13 on pages 10-11 of the remarks filed on 07/28/2025.
The same mapping as set forth above for claim 18 applies to claims 1 and 13.
Claim Rejections - 35 USC § 103
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-12, 18-23, and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2013/0332172 to Prakash et al. (hereinafter referred to as “Prakash”) in view of US Patent Application Publication No. 2018/0309866 to Devaraj (hereinafter referred to as “Devaraj”), and further in view of US Patent Application Publication No. 2012/0245941 to Cheyer (hereinafter referred to as “Cheyer”).
Regarding claim 1, Prakash discloses A data processing system on a communal electronic device, the data processing system comprising: a memory device to store instructions; one or more processors to execute the instructions stored on the memory device, the instructions to cause the one or more processors to provide a virtual assistant associated with the communal electronic device to [see FIGS. 1 and 2 and corresponding text for more detail, accessory (204), ¶40, Controller 250 can include, e.g., one or more single-core or multi-core microprocessors and/or microcontrollers executing program code to perform various functions associated with accessory 204], and [¶5, Embodiments of the present invention provide methods and apparatus for an accessory to access an automated assistant feature of a portable electronic device. An exemplary embodiment provides an accessory that receives a voice request. The accessory can transmit data associated with the voice request to a portable electronic device. The accessory can receive a report responsive to the voice request from the portable electronic device. The report may be generated by an automated assistant of the portable electronic device. The accessory can present the report to a user]; and
receive a command at the communal electronic device [¶17, A request may be received at an accessory via a user input device of the accessory user interface. For example, a user may issue a voice request which can be detected by a microphone of the accessory], and [¶40, when a user provides a request to user interface 254 of accessory 204, controller 250 can determine that a request was received and responsively invoke functionality of accessory 204]; and
determine, by the one or more processors and based at least in part on natural language processing [¶22, The automated assistant can interpret requests expressed in natural language and determine an action to take or task to perform based on its interpretation, and perform the task or action]; and
whether the command is to access data that is specific to a particular user of a plurality of users of the communal electronic device and is stored on a personal electronic device of the particular user [Abstract, an accessory is configured to receive a request. The accessory transmits information associated with the request to a portable device. An automated assistant application executed by the portable device can interpret the request and provide a report. The portable device can transmit the report to the accessory. The report may include one or more results determined by the automated assistant], and [¶40, when a user provides a request to user interface 254 of accessory 204, controller 250 can determine that a request was received and responsively invoke functionality of accessory 204, in some instances, the invoked functionality can include sending information associated with the request to and/or receiving results associated with the request from portable device 202], and [¶21, a portable electronic device (also referred to as a "portable device") generally refers to a handheld device that is capable of storing and/or accessing information and providing services related to information. Examples of such services can include the storage of personal data such as calendar, contacts, and notes; Internet access; mobile telephony and videoconferencing…], and [¶33], and [¶¶82-92, At block 504, accessory 204 can determine whether a request has been received at user interface 254 of the accessory… If a request is detected, accessory 204 can transmit the request (e.g., request 412 described with reference to FIG. 4) to portable device 202, as indicated at block 506. At block 508, an automated assistant (e.g., automated assistant 300 described with reference to FIG. 3) executed by portable device 202 can provide a report based on the request. 
For example, the automated assistant can interpret the request, perform a task associated with the request, and provide a report including one or more results obtained by performing the task. At block 510, report 418 can be transmitted from portable device 202 to accessory 204…In FIG. 6B, accessory 204 displays result 602-606. This may indicate that accessory 204 has received a request input and transmitted the request to portable device 202 (e.g., via a SendRequest message). In the example, automated assistant 300 has interpreted the request and determined that the task involves locating contacts named Peter. Automated assistant 300 has performed the task by searching the contacts stored on portable electronic device 202 matching the parameter "contacts named Peter" and has located three results in the stored contacts that satisfy the task criteria. Portable device 202 has transmitted a response (e.g., a SendReport message) to accessory 204 including results 602-606… In FIG. 6C, accessory 204 displays the status "Dialing Peter Parker." This may indicate that accessory 204 has received a user selection of result 604 and transmitted the user selection (e.g., via a SelectResult message) to portable device 202. Automated assistant 300 has performed the function indicated by the intent "place a call" associated with result 604. Portable device 202 has sent a message (e.g., SendAutoAsstState) to accessory 204 indicating information associated with the function performed by automated assistant 300], [¶7]; and
in response to a determination that the command is to access data that is specific to the particular user and is stored on the personal electronic device of the particular user, identify the personal electronic device of the particular user, and send a request to the personal electronic device of the particular user of the plurality of users (Devaraj: [¶¶37-39, The remote system 126, once the sender profile and the recipient profile are set, may cause the communal device 124 to query the user 130 for the message to be sent… then the sender profile may also be identified as the communal profile. Setting the sender profile and the recipient profile as such may allow for a group communication to occur between the members of the communal profile… The sent message data may be displayed on devices associated with the sender profile, such as the first device 120, and/or on devices associated with the recipient profile, such as the second device 122. Audio corresponding to the message data may additionally, or alternatively, be output by one or more speakers 138 of the communal device 124 and/or the mobile device 120 associated with the sending user 130, and/or by one or more speakers 144 of the communal device 125 and/or the mobile device 122 associated with the recipient user 132…], [see FIG. 1 and corresponding text for more details], and [¶¶14, 30, 41]); and
to process at least a portion of the command on behalf of the communal electronic device [¶17, A request may be received at an accessory via a user input device of the accessory user interface. For example, a user may issue a voice request which can be detected by a microphone of the accessory. The accessory can transmit information associated with the request to a portable electronic device. An application running on the portable electronic device, such as an automated assistant application, can provide a report including one or more results associated with the request. The report may have an associated intent (e.g., navigate to location, place telephone call, schedule event in calendar, send e-mail, send text message, check stock value, etc.). The results can be transmitted from the portable electronic device to the accessory. The accessory can present the results to the user (e.g., the results may be displayed on a display of the accessory)], and [¶¶27, 82-83]; and
and process, by the one or more processors, the command without sending the command to the personal electronic device in response to a determination that the command does not require access to data that is specific to the particular user [¶46, The portable device and/or accessory may have other capabilities not specifically described herein (e.g., mobile phone, global positioning system (GPS), broadband data communication, Internet connectivity, etc.)], and [¶85].
Examiner Note: It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, since the accessory device is a smart device that can receive and execute request commands and has access to both the mobile device and the Internet, it has the capability to process those requests that do not involve any data on any particular mobile device, instead using its high-speed broadband Internet connection to access information/data of the kind generally used for public information and requests.
Examiner Note: Furthermore, Devaraj discloses this limitation as: [¶31, The remote system 126 may be further configured to receive from the communal device 124, audio data corresponding to user speech from a user, such as the user 130. The user 130 may speak, and that user speech may be captured by one or more microphones 134 of the communal device 124. The one or more microphones 134 may generate audio data corresponding to the audio. The remote system 126 may generate text data corresponding to the audio data using, for example, automatic speech recognition techniques, as described more fully with respect to FIG. 9. The remote system 126 may then determine, from the text data, an intent to send a communication such as message data. Determining the intent may be performing using natural language understanding techniques as described with respect to FIG. 9], and [¶173, As an illustrative example, a communications session between two devices is described below to illustrate how the communications session may be established. In one example embodiment, an individual may speak an utterance (e.g., “Alexa, send a message to John: ‘Want to have dinner at my place?’”) to their electronic device. In response to detecting the device's wakeword (e.g., “Alexa”), the electronic device may begin sending audio data representing the utterance to the remote system 126], and [¶28, each of the network interface(s) 114, network interface(s) 116, and network interface(s) 118 may include a wide area network (WAN) component to enable communication over a wide area network. The network 128 may represent an array of wired networks, wireless networks, such as WiFi, or combinations thereof], and [¶14].
Even though Prakash discloses the accessory as: [¶23, An accessory can be any device capable of communicating with a portable electronic device and having a user interface to receive user input and to provide results received from the portable electronic device to a user. Accessories can include an in-vehicle entertainment system or head unit, an in-vehicle navigation device or standalone navigation device, a refreshable braille display, a video display system, and so on].
The accessory in Prakash can be broadly interpreted as a communal electronic device; however, Prakash does not explicitly disclose a communal device. Devaraj discloses a communal electronic device as: [¶14, Users may interact with communal devices and personal mobile devices to send and receive communications, such as messages and calls. When sending a communication with a communal device, such as a voice-assistant device, it may be difficult to determine which sender profile to send the communication from and which recipient profile to send the communication to....], and [¶30]; and
while Prakash discloses this limitation whether the command is to access data that is specific to a particular user of the communal electronic device and is stored on a personal electronic device of the particular user [Abstract, an accessory is configured to receive a request. The accessory transmits information associated with the request to a portable device. An automated assistant application executed by the portable device can interpret the request and provide a report. The portable device can transmit the report to the accessory. The report may include one or more results determined by the automated assistant], and [¶40, when a user provides a request to user interface 254 of accessory 204, controller 250 can determine that a request was received and responsively invoke functionality of accessory 204, in some instances, the invoked functionality can include sending information associated with the request to and/or receiving results associated with the request from portable device 202], and [¶21, a portable electronic device (also referred to as a "portable device") generally refers to a handheld device that is capable of storing and/or accessing information and providing services related to information. Examples of such services can include the storage of personal data such as calendar, contacts, and notes; Internet access; mobile telephony and videoconferencing…], and [¶33], and [¶¶82-92, At block 504, accessory 204 can determine whether a request has been received at user interface 254 of the accessory… If a request is detected, accessory 204 can transmit the request (e.g., request 412 described with reference to FIG. 4) to portable device 202, as indicated at block 506. At block 508, an automated assistant (e.g., automated assistant 300 described with reference to FIG. 3) executed by portable device 202 can provide a report based on the request. 
For example, the automated assistant can interpret the request, perform a task associated with the request, and provide a report including one or more results obtained by performing the task. At block 510, report 418 can be transmitted from portable device 202 to accessory 204…In FIG. 6B, accessory 204 displays result 602-606. This may indicate that accessory 204 has received a request input and transmitted the request to portable device 202 (e.g., via a SendRequest message). In the example, automated assistant 300 has interpreted the request and determined that the task involves locating contacts named Peter. Automated assistant 300 has performed the task by searching the contacts stored on portable electronic device 202 matching the parameter "contacts named Peter" and has located three results in the stored contacts that satisfy the task criteria. Portable device 202 has transmitted a response (e.g., a SendReport message) to accessory 204 including results 602-606… In FIG. 6C, accessory 204 displays the status "Dialing Peter Parker." This may indicate that accessory 204 has received a user selection of result 604 and transmitted the user selection (e.g., via a SelectResult message) to portable device 202. Automated assistant 300 has performed the function indicated by the intent "place a call" associated with result 604. Portable device 202 has sent a message (e.g., SendAutoAsstState) to accessory 204 indicating information associated with the function performed by automated assistant 300], and [¶7].
Prakash does not explicitly disclose, however, Devaraj discloses whether the command is to access data that is specific to a particular user of the plurality of users of the communal electronic device and is stored on a personal electronic device of the particular user [¶30, the remote system 126 is configured to store information indicating that one or more user profiles are associated with a communal device, such as the third device 124. The one or more user profiles may be registered with the communal device 124 and/or may be associated with a user account that is registered with the communal device 124. In examples, the user profiles that are associated with the communal device may correspond to members of a family and/or a group of users that reside in the same environment, such as a home. The remote system 126 may also be configured to store and/or access one or more contact lists that correspond respectively to the one or more user profiles. Each of the multiple user profiles may be associated with their own respective contact list, which may include contact information, including name designations, for one or more contact names that the user communicates with. By way of example, the contact information for a contact name associated with the contact lists may include a name and/or nickname of a person corresponding to the contact name, one or more telephone numbers, and/or one or more devices that the contact name is associated with, including communal devices, for example. A contact name may be identified on only one contact list associated with one user profile of the communal device 124, or the contact name may be identified on multiple contact lists associated with multiple user profiles of the communal device 124. 
When the contact is identified on multiple contact lists, the contact information associated with that contact name may differ], and [¶41, Additionally, or alternatively, the memory 110 on the communal device 124 may, when executed by the processor(s) 104, cause the processor(s) 104 to perform operations similar to those described above with respect to the remote system 126. For example, the communal device 124 may be configured to store information indicating that one or more user profiles are associated with the communal device 124. The one or more user profiles may be registered with the communal device 124 and/or may be associated with a user account that is registered with the communal device 124. In examples, the user profiles that are associated with the communal device 124 may correspond to members of a family and/or a group of users that reside in the same environment, such as a home. The communal device 124 may also be configured to store and/or access one or more contact lists that correspond respectively to the one or more user profiles], and [¶¶14, … Alvin may provide a command, such as an audible command, to the communal device to “send a message to Bob.” The communal device, through automatic speech recognition and natural language understanding techniques, may determine from audio data corresponding to the audible command that a command has been given to send a communication and that the communication should be sent to a user profile associated with Bob …].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Prakash with the teaching of Devaraj in order to provide improvements in technology that will improve, among other things, the user experience when using a voice user interface and/or communal device to send messages [Devaraj, ¶2].
wherein the personal electronic device is configured to display an authorization request to access the data specific to the particular user via a display or play an audio request for authorization to access the data specific to the particular user via a speaker device when programmatic access to the data specific to the particular user is restricted by the personal electronic device:
It is found that Prakash teaches [¶26, accessory 104 can have a user interface including one or more components for providing output to the user, such as display 106 and/or speakers 124. The user interface of accessory 104 can also include one or more components to receive user input. For example, accessory 104 may include a microphone 110 that is capable of receiving vocal input, such as voice requests. In some embodiments, display 106 may be a touchscreen display that allows the user to enter input by selecting touchscreen buttons such as buttons 112-120. It will be recognized that accessory 104 may receive user input through other input devices such as physical buttons, a keypad, or other user input devices], and [¶34, user interface 214 can include input devices such as a touch pad, touch screen, scroll wheel, click wheel, dial, button, switch, keypad, microphone, or the like, as well as output devices such as a video screen, indicator lights, speakers, headphone jacks, or the like, together with supporting electronics (e.g., digital-to-analog or analog-to-digital converters, signal processors, or the like). A user can operate input devices of user interface 214 to invoke the functionality of portable device 202 and can view and/or hear output from portable device 202 via output devices of user interface 214], and [¶53, Every accessory and every portable device that use the accessory protocol can be required to support at least the general message set. This message set can include messages enabling the portable device and the accessory to identify and authenticate themselves to each other and to provide information about their respective capabilities, including which (if any) of the messages in the optional set each supports. 
For example, the general message set can include a message the accessory can send to the portable device to list every message in the optional set that the accessory is capable of sending and every message in the optional set that the accessory is capable of receiving and acting on. The general message set can also include authentication messages that the portable device can use to verify the purported identity and capabilities of the accessory (or vice versa), and the accessory (or portable device) may be blocked from invoking certain (or all) of the optional messages if the authentication is unsuccessful].
And while requesting authorization includes displaying an authorization request via a display or playing an audio request for authorization via a speaker device, Prakash describes the authentication [¶53, and although it also expressly teaches the well-known-in-the-art user interfaces of accessory 104, including one or more components to receive user input, i.e., a microphone 110 that is capable of receiving vocal input, such as voice requests, a display 106 that may be a touchscreen display that allows the user to enter input by selecting touchscreen buttons, or other user input devices such as physical buttons or a keypad, it does not expressly provide the details of “display[ing]” the authorization request via a display or “play[ing]” an audio request for authorization via a speaker device]; and
Furthermore, Prakash discloses the programmatic access to personal data as: [¶21, a portable electronic device (also referred to as a "portable device") generally refers to a handheld device that is capable of storing and/or accessing information and providing services related to information. Examples of such services can include the storage of personal data such as calendar, contacts, and notes; Internet access; mobile telephony and videoconferencing…].
Cheyer also discloses the programmatic access to personal data as: [¶31, In some implementations, when an unauthenticated user (e.g., a user that has not been authenticated yet) attempts to access features of or provide input to device 100, authentication of the user can be performed. For example, when a user attempts to place a telephone call, access an e-mail application, address book or calendar on a password locked device, the user interface of FIG. 3 can be presented to the user to allow the user to enter a password, code, or other user authenticating input. In some implementations, if the user enters a password or code that is known to device 100, the user can be authenticated and the device 100 and/or features of device 100 can be unlocked. If the user enters a password or code that is unknown to the device 100, the user cannot be authenticated and device 100 and/or features of device 100 can remain locked. In some implementations, device 100 can be configured to perform voice authentication of a user, as described with reference to FIG. 4].
While Prakash and Devaraj do not expressly disclose this limitation, Cheyer discloses
wherein the personal electronic device is configured to display an authorization request to access the data specific to the particular user via a display or play an audio request for authorization to access the data specific to the particular user via a speaker device when programmatic access to the data specific to the particular user is restricted by the personal electronic device [Abstract, A device can be configured to receive speech input from a user. The speech input can include a command for accessing a restricted feature of the device. The speech input can be compared to a voiceprint (e.g., text-independent voiceprint) of the user's voice to authenticate the user to the device. Responsive to successful authentication of the user to the device, the user is allowed access to the restricted feature without the user having to perform additional authentication steps or speaking the command again. If the user is not successfully authenticated to the device, additional authentication steps can be requested by the device (e.g., request a password)], and [¶10, see FIG. 3, and corresponding text, which illustrates an example locked device that can be configured for voice authentication], and [¶20, In some implementations, generating a voiceprint can be performed only when device 100 is in an unlocked state. For example, generating a voiceprint can be performed only when the user providing the speech input has been authenticated to device 100 as the owner or an authorized user of device 100 to prevent generating a voiceprint based on an unauthorized user's or intruder's voice], and [¶35, At step 404, the speech input is used to perform user authentication. In some implementations, the speech input can be used to authenticate a user to device 100 using speaker recognition analysis.
For example, if device 100 is locked, the voice of the speech input can be analyzed using speaker recognition analysis to determine if the user issuing the speech input is an authorized user of device 100. For example, the voice characteristics of the voice in the speech input can be compared to voice characteristics of a voiceprint of an authorized user stored on device 100 or by a network service. If the voice can be matched to the voiceprint, the user can be authenticated as an authorized user of device 100. If the voice cannot be matched to the voiceprint, the user will not be authenticated as an authorized user of device 100. If a user cannot be authenticated to device 100 based on the speech input, an error message can be presented (e.g., audibly and/or visually, vibration) to the user. For example, if the user cannot be authenticated based on the speech input, device 100 can notify the user of the authentication error with sound (e.g., alarm or synthesized voice message) presented through speaker 110 or loudspeaker 112 or a vibration provided by a vibrating source. Device 100 can present a visual error by presenting on touch interface 104 a prompt to the user to provide additional authentication information (e.g., password, code, touch pattern, etc.)], and [¶37, At step 408, the command can be executed when the voice is authenticated. In some implementations, if the user's voice in the speech input can be matched to a voiceprint of an authorized user, the user's voice can be authenticated, and the device can execute the determined command. In some implementations, device 100 can execute the determined command while device 100 is locked. For example, device 100 can remain locked while device 100 executes the command such that additional voice (or non-voice) input received by device 100 will require authentication of the user providing such input. 
In some implementations, locked device 100 can be unlocked in response to authenticating a user to locked device 100 using voice authentication processes described above. For example, locked device 100 can be unlocked when a user's voice is authenticated as belonging to an authorized user of device 100 such that subsequent input or commands do not require additional authentication], and [¶¶14-15, 22, 30].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Prakash and Devaraj by incorporating “displaying a user interface to allow a user to enter a passcode to unlock the device, and collecting voice samples (analyzed/modeled) and generating a voiceprint to be used by the device when authenticating a user using speaker recognition analysis” as taught by Cheyer. All the claimed elements were known in the Cheyer application, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art at the time of the invention.
One would have been motivated to do so in order to authenticate a user to a device by comparing the speech input, which includes a command for accessing a restricted feature of the device, with a voiceprint of the user's voice; to provide a more user-friendly authentication process, in which the user's voice can be authenticated at the same time that a voice command is processed; and to authenticate the user by the user touching the display to enter the passcode to unlock the device. [Cheyer, Abstract, ¶¶1, 5-6, 15, 22].
Regarding claim 2, Prakash discloses the data processing system as in claim 1, wherein personal data of the particular user includes a contact list, text message, e-mail, call history, alarm, reminder, communication history, settings, preferences, or location history, and the communal electronic device includes a smart speaker device [¶21, a portable electronic device (also referred to as a "portable device") generally refers to a handheld device that is capable of storing and/or accessing information and providing services related to information. Examples of such services can include the storage of personal data such as calendar, contacts, and notes; Internet access; mobile telephony and videoconferencing…], and [¶33].
Regarding claim 3, Prakash discloses the data processing system as in claim 1, wherein the virtual assistant is to request the personal electronic device to access personal data on behalf of the communal electronic device [¶23, An accessory can be any device capable of communicating with a portable electronic device and having a user interface to receive user input and to provide results received from the portable electronic device to a user], and [¶¶5, 17].
Regarding claim 4, Prakash discloses the data processing system as in claim 1, the virtual assistant to receive output of a request sent to the personal electronic device of the particular user and to complete processing of a command based on the output, wherein the command is a voice command or a text command [¶5, an accessory that receives a voice request. The accessory can transmit data associated with the voice request to a portable electronic device. The accessory can receive a report responsive to the voice request from the portable electronic device. The report may be generated by an automated assistant of the portable electronic device. The accessory can present the report to a user], and [¶37, Network interface 216 can provide voice and/or data communication capability for portable device 202], and [¶72, accessory 204 may convert part or all of a voice request to text and transmit text associated with the voice request to portable device 202. In some embodiments, a request may be received at accessory 404 as text input (e.g., via a touchscreen input device, keyboard input device, etc.) and a SendRequest message may include the text input], and [¶¶66, 73].
Regarding claim 5, Prakash discloses the data processing system as in claim 1, wherein to send a request to the personal electronic device of the particular user includes to redirect a command to the personal electronic device of the particular user, the personal electronic device to process the command on behalf of the communal electronic device [¶23, An accessory can be any device capable of communicating with a portable electronic device and having a user interface to receive user input and to provide results received from the portable electronic device to a user], and [¶¶5, 17, 82-83].
Regarding claim 6, Prakash discloses the data processing system as in claim 5, the virtual assistant to receive an audio response generated by the personal electronic device and play the audio response as a response to the command [¶76, when eyes-free mode is activated, portable device 202 may transmit audio data to accessory 204. For example, if a report includes multiple results, a SelectResult message can include an audio file of synthesized speech corresponding to each result of the list of results. The list of results can be "spoken" aloud to the user as accessory 204 reproduces the audio received from portable device 202].
Regarding claim 7, Prakash discloses the data processing system as in claim 6, wherein the command is a command to send a message to a contact of the particular user and the audio response is a notification of a received reply to the message [¶77, a SendResultResponse message indicating information and/or a status associated with the selected result may be sent from portable device 202 to accessory 204. For example, if a user has selected a contact name from a list of contact results in a report, the user selection is sent to portable device 202 in a SelectResult message. Automated assistant 300 may perform a function associated with the user selection, such as calling the selected contact. Portable device 402 may send a SendAutoAsstState message indicating that automated assistant 300 is placing a call to the selected contact], and [¶76].
Regarding claim 8, Prakash discloses the data processing system as in claim 7, the communal electronic device to establish a trust relationship with the personal electronic device before the communal electronic device is enabled to send the request to the personal electronic device, the virtual assistant to send the request to the personal electronic device of the particular user over a verified data connection with the personal electronic device, the verified data connection established based on the trust relationship with the personal electronic device [¶53, In some embodiments, the messages can be logically grouped into a "general" message set and an "optional" message set. Every accessory and every portable device that use the accessory protocol can be required to support at least the general message set. This message set can include messages enabling the portable device and the accessory to identify and authenticate themselves to each other and to provide information about their respective capabilities, including which (if any) of the messages in the optional set each supports. For example, the general message set can include a message the accessory can send to the portable device to list every message in the optional set that the accessory is capable of sending and every message in the optional set that the accessory is capable of receiving and acting on. The general message set can also include authentication messages that the portable device can use to verify the purported identity and capabilities of the accessory (or vice versa), and the accessory (or portable device) may be blocked from invoking certain (or all) of the optional messages if the authentication is unsuccessful], and [¶54].
Regarding claim 9, Prakash discloses the data processing system as in claim 8, the verified data connection to be verified via data exchanged during establishment of the trust relationship, the verified data connection comprising a persistent low-latency messaging system that enables communication between the communal electronic device and the personal electronic device. [¶25, FIG. 1 illustrates a portable electronic device 102 communicatively coupled to an accessory 104 according to an embodiment of the present invention. Communications between portable electronic device 102 and accessory can occur via a communication interface. For example, the portable device and the accessory can each include RF transceiver components coupled to an antenna to support wireless communications. In an illustrative embodiment, antenna 106 of portable device 102 transmits wireless communications to and receives wireless communications from antenna 108 of accessory 104], and [see FIG. 2], and [¶49, Accessory I/O interface 218 of portable device 202 and device I/O interface 258 of accessory 204 allow portable device 202 to be connected with accessory 204 and subsequently disconnected from accessory 204. As used herein, a portable device and an accessory are "connected" whenever a communication channel is established between their respective interfaces and "disconnected" when the channel is terminated. Such connection can be achieved via direct physical connection, e.g., with mating connectors; indirect physical connection, e.g., via a cable; and/or wireless connection, e.g., via Bluetooth], and [¶53, In some embodiments, the messages can be logically grouped into a "general" message set and an "optional" message set. Every accessory and every portable device that use the accessory protocol can be required to support at least the general message set.
This message set can include messages enabling the portable device and the accessory to identify and authenticate themselves to each other and to provide information about their respective capabilities, including which (if any) of the messages in the optional set each supports. For example, the general message set can include a message the accessory can send to the portable device to list every message in the optional set that the accessory is capable of sending and every message in the optional set that the accessory is capable of receiving and acting on. The general message set can also include authentication messages that the portable device can use to verify the purported identity and capabilities of the accessory (or vice versa), and the accessory (or portable device) may be blocked from invoking certain (or all) of the optional messages if the authentication is unsuccessful], and [¶¶38, 54].
Regarding claim 10, Prakash discloses the data processing sy