Prosecution Insights
Last updated: April 19, 2026
Application No. 18/232,163

PERSONALIZED VOICE RECOGNITION SERVICE PROVIDING METHOD USING ARTIFICIAL INTELLIGENCE AUTOMATIC SPEAKER IDENTIFICATION METHOD, AND SERVICE PROVIDING SERVER USED THEREIN

Non-Final OA (§103, §112)
Filed: Aug 09, 2023
Examiner: DESAI, RACHNA SINGH
Art Unit: 3992
Tech Center: 3900
Assignee: Kaifi LLC
OA Round: 2 (Non-Final)
Grant Probability: 45% (Moderate)
Expected OA Rounds: 2-3
Time to Grant: 4y 3m
Grant Probability with Interview: 72%

Examiner Intelligence

Career Allow Rate: 45% (50 granted / 111 resolved; -15.0% vs TC avg)
Interview Lift: +27.1% among resolved cases with interview
Avg Prosecution: 4y 3m (typical timeline)
Total Applications: 126 across all art units (15 currently pending)

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 45.3% (+5.3% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 19.1% (-20.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 111 resolved cases
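The headline figures above are internally consistent and can be re-derived from the raw counts. A minimal sketch; the function and variable names are illustrative only, and the additive reading of "interview lift" (percentage points on top of the baseline rate) is an assumption about how the dashboard computes it:

```python
# Re-derive the examiner statistics from the raw counts shown above.
# Names are illustrative; this is not any real analytics API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

# Career Allow Rate: 50 granted out of 111 resolved cases
career = allow_rate(50, 111)          # ~45.0%

# "-15.0% vs TC avg" implies a Tech Center average near 60%
tc_avg = career + 15.0                # ~60.0%

# "+27.1% interview lift", read as additive percentage points,
# takes the ~45% baseline to roughly the 72% shown with interview
with_interview = career + 27.1        # ~72.1%

print(f"career allow rate: {career:.1f}%")
print(f"implied TC average: {tc_avg:.1f}%")
print(f"allow rate with interview: {with_interview:.1f}%")
```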

Office Action

§103, §112
Detailed Action

1. The 12/22/2025 Office action is being remailed in order to restart the shortened statutory period (SSP) for response, since the prior Office action was sent to the incorrect correspondence address/email for the reissue applicant.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

3. Reissue application 18/232,163 was filed 08/09/2023 as a reissue of Application 16/477,330, filed 07/11/2019, now US 11,087,768 B2, claiming priority to South Korean Patent Application 10-2017-0004094, filed on 01/11/2017, and PCT/KR2017/003807, filed on 04/07/2017.

4. For reissue applications filed on or after September 16, 2012, all references to 35 U.S.C. 251 and 37 CFR 1.172, 1.175, and 3.73 are to the current provisions.

5. Claims 1-26 are pending. Claims 5-26 are newly added claims.

Reissues

6. Applicant is reminded of the continuing obligation under 37 CFR 1.178(b) to timely apprise the Office of any prior or concurrent proceeding in which Patent No. US 11,087,768 B2 is involved. These proceedings would include any trial at the Patent Trial and Appeal Board, interferences, reissues, reexaminations, supplemental examinations, and litigation. Applicant is further reminded of the continuing obligation under 37 CFR 1.56 to timely apprise the Office of any information which is material to patentability of the claims under consideration in this reissue application. These obligations rest with each individual associated with the filing and prosecution of this application for reissue. See also MPEP §§ 1404, 1442.01 and 1442.04.

Manner of Making Amendments

7. Applicant is notified that any subsequent amendment to the specification and/or claims must comply with 37 CFR 1.173(b)-(c).
The amendment filed 08/09/2023 proposes amendments that do not comply with 37 CFR 1.173(c), which sets forth the requirement for providing an explanation of support for added or changed subject matter. The Applicant refers to paragraph numbers in the ’768 Patent; however, Applicant is required to provide the specific column and line numbers alleged to support the changed and added subject matter in order to align with the format of the patent, as there are no corresponding paragraph numbers in the patent.

35 U.S.C. 251

8. Claims 1-26 are rejected as being based upon a defective reissue declaration under 35 U.S.C. 251, as set forth below. See 37 CFR 1.175. Claims 11-12 and 22-23 are rejected under 35 U.S.C. 251 as being based upon new matter added to the patent for which reissue is sought. The added material which is not supported by the prior patent is as follows: the reference to “an additional speaker” in claim 11 and “determining a second customized content for the speaker and an additional speaker based on a request from the second customized content by additional speaker is received” in claim 12.

Recapture

9. Claims 5-26 are rejected under 35 U.S.C. 251 as being an improper recapture of broadened claimed subject matter surrendered in the application for the patent upon which the present reissue is based. See Greenliant Systems, Inc. et al. v. Xicor LLC, 692 F.3d 1261, 103 USPQ2d 1951 (Fed. Cir. 2012); In re Shahram Mostafazadeh and Joseph O. Smith, 643 F.3d 1353, 98 USPQ2d 1639 (Fed. Cir. 2011); North American Container, Inc. v. Plastipak Packaging, Inc., 415 F.3d 1335, 75 USPQ2d 1545 (Fed. Cir. 2005); Pannu v. Storz Instruments Inc., 258 F.3d 1366, 59 USPQ2d 1597 (Fed. Cir. 2001); Hester Industries, Inc. v. Stein, Inc., 142 F.3d 1472, 46 USPQ2d 1641 (Fed. Cir. 1998); In re Clement, 131 F.3d 1464, 45 USPQ2d 1161 (Fed. Cir. 1997); Ball Corp. v. United States, 729 F.2d 1429, 1436, 221 USPQ 289, 295 (Fed. Cir. 1984).
A broadening aspect is present in the reissue which was not present in the application for patent. The record of the application for the patent shows that the broadening aspect (in the reissue) relates to claimed subject matter that Applicant previously surrendered during the prosecution of the application. Accordingly, the narrow scope of the claims in the patent was not an error within the meaning of 35 U.S.C. 251, and the broader scope of claim subject matter surrendered in the application for the patent cannot be recaptured by the filing of the present reissue application.

It is noted that the following is the three-step test for determining recapture in reissue applications (see MPEP 1412.02(I)): “(1) first, we determine whether, and in what respect, the reissue claims are broader in scope than the original patent claims; (2) next, we determine whether the broader aspects of the reissue claims relate to subject matter surrendered in the original prosecution; and (3) finally, we determine whether the reissue claims were materially narrowed in other respects, so that the claims may not have been enlarged, and hence avoid the recapture rule.”

(Step 1: MPEP 1412.02(A)) In the instant case, and by way of the amendment, Applicant seeks to broaden independent claims 5 and 16 in this reissue at least by deleting/omitting the patent claim language requiring:

In claim 5: generating, by the service providing server, a customized service proposal message based on the customized content and a result of the voice analysis; transmitting, by the service providing server, the generated customized service proposal message to the user terminal to output the customized service proposal message to the speaker; receiving, by the service providing server, a customized service approval message from the user terminal; and transmitting, by the service providing server, the generated control command to an external electronic device to provide the customized service.
In claim 16: generate a customized service proposal message based on the customized content and a result of the voice analysis; transmit the generated customized service proposal message to the user terminal to output the customized service proposal message to the speaker; receive a customized service approval message from the user terminal; and transmit the control command to an external electronic device to provide the customized service.

(Step 2: MPEP 1412.02(B)) The record of the prior 16/477,330 application prosecution indicates that, in a response filed on 04/01/2021, Applicant amended original claims 1 and 2 to recite the features above, stating these features were not taught by the cited prior art references (see pages 7 and 11). Thus, the limitations omitted in the reissue claim(s) were added in the original application claims for the purpose of making the application claims allowable over a rejection made in the application. The applicant made an argument on the record that the limitations were added to obviate the rejection. The amendments adding these limitations were filed 04/01/2021, subsequently leading to an allowance. See the NOA mailed on 05/25/2021, also citing these limitations as obviating the rejections. Applicant is bound by applicant’s revision of the application claims. See MPEP 1412.02. Therefore, in the instant case, the claimed limitations below are surrendered subject matter, and the broadening of the reissue claims, as noted above, is in the area of the surrendered subject matter.
In claim 5: generating, by the service providing server, a customized service proposal message based on the customized content and a result of the voice analysis; transmitting, by the service providing server, the generated customized service proposal message to the user terminal to output the customized service proposal message to the speaker; receiving, by the service providing server, a customized service approval message from the user terminal; and transmitting, by the service providing server, the generated control command to an external electronic device to provide the customized service.

In claim 16: generate a customized service proposal message based on the customized content and a result of the voice analysis; transmit the generated customized service proposal message to the user terminal to output the customized service proposal message to the speaker; receive a customized service approval message from the user terminal; and transmit the control command to an external electronic device to provide the customized service.

(Step 3: MPEP 1412.02(C)) It is noted that reissue claims 5-26 were not materially narrowed in other respects that relate to the surrendered subject matter to avoid recapture.

Claim Interpretation

10. The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“one or more processors configured to: receive a voice recognition request…perform a mapping of voice information…receive a service provision request message…analyze the voice…determine at least a part of the preferred content information…generate a customized service proposal message…transmit the generated customized service proposal message…receive a customized service approval message…generate a control command…” in claim 2.
“one or more processors are configured to analyze a call portion in a message…” in claim 4.

“the service providing server is configured to receive the registration information from an external electronic device” in claim 13.

“one or more processors configured to: receive a voice recognition request…perform a mapping of voice information…receive a service provision request message…analyze the voice…determine at least a part of the preferred content information…generate a control command…” in claim 16.

“one or more processors are further configured to: analyze a call portion…analyze a request portion” as in claim 17.

“one or more processors are further configured to analyze the voice…” in claim 18.

“one or more processors are further configured to analyze the voice…” in claim 19.

“one or more processors are further configured to identify the speaker…” in claim 20.

“one or more processors are further configured to identify the speaker…” in claim 21.

“one or more processors are further configured to determine the at least part of the preferred content information…” in claim 23.

“one or more processors are further configured to receive the registration information…” in claim 24.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

11. The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

12. Claims 11-12 and 22-23 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. It is unclear where the subject matter of an “additional speaker” is supported in the ’768 Patent. Clarification is requested.

13.
The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

14. Claims 11-12 and 22-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention. Specifically, with respect to claim 11, it is unclear how “the identified user includes an additional speaker.” With respect to claim 12, there additionally appears to be a typographical error, as the phrase “based on a request for the second customized content by additional speaker is received” is grammatically incorrect. Correction and/or clarification is requested.

Claim Rejections - 35 USC § 103

15. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

16. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

17. Claims 1-26 are rejected under 35 U.S.C. 103 as being unpatentable over Niewczas, US 11,223,699 B1, 01/11/2022 (filed on 12/21/2016), in view of Hansen et al., US 9,407,751 B2, 08/02/2016.

Regarding claim 1, Niewczas discloses a method for providing a personalized voice recognition service. See column 1, lines 7-8 and column 2, lines 27-61, disclosing a method of using voice recognition in a social-networking environment and a method of recording and analyzing a user’s voice to determine a digital voiceprint for the user. See also column 17, lines 27-65.

Niewczas discloses receiving, by a service providing server, a voice registration request of each of a plurality of users from a user terminal. See column 2, lines 27-61, where Niewczas discloses receiving a voiceprint by a client system, stored on the social-networking system. A user at a client system, such as a smartphone, may establish a voiceprint by speaking words and phrases into the microphone of the smartphone, which records the user’s speech as audio input. A voiceprint is generated based on the audio input and stored in the data store as the user’s voiceprint. Multiple users are able to create voiceprints (first and second users). See columns 2-3. See also column 17, lines 27-65.
Niewczas discloses performing, by the service providing server, a mapping of voice information of each of the plurality of users included in the voice registration request with user information including identification information and preferred content information, and storing the mapped information in the service providing server. See column 2, lines 49-51, disclosing that a voiceprint generated based on the audio input is stored in a data store as the user’s voiceprint. The social-networking system includes one or more user-profile stores for storing user profiles such as biographic information, demographic information, social information, educational and work history, hobbies, interests, affinities, etc. See column 8, lines 14-28. See also column 17, lines 27-65.

Niewczas discloses that when the social-networking system first receives a first audio input from an unknown user, the social-networking system receives identity information for the unknown user, generates a new voiceprint based on the first audio input, and stores the new voiceprint in association with the identity information for subsequent access by the social-networking system. See column 25, lines 50-57.

Column 32, lines 16-20 disclose that the social-networking system can provide customized content to the identified users based on their social-networking information. The customized content is personalized to match their interests and includes ads, newsfeeds, push notifications, coupons, etc. See column 34, lines 21-48, disclosing the social-networking system may send customized content to one or more of the first user or the second user based on their social-networking information. The social-networking system may generate the customized content based on one or more interests of the first user or the second user, wherein the one or more interests are received from the online social network.
In particular embodiments, the customized content may comprise content having one or more topics that match the interests of the first user or the second user. In particular embodiments, the customized content may comprise advertisements, news feeds, push notifications, place tips, coupons, suggestions, or a combination thereof. See also figure 6.

See also column 6, lines 42-55, disclosing that the information stored in data stores may be organized according to specific data structures. In particular embodiments, each data store may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases. Particular embodiments may provide interfaces that enable a client system, a social-networking system, or a third-party system to manage, retrieve, modify, add, or delete the information stored in the data store.

Niewczas discloses receiving, by the service providing server, a service provision request message including a voice of a speaker from the user terminal; and analyzing, by the service providing server, the voice included in the service provision request message to identify the speaker of the voice, which is to be corresponded to one of the plurality of users. See column 2, lines 51-57, disclosing that when a user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. Additionally, the social-networking system may use the user's social-networking information when identifying or authenticating the user based on the voiceprint, and when performing actions based on voice commands. See also column 17, lines 52-65.
Niewczas discloses determining, by the service providing server, at least a part of the preferred content information of an identified user as a customized content for the speaker. Column 32, lines 16-20 disclose that the social-networking system can provide customized content to the identified users based on their social-networking information. The customized content is personalized to match their interests and includes ads, newsfeeds, push notifications, coupons, etc. See column 34, lines 21-48, disclosing the social-networking system may send customized content to one or more of the first user or the second user based on their social-networking information. The social-networking system may generate the customized content based on one or more interests of the first user or the second user, wherein the one or more interests are received from the online social network. In particular embodiments, the customized content may comprise content having one or more topics that match the interests of the first user or the second user. In particular embodiments, the customized content may comprise advertisements, news feeds, push notifications, place tips, coupons, suggestions, or a combination thereof. See also figure 6.

Niewczas discloses providing customized content to the users in the form of ads, newsfeeds, and push notifications in response to a voice analysis; however, to the extent Niewczas may not disclose generating a customized service proposal message, Hansen discloses generating, by the service providing server, a customized service proposal message based on the customized content and a result of the voice analysis; and transmitting, by the service providing server, the generated customized service proposal message to the user terminal to output the customized service proposal message to the speaker.
See column 11, lines 25-36, disclosing that the family interaction engine may automatically sense the presence of individual users (e.g., using voice identification or face recognition), and the family interaction engine may automatically show or hide containers based on the preferences of those detected users. See column 20, lines 25-41 and figure 7, disclosing presenting new recommendations based on the detected user. The family interaction engine may use the content recommendation module when determining content for the family channel and when determining content for each individual user's home screen. As indicated above with regard to the recommended content panel, the home screen may include a container for presenting content recommendations based on the user profile for the current user. In particular, the content recommendation module may present recommendations based on a preference model for the current user. The preference models for the users may be stored in the respective user profiles, or in any other suitable location. Hansen discloses presenting the recommendations on the tablet.

Hansen discloses receiving, by the service providing server, a customized service approval message from the user terminal; generating, by the service providing server, a control command needed to provide a customized service for the speaker; and transmitting, by the service providing server, the generated control command to an external electronic device to provide the customized service. See column 31, line 7 through column 32, line 27, disclosing a scenario in which a Mom has used the video player application to cause a selected program to be presented by a separate home theater system in the living room. When Mom first picks up the tablet to turn on the Olympics, the application tailoring module may use the tablet's camera to automatically detect that Mom is the current user.
The application tailoring module may then consult the user profile for the current user to identify any preferences for the current user that are relevant to the application. The video player application can be configured to cooperate or integrate with a separate home theater system in the living room. Alternatively, applications may be configured to cooperate or integrate with other external or remote devices, including without limitation external media devices such as televisions, video game consoles, streaming video players, audio receivers, etc. In another scenario, applications may be configured to present both media and supplemental data on the tablet. In the example scenario, Mom has used the video player application to cause a selected program to be presented by a separate home theater system in the living room.

It would have been obvious to a skilled artisan at the time of the invention to have incorporated Hansen’s features of providing a customized service proposal message and providing the customized service based on a user’s approval within Niewczas’s system of providing customized content, because both Niewczas and Hansen were directed to presenting customized content to users based on voice recognition, and a skilled artisan at the time of the invention would have been motivated to implement Hansen’s features of prompting a user with a proposal message as a way of facilitating direct content selection of a user’s choice to an external device, and the results would have been predictable.

Regarding claim 2, Niewczas discloses a service providing server comprising one or more processors. See column 1, lines 7-8 and column 2, lines 27-61, disclosing a system of using voice recognition in a social-networking system and a system of recording and analyzing a user’s voice to determine a digital voiceprint for the user in order to tailor customized content. See column 3, lines 43-47, column 17, lines 27-65, and column 32, lines 10-28.
See figure 1 and column 48, disclosing a computer system including a processor, memory, storage, an I/O interface, a communication interface, and a bus.

Niewczas discloses receive a voice registration request of each of a plurality of users from a user terminal. See column 2, lines 27-61, where Niewczas discloses receiving a voiceprint by a client system, stored on the social-networking system. A user at a client system, such as a smartphone, may establish a voiceprint by speaking words and phrases into the microphone of the smartphone, which records the user’s speech as audio input. A voiceprint is generated based on the audio input and stored in the data store as the user’s voiceprint. Multiple users are able to create voiceprints (first and second users). See columns 2-3. See also column 17, lines 27-65. This element is interpreted under 35 U.S.C. 112(f) as the service providing server that receives the registration information from an external device, described in the specification at column 4, line 51 through column 5, line 5.

Niewczas discloses perform a mapping of voice information of each of the plurality of users included in the voice registration request with user information including identification information and preferred content information, and storing the mapped information in the service providing server. See column 2, lines 49-51, disclosing that a voiceprint generated based on the audio input is stored in a data store as the user’s voiceprint. The social-networking system includes one or more user-profile stores for storing user profiles such as biographic information, demographic information, social information, educational and work history, hobbies, interests, affinities, etc. See column 8, lines 14-28. See also column 17, lines 27-65.
Niewczas discloses that when the social networking system first receives a first audio input from an unknown user, the social-networking system receives identity information for the unknown user, generates a new voiceprint based on the first audio input, and stores the new voiceprint in association with the identity information for subsequent access by the social networking system. See column 25, lines 50-57. Column 32, lines 16-20 discloses that the social-networking system can provide customized content to the identified users based on their social-networking information. The customized content is personalized to match their interests and includes ads, newsfeeds, push notifications, coupons, etc. See column 34, lines 21-48 disclosing the social-networking system may send customized content to one or more of the first user or the second user based on their social-networking information. The social-networking system may generate the customized content based on one or more interests of the first user or the second user, wherein the one or more interests are received from the online social network. In particular embodiments, the customized content may comprise content having one or more topics that match the interests of the first user or the second user. In particular embodiments, the customized content may comprise advertisements, news feeds, push notifications, place tips, coupons, suggestions, or a combination thereof. See also figure 6. See also column 6, lines 42-55 disclosing the information stored in data stores may be organized according to specific data structures. In particular embodiments, each data store may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases.
Particular embodiments may provide interfaces that enable a client system, a social-networking system, or a third-party system to manage, retrieve, modify, add, or delete the information stored in the data store. This element is interpreted under 35 U.S.C. 112(f) as the service providing server, with the identification and determination unit for identifying a user and determining the customized content described in the specification at column 4, line 66 to column 6. Niewczas discloses receive a service provision request message including a voice of a speaker from the user terminal; analyze the voice included in the service provision request message to identify the speaker of the voice, which is to be corresponded to one of the plurality of users. See column 2, lines 51-57 disclosing that when a user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. Additionally, the social-networking system may use the user's social-networking information when identifying or authenticating the user based on the voiceprint, and when performing actions based on voice commands. See also column 17, lines 52-65. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the identification unit of the service providing server that analyzes the voice of the speaker and extracts speaker voice data having the same format as the registered voice data described in the specification at column 6, lines 37-42 and Table 1 in column 5. Niewczas discloses determine at least a part of the preferred content information of an identified user as a customized content for the speaker.
See column 32, lines 16-20, disclosing that the social-networking system can provide customized content to the identified users based on their social-networking information. The customized content is personalized to match their interests and includes ads, newsfeeds, push notifications, coupons, etc. See column 34, lines 21-48 disclosing the social-networking system may send customized content to one or more of the first user or the second user based on their social-networking information. The social-networking system may generate the customized content based on one or more interests of the first user or the second user, wherein the one or more interests are received from the online social network. In particular embodiments, the customized content may comprise content having one or more topics that match the interests of the first user or the second user. In particular embodiments, the customized content may comprise advertisements, news feeds, push notifications, place tips, coupons, suggestions, or a combination thereof. See also figure 6. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the determination unit for determining the customized content described in the specification at column 6. Niewczas discloses providing customized content to the users in the form of ads, newsfeeds and push notifications in response to a voice analysis; however, to the extent Niewczas may not disclose generating a customized service proposal message, Hansen discloses generate, by the service providing server, a customized service proposal message based on the customized content and a result of the voice analysis; transmit the generated customized service proposal message to the user terminal to output the customized service proposal message to the speaker.
See column 11, lines 25-36 disclosing the family interaction engine may automatically sense the presence of individual users (e.g., using voice identification or face recognition), and the family interaction engine may automatically show or hide containers, based on the preferences of those detected users. See column 20, lines 25-41 and figure 7 disclosing presenting new recommendations based on the detected user. The family interaction engine may use the content recommendation module when determining content for the family channel and when determining content for each individual user's home screen. As indicated above with regard to the recommended content panel, the home screen may include a container for presenting content recommendations, based on the user profile for the current user. In particular, the content recommendation module may present recommendations based on a preference model for the current user. The preference models for the users may be stored in the respective user profiles, or in any other suitable location. Hansen discloses presenting the recommendations on the tablet. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the determination unit for determining the customized content described in the specification at columns 6-7. Hansen discloses receive a customized service approval message from the user terminal; generate a control command needed to provide a customized service for the speaker; and transmit the generated control command to an external electronic device to provide the customized service. See column 31, line 7 to column 32, line 27 disclosing a scenario in which Mom has used the video player application to cause a selected program to be presented by a separate home theater system in the living room. When Mom first picks up the tablet to turn on the Olympics, the application tailoring module may use the tablet's camera to automatically detect that Mom is the current user.
The application tailoring module may then consult the user profile for the current user, to identify any preferences for the current user that are relevant to the application. The video player application can be configured to cooperate or integrate with a separate home theater system in the living room. Alternatively, applications may be configured to cooperate or integrate with other external or remote devices, including without limitation external media devices, such as televisions, video game consoles, streaming video players, audio receivers, etc. In another scenario, applications may be configured to present both media and supplemental data on the tablet. In the example scenario, Mom has used the video player application to cause a selected program to be presented by a separate home theater system in the living room. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the determination unit for determining the customized content described in the specification at column 6. It would have been obvious to a skilled artisan at the time of the invention to have incorporated Hansen's features of providing a customized service proposal message and providing the customized service based on a user's approval within Niewczas's system of providing customized content, because both Niewczas and Hansen were directed to presenting customized content to users based on voice recognition, a skilled artisan at the time of the invention would have been motivated to implement Hansen's feature of prompting a user with a proposal message as a way of facilitating direct selection of content of the user's choice on an external device, and the results would have been predictable.
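As a rough sketch of the claim 2 flow the examiner maps to the combined references — generate a customized service proposal, receive the user's approval, then transmit a control command to an external device — the following is illustrative only; the message shapes, device name, and function names are invented and appear in neither Niewczas nor Hansen:

```python
# Hypothetical sketch of the proposal/approval/control-command sequence.
# All field names and the "home_theater" device identifier are assumptions.

def propose_service(customized_content, identified_user):
    """Generate a customized service proposal message from the voice-analysis result."""
    return {"to": identified_user,
            "text": f"Play '{customized_content}' on the living-room system?"}

def handle_approval(approved, customized_content, device="home_theater"):
    """On approval, generate the control command to be sent to the external device."""
    if not approved:
        return None
    return {"device": device, "command": "play", "content": customized_content}

proposal = propose_service("Olympics highlights", "mom")
command = handle_approval(True, "Olympics highlights")
```

The point of the sketch is only the ordering the claim recites: proposal message first, control command only after an approval message comes back.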
Regarding claim 3, Niewczas discloses the method of claim 1, wherein the analyzing the voice includes analyzing a call portion in a message presented by the voice and analyzing a request portion in the message presented by the voice, wherein steps of analyzing the call portion and analyzing the request portion are independently performed. See column 2, lines 42-61 disclosing that the social-networking system may use the voiceprint to identify or authenticate a user based on audio input, and then perform actions based on voice commands in the audio input. For example, a user at a client system, such as a smartphone, may establish a voiceprint by speaking several words or phrases into a microphone of the smartphone, which may record the user's speech as audio input. A voiceprint may be generated based on the audio input and stored in the data store as the user's voiceprint. Subsequently, when that user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. Regarding claim 4, Niewczas discloses the service providing server of claim 2, wherein the one or more processors are configured to analyze a call portion in a message presented by the voice and analyze a request portion in the message presented by the voice, and wherein the one or more processors analyze the call portion and the request portion independently. See column 2, lines 42-61 disclosing that the social-networking system may use the voiceprint to identify or authenticate a user based on audio input, and then perform actions based on voice commands in the audio input. 
For example, a user at a client system, such as a smartphone, may establish a voiceprint by speaking several words or phrases into a microphone of the smartphone, which may record the user's speech as audio input. A voiceprint may be generated based on the audio input and stored in the data store as the user's voiceprint. Subsequently, when that user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the speaker identifier unit for performing the call analysis (see column 5, lines 38-55) and the determination unit for analyzing the request portion (see column 5, lines 56-67). Regarding claim 5, Niewczas discloses a method for providing a personalized voice recognition service, the method comprising. See column 1, lines 7-8 and column 2, lines 27-61 disclosing a method of using voice recognition in a social-networking environment and a method of recording and analyzing a user's voice to determine a digital voiceprint for the user. See also column 17, lines 27-65. Niewczas discloses receiving, by a service providing server, a voice registration request of each of a plurality of users from a user terminal. See column 2, lines 27-61 where Niewczas discloses receiving a voiceprint by a client system, stored on the social-networking system. A user at a client system, such as a smartphone, may establish a voiceprint by speaking words and phrases into the microphone of the smartphone, which records the user's speech as audio input. A voiceprint is generated based on the audio input and stored in the data store as a user's voiceprint. Multiple users are able to create voiceprints (first and second users). See columns 2-3.
See also column 17, lines 27-65. Niewczas discloses performing, by the service providing server, a mapping of voice information of each of the plurality of users included in the voice registration request with user information including identification information and preferred content information, and storing the mapped information in the service providing server. See column 2, lines 49-51 disclosing that a voiceprint generated based on the audio input is stored in a data store as a user's voiceprint. The social-networking system includes one or more user-profile stores for storing user profiles such as biographic information, demographic information, social information, educational and work history, hobbies, interests, affinities, etc. See column 8, lines 14-28. See also column 17, lines 27-65. Niewczas discloses that when the social networking system first receives a first audio input from an unknown user, the social-networking system receives identity information for the unknown user, generates a new voiceprint based on the first audio input, and stores the new voiceprint in association with the identity information for subsequent access by the social networking system. See column 25, lines 50-57. Column 32, lines 16-20 discloses that the social-networking system can provide customized content to the identified users based on their social-networking information. The customized content is personalized to match their interests and includes ads, newsfeeds, push notifications, coupons, etc. See column 34, lines 21-48 disclosing the social-networking system may send customized content to one or more of the first user or the second user based on their social-networking information. The social-networking system may generate the customized content based on one or more interests of the first user or the second user, wherein the one or more interests are received from the online social network.
In particular embodiments, the customized content may comprise content having one or more topics that match the interests of the first user or the second user. In particular embodiments, the customized content may comprise advertisements, news feeds, push notifications, place tips, coupons, suggestions, or a combination thereof. See also figure 6. See also column 6, lines 42-55 disclosing the information stored in data stores may be organized according to specific data structures. In particular embodiments, each data store may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases. Particular embodiments may provide interfaces that enable a client system, a social-networking system, or a third-party system to manage, retrieve, modify, add, or delete the information stored in the data store. Niewczas discloses receiving, by the service providing server, a service provision request message including a voice of a speaker from the user terminal; analyzing, by the service providing server, the voice included in the service provision request message to identify the speaker of the voice, which is to be corresponded to one of the plurality of users. See column 2, lines 51-57 disclosing that when a user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. Additionally, the social-networking system may use the user's social-networking information when identifying or authenticating the user based on the voiceprint, and when performing actions based on voice commands. See also column 17, lines 52-65.
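The identify-the-speaker element discussed above — comparing an incoming voice against each stored voiceprint to find the matching registered user — could be sketched as follows. This is purely illustrative: the toy feature vectors, the cosine-similarity measure, and the 0.85 threshold are assumptions, not anything disclosed by Niewczas.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_speaker(voice_features, registered_voiceprints, threshold=0.85):
    """Compare the incoming voice against each stored voiceprint; best match above
    the threshold identifies the speaker, otherwise the speaker is unknown."""
    best_id, best_score = None, threshold
    for user_id, print_features in registered_voiceprints.items():
        score = cosine_similarity(voice_features, print_features)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id

registered = {"mom": [0.9, 0.1, 0.4], "dad": [0.2, 0.8, 0.5]}
speaker = identify_speaker([0.88, 0.12, 0.42], registered)  # best match is "mom"
```

Real voiceprint comparison would operate on speech embeddings rather than hand-written vectors, but the structure — one comparison per registered user, accept the best score above a threshold — is the same.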
Niewczas discloses determining, by the service providing server, at least a part of the preferred content information of an identified user as a customized content for the speaker. See column 32, lines 16-20, disclosing that the social-networking system can provide customized content to the identified users based on their social-networking information. The customized content is personalized to match their interests and includes ads, newsfeeds, push notifications, coupons, etc. See column 34, lines 21-48 disclosing the social-networking system may send customized content to one or more of the first user or the second user based on their social-networking information. The social-networking system may generate the customized content based on one or more interests of the first user or the second user, wherein the one or more interests are received from the online social network. In particular embodiments, the customized content may comprise content having one or more topics that match the interests of the first user or the second user. In particular embodiments, the customized content may comprise advertisements, news feeds, push notifications, place tips, coupons, suggestions, or a combination thereof. See also figure 6. Hansen discloses generating, by the service providing server, a control command needed to provide a customized service for the speaker. See column 31, line 7 to column 32, line 27 disclosing a scenario in which Mom has used the video player application to cause a selected program to be presented by a separate home theater system in the living room. When Mom first picks up the tablet to turn on the Olympics, the application tailoring module may use the tablet's camera to automatically detect that Mom is the current user. The application tailoring module may then consult the user profile for the current user, to identify any preferences for the current user that are relevant to the application.
The video player application can be configured to cooperate or integrate with a separate home theater system in the living room. Alternatively, applications may be configured to cooperate or integrate with other external or remote devices, including without limitation external media devices, such as televisions, video game consoles, streaming video players, audio receivers, etc. In another scenario, applications may be configured to present both media and supplemental data on the tablet. In the example scenario, Mom has used the video player application to cause a selected program to be presented by a separate home theater system in the living room. It would have been obvious to a skilled artisan at the time of the invention to have incorporated Hansen's features of providing a customized service proposal message and providing the customized service based on a user's approval within Niewczas's system of providing customized content, because both Niewczas and Hansen were directed to presenting customized content to users based on voice recognition, a skilled artisan at the time of the invention would have been motivated to implement Hansen's feature of prompting a user with a proposal message as a way of facilitating direct selection of content of the user's choice on an external device, and the results would have been predictable. Regarding claim 6, Niewczas discloses wherein the analyzing the voice includes analyzing a call portion in a message presented by the voice; and analyzing a request portion in the message presented by the voice, wherein said analyzing the call portion and said analyzing the request portion are independently performed. See column 2, lines 42-61 disclosing that the social-networking system may use the voiceprint to identify or authenticate a user based on audio input, and then perform actions based on voice commands in the audio input.
For example, a user at a client system, such as a smartphone, may establish a voiceprint by speaking several words or phrases into a microphone of the smartphone, which may record the user's speech as audio input. A voiceprint may be generated based on the audio input and stored in the data store as the user's voiceprint. Subsequently, when that user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. Regarding claim 7, Niewczas discloses the method of claim 5, wherein analyzing the voice comprises: identifying the speaker of the voice based on a text-dependent analysis. See column 2, lines 15-24 disclosing, “voice profiles can be generated for individual users to store data specific to each individual user for use in recognizing each individual user's speech. The voice profile information may include parameters such as the user's default language or a speaker-dependent model generated based on that user's voice…” Niewczas further discloses storing a voiceprint for a user so that a voiceprint match can be performed for a speaker upon subsequently receiving audio input from the same speaker. See columns 2-3 and column 17, lines 47-65. Regarding claim 8, Niewczas discloses the method of claim 5, wherein analyzing the voice comprises: identifying the speaker of the voice based on a part of the voice included in the service provision request message. See column 2, lines 15-24 disclosing, “voice profiles can be generated for individual users to store data specific to each individual user for use in recognizing each individual user's speech. 
The voice profile information may include parameters such as the user's default language or a speaker-dependent model generated based on that user's voice…” Niewczas further discloses storing a voiceprint for a user so that a voiceprint match can be performed for a speaker upon subsequently receiving audio input from the same speaker, e.g., when the user issues a voice command such as “play music”. See columns 2-3 and column 17, lines 47-65. Regarding claim 9, Niewczas discloses the method of claim 8, wherein identifying the speaker comprises: identifying the speaker of the voice based on the part of the voice by a text-dependent analysis. See column 2, lines 15-24 disclosing, “voice profiles can be generated for individual users to store data specific to each individual user for use in recognizing each individual user's speech. The voice profile information may include parameters such as the user's default language or a speaker-dependent model generated based on that user's voice…” Niewczas further discloses storing a voiceprint for a user so that a voiceprint match can be performed for a speaker upon subsequently receiving audio input from the same speaker. See columns 2-3 and column 17, lines 47-65. Regarding claim 10, Niewczas discloses the method of claim 8, wherein identifying the speaker comprises: identifying the speaker of the voice by comparing a first voice information included in the service provision request message with a second voice information stored in the server. See column 2, lines 15-24 disclosing, “voice profiles can be generated for individual users to store data specific to each individual user for use in recognizing each individual user's speech.
The voice profile information may include parameters such as the user's default language or a speaker-dependent model generated based on that user's voice…” Niewczas further discloses storing a voiceprint for a user so that a voiceprint match can be performed for a speaker upon subsequently receiving audio input from the same speaker, e.g., when the user issues a voice command such as “play music”. See columns 2-3 and column 17, lines 47-65. Regarding claim 11, Niewczas discloses the method of claim 5, wherein the identified user includes an additional speaker in addition to the speaker who spoke the voice from the user terminal. See column 20, lines 4-7 disclosing the social-networking system 160 may receive, from a client system 130 of a first user 180 of the online social network, an audio input from a second user 182, wherein the audio input comprises one or more voice commands. Regarding claim 12, Niewczas discloses the method of claim 5, wherein the determining the at least part of the preferred content information comprises: determining a second customized content for the speaker and an additional speaker when a request for the second customized content by the additional speaker is received. See column 20, lines 4-7 disclosing the social-networking system 160 may receive, from a client system 130 of a first user 180 of the online social network, an audio input from a second user 182, wherein the audio input comprises one or more voice commands. See column 22, lines 16-24 disclosing, in reference to figure 3, that at step 350 the social-networking system 160 may perform the action associated with each voice command using a user identity associated with the second user 182.
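Claim 12's determination of a second customized content suitable for both the speaker and an additional speaker could be sketched as selecting items whose topics overlap the interests shared by the two identified users. The catalog entries and interest lists below are invented for illustration and do not come from either reference:

```python
def shared_customized_content(catalog, interests_by_user):
    """Return catalog items whose topics match the interests common to every
    identified speaker (a simple reading of 'customized for both users')."""
    common = set.intersection(*(set(v) for v in interests_by_user.values()))
    return [item for item in catalog if set(item["topics"]) & common]

catalog = [
    {"title": "Olympics highlights", "topics": ["olympics", "sports"]},
    {"title": "Cooking show", "topics": ["food"]},
]
picks = shared_customized_content(
    catalog,
    {"first_user": ["olympics", "news"], "second_user": ["olympics", "food"]},
)
```

Intersecting interests is one plausible policy; a union or a weighted blend of the two users' interests would fit the claim language equally well.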
Additionally, Hansen discloses a family interaction engine in which an additional user can make a request for second customized content in a shared tablet. See column 5, line 45 to column 6, line 3 disclosing a shared tablet may frequently be handed from one user to another. If the first user happens to already be logged in to the account that the second user intends to use, the second user may easily access the desired content. Otherwise, with a conventional tablet, a cumbersome and inefficient process may be required to log the first user out and to log the second user in to the desired account. By contrast, according to the present disclosure, the family interaction engine may automatically determine which user is holding the tablet, and the family interaction engine may automatically change the user interface and the open user account in response to detecting that the tablet has been handed from one user to another. More details concerning the types of operations that may be performed when the tablet is handed from one user to another are provided below, with regard to FIG. 8. Also, when the tablet is being used to present media content, the family interaction engine may determine which user is currently interacting with the tablet, and the family interaction engine may cause the tablet to display supplemental data that is relevant both to the media content and to a predetermined interest of the current user. The family interaction engine may then determine that a second user is interacting with the tablet. In response, the family interaction engine may cause the tablet to display new supplemental data that is relevant to the media content and to a predetermined interest of the second user. See also column 43.
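Hansen's hand-off behavior described above — detecting a new current user and swapping in supplemental data tied to that user's predetermined interests — might be sketched as follows. The class, method, and profile names are hypothetical stand-ins for whatever Hansen's family interaction engine actually implements:

```python
class FamilyTablet:
    """Sketch: switch the active profile and supplemental data when the
    device detects it has been handed from one user to another."""

    def __init__(self, profiles):
        self.profiles = profiles      # user -> predetermined interests
        self.current_user = None
        self.supplemental = None

    def detect_user(self, user):
        # Stands in for automatic sensing (voice identification, face
        # recognition, etc.); only acts when the user actually changes.
        if user != self.current_user:
            self.current_user = user
            self.supplemental = self.profiles.get(user, [])

tablet = FamilyTablet({"mom": ["figure skating"], "kid": ["mascots"]})
tablet.detect_user("mom")
first = tablet.supplemental
tablet.detect_user("kid")     # tablet handed to another user
second = tablet.supplemental
```

The key behavior for the claim mapping is that no explicit log-out/log-in is required: the detected identity alone drives the switch of supplemental content.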
Regarding claim 13, Niewczas discloses the method of claim 5, wherein the service providing server is configured to receive the registration information from an external electronic device and store the received registration information in a storage of the service providing server. See column 2, lines 27-61 where Niewczas discloses receiving a voiceprint by a client system, stored on the social-networking system. A user at a client system, such as a smartphone, may establish a voiceprint by speaking words and phrases into the microphone of the smartphone, which records the user's speech as audio input. A voiceprint is generated based on the audio input and stored in the data store as a user's voiceprint. Multiple users are able to create voiceprints (first and second users). See columns 2-3. See also column 17, lines 27-65. This element is interpreted under 35 U.S.C. 112(f) as the service providing server that receives the registration information from an external device and the storage unit of the service providing server described in the specification at column 4, line 51 to column 5, line 5. Regarding claim 14, Niewczas discloses the method of claim 5, wherein the service provision request message includes a call portion and a request portion. See column 2, lines 42-61 disclosing that the social-networking system may use the voiceprint to identify or authenticate a user based on audio input, and then perform actions based on voice commands in the audio input. For example, a user at a client system, such as a smartphone, may establish a voiceprint by speaking several words or phrases into a microphone of the smartphone, which may record the user's speech as audio input. A voiceprint may be generated based on the audio input and stored in the data store as the user's voiceprint.
Subsequently, when that user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. Regarding claim 15, Niewczas discloses the method of claim 5, wherein the speaker of the voice is identified by the call portion of a service provision request message. See column 2, lines 42-61 disclosing that the social-networking system may use the voiceprint to identify or authenticate a user based on audio input, and then perform actions based on voice commands in the audio input. For example, a user at a client system, such as a smartphone, may establish a voiceprint by speaking several words or phrases into a microphone of the smartphone, which may record the user's speech as audio input. A voiceprint may be generated based on the audio input and stored in the data store as the user's voiceprint. Subsequently, when that user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. Regarding claim 16, Niewczas discloses a service providing server comprising one or more processors. See column 1, lines 7-8 and column 2, lines 27-61 disclosing a system of using voice recognition in a social-networking system and a system of recording and analyzing a user’s voice to determine a digital voiceprint for the user in order to tailor customized content. See column 3, lines 43-47, column 17, lines 27-65, and column 32, lines 10-28. 
See figure 1 and column 48 disclosing a computer system including a processor, memory, storage, I/O interface, communication interface and a bus. Niewczas discloses receive a voice registration request of each of a plurality of users from a user terminal. See column 2, lines 27-61 where Niewczas discloses receiving a voiceprint by a client system, stored on the social-networking system. A user at a client system, such as a smartphone, may establish a voiceprint by speaking words and phrases into the microphone of the smartphone, which records the user's speech as audio input. A voiceprint is generated based on the audio input and stored in the data store as a user's voiceprint. Multiple users are able to create voiceprints (first and second users). See columns 2-3. See also column 17, lines 27-65. This element is interpreted under 35 U.S.C. 112(f) as the service providing server that receives the registration information from an external device described in the specification at column 4, line 51 to column 5, line 5. Niewczas discloses perform a mapping of voice information of each of the plurality of users included in the voice registration request with user information including identification information and preferred content information, and storing the mapped information in the service providing server. See column 2, lines 49-51 disclosing that a voiceprint generated based on the audio input is stored in a data store as a user's voiceprint. The social-networking system includes one or more user-profile stores for storing user profiles such as biographic information, demographic information, social information, educational and work history, hobbies, interests, affinities, etc. See column 8, lines 14-28. See also column 17, lines 27-65.
Niewczas discloses that when the social networking system first receives a first audio input from an unknown user, the social-networking system receives identity information for the unknown user, generates a new voiceprint based on the first audio input and stores the new voiceprint in association with the identity information for subsequent access by the social networking system. See column 25, lines 50-57.

Column 32, lines 16-20 disclose that the social-networking system can provide customized content to the identified users based on their social-networking information. The customized content is personalized to match their interests and includes ads, newsfeeds, push notifications, coupons, etc. See column 34, lines 21-48 disclosing the social-networking system may send customized content to one or more of the first user or the second user based on their social-networking information. The social-networking system may generate the customized content based on one or more interests of the first user or the second user, wherein the one or more interests are received from the online social network. In particular embodiments, the customized content may comprise content having one or more topics that match the interests of the first user or the second user. In particular embodiments, the customized content may comprise advertisements, news feeds, push notifications, place tips, coupons, suggestions, or a combination thereof. See also figure 6.

See also column 6, lines 42-55 disclosing the information stored in data stores may be organized according to specific data structures. In particular embodiments, each data store may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases.
Particular embodiments may provide interfaces that enable a client system, a social-networking system, or a third-party system to manage, retrieve, modify, add, or delete the information stored in the data store. This element is interpreted under 35 U.S.C. 112(f) as the service providing server, with the identification and determination unit for identifying a user and determining the customized content described in the specification at column 4, line 66-column 6.

Niewczas discloses receive a service provision request message including a voice of a speaker from the user terminal; analyze the voice included in the service provision request message to identify the speaker of the voice, which is to be corresponded to one of the plurality of users. See column 2, lines 51-57 disclosing that when a user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. Additionally, the social-networking system may use the user's social-networking information when identifying or authenticating the user based on the voiceprint, and when performing actions based on voice commands. See also column 17, lines 52-65. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the identification unit of the service providing server that analyzes the voice of the speaker and extracts speaker voice data having the same format as the registered voice data described in the specification at column 6, lines 37-42 and Table 1 in column 5.

Niewczas discloses determine at least a part of the preferred content information of an identified user as a customized content for the speaker.
See column 32, lines 16-20, disclosing that the social-networking system can provide customized content to the identified users based on their social-networking information. The customized content is personalized to match their interests and includes ads, newsfeeds, push notifications, coupons, etc. See column 34, lines 21-48 disclosing the social-networking system may send customized content to one or more of the first user or the second user based on their social-networking information. The social-networking system may generate the customized content based on one or more interests of the first user or the second user, wherein the one or more interests are received from the online social network. In particular embodiments, the customized content may comprise content having one or more topics that match the interests of the first user or the second user. In particular embodiments, the customized content may comprise advertisements, news feeds, push notifications, place tips, coupons, suggestions, or a combination thereof. See also figure 6. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the determination unit for determining the customized content described in the specification at column 6.

Hansen discloses generate a control command needed to provide a customized service for the speaker. See column 31, line 7 to column 32, line 27 disclosing a scenario in which a Mom has used the video player application to cause a selected program to be presented by a separate home theater system in the living room. When Mom first picks up the tablet to turn on the Olympics, the application tailoring module may use the tablet's camera to automatically detect that Mom is the current user. The application tailoring module may then consult the user profile for the current user, to identify any preferences for the current user that are relevant to the application.
The video player application can be configured to cooperate or integrate with a separate home theater system in the living room. Alternatively, applications may be configured to cooperate or integrate with other external or remote devices, including without limitation external media devices, such as televisions, video game consoles, streaming video players, audio receivers, etc. In another scenario, applications may be configured to present both media and supplemental data on the tablet. In the example scenario, Mom has used the video player application to cause a selected program to be presented by a separate home theater system in the living room. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the determination unit for determining the customized content described in the specification at column 6.

It would have been obvious to a skilled artisan at the time of the invention to have incorporated Hansen’s features of providing a customized service proposal message and providing the customized service based on a user’s approval within Niewczas’s system of providing customized content, because both Niewczas and Hansen were directed to presenting customized content to users based on voice recognition. A skilled artisan at the time of the invention would have been motivated to implement Hansen’s features of prompting a user with a proposal method as a way of facilitating direct content selection of a user’s choice to an external device, and the results would have been predictable.

Regarding claim 17, Niewczas discloses the service providing server of claim 16, wherein the one or more processors are further configured to analyze a call portion in a message presented by the voice; and analyze a request portion in the message presented by the voice, wherein said analyzing the call portion and said analyzing the request portion are independently performed.
See column 2, lines 42-61 disclosing that the social-networking system may use the voiceprint to identify or authenticate a user based on audio input, and then perform actions based on voice commands in the audio input. For example, a user at a client system, such as a smartphone, may establish a voiceprint by speaking several words or phrases into a microphone of the smartphone, which may record the user's speech as audio input. A voiceprint may be generated based on the audio input and stored in the data store as the user's voiceprint. Subsequently, when that user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the speaker identifier unit for performing the call analysis (see column 5, lines 38-55) and the determination unit for analyzing the request portion (see column 5, lines 56-67).

Regarding claim 18, Niewczas discloses the service providing server of claim 16, wherein the one or more processors are further configured to analyze the voice by identifying the speaker of the voice based on a text-dependent analysis. See column 2, lines 15-24 disclosing, “voice profiles can be generated for individual users to store data specific to each individual user for use in recognizing each individual user's speech. The voice profile information may include parameters such as the user's default language or a speaker-dependent model generated based on that user's voice…” Niewczas further discloses storing a voiceprint for a user so that a voiceprint match can be performed for a speaker upon subsequently receiving audio input from the same speaker. See columns 2-3. This element is interpreted under 35 U.S.C.
112(f) as the processor, with the speaker identifier unit for determining the speaker. See column 2, lines 8-28.

Regarding claim 19, Niewczas discloses the service providing server of claim 16, wherein the one or more processors are further configured to identify the speaker of the voice based on a part of the voice included in the service provision request message. See column 2, lines 15-24 disclosing, “voice profiles can be generated for individual users to store data specific to each individual user for use in recognizing each individual user's speech. The voice profile information may include parameters such as the user's default language or a speaker-dependent model generated based on that user's voice…” Niewczas further discloses storing a voiceprint for a user so that a voiceprint match can be performed for a speaker upon subsequently receiving audio input from the same speaker, such as when a user issues a voice command such as “play music”. See columns 2-3 and column 17, lines 47-65. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the speaker identifier unit for determining the speaker. See column 2, lines 8-28.

Regarding claim 20, Niewczas discloses the service providing server of claim 19, wherein the one or more processors are further configured to identify the speaker of the voice based on the part of the voice by a text-dependent analysis. See column 2, lines 15-24 disclosing, “voice profiles can be generated for individual users to store data specific to each individual user for use in recognizing each individual user's speech. The voice profile information may include parameters such as the user's default language or a speaker-dependent model generated based on that user's voice…” Niewczas further discloses storing a voiceprint for a user so that a voiceprint match can be performed for a speaker upon subsequently receiving audio input from the same speaker. See columns 2-3.
This element is interpreted under 35 U.S.C. 112(f) as the processor, with the speaker identifier unit for determining the speaker. See column 2, lines 8-28.

Regarding claim 21, Niewczas discloses the service providing server of claim 19, wherein the one or more processors are further configured to identify the speaker of the voice by comparing a first voice information included in the service provision request message with a second voice information stored in the server. See column 2, lines 15-24 disclosing, “voice profiles can be generated for individual users to store data specific to each individual user for use in recognizing each individual user's speech. The voice profile information may include parameters such as the user's default language or a speaker-dependent model generated based on that user's voice…” Niewczas further discloses storing a voiceprint for a user so that a voiceprint match can be performed for a speaker upon subsequently receiving audio input from the same speaker, such as when a user issues a voice command such as “play music”. See columns 2-3 and column 17, lines 47-65. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the speaker identifier unit for determining the speaker. See column 2, lines 8-28.

Regarding claim 22, Niewczas discloses the service providing server of claim 16, wherein the identified user includes an additional speaker in addition to the speaker who spoke the voice from the user terminal. See column 20, lines 4-7 disclosing the social-networking system 160 may receive, from a client system 130 of a first user 180 of the online social network, an audio input from a second user 182, wherein the audio input comprises one or more voice commands.
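For illustration only, the voiceprint-matching step that claims 19-21 recite (comparing first voice information from the request with second voice information stored in the server) might be sketched as follows. Cosine similarity over feature vectors is an assumed stand-in, as neither the claims nor Niewczas mandate a particular comparison metric, and all names and the threshold value are hypothetical:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def identify_speaker(request_print, registered_prints, threshold=0.8):
    """Return the user id whose stored voiceprint best matches the
    request voiceprint, or None if no match clears the threshold."""
    best_id, best_score = None, threshold
    for user_id, stored_print in registered_prints.items():
        score = cosine_similarity(request_print, stored_print)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id
```

The design choice here (best-match-above-threshold) simply illustrates how the "first voice information" of the request could be corresponded to one of a plurality of registered users; returning None models the unknown-speaker case Niewczas handles at column 25.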
Regarding claim 23, Niewczas discloses the service providing server of claim 16, wherein the one or more processors are further configured to determine the at least part of the preferred content information by determining a second customized content for the speaker and an additional speaker when a request for the second customized content by the additional speaker is received. See column 20, lines 4-7 disclosing the social-networking system 160 may receive, from a client system 130 of a first user 180 of the online social network, an audio input from a second user 182, wherein the audio input comprises one or more voice commands. See column 22, lines 16-24 disclosing, in reference to figure 3, that at step 350, the social-networking system 160 may perform the action associated with each voice command using a user identity associated with the second user 182.

Additionally, Hansen discloses a family interaction engine in which an additional user can make a request for second customized content on a shared tablet. See column 5, line 45 to column 6, line 3 disclosing a shared tablet may frequently be handed from one user to another. If the first user happens to already be logged in to the account that the second user intends to use, the second user may easily access the desired content. Otherwise, with a conventional tablet, a cumbersome and inefficient process may be required to log the first user out and to log the second user in to the desired account. By contrast, according to the present disclosure, the family interaction engine may automatically determine which user is holding the tablet, and the family interaction engine may automatically change the user interface and the open user account in response to detecting that the tablet has been handed from one user to another. More details concerning the types of operations that may be performed when the tablet is handed from one user to another are provided below, with regard to FIG. 8.
Also, when the tablet is being used to present media content, the family interaction engine may determine which user is currently interacting with the tablet, and the family interaction engine may cause the tablet to display supplemental data that is relevant both to the media content and to a predetermined interest of the current user. The family interaction engine may then determine that a second user is interacting with the tablet. In response, the family interaction engine may cause the tablet to display new supplemental data that is relevant to the media content and to a predetermined interest of the second user. See also column 43. This element is interpreted under 35 U.S.C. 112(f) as the processor, with the speaker identifier unit for performing the call analysis (see column 5, lines 38-55) and the determination unit for analyzing the request portion (see column 5, lines 56-67).

Regarding claim 24, Niewczas discloses wherein the service providing server is configured to receive the registration information from an external electronic device and store the received registration information in a storage of the service providing server. See column 2, lines 27-61 where Niewczas discloses receiving a voiceprint by a client system, stored on the social-networking system. A user at a client system, such as a smartphone, may establish a voiceprint by speaking words and phrases into the microphone of the smartphone which records the user’s speech as audio input. A voiceprint is generated based on the audio input and stored in the data store as a user’s voiceprint. Multiple users are able to create voiceprints (first and second users). See columns 2-3. See also column 17, lines 27-65. This element is interpreted under 35 U.S.C. 112(f) as the service providing server that receives the registration information from an external device and the storage unit of the service providing server described in the specification at column 4, line 51 to column 5, line 5.
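As a final illustrative sketch of the claim language (claims 17, 25, and 26 recite a "call portion" and a "request portion" of the message, analyzed independently), a wake-phrase split of this kind might look like the following. The wake phrase and the parsing scheme are assumptions for illustration, not anything disclosed by Niewczas or Hansen:

```python
def split_message(utterance, wake_phrase="hey server"):
    # Call portion = leading wake phrase (would feed speaker identification);
    # request portion = the remainder (would feed intent analysis).
    lowered = utterance.lower().strip()
    if lowered.startswith(wake_phrase):
        return wake_phrase, lowered[len(wake_phrase):].strip()
    return None, lowered

def analyze_request(request):
    # Runs independently of who spoke: parse a command verb and arguments.
    tokens = request.split()
    return {"action": tokens[0], "args": tokens[1:]} if tokens else {}
```

Splitting first and analyzing each portion with its own function mirrors the independence recited in claim 17: the call portion can be routed to speaker identification while the request portion is parsed without reference to the speaker's identity.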
Regarding claim 25, Niewczas discloses wherein the service provision request message includes a call portion and a request portion. See column 2, lines 42-61 disclosing that the social-networking system may use the voiceprint to identify or authenticate a user based on audio input, and then perform actions based on voice commands in the audio input. For example, a user at a client system, such as a smartphone, may establish a voiceprint by speaking several words or phrases into a microphone of the smartphone, which may record the user's speech as audio input. A voiceprint may be generated based on the audio input and stored in the data store as the user's voiceprint. Subsequently, when that user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library.

Regarding claim 26, Niewczas discloses wherein the speaker of the voice is identified by the call portion of a service provision request message. See column 2, lines 42-61 disclosing that the social-networking system may use the voiceprint to identify or authenticate a user based on audio input, and then perform actions based on voice commands in the audio input. For example, a user at a client system, such as a smartphone, may establish a voiceprint by speaking several words or phrases into a microphone of the smartphone, which may record the user's speech as audio input. A voiceprint may be generated based on the audio input and stored in the data store as the user's voiceprint. Subsequently, when that user speaks a voice command such as “play music” into a smartphone or other client system, the voice command may be compared with the user's voiceprint to identify the user as the speaker.
The smartphone may then perform an action associated with the command using the user's identity, e.g., playing music from the user's music library.

Conclusion

18. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHNA SINGH DESAI whose telephone number is (571)272-4099. The examiner can normally be reached on M-F 7:30-4PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Kosowski, can be reached on 571-272-3744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RACHNA S DESAI/
Primary Examiner, Art Unit 3992

Conferees:
/William H. Wood/
Primary Examiner, Art Unit 3992
/ALEXANDER J KOSOWSKI/
Supervisory Patent Examiner, Art Unit 3992

Prosecution Timeline

Aug 09, 2023
Application Filed
Aug 09, 2023
Response after Non-Final Action
Dec 09, 2025
Non-Final Rejection — §103, §112
Jan 13, 2026
Non-Final Rejection — §103, §112
Mar 17, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent RE50780
APPARATUS AND METHOD FOR GENERATING A SYNTHESIS AUDIO SIGNAL AND FOR ENCODING AN AUDIO SIGNAL
2y 5m to grant Granted Feb 03, 2026
Patent RE50767
APPARATUS AND METHOD FOR GENERATING A SYNTHESIS AUDIO SIGNAL AND FOR ENCODING AN AUDIO SIGNAL
2y 5m to grant Granted Jan 27, 2026
Patent RE50710
APPARATUS AND METHOD FOR GENERATING A SYNTHESIS AUDIO SIGNAL AND FOR ENCODING AN AUDIO SIGNAL
2y 5m to grant Granted Dec 23, 2025
Patent RE50692
APPARATUS AND METHOD FOR GENERATING A SYNTHESIS AUDIO SIGNAL AND FOR ENCODING AN AUDIO SIGNAL
2y 5m to grant Granted Dec 09, 2025
Patent RE50693
APPARATUS AND METHOD FOR GENERATING A SYNTHESIS AUDIO SIGNAL AND FOR ENCODING AN AUDIO SIGNAL
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
45%
Grant Probability
72%
With Interview (+27.1%)
4y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 111 resolved cases by this examiner. Grant probability derived from career allow rate.
