Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This office action is responsive to application No. 18/384,551 filed on 12/10/2025. Claims 1-20 are pending and have been examined.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6-7, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vandichalrajan (US 2015/0304727) in view of Sanders et al. (US 2023/0396854).
Consider claims 1, 16, and 19, Vandichalrajan teaches a method, system, and a non-transitory computer-readable storage medium having computer-executable instructions stored thereon which, when executed by at least one computer processor, cause operations to be performed, the operations comprising/including: at least one computer processor; and at least one memory coupled to the at least one computer processor, wherein the at least one memory has computer-executable instructions stored thereon which, when executed by the at least one computer processor, cause operations to be performed, the operations (Fig.2, Paragraphs 0030-0031, 0085-0088) including:
electronically determining, by at least one computer processor, whether a particular user of a plurality of users is viewing or listening to a presentation device; automatically detecting, by at least one computer processor, which language of a plurality of languages is preferred by the particular user in response to the electronic determination whether the particular user is viewing or listening to the presentation device; and based on the automatic detection of which language of the plurality of languages is preferred by the particular user, electronically presenting, by at least one computer processor, media to the particular user that is in the detected language; wherein: which language of the plurality of languages is preferred by the particular user is detected automatically from a user profile for the particular user in response to detecting biometric data that identifies the particular user of the plurality of users; and electronically presenting media to the particular user that is in the detected language comprises automatically switching to information of a program being displayed on the presentation device that is in the detected language (Paragraph 0017 teaches the user profile of each of the detected one or more viewers may be determined based on a face recognition, a speech recognition, and/or a proximity detection of one or more other electronic devices associated with the one or more viewers. Paragraph 0024 teaches the electronic device 102 may be operable to detect the one or more viewers, such as the viewer 112 and the viewer 114, associated with the electronic device 102. The electronic device 102 may be operable to detect such one or more viewers, based on user profiles stored in the database server 106.
In an embodiment, the user profiles of each of the detected viewers 112 and 114, may be determined based on a face recognition, a speech recognition, and/or a proximity detection of one or more other electronic devices associated with the viewers 112 and 114. Paragraph 0026 teaches the electronic device 102 may be further operable to dynamically translate the received metadata from a first language to one or more other languages based on the detected one or more viewers. Paragraph 0027 teaches the electronic device 102 may determine the one or more other languages based on a user profile of each of the detected viewers 112 and 114. Paragraph 0037 teaches the sensing device 214 may comprise one or more sensors to confirm recognition, identification and/or verification of a viewer. Paragraph 0043 teaches the metadata may comprise one or more of EPG information, subtitle information, closed caption information, and/or the like. Paragraph 0046 teaches the sensing device 214 may be operable to detect the one or more viewers, such as the viewers 112 and 114, based on biometric data. Paragraph 0047 teaches based on the retrieved one or more biometric parameters of the viewers 112 and 114, the processor 202 may determine the user profiles of the viewers 112 and 114, from a pre-stored set of user profiles. Each user profile of the respective viewer may comprise viewer data, such as preferred language. Paragraph 0048 teaches the language detection unit 204 may be operable to determine one or more other languages associated with the viewers 112 and 114, based on the user profiles determined for the viewers 112 and 114. Paragraph 0052 teaches based on the one or more other languages determined by the language detection unit 204, the language translation unit 206 may be operable to dynamically translate the received metadata from a first language to the one or more other languages. Paragraph 0054 teaches translated metadata may be rendered in the one or more other languages. 
Paragraph 0061 teaches based on the determined user profile, the language detection unit 204 may determine a second language, such as Spanish language, associated with the viewer 112. Paragraph 0062 teaches in response to the determined second language, the language translation unit 206 may translate the metadata from the English language to the Spanish language. The processor 202 may display the metadata in the Spanish language in the first sub-region 302a, the second sub-region 302b and the third sub-region 302c in the EPG information 302 rendered on the display screen 110).
Vandichalrajan does not explicitly teach automatically switching to an audio track of a program being displayed on the presentation device that is in the detected language.
In an analogous art, Sanders teaches automatically switching to an audio track of a program being displayed on the presentation device that is in the detected language (Paragraph 0189 teaches electronic device stores any number of preferred language settings for respective users 622A, 622B, and 622C. Electronic device optionally stores preferred language settings, which include language C as a first preferred language for user 622A. Paragraph 0190 teaches electronic device determines that a certain user is currently active based on facial recognition information of the user. Paragraph 0192, 0224 teaches adjusting an audio track of the content item according to the preferred language settings of user 622A. Because current default audio track language 614 is language A rather than preferred language of user 622A, electronic device adjusts the audio track such that audio track language 624 is a preferred language of user 622A, such as first preferred language, e.g. language C. Electronic device may automatically, e.g., without user input, change the audio track to an audio track in the first available preferred language of the user 622A. In some embodiments, in response to receiving a confirmation input from user 622A, the electronic device changes the audio track to an audio track in the first available preferred language of user 622A).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Vandichalrajan to include automatically switching to an audio track of a program being displayed on the presentation device that is in the detected language, as taught by Sanders, for the advantage of providing efficient ways for consuming content items according to certain language settings, e.g. preferred languages, when a user may wish to consume content items associated with audio tracks in various languages (Sanders – Paragraph 0003), allowing the system to quickly provide audio in a language that the user understands and/or prefers.
Consider claims 2, 17, and 20, Vandichalrajan and Sanders teach wherein the electronically determining whether the particular user of the plurality of users is viewing or listening to a presentation device includes:
receiving the biometric data identifying the particular user via a device coupled to the presentation device or coupled to a receiving device connected to the presentation device (Vandichalrajan - Paragraph 0017, 0024; Paragraph 0030, 0037, 0046).
Consider claim 3, Vandichalrajan and Sanders teach wherein the device comprises a camera for identifying the particular user (Vandichalrajan - Paragraph 0037).
Consider claim 6, Vandichalrajan and Sanders teach further comprising electronically presenting to the particular user, in response to detecting biometric data that identifies the particular user (Vandichalrajan - Paragraph 0026, 0043, 0052, 0054, 0062; Paragraph 0037-0038).
Sanders further teaches further comprising electronically presenting to the particular user, in response to detecting biometric data that identifies the particular user, a confirmation screen that asks the particular user to confirm that the detected language is correct (Paragraph 0190 teaches electronic device determines that a certain user is currently active based on facial recognition information of the user. In some embodiments, in response to receiving a confirmation input from user 622A, the electronic device changes the audio track to an audio track in the first available preferred language of user 622A).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Vandichalrajan and Sanders to include further comprising electronically presenting to the particular user, in response to detecting biometric data that identifies the particular user, a confirmation screen that asks the particular user to confirm that the detected language is correct, as further taught by Sanders, for the advantage of enabling the system to receive added feedback/input from users, in order to verify accuracy, in case things may have changed, allowing the system to proceed with the appropriate settings.
Consider claim 7, Vandichalrajan and Sanders teach wherein the biometric data comprises data identifying the particular user based on one or more of: a voice pattern of the particular user (Vandichalrajan - sensing device 214-Fig.2, Paragraph 0037), a voice signature of the particular user, voice recognition of the particular user, a hand shape of the particular user, a finger vein pattern of the particular user, iris characteristics of the particular user, retina characteristics of the particular user, facial recognition of the particular user, gestures of the particular user, gait of the particular user, movements of the particular user, and how the particular user moves a remote control device that is coupled to the presentation device or coupled to a receiving device connected to the presentation device.
Claims 4, 5, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Vandichalrajan (US 2015/0304727), in view of Sanders et al. (US 2023/0396854), and further in view of Robinson et al. (US 2016/0182950).
Consider claims 4 and 18, Vandichalrajan and Sanders do not explicitly teach wherein the device comprises a remote control device.
In an analogous art, Robinson teaches wherein the device comprises a remote control device (Paragraph 0032, 0034, 0036).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Vandichalrajan and Sanders to include wherein the device comprises a remote control device, as taught by Robinson, for the advantage of providing user(s) with convenient controls at their disposal, allowing them to perform and control a multitude of entertainment functions.
Consider claim 5, Vandichalrajan, Sanders, and Robinson teach wherein: the biometric data identifies the particular user based on a fingerprint of the particular user; biometric input processing logic identifies the particular user by comparing the biometric data with a list of registered fingerprints; and both the fingerprint and which language of the plurality of languages is preferred by the particular user are stored in association with the user profile (Vandichalrajan - Paragraph 0027 teaches electronic device may determine the one or more other languages based on a user profile of each of the detected viewer(s). Paragraph 0037 teaches the sensing device 214 may comprise one or more sensors, such as a camera, to detect at least one of a fingerprint, palm geometry, etc. Paragraph 0037 teaches sensing device 214 may implement various known algorithms for viewer recognition, viewer identification and/or viewer verification. Examples of such algorithms include, but are not limited to, algorithms for fingerprint matching. Paragraph 0047 teaches that, based on the retrieved one or more biometric parameters of the viewer(s), the user profiles may be determined from a pre-stored set of user profiles. Each user profile of the respective viewer may comprise viewer data, such as preferred language. Paragraph 0048 teaches language detection unit 204 may be operable to determine one or more other languages associated with the viewer(s), based on the user profiles determined for the viewer(s). Paragraph 0069 teaches sensing device 214 may receive one or more biometric parameters of the viewer(s). Paragraph 0070 teaches based on the one or more biometric parameters received, processor 202 may determine a user profile of the viewer. The user profile may be determined from a pre-stored set of user profiles. Paragraph 0071 teaches based on the determined user profile, language detection unit may determine a second language for the viewer.
Paragraph 0072 teaches in response to the determined language, the language translation unit may translate the metadata from the default English language to the second language of the viewer).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Vandichalrajan (US 2015/0304727), in view of Sanders et al. (US 2023/0396854), and further in view of Palaniswami (US 11,190,851).
Consider claim 8, Vandichalrajan teaches wherein the biometric data comprises data identifying the particular user based on a voice recognition of the particular user and wherein the biometric data identifying the particular user includes data (sensing device 214-Fig.2, Paragraph 0037), but does not explicitly teach data representing spoken words of the particular user and is additionally used in the automatically detecting which language of the plurality of languages is preferred by the particular user.
In an analogous art, Palaniswami teaches data representing spoken words of the particular user and is additionally used in the automatically detecting which language of the plurality of languages is preferred by the particular user (Col 5: lines 6-46, Col 7: lines 50-65, Fig.3A-C, Col 9: line 10 – Col 10: line 15).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Vandichalrajan and Sanders to include data representing spoken words of the particular user and is additionally used in the automatically detecting which language of the plurality of languages is preferred by the particular user, as taught by Palaniswami, for the advantage of allowing users to immediately enjoy programming in a language they understand by merely speaking their own language, without having to view each channel to search for programs in their own language or search the program guide that is unfamiliar and perhaps in a different language than the user understands (Palaniswami – Col 1: lines 29-35), providing input that the system can process to determine user language.
Claims 9-15 are rejected under 35 U.S.C. 103 as being unpatentable over Vandichalrajan (US 2015/0304727), in view of Sanders et al. (US 2023/0396854), and further in view of Levy et al. (US 2011/0314502).
Consider claim 9, Vandichalrajan and Sanders teach wherein the electronically determining whether the particular user of the plurality of users is viewing or listening to the presentation device (Vandichalrajan - Paragraph 0017, 0024, 0037, 0046), but do not explicitly teach further includes:
a guest registration component of a computerized management system of a multi-unit property determining a particular unit of the multi-unit property that the particular user is staying in, wherein the particular unit is associated with the presentation device.
In an analogous art, Levy teaches a guest registration component of a computerized management system of a multi-unit property determining a particular unit of the multi-unit property that the particular user is staying in, wherein the particular unit is associated with the presentation device (Fig.1; Abstract, Paragraph 0027, 0036 teaches system for providing tailored entertainment experience at different hospitality locations, such as hotels, resorts, etc., having entertainment devices installed in guest rooms. Paragraph 0058-0060, 0071-0072 teaches retrieving user profiles corresponding to current guests at each hotel and automatically adjusting what content is made available on STBs at each hotel according to information stored in user profile(s). Paragraph 0077, 0089-0090 teaches upon user check-in at hospitality location(s), the user-profile server 108 is queried to retrieve information related to the user, in order for in-room STB to be automatically and dynamically configured according to the particular guest).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Vandichalrajan and Sanders to include a guest registration component of a computerized management system of a multi-unit property determining a particular unit of the multi-unit property that the particular user is staying in, wherein the particular unit is associated with the presentation device, as taught by Levy, for the advantage of allowing users to kill time during overnight stays (Levy – Paragraph 0005), providing travelers with familiar content or at least content in an understandable language at hotels (Levy – Paragraph 0007), enabling the system to tailor the entertainment experience to current users of a hospitality location by automatically adjusting content made available on each of the plurality of entertainment devices at the hospitality location according to information stored in the user profiles (Levy – Paragraph 0009), providing a way in which to better aid, track, and manage user(s) and entertainment devices.
Consider claim 10, Vandichalrajan and Sanders teach wherein the automatically detecting which language of the plurality of languages is preferred by the particular user (Vandichalrajan - Paragraph 0027, 0047-0048, 0061).
Levy further teaches wherein the automatically detecting which language of a plurality of languages is preferred by the particular user includes: the computerized management system electronically retrieving from a database a preference setting previously set for the particular user (Paragraph 0058, 0077, 0090; Paragraph 0071-0072, 0078, 0084).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Vandichalrajan, Sanders, and Levy to include wherein the automatically detecting which language of a plurality of languages is preferred by the particular user includes: the computerized management system electronically retrieving from a database a preference setting previously set for the particular user, as further taught by Levy, for the advantage of allowing the system to quickly ascertain user preference(s), in order to tailor entertainment to the user, efficiently and effectively.
Consider claim 11, Vandichalrajan, Sanders, and Levy teach wherein the electronically presenting media to the particular user that is in the detected language includes: in response to the computerized management system electronically retrieving from the database a preference setting previously set for the particular user, the computerized management system electronically causing the presentation device to electronically present media to the particular user in the detected language (Vandichalrajan – Paragraph 0026, 0043, 0052, 0054, 0062; Levy – Paragraph 0040, 0071-0072).
Consider claim 12, Vandichalrajan and Sanders teach further comprising: electronically causing the presentation device to prompt the particular user for input indicating a language preference of the particular user; electronically receiving input from the particular user, via the presentation device associated with the particular unit (Vandichalrajan - Paragraph 0045, 0050; Sanders – Paragraph 0189).
Levy further teaches the computerized management system electronically causing the device to prompt the particular user for input indicating a language preference of the particular user; in response to the prompt, the computerized management system electronically receiving input from the particular user, via the device, indicating a language preference of the particular user; the computerized management system electronically updating the user profile in the database indicating the language preference of the particular user (Paragraph 0107-0109);
the computerized management system electronically retrieving from the database the language preference of the particular user upon the particular user checking in via the guest registration component; and in response to the computerized management system retrieving from the database the language preference of the particular user upon the particular user checking in via the guest registration component, the computerized management system electronically causing the presentation device to electronically present media to the particular user according to the language preference (Paragraph 0110-0112; Paragraph 0077, 0089-0090).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Vandichalrajan, Sanders, and Levy to include the computerized management system electronically causing the device to prompt the particular user for input indicating a language preference of the particular user; in response to the prompt, the computerized management system electronically receiving input from the particular user, via the device, indicating a language preference of the particular user; the computerized management system electronically updating the user profile in the database indicating the language preference of the particular user; the computerized management system electronically retrieving from the database the language preference of the particular user upon the particular user checking in via the guest registration component; and in response to the computerized management system retrieving from the database the language preference of the particular user upon the particular user checking in via the guest registration component, the computerized management system electronically causing the presentation device to electronically present media to the particular user according to the language preference, as further taught by Levy, for the advantage of enabling the system to tailor the entertainment experience to current users of a hospitality location by automatically adjusting content made available on each of the plurality of entertainment devices at the hospitality location according to information stored in the user profiles (Levy – Paragraph 0009), allowing the system to better manage and keep track of user(s) preferences, in order to quickly facilitate and execute on a tailored entertainment experience for user(s).
Consider claim 13, Vandichalrajan and Sanders teach wherein the electronically determining whether the particular user of the plurality of users is viewing or listening to a presentation device (Vandichalrajan - Paragraph 0017, 0024, 0037, 0046), but do not explicitly teach further includes:
electronically determining a current time window of a plurality of time windows associated with the presentation device, wherein each time window of the plurality of time windows is associated with a respective user profile of a respective user of the plurality of users;
electronically retrieving data indicating which respective user profile of the plurality of users is associated with the current time window;
electronically determining that the current time window is associated with the user profile of the particular user; and
electronically determining that the particular user of the plurality of users is viewing or listening to the presentation device based on the determination that the current time window is associated with the user profile of the particular user.
In an analogous art, Levy teaches electronically determining a current time window of a plurality of time windows associated with the presentation device, wherein each time window of the plurality of time windows is associated with a respective user profile of a respective user of the plurality of users; electronically retrieving data indicating which respective user profile of the plurality of users is associated with the current time window; electronically determining that the current time window is associated with the user profile of the particular user; and electronically determining that the particular user of the plurality of users is viewing or listening to the presentation device based on the determination that the current time window is associated with the user profile of the particular user (user arrival date/time 1210-Fig.12, Paragraph 0119-0120; Paragraph 0040, 0071-0072, 0077, 0089-0090).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the system of Vandichalrajan and Sanders to include electronically determining a current time window of a plurality of time windows associated with the presentation device, wherein each time window of the plurality of time windows is associated with a respective user profile of a respective user of the plurality of users; electronically retrieving data indicating which respective user profile of the plurality of users is associated with the current time window; electronically determining that the current time window is associated with the user profile of the particular user; and electronically determining that the particular user of the plurality of users is viewing or listening to the presentation device based on the determination that the current time window is associated with the user profile of the particular user, as taught by Levy, for the advantage of enabling the system to tailor the entertainment experience to current users of a hospitality location by automatically adjusting content made available on each of the plurality of entertainment devices at the hospitality location according to information stored in the user profiles (Levy – Paragraph 0009), allowing the system to better manage and keep track of user(s) preferences, in order to quickly facilitate and timely execute on a tailored entertainment experience for user(s).
Consider claim 14, Vandichalrajan, Sanders, and Levy teach wherein the automatically detecting which language of the plurality of languages is preferred by the particular user includes: in response to electronically determining that the particular user of the plurality of users is viewing or listening to the presentation device, electronically retrieving data from the user profile of the particular user indicating a language preference of the particular user (Vandichalrajan – Paragraph 0027, 0047-0048, 0061; Levy - Paragraph 0058, 0077, 0090; Paragraph 0071-0072, 0078, 0084).
Consider claim 15, Vandichalrajan, Sanders, and Levy teach wherein the electronically presenting media to the particular user that is in the detected language (Vandichalrajan – Paragraph 0026, 0043, 0052, 0054, 0062) includes one or more of:
automatically selecting language preference settings available in a configuration menu of the presentation device or receiving device connected to the presentation device based on the detected language;
automatically switching to a user interface on the presentation device that is in the detected language (Vandichalrajan – Paragraph 0043, 0052, 0054, 0062);
automatically switching to present on the presentation device closed captions that are in the detected language for a program currently being displayed on the presentation device;
automatically presenting on the presentation device a selectable list that only includes one or more television channels that are in the detected language or other content that is in the detected language;
automatically presenting on the presentation device a selectable list of recommendations of content that is in the detected language;
and automatically selecting and presenting on the presentation device content of one or more television channels that are in the detected language.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON K LIN whose telephone number is (571)270-1446. The examiner can normally be reached on Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached on 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON K LIN/Primary Examiner, Art Unit 2425