Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20220109914 A1 to Kato et al. (“Kato”).
As to claim 1, Kato teaches a method implemented by a media decoder, comprising: identifying a future programme playable via the decoder and constituting a programme of interest for a user of the decoder; comparing a time until broadcast of the programme of interest and a predefined threshold (¶0026, even if a channel is selected to play a program, whether the user watches the program or not is determined according to, for example, a continuous play time. For example, while the program is continuously playing for more than 5 minutes, it is determined that the user has watched the program); causing an emission of an alarm signal perceptible by a person, provided that the time until broadcast is less than the predefined threshold; and causing a recording of the programme of interest, the recording being implemented provided no user action on the decoder has been detected in a predefined period starting from the emission of the alarm signal (¶0034, The schedule control element 145 transmits a notification signal to a sound control element 146 when the start time of the recommended program is coming. The sound control element 146 sends a notification signal to a sound output element 602 via an interface, such as a communication element 150. Therefore, a speaker 603 provides a notification sound so that the user is able to know that the start time of the recommended program is close. In particular, in the present embodiment, the notification speaker 603 that outputs a notification sound is arranged in a place different from that of the speaker system 106 for program sound output (watching). The sound output element 602 and the notification speaker 603 are configured as one speaker component and can be connected to the television apparatus 100 by a wire, or can be connected to the television apparatus 100 by a short-range wireless or infrared way, such as Wi-Fi (registered trademark) or Bluetooth (registered trademark).
In this case, the user is not necessarily required to watch the television apparatus 100 in front of the television apparatus 100, and the user is free to move the speaker component to the kitchen, bedroom, or the like, and places the speaker in a place where the notification sound is easily heard).
As to claim 2, Kato teaches the method according to claim 1, wherein the alarm signal is emitted provided that a person has been detected by a sensor of the decoder (¶0063, when the user speaks the name, the user's sound is recorded via the microphone 606. In addition, same as before, with respect to the sound data, parsing of the sound frequency is performed by the frequency parser, and based on the parsing result, the detection processing of the sound feature component distribution is performed by the sound feature component detector).
As to claim 3, Kato teaches the method according to claim 2, wherein the alarm signal is emitted provided that the person detected is recognised by the decoder as a user of the decoder having been previously enrolled (¶0063, Then, the re-input sound feature data of the user corresponding to the sound feature component distribution is compared with the registered sound feature data previously recorded and registered in the memory 611 (S11, S12)).
As to claim 4, Kato teaches the method according to claim 2, wherein the alarm signal is emitted provided that the person detected is recognised by the decoder as a user of the decoder for which the programme is of interest (¶0066, FIG. 3B is a diagram for illustrating an effect in a case where register sound feature data of multiple members within a family can be recognized as described above. Since the sound feature data of the user can be registered in the sound recognition element 147, sound identifiers (VCH1, VCH2, VCH3, VCH4 in the example of the drawing) can be assigned to each sound feature data. Besides, the names of the members within the family (in the example of the figure, the names are Ichiro, Yuko, Taro and Hanako), which are actually the vocalization data of the names, can also be registered in correspondence with the sound identifiers. Thus, if the identification number of the user is specified and the name sound button is operated to play the sound data, the name of the member within the family (Ichiro, Yuko, Taro and Hanako) can be output in the form of sound from the speaker 603).
As to claim 5, Kato teaches the method according to claim 1, wherein the alarm signal is emitted by the decoder when the decoder is in standby mode (¶0090, a setting in advance is configured: as for a first type of program (e.g., a program with a low degree of preference), automatic recording is performed; as for a second type of program (e.g., a program with a high degree of preference), playing is performed. Alternatively, a function of setting, like automatically determining a program with a high degree of preference as a recommended program, may be provided, or a function of automatic determination of a recommended program with a high degree of preference is provided if the function of setting is not configured. Device operation functions performed based on sound typically include ASR (a sound recognition element that converts sound to text) and NLU (Natural Language Understanding). A speaker determination function is typically a function of storing (learning) a sound of a user operating a device in advance, and determining whether it is the same person according to similarity to the sound. Alternatively, a design in which a user's sound is not learned in advance and the age range of the sound is estimated by using a feature component (e.g., a frequency component) of sound instruction data is also considered. In this case, although it is difficult to determine an accurate age, it is possible to achieve such a control that the user is set as a watching restriction subject when it is noticeably a child's sound and the user is not set as a watching restriction subject when it is noticeably an adult's sound).
As to claim 6, Kato teaches the method according to claim 1, wherein the alarm signal is an audible signal (¶0037, The notification sound may be in various forms. For example, it may be a simple sound like the sound “beep”, or it may be a speech with a meaning like “the recommend program is about to start”. In addition, for the notification sound, the user may also preset a preference “notification sound”. Furthermore, a function of converting keywords which are used to determine a recommended program, such as an actor, a character name, a program name, and the like, into a sound signal may be provided).
As to claim 7, Kato teaches the method according to claim 1, wherein the alarm signal is emitted by a piece of external equipment controlled by the decoder when the decoder is in wake-up mode (¶0024).
As to claim 8, Kato teaches the method according to claim 1, wherein the alarm signal is a visual signal (¶0079, schedule control element outputs a notification instruction to the sound control element when the broadcast start time of the recommended program approaches so that the sound control element outputs a notification signal).
As to claim 9, Kato teaches the method according to claim 1, wherein the alarm signal comprises a message indicating the programme of interest (¶0030, there are methods such as the following: for preference information indicating the preference tendency, actors, producers, writers, or the like are taken as factors (aspects) to determine the preference tendency of a user (programs that the user may be interested in), the category (TV series, episodes, sports, funny programs, mystery series, etc.) is used as a factor to determine the preference tendency of a user (programs that the user may be interested in), or the above methods are combined to determine the preference tendency of a user (programs that the user may be interested in)).
As to claim 10, Kato teaches the method according to claim 1, wherein identifying the future programme is triggered when at least one of the following events occurs: - a sensor of the decoder detects a person, - the decoder receives a list of future programmes playable via the decoder, - a predefined period since a preceding implementation of the comparison has elapsed (¶0051, television apparatus 100 (actually, identification data of the television apparatus 100) in association with a recommended program for the television apparatus 100 (strongly preferred program) and program information thereof (the channel of the recommended program, the broadcasting time (the start time or the end time), the program name, and the like) after the recommended program is determined. In the preference data collection element 406, recommended programs and channels, broadcasting times (the start time or the end time), program names, and the like, respectively recommended to multiple television apparatus, are accumulated).
As to claim 11, Kato teaches the method according to claim 1, wherein the future programme is identified as a programme of interest for the user of the decoder only if the following conditions are fulfilled: - N past programmes having an identifier in common with the future programme have been played via the decoder, - N is greater than or equal to a predefined threshold (¶0067, multiple AI interface devices AI-IF-D1, AI-IF-D2, AI-IF-D3, AI-IF-D4, . . . are used within the family. In this case, in the present system, for example, a table such as that illustrated in FIG. 3B is created within the sound recognition element 147. That is, an identifier is configured for each user in the horizontal direction, and each AI interface device AI-IF-D1, AI-IF-D2, AI-IF-D3, AI-IF-D4, . . . is configured in the vertical direction. Also, it can be known which AI interface device the user has recently used (the speech takes place near which device). That is, the location information of each member (the residence in which each member was last located) can be managed and tracked via the interface device).
As to claim 12, see the rejection of claim 1.
As to claim 13, see the rejection of claim 2.
As to claim 14, see the rejection of claim 1.
As to claim 15, see the rejection of claim 2.
As to claim 16, see the rejection of claim 3.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE A KURIEN whose telephone number is (571) 270-5694. The examiner can normally be reached M-F, 7:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Flynn, can be reached at 571-272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTINE A KURIEN/
Examiner, Art Unit 2421

/NATHAN J FLYNN/
Supervisory Patent Examiner, Art Unit 2421