DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 2/27/2026 have been fully considered but they are not persuasive.
Applicant argues that Gurram does not disclose the first data and the second data within a spoken request. The Examiner disagrees. Kessler et al teach (see column 7, lines 40-50): “…such as compound command embodiments wherein a user issues two commands in a single utterance (e.g., ‘set account manager to Jim’s supervisor and set shipping to overnight’). In such an example, the response may include the identity of Jim’s supervisor (e.g. Jill)…” Thus, the use of a compound command (the first data and the second data within a spoken request) would have been obvious to one of ordinary skill in the art.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Gurram et al (US 2006/0136220) and further in view of Kessler et al (US 12,411,654).
For claim 1, Gurram et al teach a method, comprising:
using a first data within a spoken request to identify a one of a plurality of individuals (e.g. paragraph 58: “Name”, figure 5: 515, Name: Mr Jerry Wagner);
using a second data within the spoken request to identify a one of a plurality of graphical user interfaces (e.g. abstract, paragraph 57: The preprocessor identifies voice commands for controlling the identified user interface element);
modifying the one of the plurality of graphical user interfaces based on the one of the plurality of individuals (e.g. figures 6-8, paragraphs 78-79: the user selects the upper leftmost text field 705 in the time entry component 525 by saying "two."); and
causing the modified one of the plurality of graphical user interfaces to be displayed on a display device associated with a smart device (e.g. paragraph 78: FIG. 7, the user selects the upper leftmost text field 705 in the time entry component 525 by saying "two." After the user input is received, the representational enumerated labels disappear and the system prepares for data entry in the text field 705 by entering a data entry mode. A blue outline serves as a visual cue to indicate to the user that the system is in data entry mode and will enter any data entry dictation in the text field with the blue outline. An audio message indicating that data may be entered in the text field 705 also may be presented.).
Gurram et al do not further disclose first data and second data within a spoken request. Kessler et al teach first data and second data within a spoken request (e.g. abstract, column 7, lines 40-50: “…such as compound command embodiments wherein a user issues two commands in a single utterance (e.g., ‘set account manager to Jim’s supervisor and set shipping to overnight’). In such an example, the response may include the identity of Jim’s supervisor (e.g. Jill)…”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Kessler et al into the teaching of Gurram et al to convert an utterance of a user to a string and convert the string into commands (e.g. abstract, Kessler et al) and to allow end users to create code-free voice functionality (e.g. column 2, lines 1-7, Kessler et al) to improve user convenience.
For claim 2, Gurram et al teach the display device comprises a television (e.g. paragraph 33, “television”).
For claim 7, Gurram et al teach the smart device comprises a television (e.g. paragraph 33, “television”).
For claim 3, Gurram et al do not further disclose comparing a voice print created from the spoken request to a voice print created from a prior utterance captured from the one of the plurality of individuals. Kessler et al teach comparing a voice print created from the spoken request to a voice print created from a prior utterance captured from the one of the plurality of individuals (e.g. column 13, lines 35-62: “For example, the user may be utter a “go to power search” command. The user's command may be translated by the command interpreter 152 as an intent (e.g., PageNavigate) and an entity (e.g., PowerSearch). The intent and/or the entity may be included in a predetermined list of multi-turn intents/entities. The command handling module 218 may compare the identified entity (e.g., PowerSearch) to the predetermined list of multi-turn entities.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Kessler et al into the teaching of Gurram et al to convert an utterance of a user to a string and convert the string into commands (e.g. abstract, Kessler et al) to improve user convenience.
Claims 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Gurram et al and Kessler et al, as applied to claims 1-3 and 7 above, and further in view of Arling et al (US 2014/0047467).
For claim 4, Gurram et al and Kessler et al do not further disclose the graphical user interface is modified by ordering user interface elements of the graphical user interface based upon a behavior pattern that is associated with the one of the plurality of individuals. Arling et al teach the graphical user interface is modified by ordering user interface elements of the graphical user interface based upon a behavior pattern that is associated with the one of the plurality of individuals (e.g. paragraph 6: adjusting DVR and/or program guide displays to produce a "blended" display order in cases where one user frequently plays back another user's recording requests; identification of specific groups of users (e.g., via use of facial recognition), for example "family group", "kids present", etc., and adjusting the displayed favorites, recordings, and/or channel line-up accordingly; paragraphs 59-60: “… the newly-ordered viewing history data may be used to prioritize an order in which the information within a content listing is displayed, i.e., to thereby provide a user-specific order to the listing of content of the program guide that is caused to be displayed”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Arling et al into the teaching of Gurram et al and Kessler et al to provide a user-specific order to the listing of the content of the program guide (e.g. paragraph 59, Arling et al) to improve user convenience.
For claim 5, Gurram et al and Kessler et al do not further disclose the graphical user interface is modified by omitting one or more user interface elements of the graphical user interface based upon a behavior pattern that is associated with the one of the plurality of individuals. Arling et al teach the graphical user interface is modified by omitting one or more user interface elements of the graphical user interface based upon a behavior pattern that is associated with the one of the plurality of individuals (e.g. paragraph 6: adjusting the displayed favorites, recordings, and/or channel line-up accordingly; it would have been obvious to one of ordinary skill in the art that listings not in the user’s favorites will not be displayed; see also paragraphs 59-60: “… the newly-ordered viewing history data may be used to prioritize an order in which the information within a content listing is displayed, i.e., to thereby provide a user-specific order to the listing of content of the program guide that is caused to be displayed”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Arling et al into the teaching of Gurram et al and Kessler et al to provide a user-specific order to the listing of the content of the program guide (e.g. paragraph 59, Arling et al) to improve user convenience.
For claim 6, Gurram et al and Kessler et al do not further disclose the smart device comprises a set-top box. Arling et al teach the smart device comprises a set-top box (e.g. figure 1, 108, STB). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Arling et al into the teaching of Gurram et al and Kessler et al to provide a user-specific order to the listing of the content of the program guide using a set-top box (e.g. paragraph 59, Arling et al) to improve user convenience.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAQUAN ZHAO whose telephone number is (571)270-1119. The examiner can normally be reached M-Thur: 7:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Tran, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Email: daquan.zhao1@uspto.gov.
Phone: (571)270-1119
/DAQUAN ZHAO/Primary Examiner, Art Unit 2484