DETAILED ACTION
This Office Action is in response to the correspondence filed by the applicant on 3/22/2024.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The Information Disclosure Statement (IDS) filed on 3/22/2024 has been accepted and considered in this Office action and is in compliance with the provisions of 37 CFR 1.97.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a sound acquisition unit” and “a control unit” in claims 1-15.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Objection(s)
Claim 7 is objected to because of the following informality: the claim recites, “a recommended query that expand …” The claim should read, “a recommended query that expands …” Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6, 8, 11, and their dependent claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claim 6 recites, “output the voice result so that the current broadcast is listed as priority when the voice guidance function is set to be turned on …”
There is insufficient antecedent basis for this limitation in the claim.
Claim 8 recites, “to an object of the voice results in order when acquiring a voice that utters the recommended query.” It is not clear what the limitation means. Examiner believes the claim should read “to execute an object of the voice results ….”
Claim 11 recites, “The display device according to claim 1, wherein the control unit is configured to output the voice feedback in the page unit in a manner in which all pieces of information on the current page are output, and then, information on the next page is output.”
There is insufficient antecedent basis for these limitations in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 and 7-15 are rejected under 35 U.S.C. 103 as being unpatentable over CHA (US 2014/0195243 A1) in view of MINGOT (US 6,762,692 B1).
REGARDING CLAIM 1, CHA discloses a display device comprising:
a display (Fig. 3 – “Display 111”); an audio output unit (Fig. 3 – “Audio Output 112”);
a sound acquisition unit (Fig. 3 – “Voice Collector 120”) configured to acquire a voice command (Par 277 – “Referring to FIGS. 8A to 8C, when the user utters “What is on TV today?”, the controller 150 may output a list 430 of broadcast programs to be broadcasted today as the system response, based on the response information received from the second server 300. The controller 150 may then output voice command guide 440 representing the user voice applicable on the list 430 outputted as the system response.”); and
a control unit (Fig. 3 – “Controller 150”) configured to output voice results according to the voice command through at least one of the display or the audio output unit (Par 277 – “Referring to FIGS. 8A to 8C, when the user utters “What is on TV today?”, the controller 150 may output a list 430 of broadcast programs to be broadcasted today as the system response, based on the response information received from the second server 300. The controller 150 may then output voice command guide 440 representing the user voice applicable on the list 430 outputted as the system response.”; Par 27 – “The electronic device may include an audio output. The outputting the system response and the outputting the voice command guide may include outputting the system response and the voice command guide to the audio output as an audio output signal.”),
wherein the control unit is configured to further output voice feedback for the voice results (Par 277 – “Referring to FIGS. 8A to 8C, when the user utters “What is on TV today?”, the controller 150 may output a list 430 of broadcast programs to be broadcasted today as the system response, based on the response information received from the second server 300. The controller 150 may then output voice command guide 440 representing the user voice applicable on the list 430 outputted as the system response.”) [when a voice guidance function is set to be turned on] (Figs. 7A-9 – “CLOSE HELP”).
CHA does not explicitly disclose the [square-bracketed] limitations. In other words, CHA teaches outputting the voice guidance and disabling it (e.g., “CLOSE HELP” in Figs. 8A-8C); thus, CHA implicitly suggests enabling/disabling the voice guidance function. Although CHA implicitly suggests the limitations, for clarity of the rejection, Examiner provides MINGOT.
MINGOT discloses the [square-bracketed] limitations. MINGOT discloses a method/system for controlling a display device with voice commands comprising:
wherein the control unit is configured to further output voice feedback for the voice results [when a voice guidance function is set to be turned on] (MINGOT Col 6:48-61 – “It is furthermore assumed that the user has requested the displaying of the voice help window so as to ascertain the functional features which can be accessed within the prevailing context, that is to say in the “Picture” menu. A voice help window 30, 31 is therefore displayed on the right part of the screen. Since the entire list of accessible functional features cannot be displayed at the same time in the window, this list scrolls through the window 31 in a loop as long as the user does not utter the words “stop scrolling” in which case, the window is frozen in the state which prevailed at that moment. When the window is in a frozen state, the words “Stop scrolling” of the list are replaced by “Scrolling of pages” which correspond to the words which the user must utter for scrolling to resume.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of CHA to include enabling/disabling a voice guide function, as taught by MINGOT.
One of ordinary skill would have been motivated to include enabling/disabling a voice guide function, in order to enable a user to access informative data on demand, so that the user has full control of the device and can configure it to operate according to his/her own preferences.
REGARDING CLAIM 2, CHA in view of MINGOT discloses the display device according to claim 1, wherein the voice feedback comprises a recommended query (CHA Figs. 8A-9 Unit 440; Par 278 – “Referring to FIGS. 8A to 8C, the voice command guide 440 may display text in the slide show form representing the user voice that is applicable to the list 430 of broadcast programs outputted as the system response, such as, for example, “The third one, please” “Can I see details of the third one?”, “What is on SBC (i.e., channel name)?”. “Can I see documentary programs?”, “Can I see the program that features Peter (i.e., appearing persons' name)?”, or “Can I see “The Show” (i.e., broadcast program name), please?””).
REGARDING CLAIM 3, CHA in view of MINGOT discloses the display device according to claim 1, wherein the control unit is configured to further output the voice results and then output voice feedback that recommends any one of the voice results (CHA Par 277 – “Referring to FIGS. 8A to 8C, when the user utters “What is on TV today?”, the controller 150 may output a list 430 of broadcast programs to be broadcasted today as the system response, based on the response information received from the second server 300. The controller 150 may then output voice command guide 440 representing the user voice applicable on the list 430 outputted as the system response.”).
REGARDING CLAIM 4, CHA in view of MINGOT discloses the display device according to claim 3, wherein the control unit is configured to output both the voice results and the voice feedback in a voice (CHA Par 27 – “The electronic device may include an audio output. The outputting the system response and the outputting the voice command guide may include outputting the system response and the voice command guide to the audio output as an audio output signal.”).
REGARDING CLAIM 5, CHA in view of MINGOT discloses the display device according to claim 1.
MINGOT further discloses wherein the control unit is configured to output the voice results when the voice guidance function is set to be turned on differently from the voice results when the voice guidance function is set to be turned off (MINGOT Col 6:66-7:5 – “In the window 23, one of the headings is different from those which appear in FIG. 4 in the menu 100. Specifically, the “Voice help” heading, which is present when the voice help window is not displayed so as to prompt the user to view it, is replaced with the “Close help” heading when the voice help window is being displayed as is the case in FIG. 8, so as this time to prompt the user to close the said window.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of CHA to include displaying the results differently based on the on/off state of the voice guidance function, as taught by MINGOT.
One of ordinary skill would have been motivated to include displaying the results differently based on the on/off state of the voice guidance function, in order to provide more appropriate information to a user based on the current state of the device.
REGARDING CLAIM 7, CHA in view of MINGOT discloses the display device according to claim 1, wherein the control unit is configured to output a recommended query that expand a target range of the voice results through the voice feedback (CHA Fig. 9 – “Can I see the details of the third one?”, “Next Page”; Par 116 – “Further, when the list of the contents, searched for in response to the user voice to search content, is outputted, the controller 150 may output at least one of voice command guide to execute a specific content included in the content list, and voice command guide to output details of the specific content. The “details” may include at least one of a name of the content, broadcasting time, cast, theme, channel number that provides the content, and channel name.”; Par 118 – “In one exemplary embodiment, the controller 150 may output voice command guide about information on the user voices which can be used for executing a specific broadcast program on the list of the broadcast programs scheduled to be broadcasted today, or for outputting details of a specific broadcast program such as, for example, “The third one,” or “Can I see the details of the third one?””; Par 273 – “The controller 330 may transmit information about user voices applicable for the application executed on the display apparatus 100, such as, for example, “home page,” “favorites,” “refresh,” “open new page,” “close current page,” “backward,” “forward,” or “end.””).
REGARDING CLAIM 8, CHA in view of MINGOT discloses the display device according to claim 7, wherein the control unit is configured to execute an object of the voice results in order (Fig. 9 – “ “1” Title 1; “2” Title 2; “3” Title 3 … “6” Title 6”; Par 277 – “Referring to FIGS. 8A to 8C, when the user utters “What is on TV today?”, the controller 150 may output a list 430 of broadcast programs to be broadcasted today as the system response, based on the response information received from the second server 300. The controller 150 may then output voice command guide 440 representing the user voice applicable on the list 430 outputted as the system response.”) when acquiring a voice that utters the recommended query (CHA Par 57 – “Further, when the collected user voice is “The third one,” the display apparatus 100 may tune to a third broadcast program on the list outputted as the system response and output the same.”).
REGARDING CLAIM 9, CHA in view of MINGOT discloses the display device according to claim 8, wherein the control unit is configured to execute the object at the time of utterance of the recommended query when a voice that utters the recommended query for selecting a specific object is acquired (CHA Par 257 – “Further, it is assumed that the user voice “What is on TV today?” is inputted, and the user voice “The third one” is inputted thereafter. In the above example, when the controller 330 determines that the user voice “The third one” does not correspond to the initial user utterance in the broadcast service domain, the controller 330 may determine the utterance intention of “The third one” based on the previously-received user voice “What is on TV today?””) while outputting the object of the voice results in order (CHA Par 258 – “More specifically, the controller 330 may determine that the utterance intention is to “request” “tuning” to “broadcast program” which is the “third one” on the list of the broadcast programs outputted from the display apparatus 100 in response to the previously-received user voice “What is on TV today?” Accordingly, the controller 330 may generate response information corresponding to the determined utterance intention and transmit the same to the display apparatus 100. That is, the controller 330 may transmit a control command, to tune to a broadcast program that is the third one on the list of broadcast programs outputted from the display apparatus 100, to the display apparatus 100, according to the determined utterance intention.”).
REGARDING CLAIM 10, CHA in view of MINGOT discloses the display device according to claim 1, wherein the control unit is configured to output the voice feedback in a page unit (CHA Figs. 8A-8C; Par 277 – “The controller 150 may then output voice command guide 440 representing the user voice applicable on the list 430 outputted as the system response.”).
REGARDING CLAIM 11, CHA in view of MINGOT discloses the display device according to claim 1, wherein the control unit is configured to output the voice feedback in the page unit in a manner in which all pieces of information on the current page are output, and then, information on the next page is output (CHA Par 278 – “Referring to FIGS. 8A to 8C, the voice command guide 440 may display text in the slide show form representing the user voice that is applicable to the list 430 of broadcast programs outputted as the system response, such as, for example, “The third one, please” “Can I see details of the third one?”, “What is on SBC (i.e., channel name)?”. “Can I see documentary programs?”, “Can I see the program that features Peter (i.e., appearing persons' name)?”, or “Can I see “The Show” (i.e., broadcast program name), please?””).
REGARDING CLAIM 12, CHA in view of MINGOT discloses the display device according to claim 1.
MINGOT further discloses the method/system, wherein, when the voice guidance function for the voice results is set to be turned on, the control unit is configured to output movement of focus (MINGOT Col 6:48-61 – “It is furthermore assumed that the user has requested the displaying of the voice help window so as to ascertain the functional features which can be accessed within the prevailing context, that is to say in the “Picture” menu. A voice help window 30, 31 is therefore displayed on the right part of the screen. Since the entire list of accessible functional features cannot be displayed at the same time in the window, this list scrolls through the window 31 in a loop as long as the user does not utter the words “stop scrolling” in which case, the window is frozen in the state which prevailed at that moment. When the window is in a frozen state, the words “Stop scrolling” of the list are replaced by “Scrolling of pages” which correspond to the words which the user must utter for scrolling to resume.”) based on a signal received from a remote control device in a page unit (MINGOT Col – “To access the various headings of the tree, the user can either proceed in a conventional manner by moving around within the window 60 with the help of particular “Up”, “Down” buttons and by selecting a particular line with the help of a button 7 (FIG. 1) of the remote control device, or use the voice control by uttering one of the key words corresponding to the title of the proposed headings.”; Col – “If the user wished to achieve the same result using the buttons of the remote control device, he would firstly have to press the button 8 (FIG. 1) making it possible to display the main menu 60 (FIG. 4) on the screen; he would then have to move around with the help of the “Up” and “Down” buttons so as to reach the “Zoom” line of the main menu, select this line by pressing the button 7, thereby causing the “Zoom” menu to be displayed, and finally choose the 16/9 format with the help of move and select buttons of the remote control device.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system of CHA to include outputting movement of focus based on a signal received from a remote control device, as taught by MINGOT.
One of ordinary skill would have been motivated to include outputting movement of focus based on a remote control device, in order to allow a user to access various data that cannot be displayed all at once.
REGARDING CLAIM 13, CHA in view of MINGOT discloses the display device according to claim 1, wherein the control unit is configured to output at least one recommended query in a voice standby state for acquiring the voice command (Par 108 – “Further, the initial screen may include a screen that is first provided in an interactive mode in which the display apparatus 100 is controlled by the user voice. For example, when a specific key provided on a manipulation panel of the display apparatus 100 is selected, or when a specific remote control signal is received from a remote controller (not illustrated), the controller 150 may operate in the interactive mode to display the initial screen and collect the voices uttered by the user.”; Par 275 – “Referring to FIG. 6, the controller 150 displays initial screen 410. Referring to FIGS. 7A to 7C, the controller 150 may display voice command guide 420 on a predetermined area of the initial screen 410.”; Par 276 – “The voice command guide 420 may display text in a slide show form representing user voices that can execute the available functions of the display apparatus 100, such as, for example, “What is on TV today?”, “Anything fun?”, “Any new movies?”, “Give me recommendations,” “Can I watch EBB (i.e., channel name),” and “Execute web browser, please.””).
REGARDING CLAIM 14, CHA in view of MINGOT discloses the display device according to claim 13, wherein the control unit is configured to output voice feedback for describing any one of the recommended query (Fig. 7A – “Hello Say the Following Words. What is on TV Today? Anything Fun?”; Par 110 – “For example, in a situation where the initial screen is being outputted, the voice command guide, including the user voice that can execute an operation executable on the display apparatus 100, such as, for example, “What is on TV today?”, “Anything fun?”, “Any new movies?”, “Recommend popular one,” “Turn to MINGOT (i.e., channel name),” “Execute web browser, please,” may be outputted.”; Par 118 – “In one exemplary embodiment, the controller 150 may output voice command guide about information on the user voices which can be used for executing a specific broadcast program on the list of the broadcast programs scheduled to be broadcasted today, or for outputting details of a specific broadcast program such as, for example, “The third one,” or “Can I see the details of the third one?””).
REGARDING CLAIM 15, CHA in view of MINGOT discloses the display device according to claim 1, wherein the control unit is configured to acquire a command to set the voice guidance function to be turned on or off (CHA Figs. 8A-8C – “CLOSE HELP”; MINGOT Col 6:48-61 – “It is furthermore assumed that the user has requested the displaying of the voice help window so as to ascertain the functional features which can be accessed within the prevailing context, that is to say in the “Picture” menu. A voice help window 30, 31 is therefore displayed on the right part of the screen.”; Col 6:66-7:5 – “In the window 23, one of the headings is different from those which appear in FIG. 4 in the menu 100. Specifically, the “Voice help” heading, which is present when the voice help window is not displayed so as to prompt the user to view it, is replaced with the “Close help” heading when the voice help window is being displayed as is the case in FIG. 8, so as this time to prompt the user to close the said window.”).
Allowable Subject Matter
Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if rewritten to overcome the 112(b) rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN C KIM whose telephone number is (571)272-3327. The examiner can normally be reached Monday through Friday, 8:00 AM to 4:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew C Flanders can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN C KIM/Primary Examiner, Art Unit 2655