DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US Patent Application Publication 2016/0170710) in view of Hurtig et al. (US Patent Application Publication 2016/0125705).
Regarding independent claim 1, Kim et al. teaches a method, comprising:
at a first computer system that is in communication with one or more input devices (electronic device 101 of figure 1 as given in paragraphs 0040-0043):
detecting, via the one or more input devices, a first input (voice signal input described in paragraphs 0040 and 0042);
after detecting the first input, detecting, via the one or more input devices, a first air gesture (paragraphs 0042-0043 explain that the user's gesture is detected after the voice signal input so the two inputs can be used together, and paragraph 0042 gives the gesture as a hand motion, which is an air gesture); and
in response to detecting the first air gesture (as given in paragraphs 0042-0043):
in accordance with a determination that the first input corresponds to a second computer system different from the first computer system (paragraph 0043 explains that the gesture is directed to the display 110 of figure 1, which is a separate system from the electronic device 101), performing a first operation corresponding to the second computer system (paragraph 0044 explains that an operation corresponding to the voice input is performed based on the gesture input directed to the second computer system, display 110).
Kim et al. does not explicitly teach that the display and device are part of two separate computer systems, or, in accordance with a determination that the first input does not correspond to the second computer system, forgoing performing the first operation corresponding to the second computer system.
Hurtig et al. teaches that the display and device are part of two separate computer systems (paragraphs 0063-0064 explain how different input is used to control completely separate devices with separate computer systems) and, in accordance with a determination that the first input does not correspond to the second computer system, forgoing performing the first operation corresponding to the second computer system (paragraph 0072 explains that candidate signals falling outside the frequency range, and thus not corresponding to the system being controlled, are ignored or forgone).
It would have been obvious to one of ordinary skill in the art before the effective filing date to use the specific gestures corresponding to separate computer systems, as taught by Hurtig et al., in the system of Kim et al. The rationale to combine would be to further reduce false positives when detecting intentional inputs (paragraph 0072 of Hurtig et al.).
Regarding claim 2, Kim et al. teaches the method of claim 1, wherein the determination that the first input corresponds to the second computer system includes a determination that the first input includes a touch input directed to a physical portion of the second computer system (paragraph 0054 explains that the first input may be a touch received on the touch screen of the display 260).
Regarding claim 3, Kim et al. teaches the method of claim 1, further comprising: in response to detecting the first air gesture and in accordance with a determination that the first input corresponds to a third computer system different from the first computer system and the second computer system, performing a second operation corresponding to the third computer system (paragraph 0116 explains that the user’s gaze may be directed to any of a plurality of displays to perform the operation on the specific display as given in paragraph 0117).
Regarding claim 4, Hurtig et al. further teaches the method of claim 1, further comprising: in response to detecting the first air gesture and in accordance with a determination that the first input does not correspond to the first computer system and that the first input does not correspond to the second computer system, performing a third operation corresponding to the first computer system (paragraph 0064 teaches that different gestures can be detected and tracked as a number of inputs to cause the appropriate action on the given computer system).
Regarding claim 5, Hurtig et al. further teaches the method of claim 1, further comprising:
after performing the first operation corresponding to the second computer system, detecting a second air gesture separate from the first air gesture, wherein the second air gesture is the same as the first air gesture (paragraph 0064 teaches that different gestures can be detected and tracked as a number of inputs such that multiple separate gestures are tracked); and
in response to detecting the second air gesture, performing a fourth operation different from the first operation (paragraph 0064 teaches that different gestures can be detected and tracked as a number of inputs to cause the appropriate action on the given computer system).
Regarding claim 6, Hurtig et al. further teaches the method of claim 5, wherein the fourth operation corresponds to the second computer system (paragraph 0064 teaches that different gestures can be detected and tracked as a number of inputs to cause the appropriate action on the appropriate computer system).
Regarding claim 7, Hurtig et al. further teaches the method of claim 5, wherein the fourth operation corresponds to the first computer system (paragraph 0064 teaches that different gestures can be detected and tracked as a number of inputs to cause the appropriate action on the appropriate computer system).
Regarding claim 8, Kim et al. teaches the method of claim 5, wherein:
the first computer system is in communication with a display component (to cause the display changes described in paragraph 0043);
performing the first operation corresponding to the second computer system includes displaying, via the display component, a first user interface element (paragraph 0043 explains that different content may be displayed as necessary); and
performing the fourth operation includes displaying, via the display component, a second user interface element different from the first user interface element (paragraph 0043 explains that different content may be displayed as necessary).
Regarding claim 9, Kim et al. teaches the method of claim 1, further comprising:
after performing the first operation corresponding to the second computer system, detecting, via the one or more input devices, a third air gesture separate from the first air gesture, wherein the third air gesture is the same as the first air gesture (paragraphs 0042-0043 explain that the user's gesture is detected after the voice signal input, and paragraph 0042 gives the gesture as a hand motion, which is an air gesture; repeated occurrences of the gesture are handled by the same gesture detection); and
in response to detecting the third air gesture, performing the first operation corresponding to the second computer system (paragraph 0044 explains that an operation corresponding to the voice input is performed based on the gesture input directed to the second computer system, display 110, where the same gesture causes the same result).
Regarding claim 10, Hurtig et al. further teaches the method of claim 1, further comprising:
after performing the first operation corresponding to the second computer system, detecting, via the one or more input devices, a fourth air gesture separate from the first air gesture, wherein the fourth air gesture is the same as the first air gesture (paragraphs 0063-0064 explain how different input is used to control completely separate devices and can include different air gestures); and
in response to detecting the fourth air gesture:
in accordance with a determination that the fourth air gesture was detected within a threshold period of time, performing the first operation corresponding to the second computer system (paragraph 0143 explains that gestures are given a gesture time window to be recognized appropriately); and
in accordance with a determination that the fourth air gesture was not performed within the threshold period of time, forgoing performing the first operation corresponding to the second computer system (paragraph 0072 explains that candidate signals falling outside the frequency range, and thus not corresponding to the system being controlled, are ignored or forgone).
Regarding claim 11, Kim et al. teaches the method of claim 1, further comprising:
after detecting the first air gesture and after forgoing performing the first operation corresponding to the second computer system, detecting, via the one or more input devices, a fifth air gesture separate from the first air gesture, wherein the fifth air gesture is the same as the first air gesture (paragraphs 0042-0043 explain that the user's gesture is detected after the voice signal input, and paragraph 0042 gives the gesture as a hand motion, which is an air gesture; repeated occurrences of the gesture are handled by the same gesture detection); and
in response to detecting the fifth air gesture, performing the first operation corresponding to the second computer system (paragraph 0044 explains that an operation corresponding to the voice input is performed based on the gesture input directed to the second computer system, display 110, where the same gesture causes the same result).
Regarding claim 12, Kim et al. teaches the method of claim 1, further comprising:
detecting, via the one or more input devices, a sixth air gesture different from the first air gesture (paragraphs 0042-0043 explain that the user's gesture is detected after the voice signal input, and paragraph 0042 gives the gesture as a hand motion, which is an air gesture; repeated occurrences of gestures are handled by the same gesture detection); and
in response to detecting the sixth air gesture:
in accordance with a determination that the sixth air gesture corresponds to an input of a first type, performing a fourth operation corresponding to the second computer system (paragraph 0044 explains that a specific operation corresponding to the voice input is performed based on the gesture input directed to the second computer system, display 110); and
in accordance with a determination that the sixth air gesture corresponds to an input of a second type different from the first type, performing a fifth operation corresponding to the second computer system (paragraph 0044 explains that a specific operation corresponding to the voice input is performed based on the gesture input directed to the second computer system, display 110, where different types of gestures cause different operations).
Regarding claim 13, Kim et al. teaches the method of claim 12, wherein the fourth operation is different from the fifth operation (paragraphs 0042-0043 explain that the user's gestures, detected after the voice signal input, are hand motions that each correspond to a different operation).
Regarding claim 14, Kim et al. teaches the method of claim 1, further comprising:
detecting, via the one or more input devices, a seventh air gesture separate from the first air gesture (paragraphs 0042-0043 explain that the user's gesture is detected after the voice signal input, and paragraph 0042 gives the gesture as a hand motion, which is an air gesture; repeated occurrences of gestures are handled by the same gesture detection); and
in response to detecting the seventh air gesture:
in accordance with a determination that the seventh air gesture corresponds to an input of a third type, performing a sixth operation corresponding to the first computer system (paragraph 0044 explains that a specific operation corresponding to the voice input is performed based on the gesture input directed to the desired computer system, where different types of gestures cause different operations); and
in accordance with a determination that the seventh air gesture corresponds to an input of a fourth type different from the third type, performing a seventh operation corresponding to the first computer system (paragraph 0044 explains that a specific operation corresponding to the voice input is performed based on the gesture input directed to the desired computer system, where different types of gestures cause different operations).
Regarding claim 15, Kim et al. teaches the method of claim 14, wherein the input of the third type is specific to the first computer system such that an air gesture corresponding to the input of the third type always performs an operation corresponding to the first computer system (paragraphs 0042-0043 explain that the user's gesture, detected after the voice signal input, is a hand motion, and each gesture corresponds to a different operation on the desired computer system).
Regarding independent claim 16, Kim et al. teaches a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a first computer system that is in communication with one or more input devices (as described in paragraph 0010), the one or more programs including instructions for:
detecting, via the one or more input devices, a first input (voice signal input described in paragraphs 0040 and 0042);
after detecting the first input, detecting, via the one or more input devices, a first air gesture (paragraphs 0042-0043 explain that the user's gesture is detected after the voice signal input so the two inputs can be used together, and paragraph 0042 gives the gesture as a hand motion, which is an air gesture); and
in response to detecting the first air gesture (as given in paragraphs 0042-0043):
in accordance with a determination that the first input corresponds to a second computer system different from the first computer system (paragraph 0043 explains that the gesture is directed to the display 110 of figure 1, which is a separate system from the electronic device 101), performing a first operation corresponding to the second computer system (paragraph 0044 explains that an operation corresponding to the voice input is performed based on the gesture input directed to the second computer system, display 110).
Kim et al. does not explicitly teach that the display and device are part of two separate computer systems, or, in accordance with a determination that the first input does not correspond to the second computer system, forgoing performing the first operation corresponding to the second computer system.
Hurtig et al. teaches that the display and device are part of two separate computer systems (paragraphs 0063-0064 explain how different input is used to control completely separate devices with separate computer systems) and, in accordance with a determination that the first input does not correspond to the second computer system, forgoing performing the first operation corresponding to the second computer system (paragraph 0072 explains that candidate signals falling outside the frequency range, and thus not corresponding to the system being controlled, are ignored or forgone).
It would have been obvious to one of ordinary skill in the art before the effective filing date to use the specific gestures corresponding to separate computer systems, as taught by Hurtig et al., in the system of Kim et al. The rationale to combine would be to further reduce false positives when detecting intentional inputs (paragraph 0072 of Hurtig et al.).
Regarding independent claim 17, Kim et al. teaches a first computer system that is in communication with one or more input devices (electronic device 101 of figure 1 as given in paragraphs 0040-0043), comprising:
one or more processors (processor 220 of figure 2 as given in paragraphs 0046-0047); and
memory storing one or more programs configured to be executed by the one or more processors (memory 230 of figure 2 as given in paragraphs 0046-0048), the one or more programs including instructions for:
detecting, via the one or more input devices, a first input (voice signal input described in paragraphs 0040 and 0042);
after detecting the first input, detecting, via the one or more input devices, a first air gesture (paragraphs 0042-0043 explain that the user's gesture is detected after the voice signal input so the two inputs can be used together, and paragraph 0042 gives the gesture as a hand motion, which is an air gesture); and
in response to detecting the first air gesture (as given in paragraphs 0042-0043):
in accordance with a determination that the first input corresponds to a second computer system different from the first computer system (paragraph 0043 explains that the gesture is directed to the display 110 of figure 1, which is a separate system from the electronic device 101), performing a first operation corresponding to the second computer system (paragraph 0044 explains that an operation corresponding to the voice input is performed based on the gesture input directed to the second computer system, display 110).
Kim et al. does not explicitly teach that the display and device are part of two separate computer systems, or, in accordance with a determination that the first input does not correspond to the second computer system, forgoing performing the first operation corresponding to the second computer system.
Hurtig et al. teaches that the display and device are part of two separate computer systems (paragraphs 0063-0064 explain how different input is used to control completely separate devices with separate computer systems) and, in accordance with a determination that the first input does not correspond to the second computer system, forgoing performing the first operation corresponding to the second computer system (paragraph 0072 explains that candidate signals falling outside the frequency range, and thus not corresponding to the system being controlled, are ignored or forgone).
It would have been obvious to one of ordinary skill in the art before the effective filing date to use the specific gestures corresponding to separate computer systems, as taught by Hurtig et al., in the system of Kim et al. The rationale to combine would be to further reduce false positives when detecting intentional inputs (paragraph 0072 of Hurtig et al.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The prior art listed in the attached Notice of References Cited includes similar teachings.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARUL H GUPTA whose telephone number is (571)272-5260. The examiner can normally be reached Monday through Friday, from 10 AM to 7 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ke Xiao can be reached at 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PARUL H GUPTA/Primary Examiner, Art Unit 2627