DETAILED ACTION
Status of Application
This action is a Non-Final Rejection in response to the request for continued examination filed on December 4, 2025.
Claims 32 and 38 have been amended.
Claims 2-5, 7-10, and 12-20 have been canceled.
Claims 1, 6, 11, and 21-40 are pending and are rejected.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Arguments
Regarding the rejections under 35 U.S.C. 112(b), the rejection of claim 38 is withdrawn in light of Applicant’s amendments. However, the rejection of claims 1, 6, 11, and their dependent claims is maintained. Applicant states that “we clarified during the interview that the claim language itself does not set forth unclear boundaries. Applicant requests that the claim language be interpreted as-is, in accordance with USPTO interpretational practice.” Remarks at 9. However, per the Examiner Interview Summary mailed on August 29, 2025, “Applicant’s representative shared his screen to discuss the 112(b) rejections and some possible ways to overcome them. No agreement was reached.” If Applicant’s position is that the claims are not indefinite, Applicant should explain, using the Specification, how the disputed limitation is to be interpreted.
Regarding the rejection under 35 U.S.C. 103, Applicant argues that Lavian does not teach "sending a command to the external device based, at least in part, on the second gesture." Remarks at 9. Applicant quotes paragraph 0128 of Lavian. Id. at 10. However, paragraph 0128 states “In another embodiment of the invention, the user may provide inputs at the device 102 by using different gestures or hand movements. For example the user may switch on an air conditioner by showing a gesture of a thumb up at the device 102. In an embodiment of the invention, the device 102 may include a camera.” Lavian thus describes the device 102 sensing a gesture, which causes a device such as an air conditioner to be controlled. For an external device such as an air conditioner to be controlled, a command must be sent from the device 102 to the device being controlled. Therefore, this limitation is taught by Lavian.
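For illustration only, the following minimal Python sketch shows the gesture-to-command flow the Examiner reads Lavian as describing: a recognized gesture is looked up and a corresponding command is transmitted to the controlled device. All identifiers and the gesture-to-command mapping are hypothetical, not drawn from Lavian.

    # Hypothetical sketch (Python): the device recognizes a gesture and,
    # as a consequence, sends a command to the external device being
    # controlled. Lavian, para. 0128, gives "thumb up" as switching on
    # an air conditioner; the mapping below is illustrative only.
    GESTURE_COMMANDS = {
        "thumb_up": ("air_conditioner", "POWER_ON"),
        "thumb_down": ("air_conditioner", "POWER_OFF"),
    }

    def send_command(target_device: str, command: str) -> None:
        # Stand-in for the transmission from device 102 to the controlled
        # device; a real system would use its own network protocol.
        print(f"sending {command} to {target_device}")

    def on_gesture_recognized(gesture: str) -> None:
        # The external device cannot change state unless a command reaches
        # it; the rejection finds that step necessarily present in Lavian.
        if gesture in GESTURE_COMMANDS:
            target, command = GESTURE_COMMANDS[gesture]
            send_command(target, command)

    on_gesture_recognized("thumb_up")  # prints: sending POWER_ON to air_conditioner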
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 6, 11, and 21-40 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claims 1, 6, and 11 recite “capturing, using a device equipped with a sensor that is capable of detecting gestures made by an object in a three dimensional (3D) space, an image of the object.” It is unclear what is meant by this limitation. For example, the claim appears to encompass a single sensor that detects 2D movement and need not capture 3D movement, yet the claim states that the object is in “a three dimensional (3D) space.” It is unclear whether the “3D space” refers to the space in which a (2D or 3D) gesture is made (e.g., any space in which a gesture is made is necessarily 3D) or to a specific configuration of sensors that detects 3D movement. The Specification refers to a “3D sensory space,” which Applicant has previously defined narrowly; the term “sensory” is not included in these claims. Applicant should clarify whether this limitation requires more than a device with a sensor that is capable of detecting an object. For purposes of examination, the claimed “3D space” is interpreted as requiring 3D sensing.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 11, 21-24, 27-32, 35-37, 39, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Lavian et al., U.S. Patent Application Publication No. 2013/0080898 A1, and Ko, Dong-Ik and Gaurav Agarwal, “Gesture Recognition: Enabling Natural Interactions with Electronics,” Texas Instruments, https://www.edge-ai-vision.com/2012/04/gesture-recognition-enabling-natural-interactions-with-electronics/ (April 13, 2012).
Claim 1:
Lavian teaches:
capturing, using a device equipped with a sensor that is capable of detecting gestures made by an object…, an image of the object (see at least Lavian, paragraph 0124 (“FIG. 3A illustrates an exemplary visual access menu 308 and an enhanced visual access menu 310 at a device 102, in accordance with the first embodiment of the invention. As discussed with reference to FIG. 1A, the device 102 may include a graphical user interface (GUI) for accessing the visual access menus. Further, the VMThings 108 may display the visual access menu 308 (or the Internet of Things menu) at the device 102 so as to enable the user to control the remote devices 106a-n. A visual access menu 308 may include one or more options. The options may be a remote devices 302 option and services 304 option. Though not shown, but a person skilled in the art will appreciate that the visual access menu 308 (or the Internet of Things menu) may include more than two options. A user of the device 102 may select an option of these options from the displayed visual access menu 308 (or the Internet of Things menu). Further, the user may select an option by any of the following ways, but are not limited to, touching an option, through a voice command, through a gesture or hand movement, through an audio input, by pressing one or more keys at the device 102, and so forth.”); paragraph 0172 (“At step 808, a selection of a device option may be received from the user. The user may provide the selection by touching the screen of the display device or by making some gestures or through hand movements in front of the display device or the access device.”)).
detecting, by the device, an external device that is external to the device and the sensor (see at least Lavian, Figure 3A (shows the detected devices); paragraph 0087 (“In an embodiment of the invention, the user may select an option by making gestures or hand movements at the device. For example, the user may do a thumb up gesture to switch on an appliance at home or may do a thumb down gesture to switch off the same. Similarly, the user may do other gestures such as, but are not limited to, waving a hand, nodding head, smiling, blinking an eye, and so forth. In an embodiment of the invention, the device may include a camera for detecting the gestures or hand movements. In an embodiment of the invention, the VMThings 108 may be configured to analyze and interpret the gestures and hand movements.”); paragraph 0128 (The devices can be controlled by gestures or hand movements.); paragraph 0125 (An enhanced visual access menu of a device displays remote devices that may be controlled.)).
selecting the external device based, at least in part, on a first gesture detected by the device (see at least Lavian, paragraph 0124 (“FIG. 3A illustrates an exemplary visual access menu 308 and an enhanced visual access menu 310 at a device 102, in accordance with the first embodiment of the invention. As discussed with reference to FIG. 1A, the device 102 may include a graphical user interface (GUI) for accessing the visual access menus. Further, the VMThings 108 may display the visual access menu 308 (or the Internet of Things menu) at the device 102 so as to enable the user to control the remote devices 106a-n. A visual access menu 308 may include one or more options. The options may be a remote devices 302 option and services 304 option. Though not shown, but a person skilled in the art will appreciate that the visual access menu 308 (or the Internet of Things menu) may include more than two options. A user of the device 102 may select an option of these options from the displayed visual access menu 308 (or the Internet of Things menu). Further, the user may select an option by any of the following ways, but are not limited to, touching an option, through a voice command, through a gesture or hand movement, through an audio input, by pressing one or more keys at the device 102, and so forth.”); paragraph 0125; paragraph 0172 (“At step 808, a selection of a device option may be received from the user. The user may provide the selection by touching the screen of the display device or by making some gestures or through hand movements in front of the display device or the access device.”); paragraph 0186).
establishing a connection that is capable of sending gesture data between the device and the external device (see at least Lavian, Figure 1A, Item 104; Figure 4; paragraph 0080 (“The device 102 is connected to the plurality of remote devices 106a-n through the network 104.”); paragraph 0128 (A user may control devices using hand gestures or hand movements.); paragraph 0133; paragraph 0173 (“At step 810, the user may be connected to a remote device based on the selection of a device option.”)).
detecting a second gesture made in the 3D space (see at least Lavian, paragraph 0087 (“Further, the VMThings 108 may include stored gestures defined by the user at device 102 and may compare or match the real time gestures with the stored gestures.”); paragraph 0128 (“In another embodiment of the invention, the user may provide inputs at the device 102 by using different gestures or hand movements. For example the user may switch on an air conditioner by showing a gesture of a thumb up at the device 102. In an embodiment of the invention, the device 102 may include a camera. … In an embodiment of the invention, the VMThings 108 may store a list of voice commands or gestures or hand movements for selecting options from the visual access menus or the enhanced visual access menus (or the Internet of Things menus). The VMThings 108 may store the actions to be taken corresponding to these commands or gestures or hand movements.”)).
sending a command to the external device based, at least in part, on the second gesture (see at least Lavian, paragraph 0087 (“Further, the VMThings 108 may include stored gestures defined by the user at device 102 and may compare or match the real time gestures with the stored gestures.”); paragraph 0128 (“In another embodiment of the invention, the user may provide inputs at the device 102 by using different gestures or hand movements. For example the user may switch on an air conditioner by showing a gesture of a thumb up at the device 102. In an embodiment of the invention, the device 102 may include a camera. … In an embodiment of the invention, the VMThings 108 may store a list of voice commands or gestures or hand movements for selecting options from the visual access menus or the enhanced visual access menus (or the Internet of Things menus). The VMThings 108 may store the actions to be taken corresponding to these commands or gestures or hand movements.”)).
Lavian does not explicitly teach the following limitation; however, Ko teaches:
in a three dimensional (3D) space (see at least Ko, page 3 (“The most common 3D acquisition system is the stereoscopic vision system, which uses two cameras to obtain a left and right stereo image. These images are slightly offset on the same order as the human eyes are. As the computer compares the two images, it develops a disparity image that relates the displacement of objects in the images. Commonly used in 3D movies, stereoscopic vision systems enable exciting and low-cost entertainment. It is ideal for 2D movies and mobile devices, including smartphones and tablets.”); page 4 (“Stereoscopic vision systems can be more cost effective and fit in a small form factor, making them a good choice for devices like smartphones, tablets and other consumer devices.”); page 6 (gesture recognition)).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ko’s 3D gesture recognition system into Lavian’s method of using a smart phone to interact with other devices using voice commands or gestures. One of ordinary skill in the art would have been motivated to incorporate this feature in order to recognize the 3D movement of gestures, thereby identifying gestures more accurately and recognizing a wider variety of gestures.
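For background on the stereoscopic technique Ko describes (two offset cameras yielding a disparity image from which depth is recovered), the following hypothetical Python sketch applies the standard pinhole relation depth = focal length x baseline / disparity. The camera parameters and disparity values are illustrative assumptions, not figures from Ko.

    # Hypothetical stereo-depth sketch (Python): two horizontally offset
    # cameras yield a per-object pixel disparity, and depth follows from
    # depth = focal_length_px * baseline_m / disparity_px.
    FOCAL_LENGTH_PX = 700.0  # focal length in pixels (assumed)
    BASELINE_M = 0.06        # spacing between the two cameras in meters (assumed)

    def depth_from_disparity(disparity_px: float) -> float:
        """Return scene depth in meters for a given pixel disparity."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

    # A hand closer to the cameras shifts more between the left and right
    # images (larger disparity), so it resolves to a smaller depth; this
    # per-point depth is what enables tracking a gesture in 3D.
    for d in (70.0, 35.0, 14.0):
        print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.2f} m")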
Claim 6:
Claim 6 is rejected using the same rationale that was used for the rejection of claim 1.
Claim 11:
Claim 11 is rejected using the same rationale that was used for the rejection of claim 1.
Claim 21:
Lavian further teaches:
displaying data that identifies the external device (see at least Lavian, Figure 3A; paragraph 0125 (An enhanced visual access menu of a device displays remote devices that may be controlled.)).
interpreting the first gesture made in the 3D space as selecting the external device based, at least in part, on the data (see at least Lavian, paragraph 0124 (“FIG. 3A illustrates an exemplary visual access menu 308 and an enhanced visual access menu 310 at a device 102, in accordance with the first embodiment of the invention. As discussed with reference to FIG. 1A, the device 102 may include a graphical user interface (GUI) for accessing the visual access menus. Further, the VMThings 108 may display the visual access menu 308 (or the Internet of Things menu) at the device 102 so as to enable the user to control the remote devices 106a-n. A visual access menu 308 may include one or more options. The options may be a remote devices 302 option and services 304 option. Though not shown, but a person skilled in the art will appreciate that the visual access menu 308 (or the Internet of Things menu) may include more than two options. A user of the device 102 may select an option of these options from the displayed visual access menu 308 (or the Internet of Things menu). Further, the user may select an option by any of the following ways, but are not limited to, touching an option, through a voice command, through a gesture or hand movement, through an audio input, by pressing one or more keys at the device 102, and so forth.”); paragraph 0172 (“At step 808, a selection of a device option may be received from the user. The user may provide the selection by touching the screen of the display device or by making some gestures or through hand movements in front of the display device or the access device.”)).
Claim 22:
Lavian further teaches:
processing the second gesture; and sending the processed second gesture via the connection to the external device (see at least Lavian, paragraph 0087 (“Further, the VMThings 108 may include stored gestures defined by the user at device 102 and may compare or match the real time gestures with the stored gestures.”); paragraph 0128 (“In another embodiment of the invention, the user may provide inputs at the device 102 by using different gestures or hand movements. For example the user may switch on an air conditioner by showing a gesture of a thumb up at the device 102. In an embodiment of the invention, the device 102 may include a camera. … In an embodiment of the invention, the VMThings 108 may store a list of voice commands or gestures or hand movements for selecting options from the visual access menus or the enhanced visual access menus (or the Internet of Things menus). The VMThings 108 may store the actions to be taken corresponding to these commands or gestures or hand movements.” The gestures are recognized at the VMThings and then the processed information is sent to the device that is being controlled.)).
Claim 23:
Lavian further teaches:
including processing the second gesture based, at least in part, on the first gesture (see at least Lavian, Figure 8, items 808 (“Receive a selection of a device option from a user”), 812 (“Control one or more operations of the remote device based on selection of the device option”) and associated text).
Claim 24:
Lavian further teaches:
providing an instruction to the external device to perform an action, wherein the action is based, at least in part, on the first gesture (see at least Lavian, Figure 8, items 808 (“Receive a selection of a device option from a user”), 812 (“Control one or more operations of the remote device based on selection of the device option”) and associated text).
Claim 27:
Lavian further teaches:
detecting, by the sensor, the second gesture; and determining, for the second gesture, gesture information based, at least in part, on a library of gestures stored in the device (see at least Lavian, paragraph 0087 (“Further, the VMThings 108 may include stored gestures defined by the user at device 102 and may compare or match the real time gestures with the stored gestures.”); paragraph 0128 (“In another embodiment of the invention, the user may provide inputs at the device 102 by using different gestures or hand movements. For example the user may switch on an air conditioner by showing a gesture of a thumb up at the device 102. In an embodiment of the invention, the device 102 may include a camera. … In an embodiment of the invention, the VMThings 108 may store a list of voice commands or gestures or hand movements for selecting options from the visual access menus or the enhanced visual access menus (or the Internet of Things menus). The VMThings 108 may store the actions to be taken corresponding to these commands or gestures or hand movements.”)).
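For illustration only, the following Python sketch models the comparison Lavian describes at paragraph 0087: a real-time gesture is matched against a stored library, and the action stored alongside the matching gesture is retrieved. The feature-vector representation, library contents, and threshold are assumptions, not details from Lavian.

    # Hypothetical gesture-library sketch (Python): a real-time gesture,
    # reduced to a feature vector, is compared against stored gestures and
    # the stored action is returned (cf. Lavian, paras. 0087, 0128).
    import math

    GESTURE_LIBRARY = {
        "thumb_up":   ([1.0, 0.2, 0.9],  "SWITCH_ON"),
        "thumb_down": ([1.0, 0.2, -0.9], "SWITCH_OFF"),
        "wave":       ([0.1, 1.0, 0.0],  "TOGGLE_MENU"),
    }

    def match_gesture(observed, threshold=0.5):
        """Return (name, action) of the closest stored gesture, or None."""
        name, (features, action) = min(
            GESTURE_LIBRARY.items(),
            key=lambda item: math.dist(observed, item[1][0]))
        # Reject the match if even the closest stored gesture is too far.
        return (name, action) if math.dist(observed, features) <= threshold else None

    print(match_gesture([0.95, 0.25, 0.85]))  # -> ('thumb_up', 'SWITCH_ON')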
Claim 28:
Lavian further teaches:
detecting, by the device, a third gesture; interpreting the third gesture as selecting another external device; and establishing a second connection that is capable of exchanging commands between the external device and the other external device (see at least Lavian, Figure 3A and associated text (A user can select a second device to control.)). See rejection of claim 1.
Claim 29:
Claim 29 is rejected using the same rationale that was used for the rejection of claim 21.
Claim 30:
Claim 30 is rejected using the same rationale that was used for the rejection of claim 22.
Claim 31:
Claim 31 is rejected using the same rationale that was used for the rejection of claim 23.
Claim 32:
Claim 32 is rejected using the same rationale that was used for the rejection of claim 24.
Claim 35:
Claim 35 is rejected using the same rationale that was used for the rejection of claim 27.
Claim 36:
Claim 36 is rejected using the same rationale that was used for the rejection of claim 28.
Claim 37:
Claim 37 is rejected using the same rationale that was used for the rejection of claim 21.
Claim 39:
Claim 39 is rejected using the same rationale that was used for the rejection of claim 23.
Claim 40:
Claim 40 is rejected using the same rationale that was used for the rejection of claim 24.
Claims 25, 26, 33, and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Lavian et al., U.S. Patent Application Publication No. 2013/0080898 A1; Ko, Dong-Ik and Gaurav Agarwal, “Gesture Recognition: Enabling Natural Interactions with Electronics,” Texas Instruments, https://www.edge-ai-vision.com/2012/04/gesture-recognition-enabling-natural-interactions-with-electronics/ (April 13, 2012); and Poon et al., U.S. Patent Application Publication No. 2011/0175822 A1.
Claim 25:
Lavian does not explicitly teach the following limitation; however, Poon teaches:
wherein selecting the external device is based, at least in part, on biometric information determined from the object performing at least one of the first or second gesture (see at least Poon, paragraphs 0028-0035 (A user’s fingerprint is detected and recognized while the user is making a gesture.); paragraph 0036 (“Also note that in particular embodiments, fingerprint could be used to authenticate and authorize the object transfer with the source and destination device.”)).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Poon’s method of detecting a fingerprint while a user makes a gesture to control a device into Lavian’s method of using a smart phone to interact with other devices using voice commands or gestures. One of ordinary skill in the art would have been motivated to incorporate this feature to ensure that the user is authorized to control the device; this is an efficient way of preventing an unauthorized user from controlling the device.
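For illustration only, the following Python sketch models the authorization flow Poon describes: a fingerprint captured while the user gestures gates whether the gesture is allowed to control the device. The hash values and identifiers are hypothetical, not drawn from Poon.

    # Hypothetical authorization sketch (Python): the fingerprint detected
    # while the user gestures must match an enrolled user before the
    # gesture is allowed to control the external device (cf. Poon,
    # paras. 0028-0036).
    AUTHORIZED_FINGERPRINTS = {"a3f9c1"}  # enrolled fingerprint hashes (assumed)

    def handle_gesture(fingerprint_hash: str, gesture: str) -> str:
        if fingerprint_hash not in AUTHORIZED_FINGERPRINTS:
            # Blocking unauthorized users is the benefit the rejection
            # cites as the motivation to combine Poon with Lavian.
            return "rejected: unauthorized user"
        return f"executing {gesture}"

    print(handle_gesture("a3f9c1", "thumb_up"))  # executing thumb_up
    print(handle_gesture("000000", "thumb_up"))  # rejected: unauthorized user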
Claim 26:
Lavian does not explicitly teach the following limitation; however, Poon teaches:
wherein selecting the external device is based, at least in part, on biometric information determined from the object performing at least one of the first or second gesture, wherein the biometric information is based, at least in part, on at least one of vein patterns, palm prints, or fingerprints (see at least Poon, paragraphs 0028-0035 (A user’s fingerprint is detected and recognized while the user is making a gesture.); paragraph 0036 (“Also note that in particular embodiments, fingerprint could be used to authenticate and authorize the object transfer with the source and destination device.”)).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Poon’s method of detecting a fingerprint while a user makes a gesture to control a device into Lavian’s method of using a smart phone to interact with other devices using voice commands or gestures. One of ordinary skill in the art would have been motivated to incorporate this feature to ensure that the user is authorized to control the device; this is an efficient way of preventing an unauthorized user from controlling the device.
Claim 33:
Claim 33 is rejected using the same rationale that was used for the rejection of claim 25.
Claim 34:
Claim 34 is rejected using the same rationale that was used for the rejection of claim 26.
Claim 38 is rejected under 35 U.S.C. 103 as being unpatentable over Lavian et al., U.S. Patent Application Publication No. 2013/0080898 A1; Ko, Dong-Ik and Gaurav Agarwal, “Gesture Recognition: Enabling Natural Interactions with Electronics,” Texas Instruments, https://www.edge-ai-vision.com/2012/04/gesture-recognition-enabling-natural-interactions-with-electronics/ (April 13, 2012); and Lynch et al., U.S. Patent No. 9,069,385 B1.
Claim 38:
Lavian further teaches:
process the second gesture, … and send the processed second gesture via the connection to the external device (see at least Lavian, paragraph 0087 (“Further, the VMThings 108 may include stored gestures defined by the user at device 102 and may compare or match the real time gestures with the stored gestures.”); paragraph 0128 (“In another embodiment of the invention, the user may provide inputs at the device 102 by using different gestures or hand movements. For example the user may switch on an air conditioner by showing a gesture of a thumb up at the device 102. In an embodiment of the invention, the device 102 may include a camera. … In an embodiment of the invention, the VMThings 108 may store a list of voice commands or gestures or hand movements for selecting options from the visual access menus or the enhanced visual access menus (or the Internet of Things menus). The VMThings 108 may store the actions to be taken corresponding to these commands or gestures or hand movements.” The gestures are recognized at the VMThings and then the processed information is sent to the device that is being controlled.)).
Lavian does not explicitly teach the following limitation; however, Lynch teaches:
wherein the processing enables sending the processed second gesture with less bandwidth than sending a second unprocessed gesture (see at least Lynch, column 9, lines 8-19 (“In embodiments of the present invention, the communication component 141 of FIG. 2 is configured for compressing the coordinate representations 170 into reduced-bandwidth streaming data. Generally, the streaming data is conveyed from the source device 120 via the wireless connection 110. In an exemplary embodiment, the reduced-bandwidth streaming data is conveyed at a bit rate that is less than that of streaming video media. That is, because the streaming data being transmitted comprises spatial coordinates (e.g., coordinate representations 170 of physical gestures 180), the size of the data is substantially less than image data, thus, obviating the need for video codec compression.”)). Note: This limitation recites an intended use and describes an inherent outcome or purpose of the processing of the gesture. Although this limitation is found in the art, it is not given patentable weight.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Lynch’s method of compressing gesture data to reduce bandwidth into Lavian’s method of using a smart phone to interact with other devices using voice commands or gestures. One of ordinary skill in the art would have been motivated to incorporate this feature to reduce the bandwidth required, thereby increasing data transfer speeds, reducing latency, and reducing costs.
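To make Lynch’s bandwidth observation concrete, the following hypothetical Python comparison contrasts streaming (x, y, z) coordinate samples with streaming uncompressed video frames. The frame dimensions and sample rate are illustrative assumptions, not values from Lynch.

    # Hypothetical bandwidth comparison (Python): streaming a gesture as
    # (x, y, z) coordinate samples versus streaming raw video frames,
    # per Lynch's point that spatial coordinates are substantially
    # smaller than image data.
    import struct

    FPS = 30                              # samples or frames per second (assumed)
    COORD_BYTES = struct.calcsize("fff")  # one (x, y, z) float sample = 12 bytes
    FRAME_BYTES = 640 * 480 * 3           # one uncompressed VGA RGB frame (assumed)

    coord_rate = FPS * COORD_BYTES        # bytes per second for coordinates
    video_rate = FPS * FRAME_BYTES        # bytes per second for raw frames

    print(f"coordinates: {coord_rate:>10,} B/s")          # 360 B/s
    print(f"raw video:   {video_rate:>10,} B/s")          # 27,648,000 B/s
    print(f"reduction:   {video_rate // coord_rate:,}x")  # 76,800x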
Other Relevant Prior Art
Phillips, U.S. Patent No. 9,274,742 B2. Phillips teaches visual-symbolic control (VSC) software that allows a controlling device to control a target device that has a display that changes in response to user interaction.
Lapidot et al., U.S. Patent No. 9,766,855 B2. Lapidot teaches controlling a surveillance system using gestures and/or voice commands.
Friedman, U.S. Patent Application Publication No. 2011/0090407 A1. This reference teaches gesture-based remote control. See Figure 2 and associated text.
Kramer et al., U.S. Patent Application Publication No. 2014/0195988 A1. This reference teaches gestural control of devices. See paragraph 0437.
De Schepper et al., U.S. Patent Application Publication No. 2014/0320274 A2. This reference teaches gesture control of remotely controllable devices.
Yoon et al., U.S. Patent Application Publication No. 2008/0019589 A1. This reference teaches a gesture-recognizing device that is, for example, a TV control set-top box.
Forutanpour et al., U.S. Patent No. 9,170,674 B2. This reference teaches gesture-based device control.
S. Berman and H. Stern, “Sensors for Gesture Recognition Systems,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 3, pp. 277-290, May 2012, doi: 10.1109/TSMCC.2011.2161077. Berman teaches a gesture recognition system.
Adzhigirey et al., U.S. Patent Application Publication No. 2014/0198024 A1. This reference teaches detecting three dimensional gestures.
Park et al., U.S. Patent Application Publication No. 2014/0111423 A1. This reference teaches a mobile system with a three dimensional image sensor.
Chae et al., U.S. Patent Application Publication No. 2007/0275755 A1. This reference teaches a mobile wireless console that senses three dimensional motion.
Lee et al., U.S. Patent Application Publication No. 2012/0062558 A1. This reference teaches a mobile device, such as a smart phone, that detects three dimensional movement of a user’s hand.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELIZABETH H ROSEN whose telephone number is (571) 270-1850 and email address is elizabeth.rosen@uspto.gov. The examiner can normally be reached Monday - Friday, 10 AM ET - 7 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Anderson, can be reached at 571-270-0508. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ELIZABETH H ROSEN/Primary Examiner, Art Unit 3693