Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,409,366 in view of Shamaie et al. (US 2009/0051648). Although the claims at issue are not identical, they are not patentably distinct from each other because the species covers the genus.
Instant Application No. 18/800,661, claim 1:
1. A gesture-based device activation system comprising: a server comprising a memory, wherein a gesture database is stored in the memory of the server; a user computing device coupled to the server; and a gesture application operable on the user computing device, wherein the server is programmed to: receive, from the user, an input gesture command captured by operation of a gesture application when the input gesture is performed by the user moving the user computing device; automatically process the input gesture command and compare the input gesture command with stored gesture commands comprising user-defined gesture commands that are each associated with a user-selected device function and determine a match between the input gesture command and one stored gesture command; and automatically send instructions to the user computing device to execute a function associated with the one stored gesture command.

U.S. Patent No. 11,409,366, claims 1 and 2:
1. A gesture-based device activation system comprising: a user computing device; and a gesture application operable on the user computing device, wherein the gesture application is selectable to activate and run on the user computing device or configured to run in a background of the device, wherein the gesture application, when active, monitors gesture entry by a user; and the application is programmed to: receive, from the user, an input gesture command, wherein the input gesture command is performed by the user of the user computing device; automatically process the input gesture command, access a gesture database, and retrieve gesture information stored in the gesture database including stored gesture commands; automatically compare the input gesture command with the stored gesture commands and determine a match between the input gesture command and one stored gesture command; and automatically execute a function associated with the one stored gesture command, wherein the stored gesture commands are user-defined gesture commands, and wherein the application is further programmed to: associate the input gesture command with a device function in response to user input to the user computing device; and store the input gesture command with the associated device function as a user-defined gesture command, wherein the device function comprises: sending a text, wherein the input gesture command is motion of the user computing device in a user-defined pattern associated with a grid pattern.
2. The system of claim 1 further comprising a server, wherein the gesture database is stored on a memory thereof.
‘366 fails to expressly teach a three-dimensional movement pattern in air and wherein the function comprises at least one of: placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email.
However, Shamaie teaches a three-dimensional movement pattern in air (Fig. 7, 702o; [0064]) wherein the function comprises at least one of: placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email (Fig. 1, 104; at least 112d, placing a call).
[0064] Other examples illustrate that a gesture may be generally linear (e.g., 702n) or may be polygonal (e.g., 702d, 702m). Gestures may be formed through connected movements, or the gestures may include disconnected motions or trajectories (e.g., 702h). A gesture may be formed through continuous movement, or may involve discontinuous movement (e.g., 702k, or a gesture (not shown) representing the letter "T"). Gestures may include intersecting lines (e.g., 702e, 702L). Other example gestures are possible, such as three-dimensional gestures (e.g., 702o) and a gesture made up of the tight movements formed from a handshake (e.g., 702i).

[image: Shamaie figure]
[image: Shamaie figure]
It would have been obvious to one of ordinary skill in the art to modify the teachings of ‘366 to further include wherein the function comprises at least one of: placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email, as taught by Shamaie, in order to allow mobile devices to be made smaller and to effect increased accuracy in functionality selection ([0003]-[0005]).
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,061,744 in view of Shamaie et al. (US 2009/0051648). Although the claims at issue are not identical, they are not patentably distinct from each other because the species covers the genus.
Instant Application No. 18/800,661, claim 1:
1. A gesture-based device activation system comprising: a server comprising a memory, wherein a gesture database is stored in the memory of the server; a user computing device coupled to the server; and a gesture application operable on the user computing device, wherein the server is programmed to: receive, from the user, an input gesture command captured by operation of a gesture application when the input gesture is performed by the user moving the user computing device; automatically process the input gesture command and compare the input gesture command with stored gesture commands comprising user-defined gesture commands that are each associated with a user-selected device function and determine a match between the input gesture command and one stored gesture command; and automatically send instructions to the user computing device to execute a function associated with the one stored gesture command.

U.S. Patent No. 12,061,744, claim 1:
1. A gesture-based device activation system comprising: a server comprising a memory, wherein a gesture database is stored in the memory of the server; a user computing device coupled to the server; and a gesture application operable on the user computing device, wherein the server is programmed to: receive, from a user, an input gesture command, wherein the input gesture command is captured by operation of the gesture application on the user computing device when an input gesture is performed by the user moving the user computing device; associate the input gesture command with a device function in response to user input to the user computing device; store the input gesture command with the associated device function as a user-defined gesture command in the gesture database; receive, from the user, the input gesture command again captured by operation of the gesture application when the input gesture is performed by the user moving the user computing device; automatically process the input gesture command, access the gesture database, and retrieve gesture information stored in the gesture database including stored gesture commands captured by operation of the gesture application on the user computing device when the input gesture is performed by the user moving the user computing device; automatically compare the input gesture command with the stored gesture commands and determine a match between the input gesture command and one stored gesture command; and automatically send instructions to the user computing device to execute a function associated with the one stored gesture command.
‘744 fails to expressly teach a three-dimensional movement pattern in air and wherein the function comprises at least one of: placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email.
However, Shamaie teaches a three-dimensional movement pattern in air (Fig. 7, 702o; [0064]) wherein the function comprises at least one of: placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email (Fig. 1, 104; at least 112d, placing a call).
[0064] Other examples illustrate that a gesture may be generally linear (e.g., 702n) or may be polygonal (e.g., 702d, 702m). Gestures may be formed through connected movements, or the gestures may include disconnected motions or trajectories (e.g., 702h). A gesture may be formed through continuous movement, or may involve discontinuous movement (e.g., 702k, or a gesture (not shown) representing the letter "T"). Gestures may include intersecting lines (e.g., 702e, 702L). Other example gestures are possible, such as three-dimensional gestures (e.g., 702o) and a gesture made up of the tight movements formed from a handshake (e.g., 702i).
It would have been obvious to one of ordinary skill in the art to modify the teachings of ‘744 to further include wherein the function comprises at least one of: placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email, as taught by Shamaie, in order to allow mobile devices to be made smaller and to effect increased accuracy in functionality selection ([0003]-[0005]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Ye et al. (US 2013/0162525), hereinafter Ye, in view of Want et al. (US 8,638,190), hereinafter Want, in view of Forutanpour et al. (US 2015/0078613), hereinafter Forutanpour, and further in view of Shamaie et al. (US 2009/0051648), hereinafter Shamaie.
Regarding claim 1, Ye teaches a gesture-based device activation system (abstract) comprising:
Ye fails to teach a server comprising a memory.
However, Want teaches a server comprising a memory, wherein a gesture database is stored in the memory of the server (Fig. 1, elements 20/18 and 12; Fig. 4, element 12 (86); col. 6, line 52 - col. 7, line 2, Want).
It would have been obvious to one of ordinary skill in the art to modify the teachings of Ye to further include a server comprising a memory, as taught by Want, in order to allow for the use of remote resources, which can save power.
Ye and Want fail to expressly teach a gesture application on the user computing device.
However, Forutanpour teaches a gesture application on the user computing device (Fig. 1B, elements 130/134/110/140; [0033]-[0034], Forutanpour).
It would have been obvious to one of ordinary skill in the art to modify the teachings of Ye and Want to further include a gesture application on the user computing device in order to compartmentalize the input of the gesture control, and as a simple substitution of one known means for another, the results of which would have been predictable in handling gesture information.
Therefore, Ye in view of Want and Forutanpour teaches:
a user computing device coupled to the server (Fig. 1, device; Fig. 4, elements 100, 33, Ye) and a gesture application operable on the user computing device (Fig. 1B, 130/134/110/140; [0033]-[0034], Forutanpour), wherein the server is programmed to: receive, from the user, an input gesture command captured by operation of a gesture application when the input gesture is performed by the user moving the user computing device (Fig. 17, 330, Ye; col. 8, lines 31-43, Want) in a three-dimensional movement pattern in air (Fig. 2, swing and rotate; Fig. 19, X, Y, and Z motion in air; [0005], [0056]-[0058], Ye);
[0059] According to a variation of the embodiment shown in FIG. 1, the processor may perform sensor fusion according to the sensor data to obtain motion data based on the device coordinate system and to obtain orientation of the electronic device, such as the orientation based on the global coordinate system, where the motion data based on the device coordinate system comprise linear acceleration based on the device coordinate system. In addition, the processor may convert the motion data based on the device coordinate system into at least one portion of the converted data according to the orientation based on the global coordinate system, where the aforementioned at least one portion of the converted data comprise linear acceleration based on the global coordinate system, and the linear acceleration based on the global coordinate system and the orientation based on the global coordinate system are utilized for sensing linear translation and rotation motion of the electronic device respectively in three-dimensional (3D) space. More particularly, according to the converted data based on the global coordinate system, the processor can perform motion recognition to recognize the user's motion in the 3D space and at least one character drawn by the user in the 3D space. For example, the processor may perform character recognition by mapping the converted data onto at least one predetermined plane of the 3D space to obtain trajectories on the aforementioned at least one predetermined plane, where the aforementioned at least one predetermined plane is parallel to two of three axes of the global coordinate system, and the three axes of the global coordinate system are orthogonal to each other. 
In a situation where the database comprises a pre-built database, the pre-built database can be utilized for performing character recognition, allowing the user to follow drawing rules corresponding to meaningful characters to be recognized, without need for the user to train the database in advance. This is for illustrative purposes only, and is not meant to be a limitation of the present invention. According to another variation of this embodiment, the processor may load the aforementioned at least one character drawn by the user in the 3D space into a character recognition software module, for performing character recognition.
[image: figure from Ye]
automatically process the input gesture command and compare the input gesture command with stored gesture commands comprising user-defined gesture commands that are each associated with a user-selected device function and determine a match between the input gesture command and one stored gesture command (Fig. 19, user-defined motion, Ye); and automatically send instructions to the user computing device to execute a function associated with the one stored gesture command (Fig. 1B, 130/134/110/140; [0033]-[0034], Forutanpour; col. 16, lines 45-55, Want; [0102]-[0118], Ye; Fig. 1, 20/18 and 12; Fig. 4, 12, 86, Want).
Ye, Want, and Forutanpour fail to expressly teach wherein the function comprises at least one of: placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email.
However, Shamaie teaches a three-dimensional movement pattern in air (Fig. 7, 702o; [0064]) wherein the function comprises at least one of: placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email (Fig. 1, 104; at least 112d, placing a call).
[0064] Other examples illustrate that a gesture may be generally linear (e.g., 702n) or may be polygonal (e.g., 702d, 702m). Gestures may be formed through connected movements, or the gestures may include disconnected motions or trajectories (e.g., 702h). A gesture may be formed through continuous movement, or may involve discontinuous movement (e.g., 702k, or a gesture (not shown) representing the letter "T"). Gestures may include intersecting lines (e.g., 702e, 702L). Other example gestures are possible, such as three-dimensional gestures (e.g., 702o) and a gesture made up of the tight movements formed from a handshake (e.g., 702i).
[image: Shamaie figure]
It would have been obvious to one of ordinary skill in the art to modify the teachings of Ye, Want, and Forutanpour to further include wherein the function comprises at least one of: placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email, as taught by Shamaie, in order to allow mobile devices to be made smaller and to effect increased accuracy in functionality selection ([0003]-[0005]).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant contends:
[image: excerpt of Applicant's arguments]
While Examiner believes Ye [0168] reads on “three-dimensional movement pattern in the air” under MPEP § 2111, Examiner has further cited Shamaie, which expressly teaches a “three-dimensional movement pattern in the air” (Fig. 7, 702o; [0064]) and, more pertinently, the functions of placing a phone call, engaging a camera and taking a photo, effecting payment from an application, sending a text, or sending an email (Fig. 1, 104; at least 112d, placing a call). In view of Shamaie’s teachings combined with Ye’s user-defined gestures, Examiner believes the elements as currently claimed would have been well within the purview of one of ordinary skill in the art for the reasons cited above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRANT SITTA whose telephone number is (571)270-1542. The examiner can normally be reached M-F 7:30-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patrick Edouard can be reached at 571-272-6084. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRANT SITTA/ Primary Examiner, Art Unit 2622