DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 6 and 13 are objected to because of the following informalities:
In claim 6, line 4, the term “of” is missing between “plurality” and “sensed” in the limitation “one or a plurality sensed gestures”.
In claim 13, line 4, the term “of” is missing between “plurality” and “sensed” in the limitation “one or a plurality sensed gestures”.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 8 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 9 of U.S. Patent No. 12,340,027 because claims 1 and 8 of the instant application are broader in every aspect than claim 9 of the patent and are therefore obvious variants thereof.
Current Application 19/245,339 mapped against claim 9 of U.S. Patent No. 12,340,027:

Application: 1. A gesture interface device comprising:
one or a plurality of gesture sensors for sensing one or a plurality gestures of a user of one or a plurality of users; and
Patent: 9. A gesture and voice-controlled interface device comprising:
one or a plurality of gesture sensors for sensing gestures of a user;
(it is understood that “a user of one or a plurality of users” is the same as “a user”)

Application: a processor configured to obtain an input of one or a plurality of sensed gestures from said one or a plurality of gesture sensors,
Patent: a processor configured to obtain one or a plurality of sensed gestures from said one or a plurality of gesture sensors.
(the term “input” is not specifically written, appearing only as “an input from the user”; however, it is understood that the one or a plurality of sensed gestures are input into the processor as it obtains those gestures)

Application: to analyze the one or a plurality sensed gestures to identify a specific person based on comparing said one or a plurality of sensed gestures to a gesture signature of the specific person,
Patent: to analyze the sensed gesture and sensed sounds to identify an input from the user; verify, based on comparing said one or a plurality of sensed gestures, to a gesture signature of the specific person that the one or a plurality sensed gestures were performed by the specific person;
(based on the analysis, the processor is configured to identify a specific person)

Application: and to generate an output signal corresponding to the input to a controlled device only if it was verified that said one or a plurality of sensed gestures were performed by the specific person.
Patent: to generate an output signal corresponding to the input to a controlled device, only if it was verified that the one or a plurality of sensed gestures were performed by the specific person.

Application: 8. A method comprising: using a gesture interface device comprising: one or a plurality of gesture sensors for sensing one or a plurality gestures of a user of one or a plurality of users;
Patent: 9. A gesture and voice-controlled interface device comprising:
one or a plurality of gesture sensors for sensing gestures of a user;
(it is understood that “a user of one or a plurality of users” is the same as “a user”)

Application: a processor, obtaining an input of one or a plurality of sensed gestures from said one or a plurality of gesture sensors;
Patent: a processor configured to obtain one or a plurality of sensed gestures from said one or a plurality of gesture sensors.
(the term “input” is not specifically written, appearing only as “an input from the user”; however, it is understood that the one or a plurality of sensed gestures are input into the processor as it obtains those gestures)

Application: analyzing the one or a plurality sensed gestures to identify a specific person based on comparing said one or a plurality of sensed gestures to a gesture signature of the specific person;
Patent: to analyze the sensed gesture and sensed sounds to identify an input from the user; verify, based on comparing said one or a plurality of sensed gestures, to a gesture signature of the specific person that the one or a plurality sensed gestures were performed by the specific person;
(based on the analysis, the processor is configured to identify a specific person)

Application: and generating an output signal corresponding to the input to a controlled device only if it was verified that said one or a plurality of sensed gestures were performed by the specific person.
Patent: to generate an output signal corresponding to the input to a controlled device, only if it was verified that the one or a plurality of sensed gestures were performed by the specific person.
Claims 1 and 8 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,340,029, in view of the prior art reference of Child (U.S. Pub. No. 2017/0195636) because:
Current Application 19/245,339 mapped against claim 1 of U.S. Patent No. 12,340,029:

Application: 1. A gesture interface device comprising:
one or a plurality of gesture sensors for sensing one or a plurality gestures of a user of one or a plurality of users; and
Patent: 1. A gesture-controlled interface device comprising:
one or a plurality of gesture sensors for sensing gestures of a user;
(the term “or” allows the examiner to choose between the “one” and “plurality” alternatives; the examiner has selected “one”)

Application: a processor configured to obtain an input of one or a plurality of sensed gestures from said one or a plurality of gesture sensors,
Patent: a processor configured to obtain one or a plurality of sensed gestures from said one or a plurality of gesture sensors
(the term “input” is not specifically written, appearing only as “an output signal corresponding to the input to a controlled device”; however, it is understood that the one or a plurality of sensed gestures are input into the processor as it obtains those gestures)

Application: to analyze the one or a plurality sensed gestures to identify a specific person based on comparing said one or a plurality of sensed gestures to a gesture signature of the specific person,
Patent: analyzing the one or a plurality of sensed gestures (*1)

Application: to generate an output signal corresponding to the input to a controlled device only if it was verified that said one or a plurality of sensed gestures were performed by the specific person.
Patent: generating an output signal corresponding to the input to a controlled device, (*2)

Application: 8. A method comprising: using a gesture interface device comprising:
one or a plurality of gesture sensors for sensing one or a plurality gestures of a user of one or a plurality of users;
Patent: 1. A gesture-controlled interface device comprising:
one or a plurality of gesture sensors for sensing gestures of a user;
(the term “or” allows the examiner to choose between the “one” and “plurality” alternatives; the examiner has selected “one”)

Application: a processor, obtaining an input of one or a plurality of sensed gestures from said one or a plurality of gesture sensors;
Patent: a processor configured to obtain one or a plurality of sensed gestures from said one or a plurality of gesture sensors
(the term “input” is not specifically written, appearing only as “an output signal corresponding to the input to a controlled device”; however, it is understood that the one or a plurality of sensed gestures are input into the processor as it obtains those gestures)

Application: analyzing the one or a plurality sensed gestures to identify a specific person based on comparing said one or a plurality of sensed gestures to a gesture signature of the specific person;
Patent: analyzing the one or a plurality of sensed gestures (*1)

Application: and generating an output signal corresponding to the input to a controlled device only if it was verified that said one or a plurality of sensed gestures were performed by the specific person.
Patent: generating an output signal corresponding to the input to a controlled device, (*2)
The U.S. Patent No. 12,340,029 does not teach all of the limitations of the current application’s claims 1 and 8 (the limitations marked *1 and *2 in the chart above); however, the prior art reference of Child (U.S. Pub. No. 2017/0195636) teaches:
*1) analyze the one or a plurality sensed gestures (in step 1010 the gesture is identified, [0135], lines 1-5) to identify a specific person based on comparing (the gesture is compared to a predefined gesture stored in the memory, [0135], lines 1-4) said one or a plurality of sensed gestures to a gesture signature ([0082], lines 7-13) of the specific person (the gesture that was compared to a predefined gesture stored in the memory is used in step 1020 to determine a predefined contact associated with the gesture and the identity of the user, [0137], lines 1-6), and
*2) to generate an output signal (transmit a call request to the child’s father) corresponding to the input to a controlled device (gestural command, [0137], line 7 and camera is the controlled device, [0133], line 2) only if it was verified that said one or a plurality of sensed gestures were performed by the specific person (the child’s identity is determined based on the gesture and other physical characteristic, therefore the child can contact only the father rather than grandmother or any other contacts, [0137], lines 6-17).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have added the identification of a specific person based on comparing said one or a plurality of sensed gestures to a gesture signature of the specific person, as taught by Child, to the device of claim 1 of U.S. Patent No. 12,340,029, because a child may make a gestural command, detected by a video monitoring component of the home automation system, which indicates that the system should “Call dad.” The system may identify this gesture as indicating a desire to reach a parent, and may communicate a call request to the child's mom, simultaneously or serially, based on this identifying ([0012], lines 20-27).
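For illustration only, the verification logic described in *1 and *2 above (comparing a sensed gesture to a stored gesture signature and generating an output signal only upon verification) may be sketched as follows; all names, data shapes, and the threshold are hypothetical and are not drawn from the claims or the cited references:

```python
# Minimal sketch (hypothetical): gate the output signal to a controlled
# device on verifying that the sensed gesture matches the specific
# person's stored gesture signature.
import numpy as np

def verify_and_output(sensed_gesture: np.ndarray,
                      signature: np.ndarray,
                      threshold: float = 0.5):
    """Generate an output signal only if the sensed gesture is verified as
    performed by the specific person whose signature is stored."""
    # Compare the sensed gesture to the stored gesture signature; Euclidean
    # distance stands in for whatever comparison an implementation might use.
    distance = float(np.linalg.norm(sensed_gesture - signature))
    if distance <= threshold:
        # Verified: emit an output signal corresponding to the input.
        return {"verified": True, "output_signal": "command_for_controlled_device"}
    # Not verified: no output signal is generated.
    return None
```

Any comparison function and threshold could stand in for the claimed “comparing”; distance against a stored template is used here only for concreteness.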
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Child (U.S. Pub. No. 2017/0195636) in view of Chen (U.S. Pub. No. 2018/0025562).
As to claim 1, Child teaches a gesture interface device ([0005], line 12, cell phones or personal computing devices) comprising:
one or a plurality of gesture sensors (camera; the camera may receive one or a combination of inputs, wherein the inputs could be an indirect input such as a gesture, therefore the camera is interpreted as a gesture sensor, [0079], lines 2 and 7-9) for sensing one or a plurality gestures of a user of one or a plurality of users ([0079], lines 1-9, the camera receives an indirect input such as a gesture from a user); and
Child teaches a processor ([0014], lines 2-10 and device 115 or control panel 130 has a processor for receiving and displaying data from one or more sensor units 110, [0055], lines 14-23), to analyze the one or a plurality sensed gestures (in step 1010 the gesture is identified, [0135], lines 1-5) to identify a specific person based on comparing (the gesture is compared to a predefined gesture stored in the memory, [0135], lines 1-4) said one or a plurality of sensed gestures to a gesture signature ([0082], lines 7-13) of the specific person (the gesture that was compared to a predefined gesture stored in the memory is used in step 1020 to determine a predefined contact associated with the gesture and the identity of the user, [0137], lines 1-6), and
to generate an output signal (transmit a call request to the child’s father) corresponding to the input to a controlled device (gestural command, [0137], line 7 and camera is the controlled device, [0133], line 2) only if it was verified that said one or a plurality of sensed gestures were performed by the specific person (the child’s identity is determined based on the gesture and other physical characteristic, therefore the child can contact only the father rather than grandmother or any other contacts, [0137], lines 6-17).
Child does not mention a processor configured to obtain an input of one or a plurality of sensed gestures from said one or a plurality of gesture sensors; however,
Chen teaches a processor (204) configured to obtain an input of one or a plurality of sensed gestures from said one or a plurality of gesture sensors (gestures sensed via sensor circuit 202, [0032], lines 4-7 and 11-14).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have added the processor of Chen to the gesture interface device of Child because doing so provides the control signal to a user device, external to the device 200, based on the information associated with the control signal ([0032], lines 19-22).
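For illustration of the limitation for which Chen is relied upon (a processor obtaining an input of one or a plurality of sensed gestures from one or a plurality of gesture sensors), a minimal sketch follows; the sensor interface, names, and gesture encoding are hypothetical assumptions, not drawn from Chen:

```python
# Minimal sketch (hypothetical): a processor collecting sensed-gesture
# input from one or a plurality of gesture sensors.
from typing import Callable, List, Sequence

# Each "sensor" is modeled as a callable returning one sensed-gesture
# feature vector; any real sensor driver could stand behind this interface.
GestureSensor = Callable[[], List[float]]

def obtain_gesture_input(sensors: Sequence[GestureSensor]) -> List[List[float]]:
    """Collect the sensed gestures from each gesture sensor as the
    processor's input (one feature vector per sensor reading)."""
    return [read_sensor() for read_sensor in sensors]
```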
As to claim 2, Child teaches said one or a plurality of gesture sensors comprises one or more sensors selected from the group of sensors consisting of: reflectometer sensor, biopotential sensor, electro-myography (EMG) sensor, surface nerve conductance (SNC) sensor, electro-oculogram (EOG) sensor, pressure sensor, inertial measurement unit (IMU) sensor, optical sensor and imaging sensor (the one sensor is a camera, which is considered an imaging sensor; the camera may receive one or a combination of inputs, wherein the inputs could be an indirect input such as a gesture, therefore the camera is interpreted as a gesture sensor, [0079], lines 2 and 7-9).
As to claim 8, Child teaches a method comprising:
using a gesture interface device ([0005], line 12, cell phones or personal computing devices) comprising:
one or a plurality of gesture sensors (camera; the camera may receive one or a combination of inputs, wherein the inputs could be an indirect input such as a gesture, therefore the camera is interpreted as a gesture sensor, [0079], lines 2 and 7-9) for sensing one or a plurality gestures of a user of one or a plurality of users ([0079], lines 1-9, the camera receives an indirect input such as a gesture from a user); and
a processor ([0014], lines 2-10 and device 115 or control panel 130 has a processor for receiving and displaying data from one or more sensor units 110, [0055], lines 14-23),
analyzing the one or a plurality sensed gestures (in step 1010 the gesture is identified, [0135], lines 1-5) to identify a specific person based on comparing (the gesture is compared to a predefined gesture stored in the memory, [0135], lines 1-4) said one or a plurality of sensed gestures to a gesture signature ([0082], lines 7-13) of the specific person (the gesture that was compared to a predefined gesture stored in the memory is used in step 1020 to determine a predefined contact associated with the gesture and the identity of the user, [0137], lines 1-6); and
generating an output signal (transmit a call request to the child’s father) corresponding to the input to a controlled device (gestural command, [0137], line 7 and camera is the controlled device, [0133], line 2) only if it was verified that said one or a plurality of sensed gestures were performed by the specific person (the child’s identity is determined based on the gesture and other physical characteristic, therefore the child can contact only the father rather than grandmother or any other contacts, [0137], lines 6-17).
Child does not mention a processor obtaining an input of one or a plurality of sensed gestures from said one or a plurality of gesture sensors; however,
Chen teaches a processor (204) configured to obtain an input of one or a plurality of sensed gestures from said one or a plurality of gesture sensors (gestures sensed via sensor circuit 202, [0032], lines 4-7 and 11-14).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have added the processor of Chen to the gesture interface device of Child because doing so provides the control signal to a user device, external to the device 200, based on the information associated with the control signal ([0032], lines 19-22).
As to claim 9, Child teaches said one or a plurality of gesture sensors comprises one or more sensors selected from the group of sensors consisting of: reflectometer sensor, biopotential sensor, electro-myography (EMG) sensor, surface nerve conductance (SNC) sensor, electro-oculogram (EOG) sensor, pressure sensor, inertial measurement unit (IMU) sensor, optical sensor and imaging sensor (the one sensor is a camera, which is considered an imaging sensor; the camera may receive one or a combination of inputs, wherein the inputs could be an indirect input such as a gesture, therefore the camera is interpreted as a gesture sensor, [0079], lines 2 and 7-9).
Claim(s) 3-5 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Child in view of Chen, and further in view of Wu (U.S. Pub. No. 2022/0121288).
As to claim 3, Child and Chen do not teach that the device is configured to be worn by the user; however,
Wu teaches the device (600, Fig. 6, [0052], lines 1-2) is configured to be worn by the user (the watch is worn by the user, [0055], lines 2-5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have added the device of Wu to the gesture interface device of Child as modified by Chen because, in this implementation, a processor of the smart phone may be configured to analyze the gesture data to detect an AR initiation gesture ([0055], lines 13-14), and other variations to optimize resources may exist ([0055], lines 16-18).
As to claim 4, Child and Chen do not teach that the device is configured to be strapped to a hand of the user; however,
Wu teaches the device is configured to be strapped to a hand of the user (watch 600 is strapped to a hand of the user, [0055], lines 2-5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have added the device of Wu to the gesture interface device of Child as modified by Chen because, in this implementation, a processor of the smart phone may be configured to analyze the gesture data to detect an AR initiation gesture ([0055], lines 13-14), and other variations to optimize resources may exist ([0055], lines 16-18).
As to claim 5, Child and Chen do not teach that the device is configured to be strapped to a wrist of the user; however,
Wu teaches the device is configured to be strapped to a wrist of the user (watch 600 is strapped to a wrist of the user, [0055], lines 2-5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have added the device of Wu to the gesture interface device of Child as modified by Chen because, in this implementation, a processor of the smart phone may be configured to analyze the gesture data to detect an AR initiation gesture ([0055], lines 13-14), and other variations to optimize resources may exist ([0055], lines 16-18).
As to claim 10, Child and Chen do not teach that the device is worn by the user; however,
Wu teaches the device (600, Fig. 6, [0052], lines 1-2) is configured to be worn by the user (the watch is worn by the user, [0055], lines 2-5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have added the device of Wu to the gesture interface device of Child as modified by Chen because, in this implementation, a processor of the smart phone may be configured to analyze the gesture data to detect an AR initiation gesture ([0055], lines 13-14), and other variations to optimize resources may exist ([0055], lines 16-18).
As to claim 11, Child and Chen do not teach strapping the device to a hand of the user; however,
Wu teaches strapping the device to a hand of the user (watch 600 is strapped to a hand of the user, [0055], lines 2-5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have added the device of Wu to the gesture interface device of Child as modified by Chen because, in this implementation, a processor of the smart phone may be configured to analyze the gesture data to detect an AR initiation gesture ([0055], lines 13-14), and other variations to optimize resources may exist ([0055], lines 16-18).
As to claim 12, Child and Chen do not teach strapping the device to a wrist of the user; however,
Wu teaches strapping the device to a wrist of the user (watch 600 is strapped to a wrist of the user, [0055], lines 2-5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have added the device of Wu to the gesture interface device of Child as modified by Chen because, in this implementation, a processor of the smart phone may be configured to analyze the gesture data to detect an AR initiation gesture ([0055], lines 13-14), and other variations to optimize resources may exist ([0055], lines 16-18).
Allowable Subject Matter
Claims 6, 7, 13, and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claims 6 and 13 are objected to because the prior art references mentioned above and in the conclusion do not teach that the biopotential sensors are used to identify the specific person in combination with the sensors mentioned in claim 1, based on the term “device further comprising one or a plurality of biopotential sensors”.
Claim 7 is objected to because the prior art references mentioned above and in the conclusion do not teach having biopotential sensors to record the biopotentials using the one or a plurality of biopotential sensors, to build a gesture metric space for gestures configured such that samples from a plurality of users of said one or a plurality of users and different gestures will fall away from each other while gathering same gestures in clusters and gestures of a same user of a plurality of users in an internal cluster.
Claim 14 is objected to because the prior art references mentioned above and in the conclusion do not teach using one or a plurality of biopotential sensors, and recording the biopotentials using the one or a plurality of biopotential sensors, to build a gesture metric space for gestures configured such that samples from a plurality of users of said one or a plurality of users and different gestures will fall away from each other while gathering same gestures in clusters and gestures of a same user of a plurality of users in an internal cluster.
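For illustration of the gesture metric space recited in claims 7 and 14 (in which samples of different users and different gestures fall away from each other, same gestures gather in clusters, and gestures of a same user form an internal cluster), a minimal sketch follows; the feature vectors, labels, and centroid-based construction are hypothetical assumptions, not taken from the application or the cited art:

```python
# Minimal sketch (hypothetical): build per-gesture clusters and
# per-(gesture, user) internal sub-clusters from biopotential samples.
import numpy as np

def build_gesture_metric_space(samples, gesture_labels, user_labels):
    """samples: (n, d) array of biopotential feature vectors.
    Returns one centroid per gesture cluster and one sub-centroid per
    (gesture, user) internal cluster."""
    samples = np.asarray(samples, dtype=float)
    gesture_centroids, user_subcentroids = {}, {}
    for g in set(gesture_labels):
        g_idx = [i for i, lbl in enumerate(gesture_labels) if lbl == g]
        # Same gestures gather in one cluster.
        gesture_centroids[g] = samples[g_idx].mean(axis=0)
        for u in {user_labels[i] for i in g_idx}:
            u_idx = [i for i in g_idx if user_labels[i] == u]
            # Gestures of the same user form an internal sub-cluster.
            user_subcentroids[(g, u)] = samples[u_idx].mean(axis=0)
    return gesture_centroids, user_subcentroids
```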
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Rodriguez Bravo (U.S. Pub. No. 2023/0129964) teaches a gesture sensor to detect whether a user is making a gesture that is consistent with placing a card in a card insertion portion.
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEGEMAN KARIMI whose telephone number is (571)270-1712. The examiner can normally be reached Monday-Friday; 9:00am-4:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chanh Nguyen, can be reached at 571-272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PEGEMAN KARIMI/ Primary Examiner, Art Unit 2623