Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 6 is objected to because of the following informalities:
The limitation: “third finger position is a closed finger position” should be changed to “second finger position is a closed finger position”, to be consistent with the other limitations of the claim: “first finger position is an open finger position, and the third finger position is between the first finger position and the second finger position”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 9-13, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bazarevsky et al. (US-PGPUB 20210174519) in view of Hu (US-PGPUB 20200250409).
Regarding claim 1, Bazarevsky discloses an apparatus for classifying a hand gesture, (see at least: Fig. 1, “hand tracking system 100”), the apparatus comprising:
at least one memory (2114 in Fig. 21a); and at least one processor (2112 in Fig. 21a) coupled to the at least one memory and configured to:
determine a classification for the hand gesture, wherein the classification comprises the code associated with the one or more fingers of the hand, (see at least: Par. 0051-0052, the gesture recognition system can associate a gesture detected in an image frame to one or more pre-defined gestures at least in part by mapping the determined set of finger states to a set of pre-defined gestures. By mapping a set of finger states to pre-defined gestures, a system can be customized to a specific set of gestures, [which is technically equivalent to determining a classification for the hand gesture, based on determining a specific set of gestures by mapping a set of finger states to pre-defined gestures, wherein the classification comprises the code associated with the one or more fingers of the hand, “the code corresponds to the set of finger states”]).
Bazarevsky does not expressly disclose encoding one or more fingers of five fingers of a hand with a code, wherein the code corresponds to a position associated with the one or more fingers making the hand gesture.
However, Hu discloses encoding one or more fingers of five fingers of a hand with a code, wherein the code corresponds to a position associated with the one or more fingers making the hand gesture, (see at least: Fig. 6 and Par. 0045, each finger position may be encoded by linear position (e.g., x, y, and z coordinates), angular position, or any other coordinate system, [i.e., each finger position may be encoded into finger states (e.g., a hand gesture) based on linear position (e.g., x, y, and z coordinates), angular position, or any other coordinate system]).
Bazarevsky and Hu are combinable because they are both concerned with hand gesture recognition. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Bazarevsky to encode each finger position among multiple finger positions, as taught by Hu, in order to track motion of a hand of a user (Hu, Par. 0003).
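For illustration only, and not as part of either cited disclosure, a minimal sketch of the combination relied upon above (per-finger states encoded as a code and mapped to a set of pre-defined gestures) might read as follows; all names, state values, and gesture labels are hypothetical assumptions:

```python
# Hypothetical illustration only; neither Bazarevsky nor Hu discloses source code.
from typing import Dict, Tuple

# Assumed per-finger encoding: 0 = open, 1 = partially closed, 2 = fully closed.
FingerCode = Tuple[int, int, int, int, int]  # thumb, index, middle, ring, pinky

# Assumed set of pre-defined gestures keyed by the per-finger code.
PREDEFINED_GESTURES: Dict[FingerCode, str] = {
    (0, 0, 0, 0, 0): "open_palm",
    (2, 2, 2, 2, 2): "fist",
    (2, 0, 0, 2, 2): "peace_sign",
}

def classify_hand_gesture(code: FingerCode) -> str:
    """Map an encoded set of finger states to a pre-defined gesture label."""
    return PREDEFINED_GESTURES.get(code, "unknown")

# Usage example: a hand with all fingers fully closed classifies as a fist.
print(classify_hand_gesture((2, 2, 2, 2, 2)))  # -> fist
```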
Regarding claim 2, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 1.
Bazarevsky further discloses wherein the at least one processor is configured to: receive an image of the hand making the hand gesture, (see at least: Fig. 1, Par. 0065-0066, receiving image frames, and detecting the palm of a hand from the image frames; [i.e., implicitly receiving an image, “implicit by receiving image frames”, of the hand making the hand gesture, “implicit by imaging the palm of the hand”]); and
determine the code corresponding to the one or more fingers of the hand based on the image, (see at least: Par. 0051, the hand tracking system can include a gesture recognition system that can identify a gesture in an image frame based at least in part on three-dimensional coordinates, where the state of a hand, finger(s), etc. can be derived from the three-dimensional coordinates of a detection and mapped to a set of pre-defined gestures, [i.e., implicitly determine the code corresponding to the one or more fingers of the hand, “state of the fingers”, based on the image, “the image frame that is used for identifying a gesture”]).
Regarding claim 3, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 1.
Bazarevsky further discloses wherein the at least one processor is configured to perform a function based on the classification of the hand gesture, (see at least: Par. 0051, the hand tracking system can initiate a functionality at one or more computing devices in response to detecting a gesture within one or more image frames, “performing a function based on the classification of the hand gesture”).
Regarding claim 5, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 1.
Hu further discloses wherein the position comprises one of a first finger position, a second finger position, or a third finger position, (see at least: Par. 0045, output 606 may include multiple finger positions, “i.e., one of a first finger position, a second finger position, or a third finger position”).
Regarding claim 9, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 1.
Bazarevsky further discloses wherein the at least one processor is configured to determine the classification for the hand gesture using a model, (see at least: Par. 0052, the gesture recognition system may include one or more machine-learned classifiers that are trained to identify pre-defined gestures based at least in part on three-dimensional hand coordinates generated by the hand landmark model, “i.e., the one or more machine-learned classifiers implicitly determine the classification for the hand gesture”).
Regarding claim 10, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 9.
Bazarevsky further discloses wherein the model is trained based on a plurality of hand models with keypoints associated with the classification for the hand gesture, (see at least: Par. 0052, the gesture recognition system can associate a gesture detected in an image frame to one or more pre-defined gestures at least in part by mapping the determined set of finger states to a set of pre-defined gestures based on hand landmark positions for gesture recognition; and from Par. 0050, the hand landmark model can perform key-point localization to determine three-dimensional coordinates corresponding to a plurality of hand landmark positions within the image frame. [Accordingly, the gesture recognition system is trained based on a set of pre-defined gestures, “plurality of hand models”, using a plurality of hand landmark positions, “keypoints associated with the classification for the hand gesture”]).
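For illustration only, and not as an assertion of how Bazarevsky's hand landmark model is actually implemented, a per-finger state could in principle be derived from three-dimensional keypoint coordinates as in the following hypothetical sketch; the landmark choice and the bend-angle threshold are assumptions:

```python
# Hypothetical illustration only; landmark choice and the bend-angle threshold are assumptions.
import math

def finger_state(mcp, pip, tip, open_angle_deg: float = 120.0) -> int:
    """Return 0 (open) or 2 (closed) from three 3-D keypoints of a single finger."""
    def vec(a, b):
        return tuple(b[i] - a[i] for i in range(3))
    def angle_deg(u, v):
        dot = sum(ui * vi for ui, vi in zip(u, v))
        nu = math.sqrt(sum(ui * ui for ui in u))
        nv = math.sqrt(sum(vi * vi for vi in v))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))
    # A straight finger yields a bend angle near 180 degrees at the middle joint.
    bend = angle_deg(vec(pip, mcp), vec(pip, tip))
    return 0 if bend > open_angle_deg else 2

# Usage example: three collinear keypoints (a straight finger) report state 0 (open).
print(finger_state((0, 0, 0), (0, 1, 0), (0, 2, 0)))  # -> 0
```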
Regarding claim 11, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 9.
Bazarevsky further discloses wherein the model is a self-supervised machine learning model, (Par. 0052-0053, mapping a set of finger states to pre-defined gestures based on hand landmark positions, where the training data can be annotated with ground truth data that indicates three-dimensional coordinates corresponding to hand landmark positions. That is, the one or more machine learning models are trained on annotated or labeled pre-defined gestures based on hand landmark positions, “i.e., the one or more machine learning models constitute a self-supervised machine learning model”).
Regarding claim 12, claim 12 recites substantially similar limitations as set forth in claim 1. As such, claim 12 is rejected for at least a similar rationale.
The Examiner further acknowledges the following additional limitation: “a method for classifying a hand gesture”. However, Bazarevsky discloses a “method for classifying a hand gesture” (see at least: Par. 0005, “computer implemented method for hand tracking”).
Regarding claim 13, claim 13 recites substantially similar limitations as set forth in claim 2. As such, claim 13 is rejected for at least a similar rationale.
Regarding claim 18, claim 18 recites substantially similar limitations as set forth in claim 9. As such, claim 18 is rejected for at least a similar rationale.
Regarding claim 19, claim 19 recites substantially similar limitations as set forth in claim 10. As such, claim 19 is rejected for at least a similar rationale.
Regarding claim 20, claim 20 recites substantially similar limitations as set forth in claim 11. As such, claim 20 is rejected for at least a similar rationale.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bazarevsky and Hu, as applied to claim 1 above, and further in view of Du et al. (US-PGPUB 20210158031).
Regarding claim 4, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 1.
The combined teaching of Bazarevsky and Hu as a whole does not expressly disclose wherein the code comprises a number for each of the one or more fingers.
Du discloses wherein the code comprises a number for each of the one or more fingers, (see at least: Par. 0098, implicitly assigning an identification value to each finger based on the different states and positions of the fingers; see also Par. 0109, determining the hand gesture based on the state vector of the fingers, “a number for each of the one or more fingers”).
Bazarevsky, Hu, and Du are combinable because they are all concerned with hand gesture recognition. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Bazarevsky and Hu to use a state vector of the different states and positions of the fingers, as taught by Du, in order to recognize the hand gesture (Du, Par. 0004).
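For illustration only, the notion of a code comprising a number for each finger (as read onto Du's state vector above) could be sketched as follows; the base-3 packing below is an assumption and is not taught verbatim by any cited reference:

```python
# Hypothetical illustration only; the base-3 packing is an assumption, not taught verbatim by Du.

def pack_finger_code(states):
    """Pack five per-finger numbers (0 = open, 1 = partial, 2 = closed) into one integer code."""
    code = 0
    for s in states:  # ordered thumb, index, middle, ring, pinky
        code = code * 3 + s
    return code

def unpack_finger_code(code):
    """Recover the five per-finger numbers from the packed integer code."""
    states = []
    for _ in range(5):
        states.append(code % 3)
        code //= 3
    return list(reversed(states))

# Usage example: a fist (all fingers closed) packs to 242 and unpacks losslessly.
fist = pack_finger_code([2, 2, 2, 2, 2])
print(fist, unpack_finger_code(fist))  # -> 242 [2, 2, 2, 2, 2]
```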
Regarding claim 14, claim 14 recites substantially similar limitations as set forth in claim 4. As such, claim 14 is rejected for at least a similar rationale.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bazarevsky and Hu, as applied to claim 1 above, and further in view of Agrawal (US-PGPUB 20110234384).
Regarding claim 6, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 5.
The combined teaching of Bazarevsky and Hu as a whole does not expressly disclose wherein the first finger position is an open finger position, the third finger position is a closed finger position, and the third finger position is between the first finger position and the second finger position.
Agrawal discloses wherein the first finger position is an open finger position, the third finger position is a closed finger position, and the third finger position is between the first finger position and the second finger position, (see at least: Par. 0039, determining instantaneously the position of each finger and motion of each finger in three coordinate axes, whether a finger is open, partially closed or fully closed, [i.e., the first finger position is an open finger position, “finger is open”, the second finger position is a closed finger position, “fully closed”, and the third finger position is between the first finger position and the second finger position, “implicit by partially closed finger position”]).
Bazarevsky, Hu, and Agrawal are combinable because they are all concerned with hand gesture recognition. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Bazarevsky and Hu to use the sensor data, as taught by Agrawal, in order to determine instantaneously the position of each finger, including whether a finger is open, partially closed, or fully closed (Agrawal, Par. 0039).
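For illustration only, the three finger positions mapped above (open, fully closed, and partially closed in between) could be expressed as a simple quantization of a continuous openness measure; the thresholds below are assumptions, not values taught by Agrawal:

```python
# Hypothetical illustration only; the thresholds are assumptions, not values taught by Agrawal.
OPEN, PARTIAL, CLOSED = 0, 1, 2  # first, third, and second finger positions, respectively

def quantize_finger_position(openness: float) -> int:
    """Map a continuous openness measure in [0, 1] (1.0 = fully extended) to three positions."""
    if openness > 0.66:
        return OPEN      # open finger position
    if openness > 0.33:
        return PARTIAL   # partially closed, between the open and closed positions
    return CLOSED        # fully closed finger position

# Usage example: extended, half-curled, and curled fingers map to the three positions.
print([quantize_finger_position(v) for v in (0.9, 0.5, 0.1)])  # -> [0, 1, 2]
```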
Regarding claim 15, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 12.
Hu further discloses wherein the position comprises one of a first finger position, a second finger position, or a third finger position, (Hu, see at least: Par. 0045, output 606 may include multiple finger positions, “i.e., one of a first finger position, a second finger position, or a third finger position”).
The combined teaching of Bazarevsky and Hu as a whole does not expressly disclose wherein the first finger position is an open finger position, the third finger position is a closed finger position, and the third finger position is between the first finger position and the second finger position.
Agrawal discloses wherein the first finger position is an open finger position, the third finger position is a closed finger position, and the third finger position is between the first finger position and the second finger position, (see at least: Par. 0039, determining instantaneously the position of each finger and motion of each finger in three coordinate axes, whether a finger is open, partially closed or fully closed, [i.e., the first finger position is an open finger position, “finger is open”, the second finger position is a closed finger position, “fully closed”, and the third finger position is between the first finger position and the second finger position, “implicit by partially closed finger position”]).
Bazarevsky, Hu, and Agrawal are combinable because they are all concerned with hand gesture recognition. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Bazarevsky and Hu to use the sensor data, as taught by Agrawal, in order to determine instantaneously the position of each finger, including whether a finger is open, partially closed, or fully closed (Agrawal, Par. 0039).
Claims 7-8 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Bazarevsky and Hu, as applied to claim 1 above, and further in view of Kim et al. (US-PGPUB 20080089587).
Regarding claim 7, the combined teaching of Bazarevsky and Hu as a whole discloses the limitations of claim 1.
The combined teaching of Bazarevsky and Hu as a whole does not expressly disclose wherein the hand gesture is an inter-gesture that occurs in between a first hand gesture and a second hand gesture based on the hand transitioning in motion from the first hand gesture to the second hand gesture.
Kim discloses wherein the hand gesture is an inter-gesture that occurs in between a first hand gesture and a second hand gesture based on the hand transitioning in motion from the first hand gesture to the second hand gesture, (see at least: Par. 0075, there can be at least one intermediate hand gesture between the initial hand gesture and final hand gesture, “i.e., the hand gesture is an intermediate hand gesture”).
Bazarevsky, Hu, and Kim are combinable because they are all concerned with hand gesture recognition. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Bazarevsky and Hu to use the controller 160, as taught by Kim, in order to collect a motion picture in which an initial hand gesture is changed into a final hand gesture (Kim, Par. 0075).
Regarding claim 8, the combined teaching of Bazarevsky, Hu, and Kim as a whole discloses the limitations of claim 7.
Kim further discloses wherein the at least one processor is configured to determine a dynamic hand gesture based on occurrence of the first hand gesture, the inter-gesture, and the second hand gesture, (Par. 0075, implicit by collecting the motion picture in which an initial hand gesture is changed into a final hand gesture).
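For illustration only, a dynamic hand gesture defined by the ordered occurrence of a first hand gesture, an inter-gesture, and a second hand gesture (as read onto Kim above) could be sketched as a subsequence check over frame-wise gesture labels; the labels and the matching rule are hypothetical assumptions:

```python
# Hypothetical illustration only; gesture labels and the ordered-subsequence rule are assumptions.
from typing import List

def detect_dynamic_gesture(frames: List[str], first: str, inter: str, second: str) -> bool:
    """Return True if the frame-wise labels contain first, inter, and second in that order."""
    targets = [first, inter, second]
    i = 0
    for label in frames:
        if i < len(targets) and label == targets[i]:
            i += 1
    return i == len(targets)

# Usage example: an open palm closing into a fist through a half-closed inter-gesture.
sequence = ["open_palm", "open_palm", "half_closed", "half_closed", "fist"]
print(detect_dynamic_gesture(sequence, "open_palm", "half_closed", "fist"))  # -> True
```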
Regarding claim 16, claim 16 recites substantially similar limitations as set forth in claim 7. As such, claim 16 is rejected for at least a similar rationale.
Regarding claim 17, claim 17 recites substantially similar limitations as set forth in claim 8. As such, claim 17 is rejected for at least a similar rationale.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI whose telephone number is (571)272-0273. The examiner can normally be reached 9:00am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMARA ABDI/Primary Examiner, Art Unit 2668 02/11/2026