Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities: it appears there are multiple areas where the specification does not align with the drawings.
[0034] The flowchart 300 proceeds to block 325, where geometric characteristics for the hand relative to the hand are calculated or otherwise determined. In some embodiments, the geometric characteristics may include a relative position and/or orientation of the hand (or point in space representative of the hand and/or controller) and the head (or point in space representative of the head). In some embodiments, the geometric characteristics may include various vectors determined based on the location information for various portions of the user. Example parameters and other metrics relating to the geometric characteristics will be determined in greater detail below with respect to FIG. 4.
The above appears directed to block 330, not block 325, in fig. 3. This also appears to be the case for at least paragraphs [0035]-[0039].
Appropriate correction is required.
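For illustration only, a minimal sketch of the kind of hand-relative-to-head geometric characteristics described in [0034] (a relative position vector and an orientation angle) follows; it is not the applicant's disclosed implementation, and all names and formulas are hypothetical:

```python
import numpy as np

def hand_relative_to_head(hand_pos, head_pos, head_forward):
    """Illustrative only: derive a hand-to-head offset vector, its length,
    and the angle between the head's forward direction and the hand."""
    offset = np.asarray(hand_pos, float) - np.asarray(head_pos, float)
    distance = float(np.linalg.norm(offset))
    forward = np.asarray(head_forward, float)
    forward = forward / np.linalg.norm(forward)
    # Angle between head-forward and the direction toward the hand.
    cos_angle = float(np.dot(offset / max(distance, 1e-9), forward))
    angle = float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return offset, distance, angle
```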
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) determining geometric characteristics of a hand relative to a head of a user performing a gesture; determining a gaze vector for the user; determining a hand gesture state from a plurality of candidate hand gesture states based on the gaze vector and the geometric characteristics of the hand relative to the head of the user; and in response to a determination that the hand gesture state corresponds to an input gesture, invoking an action corresponding to the input gesture.
The limitation of determining geometric characteristics of a hand relative to a head of a user performing a gesture could be a user determining whether the palm is facing up or down; determining a gaze vector for the user could be the user determining where one is looking; determining a hand gesture state from a plurality of candidate hand gesture states based on the gaze vector and the geometric characteristics of the hand relative to the head of the user could be the user determining whether a gesture is recognized while looking; and in response to a determination that the hand gesture state corresponds to an input gesture, invoking an action corresponding to the input gesture could be the user inputting the gesture. Each of these limitations describes a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind.
This is similarly the case for claim 18, but for the recitation of generic computer components. That is, other than reciting “by a processor,” nothing in the claim element precludes the steps from practically being performed in the mind. For example, but for the “one or more processors” language, “determine” in the context of this claim encompasses the user mentally performing the steps; likewise, but for the “processor” language, “determine” encompasses the user thinking about and determining a gaze and a recognized gesture for input. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element: using a processor to perform the determining and invoking steps. The processor in these steps is recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of determining gesture and gaze information and invoking a corresponding action) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the steps of determining geometric characteristics, determining a gaze vector, determining a hand gesture state, and invoking an action amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 9-11, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pfeuffer et al., "PalmGazer: Unimanual Eye-hand Menus in Augmented Reality," SUI '23: Proceedings of the 2023 ACM Symposium on Spatial User Interaction (October 2023), https://doi.org/10.1145/3607822.3614523, ISBN 9798400702815, hereinafter Pfeuffer, in view of Lacey et al. (US 2019/0362557), hereinafter Lacey.
In regards to claim 1, Pfeuffer teaches a method comprising (abstract):
How can we design the user interfaces for augmented reality (AR) so that we can interact as simple, flexible and expressive as we can with smartphones in one hand? To explore this question, we propose PalmGazer as an interaction concept integrating eye-hand interaction to establish a singlehandedly operable menu system. In particular, PalmGazer is designed to support quick and spontaneous digital commands – such as to play a music track, check notifications or browse visual media – through our devised three-way interaction model: hand opening to summon the menu UI, eye-hand input for selection of items, and dragging gesture for navigation. A key aspect is that it remains always-accessible and movable to the user, as the menu supports meaningful hand and head based reference frames. We demonstrate the concept in practice through a prototypical mobile UI with application probes, and describe technique designs specifically-tailored to the application UI. A qualitative evaluation highlights the system’s interaction benefits and drawbacks, e.g., that common 2D scroll and selection tasks are simple to operate, but higher degrees of freedom may be reserved for two hands. Our work contributes interaction techniques and design insights to expand AR’s uni-manual capabilities. (abstract)
determining geometric characteristics of a hand (fig. 2, activation: closed palm);
Pfeuffer fails to expressly teach determining geometric characteristics of a hand relative to a head of a user performing a gesture.
However, Lacey teaches determining geometric characteristics of a hand relative to a head of a user performing a gesture (abstract, [0364]; figs. 17a-17b head pose, eye gaze, and gesture confidence).
Examples of wearable systems and methods can use multiple inputs (e.g., gesture, head pose, eye gaze, voice, totem, and/or environmental factors (e.g., location)) to determine a command that should be executed and objects in the three-dimensional (3D) environment that should be operated on. The wearable system can detect when different inputs converge together, such as when a user seeks to select a virtual object using multiple inputs such as eye gaze, head pose, hand gesture, and totem input. Upon detecting an input convergence, the wearable system can perform a transmodal filtering scheme that leverages the converged inputs to assist in properly interpreting what command the user is providing or what object the user is targeting.
It would have been obvious to one of ordinary skill in the art to modify the teachings of Pfeuffer to further include determining geometric characteristics of a hand relative to a head of a user performing a gesture as taught by Lacey in order to “reduce the degree of specificity required in an input command and to reduce error rate associated with an imprecise command, the wearable system described herein can be programmed to dynamically apply multiple inputs for identification of an object to be selected or acted upon” ([0080]).
Therefore, Pfeuffer in view of Lacey teaches:
determining a gaze vector for the user (fig. 2, gaze ray depictions) Pfeuffer; ([0081], [0086], [0374]) Lacey;
[media_image1.png: greyscale, 696 x 292]
determining a hand gesture state from a plurality of candidate hand gesture states based on the gaze vector and the geometric characteristics of the hand relative to the head of the user (fig. 2, open hand palm up) Pfeuffer; ([0086], [0158] hand and finger gestures) Lacey; and
[media_image2.png: greyscale, 754 x 1316]
in response to a determination that the hand gesture state corresponds to an input gesture, invoking an action corresponding to the input gesture (fig. 2, activation of UI) Pfeuffer. Examiner notes that Gaze and Commit is a well-known model for input and, when viewed with Pfeuffer’s open-palm interaction for opening a UI, reads on the claims; furthermore, Pfeuffer fig. 3 shows the state of the art of UI activation (see below). See also [0158] Lacey.
[media_image3.png: greyscale, 492 x 1122]
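For illustration only, the claimed sequence as mapped above (determining geometric characteristics, determining a gaze vector, determining a hand gesture state, and invoking an action) can be sketched as follows; this is not Pfeuffer’s or Lacey’s implementation, and every name, threshold, and data structure is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Frame:
    hand_to_head: tuple[float, float, float]  # offset of hand from head
    gaze_vector: tuple[float, float, float]   # unit gaze direction
    palm_normal: tuple[float, float, float]   # unit palm normal

def classify_gesture_state(frame: Frame) -> str:
    """Pick one of several candidate states from the gaze vector and the
    hand-to-head geometry (thresholds are purely illustrative)."""
    # Hypothetical gaze gate: gaze must point roughly toward the hand.
    gx, gy, gz = frame.gaze_vector
    hx, hy, hz = frame.hand_to_head
    if (gx * hx + gy * hy + gz * hz) <= 0.0:
        return "INVALID"
    if frame.palm_normal[1] > 0.5:    # palm roughly facing up
        return "PALM_UP"
    if frame.palm_normal[1] < -0.5:   # palm rotated over
        return "PALM_FLIP"
    return "INVALID"

def handle_frame(frame: Frame, actions: dict[str, Callable[[], None]]) -> None:
    state = classify_gesture_state(frame)
    if state in actions:      # state corresponds to an input gesture
        actions[state]()      # invoke the corresponding action

# Hypothetical usage: open a menu on a recognized palm-up gesture.
handle_frame(
    Frame(hand_to_head=(0.2, -0.3, 0.4),
          gaze_vector=(0.3, -0.4, 0.6),
          palm_normal=(0.0, 0.9, 0.1)),
    {"PALM_UP": lambda: print("menu opened")},
)
```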
In regards to claim 9, Pfeuffer teaches a non-transitory computer readable medium comprising computer readable code executable by one or more processors to (abstract): determine geometric characteristics of a hand (fig. 2, activation: closed palm).
Pfeuffer fails to expressly teach determining geometric characteristics of a hand relative to a head of a user performing a gesture.
However, Lacey teaches determining geometric characteristics of a hand relative to a head of a user performing a gesture (abstract, [0158], [0364]; figs. 17a-17b head pose, gaze, and gesture) Lacey.
It would have been obvious to one of ordinary skill in the art to modify the teachings of Pfeuffer to further include determining geometric characteristics of a hand relative to a head of a user performing a gesture as taught by Lacey in order to “reduce the degree of specificity required in an input command and to reduce error rate associated with an imprecise command, the wearable system described herein can be programmed to dynamically apply multiple inputs for identification of an object to be selected or acted upon” ([0080]).
Therefore, Pfeuffer in view of Lacey teaches: determine a gaze vector for the user (fig. 2, selection eye gaze); determine a hand gesture state from a plurality of candidate hand gesture states based on the gaze vector and the geometric characteristics of the hand relative to the head of the user (fig. 2, open hand palm up) Pfeuffer; and in response to a determination that the hand gesture state corresponds to an input gesture, invoke an action corresponding to the input gesture (fig. 2, activation of UI) Pfeuffer and (abstract, [0158], [0364]) Lacey.
In regards to claim 18, Pfeuffer teaches a system comprising: one or more processors; and one or more computer readable media comprising computer readable code executable by the one or more processors to (abstract): determine geometric characteristics of a hand (fig. 2, activation: closed palm) (heading 8, smartphone applications).
Pfeuffer fails to expressly teach determining geometric characteristics of a hand relative to a head of a user performing a gesture.
However, Lacey teaches determining geometric characteristics of a hand relative to a head of a user performing a gesture (abstract, [0158], [0364]) Lacey.
It would have been obvious to one of ordinary skill in the art to modify the teachings of Pfeuffer to further include determining geometric characteristics of a hand relative to a head of a user performing a gesture as taught by Lacey in order to “reduce the degree of specificity required in an input command and to reduce error rate associated with an imprecise command, the wearable system described herein can be programmed to dynamically apply multiple inputs for identification of an object to be selected or acted upon” [0080].
Therefore, Pfeuffer in view of Lacey teaches: determine a gaze vector for the user (fig. 2, gaze ray depictions) Pfeuffer; ([0081], [0086], [0374]) Lacey;
determine a hand gesture state from a plurality of candidate hand gesture states based on the gaze vector and the geometric characteristics of the hand relative to the head of the user (fig. 2, open hand palm up) Pfeuffer; ([0086], [0158] hand and finger gestures) Lacey; and
in response to a determination that the hand gesture state corresponds to an input gesture, invoke an action corresponding to the input gesture (fig. 2, activation of UI) Pfeuffer. Examiner notes that Gaze and Commit is a well-known model for input and, when viewed with Pfeuffer’s open-palm interaction for opening a UI, reads on the claims; furthermore, Pfeuffer fig. 3 shows the state of the art of UI activation (reproduced above). See also [0158] Lacey.
In regards to claim 2, Pfeuffer in view of Lacey teaches the method of claim 1, wherein determining geometric characteristics of a hand relative to a head of a user comprises: obtaining hand tracking data of the hand performing the gesture ([0127] Lacey; fig. 2, gestures, Pfeuffer); and obtaining a head vector for the user (abstract, [0158], [0364]) Lacey.
In regards to claim 10, Pfeuffer in view of Lacey teaches the non-transitory computer readable medium of claim 9, wherein the computer readable code to determine geometric characteristics of a hand relative to a head of a user comprises computer readable code to: obtain hand tracking data of the hand performing the gesture ([0127] Lacey; fig. 2, gestures, Pfeuffer); and obtain a head vector for the user (abstract, [0158], [0364]) Lacey.
In regards to claim 11, Pfeuffer in view of Lacey teaches the non-transitory computer readable medium of claim 9, wherein the computer readable code to determine geometric characteristics of a hand relative to a head of a user comprises computer readable code to: obtain controller tracking data of a controller held by the hand (fig. 19a, 1940) Lacey; and obtain a head vector for the user ([0364]) Lacey.
Claim(s) 3-8, 12-17, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Pfeuffer et al., "PalmGazer: Unimanual Eye-hand Menus in Augmented Reality," SUI '23: Proceedings of the 2023 ACM Symposium on Spatial User Interaction (October 2023), https://doi.org/10.1145/3607822.3614523, ISBN 9798400702815, hereinafter Pfeuffer, in view of Lacey et al. (US 2019/0362557), hereinafter Lacey, further in view of Salvi et al. (US 2017/0344122), hereinafter Salvi.
In regards to claim 3, Pfeuffer and Lacey teach the method of claim 1, wherein the candidate hand gesture states comprise a palm-up state (fig. 2, open hand palm up) Pfeuffer.
Pfeuffer and Lacey fail to teach a palm-flip state, and an invalid state.
However, Salvi teaches a palm-flip state (figs. 2a and 2b, flipped states) Salvi, and an invalid state ([0044]-[0048], [0148]).
[0048] Such an arrangement illuminates a notable advantage of certain embodiments disclosed herein. Namely, for a system of navigation that uses a single pose (or even a related group of poses) in several postures to deliver several different inputs, the user may utilize such an arrangement intuitively. If basic navigation is carried out with several postures, those postures exhibiting a single pose but in different orientations, such an arrangement may prove simple and/or intuitive to users. As a more concrete example, if all (or even several) core navigation functions within a user interface all use (for instance) some variation on the “thumbs up” pose, the user may find such an arrangement relatively intuitive and easy to remember and use.
[media_image4.png: greyscale, 824 x 630]
It would have been obvious to one of ordinary skill in the art to modify the teachings of Pfeuffer and Lacey to further include a palm-flip state and an invalid state as taught by Salvi in order to support a very large number of individual inputs ([0003]-[0007]).
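For illustration only, a candidate state set including a palm-up state, a palm-flip state, and an invalid state, as combined above, might be sketched as follows; the enum and the classification threshold are hypothetical, not Salvi’s disclosed design:

```python
from enum import Enum, auto

class HandGestureState(Enum):
    PALM_UP = auto()    # open hand, palm facing up (cf. Pfeuffer fig. 2)
    PALM_FLIP = auto()  # palm rotated over (cf. Salvi's flipped poses)
    INVALID = auto()    # unrecognized pose; produces no input

def classify(palm_up_score: float, flip_detected: bool) -> HandGestureState:
    """Hypothetical classification into one of the candidate states."""
    if flip_detected:
        return HandGestureState.PALM_FLIP
    if palm_up_score > 0.8:  # hypothetical confidence threshold
        return HandGestureState.PALM_UP
    return HandGestureState.INVALID
```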
In regards to claim 4, see rationale of claim 3; Pfeuffer and Lacey in view of Salvi teach the method of claim 1, wherein determining the hand gesture state comprises: determining a hand orientation state from a plurality of candidate hand orientation states based on the geometric characteristics of the hand relative to the head (abstract, [0158], [0364]) Lacey; and determining the hand gesture state based on the hand orientation state and the gaze vector (figs. 4a and 4b, flipped state; [0015], [0023], [0044]-[0048], [0148]) Salvi.
In regards to claim 6, see rationale of claim 3; Pfeuffer and Lacey in view of Salvi teach the method of claim 4, wherein the hand gesture state is determined using a gesture detection state machine and based on one or more of a group consisting of: 1) the hand gesture state; and 2) a determination of whether a gaze criterion is satisfied (fig. 2, hand gestures and gaze) Pfeuffer.
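For illustration only, a gesture detection state machine of the kind recited in claim 6 might be sketched as follows; the states, transition table, and gaze condition are hypothetical, not the claimed design:

```python
# Hypothetical transition table: the next state depends on the current
# machine state, the detected hand gesture state, and whether the gaze
# criterion is satisfied.
TRANSITIONS = {
    ("IDLE", "PALM_UP", True): "ARMED",         # palm up while gazing at hand
    ("ARMED", "PALM_FLIP", True): "ACTIVATED",  # flip while gazing commits
    ("ARMED", "INVALID", False): "IDLE",        # lost pose and gaze resets
}

def step(state: str, gesture: str, gaze_ok: bool) -> str:
    # Unlisted combinations keep the machine in its current state.
    return TRANSITIONS.get((state, gesture, gaze_ok), state)

state = "IDLE"
for gesture, gaze_ok in [("PALM_UP", True), ("PALM_FLIP", True)]:
    state = step(state, gesture, gaze_ok)
print(state)  # ACTIVATED
```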
In regards to claim 7, Pfeuffer and Lacey in view of Salvi teach the method of claim 6, wherein determining whether the gaze criterion is satisfied comprises: determining whether a target of the gaze vector is within a threshold distance of at least one of the hand, a controller, and a user input component (fig. 17a, eye gaze; [0084], [0212], [0217]) Lacey.
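For illustration only, the gaze criterion of claim 7 (a target of the gaze vector within a threshold distance of the hand, a controller, or a user input component) might be sketched as follows; the ray-to-point distance test and the default threshold are hypothetical:

```python
import numpy as np

def gaze_criterion_satisfied(gaze_origin, gaze_dir, target_pos, threshold=0.1):
    """Hypothetical test: does the gaze ray pass within `threshold` of the
    target (e.g., the hand or a controller)? Units and the 0.1 default
    are assumptions."""
    o = np.asarray(gaze_origin, float)
    d = np.asarray(gaze_dir, float)
    d = d / np.linalg.norm(d)
    p = np.asarray(target_pos, float)
    t = max(float(np.dot(p - o, d)), 0.0)  # closest point along the ray
    closest = o + t * d
    return float(np.linalg.norm(p - closest)) <= threshold
```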
In regards to claim 8, Pfeuffer and Lacey in view of Salvi teach the method of claim 4, further comprising: obtaining controller tracking data; and determining a controller orientation based on the controller tracking data, wherein determining the hand gesture state comprises: determining a hand orientation state from a plurality of candidate hand orientation states (figs. 4a and 4b, flipped state) Salvi, based on the geometric characteristics of the controller orientation relative to the head; and determining the hand gesture state based on the controller orientation state and the gaze vector ([0015], [0023], [0044]-[0048], [0148]) Salvi and (abstract, [0158], [0364]) Lacey.
In regards to claim 12, see rationale of claim 3; Pfeuffer and Lacey in view of Salvi teach the non-transitory computer readable medium of claim 9, wherein the candidate hand gesture states comprise a palm-up state (fig. 2, open hand palm up) Pfeuffer, a palm-flip state (figs. 4a and 4b, flipped state) Salvi, and an invalid state ([0044], [0148]) Salvi.
In regards to claim 13, see rationale of claim 3; Pfeuffer and Lacey in view of Salvi teach the non-transitory computer readable medium of claim 9, wherein the computer readable code to determine the hand gesture state comprises computer readable code to: determine a hand orientation state from a plurality of candidate hand orientation states based on the geometric characteristics of the hand relative to the head (abstract, [0158], [0364]) Lacey; and determine the hand gesture state based on the hand orientation state and the gaze vector (figs. 4a and 4b, flipped state; [0015], [0023], [0044]-[0048], [0148]) Salvi.
In regards to claim 15, Pfeuffer and Lacey in view of Salvi teach the non-transitory computer readable medium of claim 13, wherein the hand gesture state is determined using a gesture detection state machine and based on one or more of a group consisting of: 1) the hand gesture state; and 2) a determination of whether a gaze criterion is satisfied (fig. 2, hand gestures and gaze) Pfeuffer.
In regards to claim 16, Pfeuffer and Lacey in view of Salvi teach the non-transitory computer readable medium of claim 15, wherein the computer readable code to determine whether the gaze criterion is satisfied comprises computer readable code to: determine whether a target of the gaze vector is within a threshold distance of at least one of the hand, a controller, and a user input component (fig. 17a, eye gaze; [0084], [0212], [0217]) Lacey.
In regards to claim 17, see rationale of claim 3; Pfeuffer and Lacey in view of Salvi teach the non-transitory computer readable medium of claim 9, wherein the computer readable code to invoke an action corresponding to the input gesture further comprises computer readable code to: determine a gesture activation state based on the hand gesture state (fig. 2, gesture and gaze) Pfeuffer and one or more suppression criteria ([0409]) Lacey.
In regards to claim 19, see rationale of claim 3; Pfeuffer and Lacey in view of Salvi teach the system of claim 18, wherein the computer readable code to determine the hand gesture state comprises computer readable code to: determine a hand orientation state from a plurality of candidate hand orientation states based on the geometric characteristics of the hand relative to the head (figs. 2a and 2b, flipped state) Salvi; and determine the hand gesture state based on the hand orientation state and the gaze vector (figs. 4a and 4b, flipped state; [0015], [0023], [0044]-[0048], [0148]) Salvi and (abstract, [0158], [0364]) Lacey.
In regards to claim 20, see rationale of claim 3; Pfeuffer and Lacey in view of Salvi teach the system of claim 18, wherein the computer readable code to invoke an action corresponding to the input gesture further comprises computer readable code to: determine a gesture activation state based on the hand gesture state (fig. 2, gesture and gaze) Pfeuffer and one or more suppression criteria ([0409]) Lacey.
Allowable Subject Matter
Claims 5 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRANT SITTA whose telephone number is (571)270-1542. The examiner can normally be reached M-F 7:30-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patrick Edouard can be reached at 571-272-6084. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRANT SITTA/Primary Examiner, Art Unit 2622