DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 – 3, 5 – 8, 10 – 11, 13, and 29 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Levesque (U.S. PG Pub 2020/0073482).
Regarding Claim 1, Levesque teaches a method, comprising:
obtaining sensor data (Figure 1, Element not labeled, but is the data from Element 1015. Paragraph 50) from a sensor (Figure 1, Element 1015. Paragraph 50);
obtaining, responsive to providing the sensor data (Figure 1, Element not labeled, but is the data from Element 1015. Paragraph 50) to a machine learning system (Paragraph 51), an output (Element certain gesture. Paragraph 51) from the machine learning system (Paragraph 51), the output (Element certain gesture. Paragraph 51) indicating one or more predicted gestures (Element different gesture and no-gesture measurements. Paragraph 51) and one or more respective probabilities (Element likelihood. Paragraphs 35 - 36) of the one or more predicted gestures (Element different gesture and no-gesture measurements. Paragraph 51);
determining, based on the output (Element certain gesture. Paragraph 51) of the machine learning system (Paragraph 51) and a gesture-detection factor (Element threshold value. Paragraphs 35 - 36), a likelihood (Element likelihood. Paragraphs 35 - 36) of an element control gesture (Figure 6, Element 625. Paragraph 82) being performed by a user of a device comprising the sensor (Figure 1, Element 1015. Paragraph 50); and
activating, based on the likelihood (Element likelihood. Paragraphs 35 - 36) and the gesture-detection factor (Element threshold value. Paragraphs 35 - 36), gesture-based control of an element (Figure 6, Element 635. Paragraph 82) according to the element control gesture (Figure 6, Element 625. Paragraph 82).
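For illustration only, the following minimal Python sketch traces the mapped limitations of Claim 1 above: a machine learning output of per-gesture probabilities is reduced to a likelihood for the element control gesture, which is then compared against a gesture-detection sensitivity threshold before gesture-based control is activated. All names and values are hypothetical and are drawn from neither the claims nor Levesque.

```python
# Hypothetical sketch of the Claim 1 flow as mapped above; not code from Levesque.
from typing import Dict

def activate_if_gesture_detected(
    gesture_probs: Dict[str, float],  # ML output: predicted gestures -> probabilities
    control_gesture: str,             # the element control gesture of interest
    threshold: float,                 # gesture-detection factor (sensitivity threshold)
) -> bool:
    """Determine a likelihood from the ML output and activate gesture-based
    control of the element when it meets the sensitivity threshold."""
    likelihood = gesture_probs.get(control_gesture, 0.0)
    return likelihood >= threshold

# Example: a "pinch" probability of 0.82 against a 0.75 threshold activates control.
print(activate_if_gesture_detected({"pinch": 0.82, "swipe": 0.10}, "pinch", 0.75))
```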
Regarding Claim 2, Levesque teaches the method of claim 1 (See Above), wherein the gesture-detection factor (Element threshold value. Paragraphs 35 - 36) comprises a gesture-detection sensitivity threshold or a likelihood adjustment factor (Paragraph 42).
Regarding Claim 3, Levesque teaches the method of claim 2 (See Above), wherein activating the gesture-based control of the element (Figure 6, Element 635. Paragraph 82) based on the likelihood (Element likelihood. Paragraphs 35 - 36) and the gesture-detection factor (Element threshold value. Paragraphs 35 - 36) comprises activating the gesture-based control of the element based on a comparison of the likelihood (Element likelihood. Paragraphs 35 - 36) with the gesture-detection sensitivity threshold (Paragraph 35).
Regarding Claim 5, Levesque teaches the method of claim 4 (See Above), wherein determining the likelihood (Element likelihood. Paragraphs 35 - 36) based on the output (Element certain gesture. Paragraph 51) and the gesture-detection factor (Element threshold value. Paragraphs 35 - 36) further comprises:
after determining that the first one of the one or more respective probabilities (Element likelihood. Paragraphs 35 - 36) that corresponds to the element control gesture (Figure 6, Element 625. Paragraph 82) is the highest one (Paragraph 56) of the one or more respective probabilities (Element likelihood. Paragraphs 35 - 36), determining that a second one of the one or more respective probabilities (Element likelihood. Paragraphs 35 - 36) that corresponds to a gesture other than the element control gesture (Figure 6, Element 625. Paragraph 82) is the highest one of the one or more respective probabilities (Element likelihood. Paragraphs 35 - 36); and
decreasing the likelihood (Element likelihood. Paragraphs 35 - 36) by an amount corresponding to a higher (Paragraph 52) of the second one of the one or more respective probabilities (Element weighted sum. Paragraph 75) and a fraction of the gesture-detection sensitivity threshold.
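A minimal sketch of the likelihood decrease recited in Claim 5 as mapped above: the likelihood is lowered by the higher of a competing gesture's probability and a fraction of the sensitivity threshold. The function name and the particular fraction are illustrative assumptions.

```python
# Hypothetical illustration of the Claim 5 adjustment; the 0.5 fraction is assumed.
def decrease_likelihood(likelihood: float,
                        second_probability: float,  # highest competing-gesture probability
                        threshold: float,           # gesture-detection sensitivity threshold
                        fraction: float = 0.5) -> float:
    """Decrease the likelihood by the higher of the competing probability
    and a fraction of the sensitivity threshold, clamped at zero."""
    decrement = max(second_probability, fraction * threshold)
    return max(0.0, likelihood - decrement)

print(decrease_likelihood(0.9, 0.3, 0.8))  # max(0.3, 0.4) = 0.4, so 0.9 - 0.4 = 0.5
```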
Regarding Claim 6, Levesque teaches the method of claim 2 (See Above), further comprising:
obtaining motion information from a motion sensor (Figure 1, Element 1015. Paragraph 50) of the device; and
modifying the gesture-detection sensitivity threshold (Element threshold value. Paragraphs 35 - 36) based on (Paragraph 71) the motion information.
Regarding Claim 7, Levesque teaches the method of claim 6 (See Above), wherein modifying the gesture-detection sensitivity threshold (Element threshold value. Paragraphs 35 - 36) comprises decreasing (Paragraph 42) the gesture-detection sensitivity threshold (Element threshold value. Paragraphs 35 - 36) responsive to an increase (Paragraph 42) in motion of the device indicated by the motion information (Figure 1, Element 1015. Paragraph 50).
Regarding Claim 8, Levesque teaches the method of claim 7 (See Above), wherein modifying the gesture-detection sensitivity threshold (Element threshold value. Paragraphs 35 - 36) based on the motion information comprises modifying the gesture-detection sensitivity threshold (Element threshold value. Paragraphs 35 - 36) based on the motion information (Figure 1, Element 1015. Paragraph 50) upon activation of the gesture-based control (Figure 6, Element 635. Paragraph 82), the method further comprising:
deactivating the gesture-based control (Figure 6, Element 635. Paragraph 82); and
smoothly increasing (Paragraph 85) the gesture-detection sensitivity threshold (Element threshold value. Paragraphs 35 - 36) to an initial value after deactivating the gesture-based control (Figure 6, Element 635. Paragraph 82).
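A minimal sketch of one way the "smoothly increasing" step might operate: the threshold is stepped a fraction of the way back toward its initial value on each update. The incremental ramp and its rate are assumptions for illustration, not Levesque's disclosed mechanism.

```python
# Assumed exponential-style ramp; Levesque does not specify the smoothing function.
def smooth_restore(current: float, initial: float, rate: float = 0.1) -> float:
    """Move the sensitivity threshold a fraction of the remaining distance
    back toward its initial value on each update tick."""
    return current + rate * (initial - current)

threshold = 0.4                      # value in effect when control was deactivated
for _ in range(5):                   # a few update ticks after deactivation
    threshold = smooth_restore(threshold, initial=0.8)
print(round(threshold, 3))           # 0.564, gradually approaching 0.8
```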
Regarding Claim 10, Levesque teaches the method of claim 1 (See Above), wherein the element comprises a virtual knob, a virtual dial, a virtual slider (Paragraph 26), or a virtual remote control.
Regarding Claim 11, Levesque teaches the method of claim 1 (See Above), wherein obtaining the sensor data (Figure 1, Element not labeled, but is the data from Element 1015. Paragraph 50) from the sensor (Figure 1, Element 1015. Paragraph 50) comprises obtaining first sensor data (Figure 1, Element not labeled, but is the data from Element 1015. Paragraph 50) from a first sensor (Figure 1, Element 1015. Paragraph 50) of the device, the method further comprising obtaining second sensor data (Figure 1, Element not labeled, but is the data from Element 1015. Paragraph 50) from a second sensor (Figure 1, Element 1015. Paragraph 50) of the device, wherein obtaining the output (Element certain gesture. Paragraph 51) indicating the one or more predicted gestures (Element different gesture and no-gesture measurements. Paragraph 51) and the one or more respective probabilities (Element likelihood. Paragraphs 35 - 36) of the one or more predicted gestures (Element different gesture and no-gesture measurements. Paragraph 51) comprises:
providing the first sensor data (Figure 1, Element not labeled, but is the data from Element 1015. Paragraph 50) to a first machine learning model (Paragraph 51) trained to extract first features from a first type of sensor data (Figure 1, Element not labeled, but is the data from Element 1015. Paragraph 50);
providing the second sensor data (Figure 1, Element not labeled, but is the data from Element 1015. Paragraph 50) to a second machine learning model (Paragraph 51) trained to extract second features from a second type of sensor data (Figure 1, Element not labeled, but is the data from Element 1015. Paragraph 50);
combining a first output (Element certain gesture. Paragraph 51) of the first machine learning model (Paragraph 51) with a second output (Element certain gesture. Paragraph 51) of the second machine learning model (Paragraph 51) to generate a combined sensor (Figure 1, Element 1015. Paragraph 50) input; and
obtaining the output (Element certain gesture. Paragraph 51) indicating the one or more predicted gestures (Element different gesture and no-gesture measurements. Paragraph 51) and the one or more respective probabilities (Element likelihood. Paragraphs 35 - 36) of the one or more predicted gestures (Element different gesture and no-gesture measurements. Paragraph 51) from a third machine learning model (Paragraph 72) responsive to providing the combined sensor (Figure 1, Element 1015. Paragraph 50) input to the third machine learning model (Paragraph 72).
Regarding Claim 13, Levesque teaches the method of claim 1 (See Above), wherein the device comprises a first device (Figure 1, Element 1002. Paragraph 47), and wherein activating the gesture-based control (Figure 6, Element 635. Paragraph 82) of the element according to the element control gesture (Figure 6, Element 625. Paragraph 82) comprises activating the gesture-based control (Figure 6, Element 635. Paragraph 82) of the element at the first device (Figure 1, Element 1002. Paragraph 47) or at a second device different from the first device, and wherein the method further comprises providing, by the first device or the second device (Figure 3, Element 302. Paragraph 68), at least one of a visual indicator based on the likelihood, a haptic indicator (Figure 3, Element 305. Paragraph 69) based on the likelihood (Element likelihood. Paragraphs 35 - 36), or an auditory indicator based on the likelihood.
Regarding Claim 29, Levesque teaches the method of claim 1 (See Above), wherein the gesture-detection factor (Element threshold value. Paragraphs 35 - 36) comprises a gesture-detection sensitivity threshold (Element threshold value. Paragraph 42), wherein determining the likelihood (Element likelihood. Paragraphs 35 - 36) comprises:
determining the likelihood (Element likelihood. Paragraphs 35 - 36) based on the output (Element certain gesture. Paragraph 51) of the machine learning system (Paragraph 51);
obtaining the gesture-detection sensitivity threshold (Element threshold value. Paragraph 42); and
modifying (Paragraph 42) the determined likelihood (Element likelihood. Paragraphs 35 - 36) based on the gesture-detection sensitivity threshold (Element threshold value. Paragraph 42) to generate a modified likelihood (Element improved likelihood. Paragraph 42), and
wherein activating the gesture-based control (Figure 6, Element 635. Paragraph 82) based on the likelihood (Element likelihood. Paragraphs 35 - 36) and the gesture-detection factor (Element threshold value. Paragraphs 35 - 36) comprises comparing the modified likelihood (Element improved likelihood. Paragraph 42) to the gesture-detection sensitivity threshold (Element threshold value. Paragraph 42).
Claims 27 – 28 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ike et al. (U.S. PG Pub 2008/0052643).
Regarding Claim 27, Ike et al. teach a method, comprising:
obtaining sensor data (Figure 1, Element image. Paragraph 27) from a sensor (Figure 1, Element 1. Paragraph 27);
obtaining, responsive to providing the sensor data (Figure 1, Element image. Paragraph 27) to a machine learning system (Paragraphs 33 - 37), an output from the machine learning system (Paragraphs 33 - 37), the output indicating one or more predicted gestures (Figure 2, Element 23. Paragraph 29) and one or more respective probabilities (Figure 10, Element 51, Sub-Element Probability. Paragraph 68) of the one or more predicted gestures (Figure 2, Element 23. Paragraph 29);
determining, based on the output of the machine learning system (Paragraphs 33 - 37) and a gesture-detection factor (Figure 10, Element 51, Sub-Element Probability. Paragraph 68), a dynamically updating likelihood (Figure 10, Element 51, Sub-Element Probability. Paragraph 68) of an element control gesture (Figure 10, Element Recognition Result. Paragraph 60) being performed by a user (Element User. Paragraph 26) of a first device (Figures 12 and 13, Element 61. Paragraph 70); and
providing, for display, a dynamically updating visual indicator (Figures 12 and 13, Elements 64a and 64b. Paragraphs 70 and 87) of the dynamically updating likelihood (Figure 10, Element 51, Sub-Element Probability. Paragraph 68) of the element control gesture (Figure 10, Element Recognition Result. Paragraph 60) being performed by the user (Element User. Paragraph 26).
Regarding Claim 28, Ike et al. teach the method of claim 27 (See Above), further comprising performing gesture control of an element (Figure 6, Element 31e. Paragraph 42) at the first device (Figures 12 and 13, Element 61. Paragraph 70) or a second device different from the first device.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Levesque (U.S. PG Pub 2020/0073482) in view of Nowozin et al. (U.S. PG Pub 2012/0225719).
Regarding Claim 9, Levesque teaches the method of claim 1 (See Above). Levesque is silent with regard to the limitation wherein the element control gesture comprises a pinch-and-hold gesture.
Nowozin et al. teach wherein the element control gesture comprises a pinch-and-hold gesture (Paragraph 1).
It would have been obvious to a person of ordinary skill in the art to modify the teachings of the gesture likelihood interaction of Levesque with the detected gesture of Nowozin et al. The motivation to modify the teachings of Levesque with the teachings of Nowozin et al. is to provide easily detected gestures, as taught by Nowozin et al. (Paragraph 2).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Levesque (U.S. PG Pub 2020/0073482) in view of Feng et al. (U.S. PG Pub 2015/0331534).
Regarding Claim 14, Levesque teaches the method of claim 1 (See Above). Levesque is silent with regard to the limitations of: detecting motion of the device greater than a threshold amount of motion; and disabling the gesture-based control of the element while the motion of the device is greater than the threshold amount of motion.
Feng et al. teach detecting motion of the device greater than a threshold amount of motion (Element movement over a threshold range. Paragraph 17); and disabling the gesture-based control (Paragraph 17) of the element while the motion of the device is greater than the threshold amount of motion (Element movement over a threshold range. Paragraph 17).
It would have been obvious to a person of ordinary skill in the art to modify the teachings of the gesture likelihood interaction of Levesque with the inadvertent gesture control of Feng et al. The motivation to modify the teachings of Levesque with the teachings of Feng et al. is to identify and disregard inadvertent gestures, as taught by Feng et al. (Paragraph 2).
Allowable Subject Matter
Claims 15 – 17 and 19 – 26 are allowed.
The following is an examiner’s statement of reasons for allowance: The prior art of record fails to disclose at least “providing, for display, a dynamically updating visual indicator of the dynamically updating likelihood of the element control gesture being performed by the user, wherein the dynamically updating visual indicator comprises a plurality of distinct visual indicator components having a plurality of respective component sizes, and wherein providing the dynamically updating visual indicator further comprises dynamically varying the plurality of respective component sizes by an amount that scales inversely with the dynamically updating likelihood” along with the other limitations of Claim 15.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Claims 4 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The prior art of record fails to disclose at least “wherein determining the likelihood based on the output and the gesture-detection factor comprises: determining that a first one of the one or more respective probabilities that corresponds to the element control gesture is a highest one of the one or more respective probabilities; obtaining a fraction of the gesture-detection sensitivity threshold; identifying a higher of the first one of the one or more respective probabilities and a fraction of the gesture-detection sensitivity threshold; and increasing the likelihood by an amount corresponding to the identified one of the higher of the first one of the one or more respective probabilities and the fraction of the gesture-detection sensitivity threshold” in combination with Claim 1, from which Claim 4 depends.
The following is a statement of reasons for the indication of allowable subject matter: The prior art of record fails to disclose at least “wherein the first sensor data has a first characteristic amount of noise and the second sensor data has a second characteristic amount of noise higher than the first characteristic amount of noise, and wherein the machine learning system comprises at least one processing module interposed between the third machine learning model and the first and second machine learning models, the at least one processing module configured to emphasize the second sensor data having the second characteristic amount of noise higher than the first characteristic amount of noise” in combination with Claims 1 and 11, from which Claim 12 depends.
Response to Arguments
Regarding the first argument, the applicant asserts that Levesque fails to teach at least "determining, based on the output of the machine learning system and gesture-detection factor, a likelihood of an element control gesture being performed" of at least Claim 1. The applicant argues that there is no disclosure in Levesque that the cited "likelihood" of Paragraphs 35 - 36 is based on anything other than a "detected movement" and a "detected vibration", let alone based on the "certain gesture to be detected" and the "threshold." The examiner respectfully disagrees with the applicant's assertion.
Levesque discloses "...the detected movement of the user matches the vibration profile of the first predefined gesture by: determining, based on vibration detected by at least one sensor in communication with the mobile device, that the user is touching an object; evaluating with at least one classification algorithm a likelihood of the at least first predefined gesture based on each of the detected movement and the detected vibration; calculating an average of the separate likelihoods; and selecting the first predefined gesture if the average of the likelihoods exceeds a threshold value (Paragraph 35. Emphasis Added)."
For further explanation of how the gestures are classified, the examiner points the applicant to Figure 2 and Paragraphs 54 - 56. Levesque discloses "In other embodiments, the system may keep gesture recognition enabled at all times. The system may then attempt to classify a detected gesture 230. For example, the system may run a classification algorithm to determine which of the gestures permitted against the object of interest has been performed. In one embodiment, the classification algorithm may be the result of a machine learning process: e.g., sensor data is recorded as a wide range of people perform different gestures; this data is used to train a machine learning algorithm to distinguish between the different gestures or no gesture at all; the algorithm is then capable of indicating the likelihood that a gesture has been performed based on sensor data (Paragraph 54. Emphasis Added)." Levesque discloses "The system may then evaluate whether a retrieved predefined gesture against the object of interest has been performed 235. For example, the system may determine if a gesture has been detected, and if this gesture is associated with the object of interest. If not, the system may continue looking for a gesture of interest (240) and updating the location of the user periodically (Paragraph 55. Emphasis Added)." Levesque further discloses "It may for example assume that the object interacted with is the closest object or the object for which the gesture classifier indicates the greatest likelihood. The system may also ignore ambiguous gestures when more than one object is nearby, or whenever the distance to multiple objects or the likelihood of a gesture having been performed against them is too close to disambiguate them (Paragraph 56. Emphasis Added)."
Therefore, it is clear that, during the gesture classification process, the system attempts to classify the gesture and then checks whether the gesture has been detected, in a repeating process that runs at all times. The Office is unmoved by the applicant's argument and the rejection is maintained.
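For added clarity, a minimal Python sketch of the Paragraph 35 logic quoted above: likelihoods evaluated from the detected movement and the detected vibration are averaged, and the predefined gesture is selected if the average exceeds a threshold value. Function and variable names are hypothetical.

```python
# Sketch of Levesque's Paragraph 35 as quoted above; names are illustrative.
def select_predefined_gesture(movement_likelihood: float,
                              vibration_likelihood: float,
                              threshold: float) -> bool:
    """Average the separate likelihoods and select the gesture when the
    average exceeds the threshold value."""
    average = (movement_likelihood + vibration_likelihood) / 2.0
    return average > threshold

print(select_predefined_gesture(0.9, 0.7, 0.75))  # average 0.8 exceeds 0.75 -> True
```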
Regarding the second argument, the applicant asserts that Ike et al. fail to teach at least “determining, based on the output of the machine learning system and gesture-detection factor, a dynamically updating likelihood of an element control gesture” of at least Claim 27. The applicant argues that Ike et al. do not disclose that the probability of Paragraph 68 is based on itself, and therefore do not disclose a dynamically updating likelihood of an element control gesture that is based on a gesture-detection factor. The applicant further argues that the previous response to arguments does not suggest that the gesture probabilities of Ike et al. meet the limitations for the dynamically updating likelihood of Claim 27, and that the previous response to arguments failed to explain how the explicit mapping of “Figure 10, Element 51, Sub-Element Probability. Paragraph 68” is linked to the examiner's argument and/or the rejection. The examiner respectfully disagrees with the applicant's assertion.
The examiner first notes that no mapping has been or is being changed. Ike et al. disclose “A recognition result display unit 61 displays a gesture recognition result. The recognition result includes at least information about which hand gesture is recognized by the gesture recognition unit 5. As to the information, a kind of extracted gesture may be included. Furthermore, as to a hand candidate region selected by the hand candidate region selection unit 52, a probability that the hand candidate region represents each hand gesture (evaluated by the gesture evaluation unit 51), and a threshold to decide whether the hand candidate region represents each hand gesture, may be included (Paragraph 68. Emphasis Added).” Ike et al. further disclose “FIG. 10 is a block diagram of a gesture recognition unit 5 according to the second embodiment. A gesture evaluation unit 51 sets various partial regions on the input image, and evaluates the possibility that each partial region includes a detection object's gesture. Thus the gesture evaluation unit 51 calculates a score of each partial region based on the evaluation result, and outputs the score. As a method for calculating the score, in case of the gesture recognition method explained in FIGS. 4 and 5, the score is calculated by evaluation result and reliability of the weak classifier W in the equation (2) as follows: Σi Σj (h(i,j,x) × α(i,j)) (Paragraphs 62 – 63. Emphasis Added).” This disclosure of Ike et al. teaches that the gesture evaluation unit (Element 51) determines a probability that the region represents a hand gesture, and that the probability is determined based on the score of each partial region, which is calculated from the evaluation result and the reliability of the weak classifier.
Ike et al. disclose "Next, method for evaluating the partial region image by the weak classifier W is explained by referring to FIG. 5. After normalizing the partial region image to N×N size, each weak classifier W(i,1), . . . , W(i,n(i)) in the strong classifier i decides whether the partial region image represents the recognition object (user's hand). As to a region A and a region B (defined by each classifier W), sum SA of brightness values of all pixels in the regions A is calculated and sum SB of brightness values of all pixels in the regions B is calculated. A difference between SA and SB is calculated and the difference is compared with an object decision threshold T. In this case, as shown in FIG. 5, two regions A and B are respectively represented by one or two rectangle region. As to position and shape of regions A and B, and a value of the object decision threshold T, by the learning using an object image and non-object image, the position, the shape and the value to effectively decide the object and non-object are previously selected (Paragraphs 32 – 33. Emphasis Added)." As the last sentence states, a learning process is used in order to provide the position, shape, and value to effectively decide the object and non-object. Ike et al. further discuss equations for the different classifiers used in making the determination. Furthermore, Ike et al. state "A decision result H(i,x) of the strong classifier i is calculated by the evaluation result h(i,j,x) of each weak classifier W as follows: H(i,x) = 1 if Σj (α(i,j) × h(i,j,x)) ≥ 0, and -1 otherwise (equation (2)). In above equation (2), α(i,j) represents reliability of the weak classifier W(i,j), which is determined based on correct answer ratio in image for learning (Paragraphs 36 – 37. Emphasis Added)." Therefore, each gesture is given a reliability based on a repeated learning process. The Office is unmoved by the applicant's argument and the rejection is maintained.
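A minimal sketch of equation (2) as quoted above: strong classifier i outputs 1 when the reliability-weighted sum of its weak-classifier evaluations is non-negative, and -1 otherwise. The example values are invented for illustration.

```python
# Sketch of Ike et al.'s equation (2); h values and reliabilities are invented.
from typing import Sequence

def strong_classifier_decision(h: Sequence[int], alpha: Sequence[float]) -> int:
    """Return 1 if the reliability-weighted sum of weak-classifier results
    h(i,j,x) is non-negative, otherwise -1."""
    weighted_sum = sum(a * hj for a, hj in zip(alpha, h))
    return 1 if weighted_sum >= 0 else -1

print(strong_classifier_decision(h=[1, -1, 1], alpha=[0.6, 0.3, 0.5]))  # sum 0.8 -> 1
```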
Regarding the third argument, the applicant asserts that the prior art of record fails to teach at least the limitations of Claim 19. The applicant argues that Levesque fails to disclose any likelihood that is modified based on the threshold of Paragraphs 35 – 36. The examiner respectfully disagrees with the applicant's assertion.
Levesque discloses “In some instances, the retrieved information may further comprise a vibration profile of the first predefined gesture, and determining whether the detected movement matches the first predefined gesture may include determining whether the detected movement of the user matches the vibration profile of the first predefined gesture by: determining, based on vibration detected by at least one sensor in communication with the mobile device, that the user is touching an object; evaluating with at least one classification algorithm a likelihood of the at least first predefined gesture based on each of the detected movement and the detected vibration; calculating an average of the separate likelihoods; and selecting the first predefined gesture if the average of the likelihoods exceeds a threshold value. In some cases, the retrieved information may further comprise a location of the first real-world object, and determining whether the detected movement matches the first predefined gesture may further include: determining whether the user is within a threshold distance of the first real-world object based on the location of the first real-world object and the user's location. In some instances, the retrieved information may further comprise a vibration profile of the first predefined gesture associated with the first real-world object, and determining whether the detected movement matches the first predefined gesture may comprise: determining, based on vibration detected by at least one sensor in communication with the mobile device, that the user is touching an object; evaluating with at least one classification algorithm a likelihood of the first predefined gesture based on the detected vibration; and matching the detected movement of the user to the vibration profile of the first predefined gesture if the likelihood of the first predefined gesture exceeds a threshold (Paragraphs 35 – 36. Emphasis Added).”
Levesque further discloses “In some instances, rather than a specific predetermined threshold distance factor, consideration of which real-world object a user is interacting with may utilize a likelihood factor based on a distance between the user and the real-world object. For example, such a likelihood may be higher within a certain range of the user (e.g., distances that are easily reachable by the user), and may decrease outside of this range as the distance increases. The likelihood output by a classifier may, for example, be weighted by such a distance-to-object likelihood factor to determine an improved likelihood of a given gesture. In some cases, a threshold may still be applied for the distance at which the distance-based likelihood factor is near zero, or below a certain minimum level (Paragraph 42. Emphasis Added).” Therefore, the improved likelihood of Paragraph 42 will be the likelihood of the first predefined gesture that is compared to a threshold of Paragraphs 35 and 36. The Office is unmoved by the applicant’s argument and the rejection is maintained.
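A minimal sketch of the Paragraph 42 weighting quoted above: a classifier's likelihood output is weighted by a distance-to-object factor to yield the "improved likelihood". The particular falloff function below is an assumption for illustration; Levesque states only that the factor is high within easy reach and decreases as distance grows.

```python
# Assumed falloff; Levesque does not give a specific distance-weighting formula.
def improved_likelihood(classifier_likelihood: float,
                        distance: float,
                        reachable_range: float = 1.0) -> float:
    """Weight the classifier output by a distance-to-object likelihood factor
    that is 1.0 within easy reach and decays as distance grows beyond it."""
    factor = 1.0 if distance <= reachable_range else reachable_range / distance
    return classifier_likelihood * factor

print(improved_likelihood(0.9, distance=2.0))  # weighted down to 0.45
```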
All other arguments are considered moot in light of the above rejection and/or the response to the first, second, and/or third arguments.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Zaliva (U.S. PG Pub 2012/0007821) discloses a gesture classifier that provides the likelihood that an execution gesture is from a collection of pre-defined gestures.
Ahmed et al. (U.S. PG Pub 2017/0177207) discloses calculating a probability that a gesture is being drawn.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW B SCHNIREL whose telephone number is (571)270-7690. The examiner can normally be reached Monday - Friday, 10 - 6 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Boddie, can be reached at 571-272-0666. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.B.S/Examiner, Art Unit 2625
/WILLIAM BODDIE/Supervisory Patent Examiner, Art Unit 2625