DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-26 are presented for examination on the merits.
Claim Rejections – 35 USC § 101
2. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
3. Claims 1-19 and 23-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1, 25, and 26 are directed to steps that can be interpreted as mental processes. Claim 1 is analyzed below as representative.
The claim is rejected under 35 U.S.C. § 101 because it is directed to an abstract idea, specifically a mental process (i.e., concepts performed in the human mind or with pen and paper), and the claim does not recite additional elements that amount to significantly more than the abstract idea.
Step 1: Statutory Category: The claim recites a method/process, which is a statutory category. The analysis proceeds to Step 2A.
Step 2A, Prong 1: Directed to an Abstract Idea: The claim is directed to mental processes and certain methods of organizing human activity. Specifically, the claim recites the steps of: observing a user's action (detecting a gesture within a distance of a head); evaluating the action (determining the type of hand gesture based on shape); and acting upon that evaluation (performing a first operation).
Concepts involving observation, evaluation, and decision-making are capable of being performed in the human mind (e.g., a human watching another person, seeing a hand near their head, recognizing the hand shape, and deciding to perform an associated task). Furthermore, the limitation "performing a first operation" is purely functional and result-oriented. It does not specify what the operation is. As such, this step can be interpreted as a mental act (e.g., mentally acknowledging the gesture) or a fundamental administrative instruction (responding to a signal), which falls under the abstract idea of Managing Personal Behavior or Relationships or Interactions Between People (MPEP 2106.04(a)(2)).
Step 2A, Prong 2: Integration into a Practical Application. The claim fails to integrate the abstract idea into a practical application. To satisfy Prong 2, the claim must apply the exception in a meaningful way, such as an improvement to the functioning of a computer or a specific technical treatment of data.
Here, the claim fails for the following reasons:
1. Lack of a specific "real world" output: The limitation "performing a first operation" is devoid of technical specificity. It does not recite a change in state of the audio device (e.g., adjusting volume, changing tracks) or a mechanical actuation. Because "first operation" is not defined, it does not compel a practical result. It is merely a placeholder for any action, including abstract or non-technical actions (such as updating a variable in memory or a human making a mental note).
2. Insignificant Extra-Solution Activity: The recitation of a "wearable audio device" and the location "respective distance of a side of the user's head" amounts to mere field-of-use restrictions. Limiting the abstract idea to a particular environment (a wearable device context) does not render the idea eligible. The sensors and device are used merely as tools to gather data for the abstract process of recognition.
3. No Technical Improvement: The claim does not explain how the device achieves the result. It effectively claims the result of "recognizing a gesture and doing something" without reciting the specific technical means or hardware configurations that improve the computer's ability to process gestures.
Step 2B: Significantly More:
The claim does not include an inventive concept sufficient to transform the abstract idea into a patent-eligible application. Generic Computer Implementation: The steps of "detecting," "determining," and "performing" represent generic computer functions (receiving data, analyzing data, executing instructions) performed on presumably generic components (a wearable device, sensors). Routine Data Gathering: Collecting data regarding "distance" and "shape" is routine data acquisition. Because the "first operation" is undefined and effectively encompasses mental processes or generic data handling, the claim amounts to nothing more than the abstract idea itself performed on a generic device.
Claims 2-19 and 23-24 depend from claim 1; therefore, they suffer from the same deficiencies.
4. Claim 26 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 26 recites "… computer-readable medium …," and the recited "medium" fails to fall within a statutory category because the original disclosure does not provide a clear definition for "medium." Thus, under the broadest reasonable interpretation, the "medium" could be directed to a transmission medium (i.e., a signal/transitory medium). The specification does not explicitly describe what is meant by "computer-readable medium," and the term may therefore be construed to encompass an ineligible transitory propagating signal.
Therefore, claim 26 is rejected under 35 U.S.C. 101 as failing to be limited to embodiments that fall within a statutory category. To overcome this rejection, it is suggested that Applicant replace "computer-readable storage medium ..." with --non-transitory computer-readable storage medium--. Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning, which includes signals per se.
Claim Rejections – 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claims 1-4, 6-10, 12-13, 17-20, 22-23 and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Park (US 2018/0349087 A1) in view of Arai (JP 2013012158 A).
As to claim 1, Park, directed to sound control by various hand gestures, discloses the claimed:
a. a method, comprising: at a wearable audio output device read on ¶ 0031, (FIG. 1 illustrates a system 100 that can be used to present generated audio and/or virtualized image(s) to a user 102. As illustrated, user 102 can wear a head-mounted display 104 with integrated audio generation device(s) 108 (e.g., a speaker or transducer));
In a different embodiment, Park further discloses:
b. while outputting audio content, detecting a gesture performed by a hand of a user of the wearable audio output device read on ¶ 0031 & ¶ 0032, (FIG. 1 illustrates a simplified diagram embodying several features of the disclosure. FIG. 1 illustrates a system 100 that can be used to present generated audio and/or virtualized image(s) to a user 102. As illustrated, user 102 can wear a head-mounted display 104 with integrated audio generation device(s) 108 (e.g., a speaker or transducer). The head-mounted display 104 can include integrated displays 106 that may present virtual or augmented images to user 102. Head-mounted display 104 can also include a sensor 110. Sensor 110 can include an imaging, orientation, contact, or other sensor that may be used to detect movement, position, or configuration of a control object used, by user 102, to perform a gesture and/or to determine appropriate information to display to user 102 via displays 106. In certain embodiments, sensor 110 can be used to determine a gesture performed by hand 112 or another control object (such as an appendage or an object carried by a user));
c. performing a first operation corresponding to the gesture; and in accordance with a determination that the gesture is not the first type of hand gesture determined based at least in part on the shape of the hand during performance of the gesture, forgoing performing the first operation read on ¶ 0040, (the gesture of attempting to insert digit 304 into ear 308 of user 302 can be interpreted as a gesture to modify audio within a virtualized, or other, environment. For example, state 312 of object 310 can correspond to a state wherein digit 304 is not inserted into ear 308. As such, audio 316 generated corresponding to object 310 can be in state 312. State 314 of object 310 can correspond to a state wherein digit 304 is inserted into ear 308. As such audio 318 generated corresponding to object 310 can be modified, as illustrated. For example, audio 318 can have a decreased volume and/or amplitude as compared to audio 316). Park discloses, in a different embodiment, that "sensor 110 can be used to determine a gesture performed by hand 112 or another control object (such as an appendage or an object carried by a user)." Park further teaches, in ¶ 0050 & ¶ 0051, that it may be beneficial to incorporate sensors into the head-mounted display 104: the head-mounted display can include a sensor 110, which can include an imaging, orientation, contact, or other sensor that may be used to detect movement, position, or configuration of a control object used, by user 102, to perform a gesture and/or to determine appropriate information to display to user 102 via displays 106.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sound control by various hand gestures of Park in order to provide an immersive environment and/or to supplement real world interactions with a physical environment. Park does not explicitly disclose in response to detecting the gesture: in accordance with a determination that the gesture is detected within a respective distance of a side of the user’s head and is a first type of hand gesture determined based at least in part on a shape of the hand during performance of the gesture.
However, Arai, in an electronic device and a control method for controlling the device, cures the deficiency by teaching that it may be beneficial wherein:
d. in response to detecting the gesture: in accordance with a determination that the gesture is detected within a respective distance of a side of the user’s head and is a first type of hand gesture determined based at least in part on a shape of the hand during performance of the gesture read on Page 12, Para. 1, (if the hand gesture recognition unit 300 detects a movement of lowering the hand, the process returns to block 1302. That is, when the user performs a gesture of lowering his/her hand, the state in which the select operation mode is selected is canceled, that is, the operation in the select operation mode is ended. Note that the gesture of lowering the hand is a gesture of moving the hand out of a shooting area that can be shot by the video camera 20, for example. Of course, the gesture of lowering the hand may be a gesture that is not included in the gesture in the select operation mode recognized by the hand gesture recognition unit 300 in block 1320, and may be a gesture that indicates a change in a specific hand shape).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the electronic apparatus and control method of Arai into Park in order to provide an electronic apparatus which can easily select an operation mode to be used in hand gesture recognition.
As to claim 2, Arai further teaches:
a. in response to detecting the gesture and in accordance with a determination that the gesture is detected more than the respective distance of the side of the user’s head, forgoing performing the first operation read on Page 12, Para. 1, (if the hand gesture recognition unit 300 detects a movement of lowering the hand, the process returns to block 1302. That is, when the user performs a gesture of lowering his/her hand, the state in which the select operation mode is selected is canceled, that is, the operation in the select operation mode is ended. Note that the gesture of lowering the hand is a gesture of moving the hand out of a shooting area that can be shot by the video camera 20, for example. Of course, the gesture of lowering the hand may be a gesture that is not included in the gesture in the select operation mode recognized by the hand gesture recognition unit 300 in block 1320, and may be a gesture that indicates a change in a specific hand shape).
As to claim 3, Park further discloses:
a. in response to detecting the gesture and in accordance with a determination that the gesture is detected within the respective distance of the side of the user’s head and is a second type of hand gesture determined based at least in part on the shape of the hand during performance of the gesture, performing a second operation corresponding to the gesture, wherein the second operation is different from the first operation read on ¶ 0040, (Fig. 3; the gesture of attempting to insert digit 304 (finger) into ear 308 of user 302 can be interpreted as a gesture to modify audio within a virtualized, or other, environment. For example, state 312 of object 310 (audio 316 can have an increased volume and/or amplitude as compared to audio 318) can correspond to a state wherein digit 304 is not inserted into ear 308. As such, audio 316 generated corresponding to object 310 can be in state 312. State 314 of object 310 can correspond to a state wherein digit 304 (finger) is inserted into ear 308. As such audio 318 generated corresponding to object 310 can be modified, as illustrated. For example, audio 318 can have a decreased volume and/or amplitude as compared to audio 316).
As to claim 4, Park further discloses:
a. in response to detecting the gesture and in accordance with a determination that the gesture is not the second type of hand gesture determined based at least in part on the shape of the hand during performance of the gesture, forgoing performing the second operation read on ¶ 0040, (the gesture of attempting to insert digit 304 into ear 308 of user 302 can be interpreted as a gesture to modify audio within a virtualized, or other, environment. For example, state 312 of object 310 can correspond to a state wherein digit 304 is not inserted into ear 308. As such, audio 316 generated corresponding to object 310 can be in state 312. State 314 of object 310 can correspond to a state wherein digit 304 is inserted into ear 308. As such audio 318 generated corresponding to object 310 can be modified, as illustrated. For example, audio 318 can have a decreased volume and/or amplitude as compared to audio 316).
As to claim 6, Park further discloses:
a. wherein the gesture comprises an air gesture read on ¶ 0032, (head-mounted display 104 can include integrated displays 106 that may present virtual or augmented images to user 102. Head-mounted display 104 can also include a sensor 110. Sensor 110 can include an imaging, orientation, contact, or other sensor that may be used to detect movement, position, or configuration of a control object used, by user 102, to perform a gesture and/or to determine appropriate information to display to user 102 via displays 106. In certain embodiments, sensor 110 can be used to determine a gesture performed by hand 112 or another control object (such as an appendage or an object carried by a user)).
As to claim 7, Park further discloses:
a. wherein the gesture is detected via one or more image sensors read on ¶ 0030, (in certain embodiments, a head-mounted display can be utilized to detect gestures utilizing integrated sensor(s), such as imaging, pressure, contact, or other sensor(s). In certain embodiments, gestures can be used to adjust audio in a non-virtualized environment. For example, a user listening to music from a smartphone or other portable device can adjust audio generated by the device utilizing the gestures. In certain embodiments, user gesture(s) can be detected by a remote sensor. For example, a user may be within a movie theater that utilizes personalized audio generation and the user can modify the personalized audio generated for the user using the disclosed techniques. The gestures can, for example, be detected by an imaging or other sensor).
As to claim 8, Park further discloses:
a. wherein the gesture is detected by the wearable audio output device read on ¶ 0032, (Head-mounted display 104 can include integrated displays 106 that may present virtual or augmented images to user 102. Head-mounted display 104 can also include a sensor 110. Sensor 110 can include an imaging, orientation, contact, or other sensor that may be used to detect movement, position, or configuration of a control object used, by user 102, to perform a gesture and/or to determine appropriate information to display to user 102 via displays 106).
As to claim 9, Park further discloses:
a. wherein the wearable audio output device is an ear-worn device read on Fig. 1 & ¶ 0032, (Head-mounted display 104 can include integrated displays 106 that may present virtual or augmented images to user 102. Head-mounted display 104 can also include a sensor 110. Sensor 110 can include an imaging, orientation, contact, or other sensor that may be used to detect movement, position, or configuration of a control object used, by user 102, to perform a gesture and/or to determine appropriate information to display to user 102 via displays 106).
As to claim 10, Park further discloses:
a. detecting an end of the gesture, and in response to detecting the end of the gesture, ceasing to perform the first operation read on ¶ 0056, (for all cases, upon removal of the gesture, at 1020, the volume/audio control can return to a default value).
As to claim 12, Arai further teaches:
a. wherein detecting the end of the gesture comprises detecting the hand being further than the respective distance of the side of the user’s head read on Page 12, Para. 1, (if the hand gesture recognition unit 300 detects a movement of lowering the hand, the process returns to block 1302. That is, when the user performs a gesture of lowering his/her hand, the state in which the select operation mode is selected is canceled, that is, the operation in the select operation mode is ended. Note that the gesture of lowering the hand is a gesture of moving the hand out of a shooting area that can be shot by the video camera 20, for example. Of course, the gesture of lowering the hand may be a gesture that is not included in the gesture in the select operation mode recognized by the hand gesture recognition unit 300 in block 1320, and may be a gesture that indicates a change in a specific hand shape).
As to claim 13, Park further discloses:
a. wherein the gesture comprises a cupping gesture read on ¶ 0051, (FIG. 7 illustrates a simplified diagram embodying several features of the disclosure regarding gesture recognition. Illustrated are several hand shapes/gestures 700 that can be performed using a hand of a user (such as hand 112). As illustrated, the hand can be cupped with a digit extended at various different angles compared to a plane formed by the palm of the hand. For example, in order from 702, 704, and to 706, the angle between the corresponding finger and the plane is decreasing. As illustrated, a progressively smaller diameter circular shape (703, 705, and 707) can be formed by the contour of the finger and palm. The illustrated gestures 700 can be formed when, for example, a user is cupping their hand as illustrated in FIG. 5. A user may also form the gesture illustrated to form an outline around an object (e.g., by locating the hand in the field of view of the user in a position that one of shapes 703, 705, or 707 includes the desired object)).
As to claim 17, Park further discloses:
a. wherein the gesture is detected within a gesture region read on ¶ 0050, (thus, when user 602 performs any of the gestures illustrated in FIGS. 3-5, audio corresponding to objects 610, 612, and/or 614 can be generated and/or modified as disclosed herein with regards to FIGS. 3-5. Selection of which of objects 610, 612, and/or 614 to generate or modify audio for can be determined using various techniques, such as gaze detection or gesture detection. For example, as illustrated, a field of view 607 can be determined for a user and a corresponding region of interest 608 projected into environment 606. If objects(s) are within region of interest 608, then audio can be generated and/or modified according to the correspondingly performed gesture).
As to claim 18, Park further discloses:
a. detecting the hand of the user entering the gesture region; and in response to detecting the hand of the user entering the gesture region, providing first feedback indicating that the hand has entered the gesture region read on ¶ 0050, (for example, as illustrated, a field of view 607 can be determined for a user and a corresponding region of interest 608 projected into environment 606. If objects(s) are within region of interest 608, then audio can be generated and/or modified according to the correspondingly performed gesture).
As to claim 19, Arai further teaches:
a. detecting the hand of the user exiting the gesture region; and in response to detecting the hand of the user exiting the gesture region, providing second feedback indicating that the hand has left the gesture region read on Page 12, Para. 1, (if the hand gesture recognition unit 300 detects a movement of lowering the hand, the process returns to block 1302. That is, when the user performs a gesture of lowering his/her hand, the state in which the select operation mode is selected is canceled, that is, the operation in the select operation mode is ended. Note that the gesture of lowering the hand is a gesture of moving the hand out of a shooting area that can be shot by the video camera 20, for example. Of course, the gesture of lowering the hand may be a gesture that is not included in the gesture in the select operation mode recognized by the hand gesture recognition unit 300 in block 1320, and may be a gesture that indicates a change in a specific hand shape).
As to claim 20, Park further discloses:
a. wherein the first operation comprises adjusting a volume of audio output at the wearable audio output device read on ¶ 0030, (in certain embodiments, gestures can be used to adjust audio in a non-virtualized environment. For example, a user listening to music from a smartphone or other portable device can adjust audio generated by the device utilizing the gestures).
As to claim 22, Park further discloses:
a. wherein the first operation comprises adjusting playback of media content read on ¶ 0032, (head-mounted display 104 can include integrated displays 106 that may present virtual or augmented images to user 102. Head-mounted display 104 can also include a sensor 110. Sensor 110 can include an imaging, orientation, contact, or other sensor that may be used to detect movement, position, or configuration of a control object used, by user 102, to perform a gesture and/or to determine appropriate information to display to user 102 via displays 106. In certain embodiments, sensor 110 can be used to determine a gesture performed by hand 112 or another control object (such as an appendage or an object carried by a user). Example control objects can include a glove, game controller, wand, etc. For example, image tracking and analysis techniques can be used to identify various gesture(s) performed by hand 112. Sensor 110 may include a contact, pressure, or proximity sensor to detect physical direct contact between hand 112 and head-mounted display 104).
As to claim 23, Park further discloses:
a. detecting the hand of the user entering a gesture region; and in response to detecting the hand of the user entering the gesture region, activating a gesture detection state for the wearable audio output device, wherein the gesture is detected while the gesture detection state is active read on ¶ 0050, (thus, when user 602 performs any of the gestures illustrated in FIGS. 3-5, audio corresponding to objects 610, 612, and/or 614 can be generated and/or modified as disclosed herein with regards to FIGS. 3-5. Selection of which of objects 610, 612, and/or 614 to generate or modify audio for can be determined using various techniques, such as gaze detection or gesture detection. For example, as illustrated, a field of view 607 can be determined for a user and a corresponding region of interest 608 projected into environment 606. If objects(s) are within region of interest 608, then audio can be generated and/or modified according to the correspondingly performed gesture).
As to claim 25, the claim is interpreted and rejected as to claim 1. Park further discloses:
a. one or more processors; and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs read on ¶ 0064, (the computer system 1200 may further include (and/or be in communication with) one or more non-transitory storage devices 1206, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable and/or the like. Such storage devices may be configured to implement any appropriate data storage, including without limitation, various file systems, database structures, and/or the like).
As to claim 26, the claim is interpreted and rejected as to claim 1. Park further discloses:
a. a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a wearable audio output device read on ¶ 0064, (the computer system 1200 may further include (and/or be in communication with) one or more non-transitory storage devices 1206, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable and/or the like. Such storage devices may be configured to implement any appropriate data storage, including without limitation, various file systems, database structures, and/or the like).
8. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Arai and further in view of Maharyta (US 11556184 B1).
As to claim 5, Park in view of Arai does not explicitly recite in response to detecting the gesture and in accordance with a determination that the gesture is detected more than the respective distance of the side of the user’s head, forgoing performing the second operation.
However, Maharyta, in proximity-sensing systems configurable to detect gestures directionally and at a high distance, cures the deficiency by teaching that it may be beneficial wherein:
a. in response to detecting the gesture and in accordance with a determination that the gesture is detected more than the respective distance of the side of the user’s head, forgoing performing the second operation read on Col. 6, Lines 5-13, (the four electrodes (or any number of electrodes) located on a first plane and a shield electrode located on a second plane parallel to the first plane can be used for proximity detection within a proximity detection area above the first plane, such as illustrated and described below with respect to FIGS. 2A-2E, while rejecting objects and gestures detected outside of the proximity detection area).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the high-distance directional proximity sensor of Maharyta into Park in view of Arai in order to provide an intuitive way for the user to interact with the device while increasing a proximity detection distance, providing directionality to the proximity sensing within a specific area, and providing proximity detection in the presence of metal objects or other noise sources near the device.
9. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Arai and further in view of Lee (KR 20160111881 A).
As to claim 11, Park does not explicitly recite wherein detecting the end of the gesture comprises detecting a change in the shape of the hand.
However, Lee, in a scanner capable of acquiring tactile information of an object, cures the deficiency by teaching that it may be beneficial wherein:
a. wherein detecting the end of the gesture comprises detecting a change in the shape of the hand read on Page 4, Para. 8, (then, the hand position value and the hand shape sensed through the sensing unit 10 are collected and analyzed to grasp the hand shape change pattern, and it is determined whether the hand shape change pattern detected corresponds to the scanning start gesture or the scanning end gesture. For example, if the user confirms that the index finger bends twice in succession, the user will be able to acknowledge and notify the occurrence of the scanning start gesture, and if the user will slowly confirm the bending motion once, the occurrence of the scanning end gesture will be acknowledged and reported).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method for representing haptic information of Lee into Park in view of Arai in order to recognize, for example, that an index finger bending twice in succession indicates the occurrence of a scanning start gesture, and that a single slow bending motion indicates the occurrence of a scanning end gesture.
10. Claims 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Arai and further in view of Lindmeier (US 20220084279 A1).
As to claim 14, Park does not explicitly recite wherein the gesture comprises an air pinch gesture.
However, Lindmeier, in input devices that present graphical user interfaces, cures the deficiency by teaching that it may be beneficial wherein:
a. the gesture comprises an air pinch gesture read on ¶ 0193, (In FIG. 9A, device 101 detects that the first hand 916-1 of the user has performed a particular gesture (e.g., “Gesture B”). In some embodiments, Gesture B includes a pinch gesture by two or more fingers of first hand 916-1 (e.g., by a thumb and forefinger). In some embodiments, Gesture B is interpreted as a request to manipulate cylinder 912. In some embodiments, in response to detecting that first hand 916-1 performed Gesture B, device 101 displays manipulation globe 914).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the methods for manipulating objects in an environment of Lindmeier into Park in order to provide a quick and efficient method to manipulate the virtual object (e.g., by displaying the indication near the hand(s) that is being used or will be used to perform the manipulation of the object), which simplifies the interaction between the user and the electronic device, enhances the operability of the electronic device, and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently while reducing errors in usage.
As to claim 15, Lindmeier further discloses:
a. wherein the gesture comprises a double air pinch gesture read on ¶ 0329, (in some embodiments, Gesture G is a double pinch gesture performed by hand 1110 or a double tap gesture on a stylus held by hand 1110. In some embodiments, cylinder 1106 is moved to a location associated with the pinch and/or tap (e.g., at or near the location of the pinch and/or tap). In some embodiments, cylinder 1106 is moved to within a threshold distance (e.g., 1 inch, 3 inches, 6 inches, 1 foot, etc.) of the representation of hand 1110 such that a pinch gesture by hand 1110 is interpreted as a selection of cylinder 1106 (e.g., without requiring hand 1110 to move towards cylinder 1106 to select cylinder 1106)).
As to claim 16, Lindmeier further discloses:
a. wherein the gesture comprises an air pinch and twist gesture read on ¶ 0195, (for example, device 101 displays a dot at a particular position on the circular element corresponding to the pitch rotation to indicate that if second hand 916-2 performs a selection input (e.g., a pinch gesture), then the pitch rotation is selected and the user is able to cause the virtual object to rotate in the pitch orientation (e.g., by moving second hand 916-2 in a circular arc in a manner indicated by the selected circular element, optionally while maintaining the selection input)).
11. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Arai and further in view of Applicant Admitted Prior Art (AAPA).
As to claim 21, AAPA discloses:
a. wherein the first operation comprises adjusting a magnitude of ambient sound from the physical environment read on ¶ 0005, (wearable audio output devices are incapable of detecting alert conditions related to spatial context of a user. In other cases, the wearable audio output devices do not automatically provide audio feedback about the alert conditions and/or modify a magnitude of ambient sound from the physical environment (e.g., so that the user may better hear sounds related to the alert conditions)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the providing indication of alert conditions of AAPA into Park in view of Arai in order to detect alert conditions related to spatial context of a user of the wearable device.
12. Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Arai and further in view of Ottens (WO 2024206642 A1).
As to claim 24, Park in view of Arai does not explicitly recite detecting a second gesture performed by the user’s head; and in response to detecting the second gesture, activating a gesture detection state for the wearable audio output device, wherein the gesture is detected while the gesture detection state is active.
However, Ottens, in interacting with audio data via motion inputs, cures the deficiency by teaching that it may be beneficial wherein:
a. detecting a second gesture performed by the user’s head; and in response to detecting the second gesture, activating a gesture detection state for the wearable audio output device, wherein the gesture is detected while the gesture detection state is active read on ¶ 0218, (The one or more audio output devices (e.g., speakers, headphones, and/or earbuds) (in some embodiments, the one or more audio output devices are in communication with an external electronic device and/or computer system (e.g., a smart phone, a smart watch, a tablet computer, and/or a personal computer)) detect (802) (e.g., based on processing of the sensor measurements at the one or more audio output devices and/or based on processing of the sensor measurements at a companion device such as a smartphone, smartwatch, tablet, wearable computing device, laptop computer, and/or desktop computer) one or more sensor measurements (e.g., via one or more sensors (e.g., one or more accelerometers, gyroscopes, magnetometers, inertial measurement units, optical sensors and/or other sensors that are capable of detecting movement of the one or more audio output devices in space)) that correspond to a start of a motion gesture (e.g., 634b and/or 634c) (e.g., wherein a complete motion gesture requires the detection of multiple sub portions (e.g., an initial sub portion (e.g., 634b and/or 634c), an intermediate sub portion (e.g., 638b and 642b, and/or 638c and 642c), and an end sub portion (e.g., 644b and/or 644c)) of a motion (e.g., head rotation) of the user of the computer system, wherein, for each sequential sub portion of motion that is detected, the computer system has an increasing level of confidence that the motion being detected corresponds to a predefined motion gesture (e.g., a head motion gesture (e.g., one or more head rotations along a lateral axis (e.g., pitch rotation) indicative of a head nod gesture or one or more head rotations along a vertical axis (e.g., yaw rotation) indicative of a head shake gesture)) (e.g., as illustrated in FIGS. 6C, 6D, 6G, and 6H)) (in some embodiments, detecting the start of a motion gesture corresponds to the detection of an initial sub portion of motion (e.g., a start of a head rotation), wherein the computer system has an initial threshold level of confidence that motion being detected corresponds to a predefined motion gesture) (in some embodiments, the one or more sensors detect motion (e.g., slight head movement) that does not meet an initial threshold level of confidence that the motion being detected corresponds to a predefined motion gesture)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the methods and systems for interacting with audio events via motion inputs of Ottens into Park in view of Arai in order to detect motion inputs for interacting with audio notifications, provide audio feedback for detected motion gestures, and detect motion inputs in spatial audio arrangements, thereby providing audio output devices with faster and more efficient methods for interacting with audio data.
Citation of Pertinent Prior Art
13. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: see PTO-892 Notice of References Cited.
Conclusion
14. If the claimed invention is amended, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure/description relied upon, to assist the Examiner in proper interpretation of the amended language and also to verify and ascertain the metes and bounds of the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fekadeselassie Girma, whose telephone number is (571) 270-5886. The examiner can normally be reached Monday through Friday, 8:30 – 5:00. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Davetta W. Goins, can be reached at (571) 272-2957. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Fekadeselassie Girma/
Primary Examiner, Art Unit 2689