DETAILED ACTION
Status of the Application
1. Applicant’s Amendment to the Claims and Request for Continued Examination, filed March 4, 2026, have been received and entered.
2. Claims 1, 16, and 19 – 20 are amended. Claims 1 – 20 are pending and are under examination in this action.
3. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments / Amendment
4. The Request for Necessary Information is WITHDRAWN in view of Applicant’s remarks on page 10 of the Response.
5. On page 11 of the Response, Applicant argues that paragraph [0298] of Chalmers is “directed to utilizing head movements for listening to expanded . . . or condensed . . . versions of audio notifications, and not to skipping a currently playing message”.
The Office finds Applicant’s arguments unpersuasive for at least the following reasons. Paragraph [0298] of Chalmers discloses that movement of one or more wearable audio output devices in a first direction, as detected by accelerometers, causes a first operation to be performed. This movement may include backward movement or tilting, as examples. The first operation may include “forgoing outputting a second portion of audio content”. Chalmers continues by stating that “the second portion of audio content . . . would have been output if the input directed to the one or more wearable audio output devices had not occurred”. By forgoing this “second portion of audio content” that would have been output but for the “movement . . . in a first direction”, the “second portion of audio content” is skipped in response to the “movement . . . in a first direction” (gesture).
With regard to Applicant’s reliance on FIGS. 5I – 5K, this reliance is misplaced. These figures are referred to only to provide examples of simulated spatial locations. However, simulated spatial locations correspond to only one of the examples of “movement . . . in a first direction”. Accordingly, it is the other “movements” [backward movement or tilting] that are relied upon by the Examiner in making this rejection.
For at least these reasons, Applicant’s arguments are unpersuasive with respect to Chalmers.
6. On page 11 of the Response, Applicant argues that “Chalmers . . . is directed to utilizing head movements for listening to expanded . . . or condensed . . . versions of audio notifications, and not to skipping a currently playing message”.
The Office finds Applicant’s arguments unpersuasive for at least the following reasons. In paragraph [0285], Chalmers discloses that a head movement and a tap movement may cause the same operation to be performed. While the particular examples are directed towards “hearing more information”, this example is non-limiting and this concept applies broadly to other functions performed in response to gesture inputs. A key indication that this is merely an example is the usage of “e.g.”, an abbreviation of the Latin “exempli gratia”, which translates to “for example”. A phrase introduced by “for example” is just that: a single example, not an exhaustive recitation of every encompassed embodiment. Accordingly, paragraph [0285] discloses a broadly applicable principle, that a head movement and a tap movement may cause the same operation to be performed, accompanied by one specific example to facilitate understanding.
A person of ordinary skill in the art would clearly understand Chalmers’ usage of “e.g.” as merely providing a specific example encompassed by the disclosure, not every single embodiment of the corresponding broadly applicable principle. While arguments against interest are typically given additional weight (here, Applicant attempts to limit the disclosure of Chalmers, Applicant’s own prior application, through argument), this particular argument is unpersuasive due to the meaning of the phrase “for example” and its usage in this particular context.
Additionally, it is well established in patent law that the phrase “for example” is open ended, so much so that its inclusion in a claim commonly renders the claim indefinite. See MPEP § 2173.05(d).
For at least these reasons, Applicant’s arguments are unpersuasive with respect to Chalmers. Additionally, the reliance on paragraph [0285] of Chalmers is not necessary to teach the claimed subject matter and was merely provided to more clearly tie the disclosed “skip” gesture to head movement. Since this caused confusion to Applicant, references to this paragraph are removed and these features will be rejected by relying on paragraph [0298] instead.
7. On page 12 of the Response, Applicant argues that “the gesture of step 814 is not applied via head movements” because the “tap input” referred to in paragraph [0285] is not used for “skipping” a currently playing message (as described in FIGS. 5N – 5P).
The Office finds Applicant’s arguments unpersuasive for at least the following reasons. As set forth above, paragraph [0298] of Chalmers discloses that movement of one or more wearable audio output devices in a first direction [backward movement or tilting], as detected by accelerometers, causes a first operation to be performed [forgoing outputting a second portion of audio content]. Chalmers continues by stating that “the second portion of audio content . . . would have been output if the input directed to the one or more wearable audio output devices had not occurred”. By forgoing this “second portion of audio content” that would have been output but for the “movement . . . in a first direction”, the “second portion of audio content” is skipped in response to the “movement . . . in a first direction” (gesture).
Accordingly, the reliance on paragraph [0285] of Chalmers is not necessary to teach the claimed subject matter and was merely provided to more clearly tie the disclosed “skip” gesture to head movement. Since this caused confusion to Applicant, references to this paragraph are removed and these features will be rejected by relying on paragraph [0298] instead.
8. On pages 14 – 15 of the Response, Applicant argues that the newly added subject matter of claims 1, 19, and 20 is not taught or suggested by Chalmers.
Applicant’s arguments have been fully considered and are persuasive in view of the newly added subject matter. However, upon further consideration, a new ground(s) of rejection is made in view of Takahashi et al. (U.S. Pub. 2021/0055910).
Claim Rejections - 35 USC § 103
9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
10. Claims 1 – 3, 7 – 10, and 12 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chalmers et al. (U.S. Pub. 2020/0104194) in view of Takahashi et al. (U.S. Pub. 2021/0055910).
Regarding claim 1, Chalmers teaches: a system (FIGS. 3A, 3B; paragraphs [0140], [0142]; device 300 and wearable audio output device 301 are part of a “system”), comprising:
a first electronic device comprising one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors (FIG. 3A; paragraphs [0011], [0140]; [first] device 300 includes a CPU 310, memory 370, and programs stored in memory 370. As disclosed, processors, such as CPU 310, execute programs stored in memory 370), the one or more programs including instructions for:
detecting an event alert corresponding to an application notification (FIGS. 3A, 5B, 8; paragraphs [0072], [0186], [0278]; in step 802, a first event is detected which may correspond to application events such as incoming messages via an instant messaging application, calendar event invitations or reminders, etc. For example, the “first event” may be receiving message 508-1 through an instant messaging application on first device 300);
generating a message to be provided at a second electronic device, wherein the message includes contents corresponding to the application notification (FIGS. 5B, 8; paragraphs [0143], [0187], [0279]; first device 300 generates content [message] that is transmitted to wearable audio output [second] device 301 / 502 for audio output from first device 300 in response to receiving message 508-1, the “application notification”);
in accordance with a determination that the message does not include a question prompt (FIG. 5B; paragraph [0187]; message 508-1, and other similar messages, are received from other persons such as “Harold Smith” and “Mom”. These messages are direct communications from these other persons and therefore are not a “question prompt” because there is no response required from the user in order for the device to continue receiving content of each individual message);
causing the message to be provided at the second electronic device (FIGS. 5B, 8; paragraphs [0143], [0187], [0279]; wearable audio output [second] device 301 / 502 receives content [message] for audio output from first device 300 in response to receiving message 508-1, the “application notification”. This “message” may include the content of the application notification in audio form and is output to a user via second device 301 / 502 in step 804);
at a first time, while the message is being provided at the second electronic device: receiving, from the second electronic device, motion data corresponding to movement of the second electronic device (FIG. 8; paragraphs [0143], [0297], [0298]; second device 301 / 502 includes accelerometers and/or attitude sensors as part of the input device 308. These sensors allow second device 301 / 502 to receive head movement inputs from a user. Head movement detected by accelerometers is detected in step 832 [first time] while the first portion of audio content is being output);
determining a gesture based on the motion data (FIG. 8; paragraphs [0297], [0298]; a user may perform an input gesture to control the audio output by moving their head in a predetermined manner);
in accordance with a determination that the gesture satisfies a predetermined criteria, at a second time after the first time, ceasing providing the message at the second electronic device (FIG. 8; paragraphs [0297], [0298]; movement of the one or more wearable audio output devices in a first direction [backward or tilting], via head movement, corresponds to a gesture input that results in forgoing outputting a second portion of audio content that would have been output but for the gesture input. Accordingly, movement in a first direction corresponds to a skip gesture that skips [forgoes] remaining [second portion] audio content).
Chalmers fails to explicitly disclose: in accordance with a determination that the message includes a question prompt: causing the message to be provided at the second electronic device; and after the message is provided at the second electronic device, performing a task in furtherance of a response to the question prompt, wherein the response is received from the second electronic device.
However, in a related field of endeavor, Takahashi discloses in-ear headphones that allow a user to control audio content provided thereby in response to movement gestures (FIG. 2; Abstract).
With regard to claim 1, Takahashi teaches: in accordance with a determination that the message includes a question prompt: causing the message to be provided at the second electronic device (FIGS. 3, 6, 7; paragraphs [0058], [0084]; audio content [message] may include a question prompt, such as “do you want to listen to news?”. This question prompt is provided to a user’s left ear via second headphone unit 1b); and
after the message is provided at the second electronic device, performing a task in furtherance of a response to the question prompt, wherein the response is received from the second electronic device (FIGS. 6, 7; paragraphs [0068], [0084], [0085]; after the question prompt is provided via second headphone unit 1b, audio output [task] of “I will play news” followed by news content is performed in furtherance of a user’s gesture [response to the question prompt] confirming that they want to hear the news. The user’s gesture [response] is a rotation of the user’s head in the left direction, as illustrated, and is detected by the motion sensor 7b of second headphone unit 1b [second electronic device]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers and Takahashi to yield predictable results. More specifically, the teachings of a system including a device and earbud where the audio presentation of a message may be cancelled or aborted in response to a skip gesture via head movement, as taught by Chalmers, are known. Additionally, the teachings of a system including a device and earbud where a user can request more information by a head movement gesture detected by the earbud in response to a question prompt, as taught by Takahashi, are known as well. The combination of the known teachings of Chalmers and Takahashi would yield the predictable results of a system including a device and earbud where the audio presentation of a message may be cancelled or aborted in response to a skip gesture via head movement and a user can request more information by a head movement gesture detected by the earbud in response to a question prompt. In other words, it would have been obvious to incorporate the question prompt and responsive gestures of Takahashi into the system of Chalmers. Both references disclose similar systems of providing audio content to an earbud, and applying head movement gestures to control the provision of this audio content. It would have been obvious to incorporate the embodiment of Takahashi where the earbud asks, and allows a user to select via head movement gestures, whether the user wants specific types of content to be received. Such a modification of Chalmers requires nothing more than an additional type of content and user head movement gesture, as disclosed by Takahashi, thereby increasing the usability and functionality of the system of Chalmers. 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers and Takahashi to yield the aforementioned predictable results.
Regarding claim 2, Chalmers teaches: wherein the first electronic device corresponds to one of a smartphone, a smart watch, a tablet computer, a desktop computer, and a laptop computer (FIG. 3A; paragraphs [0140], [0211]; device 300 may be a laptop computer, a desktop computer, a tablet computer, a smart watch, or a smart phone).
Regarding claim 3, Chalmers teaches: wherein the second electronic device corresponds to one of a headphone device and an earbud device (FIG. 3B; paragraph [0142]; second device 301 / 502 may be a single earphone [earbud]).
Regarding claim 7, Chalmers teaches: the one or more programs including instructions for: in response to causing the message to be provided at the second electronic device, receiving the motion data corresponding to movement of the second electronic device (FIGS. 5B, 8; paragraphs [0143], [0187], [0279], [0280], [0286], [0292] – [0298]; wearable audio output [second] device 301 / 502 receives content [message] and audibly outputs the “message” in step 804. Head movement of a user is monitored by second device 301 / 502 in step 814 to determine whether a skip gesture has been input to forgo outputting any remaining portion of the “message”. Since step 814 is subsequent to step 804, the motion data is received and monitored in response to outputting the audio content of the “message”).
Chalmers fails to explicitly disclose: wherein the motion data is received until the message ceases to be provided at the second electronic device.
However, Takahashi teaches: wherein the motion data is received until the message ceases to be provided at the second electronic device (FIGS. 5, 11, 15; paragraphs [0147], [0213], [0278], [0283]; in step 331, control section 12 determines that the gesture reception period T has been completed after outputting a voice output of the last item of the menu. When the voice output of the last item of the menu, and the corresponding gesture reception period T, have been completed, a notification of this completion is received in step 125. The operations then return to step 101 to restart the processing performed by information processing apparatus 103. Accordingly, the “motion data” received at first headphone unit 1a [second electronic device] “ceases” by the process restarting for a subsequent iteration).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers and Takahashi to yield predictable results. More specifically, the teachings of an earbud outputting received audio content and then monitoring head motion to determine whether a gesture input has been received to skip remaining audio content, as taught by Chalmers, are known. Additionally, the teachings of headphones that output received audio content where motion data reception ends when the audio content ceases, as taught by Takahashi, are known as well. The combination of the known teachings of Chalmers and Takahashi would yield the predictable result of ceasing the reception of motion data when outputting the audio content ends. Such a modification to Chalmers requires nothing more than implementing the repetitive process pattern of Takahashi, where motion data monitoring ends and the process restarts when a notification audio output is completed. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers and Takahashi to yield the aforementioned predictable results.
Regarding claim 8, Chalmers teaches: wherein the event alert corresponding to the application notification is based on an incoming text message (FIG. 5B; paragraphs [0072], [0186]; the first event may correspond to an application notification of an incoming text message 508-1).
Chalmers fails to explicitly disclose: the message provided at the second electronic device corresponds to an indication that the incoming text message exceeds a threshold character length.
However, Chalmers discloses that when a length of audio content exceeds a threshold, a summary of the audio content may be presented instead of the actual text (FIG. 5H; paragraph [0233]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to modify the known teachings of Chalmers to yield predictable results. Specifically, it would have been obvious to apply the threshold length of audio content to a threshold number of characters contained in the audio content. Such a modification of Chalmers merely requires adding further specificity as to what the explicitly disclosed threshold length of audio content refers.
Additionally, it would have been obvious to try utilizing a number of characters as the threshold for outputting or summarizing audio content of incoming text messages. There are only two possible thresholds encompassed by the disclosure of Chalmers: (1) duration of audio output, and (2) character length.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to modify the known teachings of Chalmers to yield the aforementioned predictable results.
Regarding claim 9, Chalmers teaches: wherein the event alert corresponding to the application notification is based on a reminder (paragraph [0072]; the first event may correspond to a calendar reminder).
Chalmers fails to explicitly disclose: the message provided at the second electronic device corresponds to an indication that the reminder includes a plurality of items exceeding an item threshold.
However, Chalmers discloses that when a length of audio content exceeds a threshold, a summary of the audio content may be presented instead of the actual text (FIG. 5H; paragraph [0233]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to modify the known teachings of Chalmers to yield predictable results. Specifically, it would have been obvious to apply the threshold length of audio content to calendar reminders contained in the audio content. Such a modification of Chalmers merely requires adding further specificity as to what the explicitly disclosed threshold length of audio content refers.
Additionally, it would have been obvious to try utilizing a number of items in a calendar reminder as the threshold for outputting or summarizing audio content of calendar reminders. There are only three possible thresholds encompassed by the disclosure of Chalmers with respect to calendar reminders: (1) duration of audio output, (2) character length, and (3) number of reminders.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to modify the known teachings of Chalmers to yield the aforementioned predictable results.
Regarding claim 10, Chalmers teaches: wherein the motion data corresponding to movement of the second electronic device includes at least one or more rotation rates corresponding to the second electronic device and at least one or more acceleration rates corresponding to the second electronic device (paragraphs [0066], [0143], [0297], [0298]; attitude sensors, such as that included in second device 301 / 502, detect changes in the attitude [roll, pitch, yaw], and thus rotation rates, of a device. Additionally, accelerometers detect acceleration rates of a corresponding device. Second device 301 / 502 detects head movement using the attitude sensors and accelerometers to determine whether the movement corresponds to predetermined inputs).
Regarding claim 12, Chalmers fails to explicitly disclose: the one or more programs including instructions for: in accordance with a determination that the gesture satisfies the predetermined criteria, causing an audible tone to be provided concurrently with ceasing to provide the message at the second electronic device.
However, Takahashi teaches: the one or more programs including instructions for: in accordance with a determination that the gesture satisfies the predetermined criteria, causing an audible tone to be provided concurrently with ceasing to provide the message at the second electronic device (FIG. 13; paragraphs [0225], [0226], [0230]; once a user’s input gesture is detected as matching the item selection gesture in step 112 [predetermined criteria], first voice data [message] is stopped in step 114 and a detection sound [audible tone] is output in step 115).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers and Takahashi to yield predictable results. Specifically, it would have been obvious to incorporate the audio tone as a confirmation sound output in response to a predetermined gesture, as taught by Takahashi, in the device of Chalmers in response to successful application of the skip gesture.
Regarding claim 13, Chalmers fails to explicitly disclose: wherein the determination that the gesture satisfies the predetermined criteria comprises: in accordance with a determination that the message provided at the second electronic device is associated with a plurality of predetermined responses, determining that the gesture does not satisfy the predetermined criteria.
However, Takahashi teaches: wherein the determination that the gesture satisfies the predetermined criteria comprises: in accordance with a determination that the message provided at the second electronic device is associated with a plurality of predetermined responses, determining that the gesture does not satisfy the predetermined criteria (FIGS. 13, 14; paragraphs [0223], [0226], [0227], [0246], [0258], [0259]; in step 112, it is determined whether a user’s input gesture matches an item selection gesture [predetermined criteria]. This matching of a gesture to “predetermined criteria” also occurs with regard to a menu pause gesture [step 117] and vague behavior [step 121]. These gesture detections are accomplished by comparing a user’s movement, as detected by first headphone unit 1a, to the item selection gesture with respect to the particular voice data [message] being output. When a user’s turning of their face to the left or right is either too slow or an insufficient angle [plurality of predetermined responses], step 112 determines that the item selection gesture has not been detected [predetermined criteria is not satisfied]. Each of the gestures of steps 112, 117, and 121 corresponds to a “plurality of predetermined responses” either matching or not matching each of the above gestures. Since each gesture corresponds to the specific content of the output voice data [message], the voice data [message] itself is associated with the predetermined responses).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers and Takahashi to yield predictable results. Specifically, it would have been obvious to incorporate vague movements as not clearly corresponding to a predetermined gesture, as taught by Takahashi, in the device of Chalmers to disregard movements that do not clearly match the skip gesture.
Regarding claim 14, Chalmers fails to explicitly disclose: the one or more programs including instructions for: in accordance with a determination that the message provided at the second electronic device is associated with a plurality of predetermined responses and a determination that the gesture corresponds to a rejection gesture: causing a rejection tone to be provided at the second electronic device, ceasing to receive the motion data.
However, Takahashi teaches: the one or more programs including instructions for: in accordance with a determination that the message provided at the second electronic device is associated with a plurality of predetermined responses (FIGS. 13, 14; paragraphs [0226], [0246]; the first voice data [message] is associated with a plurality of predetermined responses including an item selection gesture [step 112] and a menu pause gesture [step 117]) and a determination that the gesture corresponds to a rejection gesture (FIG. 14; paragraph [0246]; in step 117, the user’s input gesture may be detected as a menu pause [rejection] gesture. This is a “rejection” gesture because the menu item corresponding to the first voice data [message] is not selected and is therefore rejected):
causing a rejection tone to be provided at the second electronic device (FIG. 14; paragraph [0250]; in step 120, a detection sound [rejection tone] is output corresponding to the menu pause [rejection] gesture from step 117); and
ceasing to receive the motion data (FIGS. 11, 14; paragraphs [0250], [0251]; after step 120, the operations then return to step 101 to restart the processing performed by information processing apparatus 103. Accordingly, the “motion data” received at first headphone unit 1a [second electronic device] “ceases” by the process restarting for a subsequent iteration).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers and Takahashi to yield predictable results. Specifically, it would have been obvious to incorporate the audio tone as a sound output in response to a rejection gesture, as taught by Takahashi, in the device of Chalmers.
Regarding claim 15, Chalmers fails to explicitly disclose: the one or more programs including instructions for: in accordance with a determination that the message provided at the second electronic device is associated with a plurality of predetermined responses and a determination that the gesture corresponds to an acceptance gesture: causing an acceptance tone to be provided at the second electronic device; and ceasing to receive the motion data.
However, Takahashi teaches: the one or more programs including instructions for: in accordance with a determination that the message provided at the second electronic device is associated with a plurality of predetermined responses (FIGS. 13, 14; paragraphs [0226], [0246]; the first voice data [message] is associated with a plurality of predetermined responses including an item selection gesture [step 112] and a menu pause gesture [step 117]) and a determination that the gesture corresponds to an acceptance gesture (FIG. 13; paragraphs [0225], [0226]; in step 112, the user’s input gesture may be detected as an item selection [acceptance] gesture. This is an “acceptance” gesture because the menu item corresponding to the first voice data [message] is selected and is therefore accepted):
causing an acceptance tone to be provided at the second electronic device (FIG. 13; paragraphs [0225], [0226], [0230]; once a user’s input gesture is detected as matching the item selection gesture in step 112 [predetermined criteria], a detection sound [audible tone] is output in step 115); and
ceasing to receive the motion data (FIGS. 11, 13; paragraph [0246]; after step 116, the operations then return to step 101 to restart the processing performed by information processing apparatus 103. Accordingly, the “motion data” received at first headphone unit 1a [second electronic device] “ceases” by the process restarting for a subsequent iteration).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers and Takahashi to yield predictable results. Specifically, it would have been obvious to incorporate the audio tone as a sound output in response to an acceptance gesture, as taught by Takahashi, in the device of Chalmers.
Regarding claim 16, Chalmers teaches: wherein the message includes one or more words corresponding to an interrogatory (FIG. 5AB; paragraph [0223]; the “message” may correspond to a calendar event invitation 560-3 by announcing the details of the invitation and asking the user via second device 301 / 502 whether they would like to “Accept or Decline?” [interrogatory]), and
the plurality of predetermined responses include an acceptance of the interrogatory and a rejection of the interrogatory (FIG. 5AB; paragraph [0223]; the user may either “Accept” or “Decline” the invitation).
Regarding claim 17, Chalmers teaches: the one or more programs including instructions for: in accordance with a determination that the gesture does not satisfy the predetermined criteria, continuing to provide the message at the second electronic device (FIG. 8; paragraphs [0297], [0298]; as set forth above with regard to claim 1, when a skip gesture of a user’s head movement in the first direction [backward or tilting] is not input by the user, the audio content continues being output, including the second portion thereof).
Regarding claim 18, Chalmers teaches: the one or more programs including instructions for: in accordance with a determination that the gesture does not satisfy the predetermined criteria:
continuing to receive, from the second electronic device, motion data corresponding to movement of the second electronic device (FIG. 8; paragraphs [0297], [0298]; when audio content is being presented to a user, it is implied that the second electronic device continues to monitor head movement for movement in the first direction [backward or tilting] to determine whether any remaining audio content [second portion] should be forgone / skipped); and
continuing to determine the gesture based on the motion data (FIG. 8; paragraphs [0297], [0298]; when audio content is being presented to a user, it is implied that the monitoring for the skip gesture [head movement in the first direction] continues to determine whether any remaining audio content [second portion] should be forgone / skipped).
Regarding claim 19, Chalmers teaches: a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a first electronic device (FIG. 3A; paragraphs [0005], [0011], [0140]; [first] device 300 includes a CPU 310, memory 370, and programs stored in memory 370. As disclosed, processors, such as CPU 310, execute programs stored in memory 370, i.e., a non-transitory computer-readable storage medium), the one or more programs including instructions for performing operations.
The remainder of this claim is identical in scope to the subject matter rejected above with regard to claim 1. Accordingly, the remainder of this claim is rejected for at least the same reasons set forth above with regard to claim 1. A duplication of the above rejection is not included in this Office Action for the purpose of brevity.
Regarding claim 20, Chalmers teaches: a computer-implemented method, comprising: at a first electronic device with one or more processors and memory: performing operations (FIG. 3A; paragraphs [0011], [0140]; [first] device 300 includes a CPU 310, memory 370, and programs stored in memory 370. As disclosed, processors, such as CPU 310, execute programs stored in memory 370).
The remainder of this claim is identical in scope to the subject matter rejected above with regard to claim 1. Accordingly, the remainder of this claim is rejected for at least the same reasons set forth above with regard to claim 1. A duplication of the above rejection is not included in this Office Action for the purpose of brevity.
11. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Chalmers in view of Takahashi, as applied to claim 1 above, and further in view of Lasater et al. (U.S. Pub. 2022/0035479).
Regarding claim 4, neither Chalmers nor Takahashi explicitly discloses: the one or more programs including instructions for: prior to causing the message to be provided at the second electronic device, providing an informational prompt indicating that the providing of the message at the second electronic device may be aborted based on a respective gesture using the second electronic device.
However, Chalmers discloses that a particular user’s input gesture may correspond to a skip gesture which results in stopping the “message” from being output (FIG. 8; paragraphs [0286], [0292], [0293], [0297], [0298]).
Additionally, Lasater discloses that a pop-up message may be provided that informs a user how to input a cancel command (paragraph [0058]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers, Takahashi, and Lasater to yield predictable results. More specifically, the teachings of a system including a device and earbud where the audio presentation of a message may be cancelled or aborted in response to a skip gesture, as taught by Chalmers, are known. Additionally, the teachings of a system that provides a pop-up message to inform a user how to input a cancel command, as taught by Lasater, are known as well. The combination of the known teachings of Chalmers and Lasater would yield the predictable results of a system including a device and earbud where the audio presentation of a message may be cancelled or aborted in response to a skip gesture, where the user is informed of the skip gesture via a pop-up message / prompt. In other words, it would have been obvious to either display on a device instructions to cancel the provision of a message to an earbud or include cancellation instructions at the outset of providing audio of the message to the earbud. Such a modification of Chalmers requires nothing more than providing instructions to a user on how to cancel an operation, as disclosed by Lasater, so that a user knows how to cancel or abort the audio presentation of a message, as disclosed by Chalmers. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers, Takahashi, and Lasater to yield the aforementioned predictable results.
12. Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Chalmers in view of Takahashi, as applied to claim 1 above, and further in view of Spencer-Harper et al. (U.S. Pub. 2017/0177074).
Regarding claim 5, Chalmers fails to explicitly disclose: the one or more programs including instructions for: while the message is being provided at a second electronic device: determining a first partial gesture based on the motion data; and in response to a determination that the first partial gesture satisfies a first predetermined partial gesture criteria, causing a first audible tone to be provided at the second electronic device.
However, Chalmers discloses that the input gestures in general may be directional [left vs. right, forward vs. backward] (paragraphs [0137], [0298]).
Additionally, it was well-known and conventional in the art before the effective filing date of Applicant’s claimed invention for input gestures to include a single click, double click, triple click, etc.
Accordingly, it would have been obvious for any disclosed input gesture, including the skip gesture, to include a double movement, such as backward movement or tilting in a specific direction.
Moreover, Takahashi discloses that a detection sound [audible tone] is output in response to a user’s input gesture being detected as matching a predefined gesture in step 112 (FIG. 13; paragraphs [0225], [0226], [0230]).
Furthermore, with regard to claim 5, Spencer-Harper discloses: a gesture may include multiple conditions and may be partially satisfied and fully satisfied (paragraph [0044]). Spencer-Harper further discloses that a user may be informed of the progress of a partially completed gesture (paragraph [0076]).
The combination of the known teachings of Chalmers, Takahashi, and Spencer-Harper teaches: the one or more programs including instructions for: while the message is being provided at a second electronic device:
determining a first partial gesture based on the motion data (Chalmers; paragraph [0298]; input gestures, such as the skip input gesture, may be directional, i.e., left vs. right or forward vs. backward. Spencer-Harper; paragraph [0044]; a gesture may include multiple conditions and may be partially satisfied and fully satisfied. Additionally, it was well-known and conventional in the art before the effective filing date of Applicant’s claimed invention for input gestures to include a single click, double click, triple click, etc. Accordingly, it would have been obvious for the skip gesture to include, as an example, a double head tilt towards the right direction. When the teachings of Spencer-Harper of a partial satisfaction of a gesture input are combined with Chalmers in view of well-known and conventional input gestures, a first head tilt towards the right direction would be a “first partial gesture” and a second head tilt towards the right direction would be a “second partial gesture”); and
in response to a determination that the first partial gesture satisfies a first predetermined partial gesture criteria (Chalmers; as set forth above, when the “first partial gesture” corresponds to a portion of the skip gesture), causing a first audible tone to be provided at the second electronic device (Spencer-Harper; paragraphs [0044], [0076]; a gesture may include multiple conditions and may be partially satisfied and fully satisfied. Takahashi; FIG. 13; paragraphs [0225], [0226], [0230]; a detection sound [audible tone] is output in response to a user’s input gesture being detected as matching the item selection gesture in step 112. When this teaching is applied to the partial satisfaction of the skip gesture via the “first partial gesture” of a first head tilt towards the right direction of Chalmers set forth above, it would have been obvious to output a first detection sound [first audible tone] of Takahashi to indicate the progress of a partially completed gesture, as disclosed by Spencer-Harper).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers, Takahashi, and Spencer-Harper, in view of well-known and conventional teachings in the art, to yield predictable results. More specifically, the teachings of an earbud that receives a skip gesture via directional head movement, such as a right tilt, to control the output of audio, as taught by Chalmers, are known. Additionally, the teachings of outputting an audio tone when a user input gesture matches a predetermined gesture, as taught by Takahashi, are known. Moreover, the teachings of a gesture input that includes multiple conditions and may be partially and fully satisfied, where a user may be informed of the progress of a partially completed gesture, as taught by Spencer-Harper, are known as well. Furthermore, the teachings of a double input type gesture, such as a double click, were well-known and conventional in the art.
The combination of the known teachings of Chalmers, Takahashi, Spencer-Harper, and well-known and conventional teachings in the art would yield the predictable results of a skip gesture that includes multiple conditions, such as a double tilt to the left or right, where an audio tone is output when the user’s input matches a first part of the skip gesture. In other words, it would have been obvious to use well-known, conventional double input gestures as the skip gesture of Chalmers such that two tilts to the right skip the rest of a message that is being output audibly by the earbud. When combined with Chalmers and Spencer-Harper, an audio tone of Takahashi would be output to indicate completion of a first partial gesture, i.e., a first tilt to the right, of the skip gesture. All that is required by this combination is the recognition that more complex gestures may be utilized in Chalmers, as set forth above, along with the partial gesture recognition and user notification thereof as disclosed by Spencer-Harper and Takahashi. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers, Takahashi, and Spencer-Harper, in view of well-known and conventional teachings in the art, to yield the aforementioned predictable results.
Regarding claim 6, Chalmers fails to explicitly disclose: the one or more programs including instructions for: while the message is being provided at a second electronic device: determining a second partial gesture based on the motion data; and in response to a determination that the second partial gesture satisfies a second predetermined partial gesture criteria, causing a second audible tone to be provided at the second electronic device.
However, Chalmers discloses that the input gestures in general may be directional [left vs. right, forward vs. backward] (paragraphs [0137], [0298]).
Additionally, it was well-known and conventional in the art before the effective filing date of Applicant’s claimed invention for input gestures to include a single click, double click, triple click, etc.
Accordingly, it would have been obvious for any disclosed input gesture, including the skip gesture, to include a double movement, such as backward movement, forward movement, or tilting in a specific direction.
Moreover, Takahashi discloses that a detection sound [audible tone] is output in response to a user’s input gesture being detected as matching the item selection gesture in step 112 (FIG. 13; paragraphs [0225], [0226], [0230]).
Furthermore, with regard to claim 6, Spencer-Harper discloses: a gesture may include multiple conditions and may be partially satisfied and fully satisfied (paragraph [0044]). Spencer-Harper further discloses that a user may be informed of the progress of a partially completed gesture (paragraph [0076]).
The combination of the known teachings of Chalmers, Takahashi, and Spencer-Harper teaches: the one or more programs including instructions for: while the message is being provided at a second electronic device:
determining a second partial gesture based on the motion data (Chalmers; paragraph [0298]; input gestures, such as the skip input gesture, may be directional, i.e., left vs. right or forward vs. backward. Spencer-Harper; paragraph [0044]; a gesture may include multiple conditions and may be partially satisfied and fully satisfied. Additionally, it was well-known and conventional in the art before the effective filing date of Applicant’s claimed invention for input gestures to include a single click, double click, triple click, etc. Accordingly, it would have been obvious for the skip gesture to include a double head tilt towards the right direction. When the teachings of Spencer-Harper of a partial satisfaction of a gesture input are combined with Chalmers in view of well-known and conventional input gestures, a first head tilt towards the right direction would be a “first partial gesture” and a second head tilt towards the right direction would be a “second partial gesture”); and
in response to a determination that the second partial gesture satisfies a second predetermined partial gesture criteria (Chalmers; as set forth above, when the “second partial gesture” corresponds to a portion of the skip gesture), causing a second audible tone to be provided at the second electronic device (Spencer-Harper; paragraphs [0044], [0076]; a gesture may include multiple conditions and may be partially satisfied and fully satisfied. Takahashi; FIG. 13; paragraphs [0225], [0226], [0230]; a detection sound [audible tone] is output in response to a user’s input gesture being detected as matching the item selection gesture in step 112. When this teaching is applied to the partial satisfaction of the skip gesture via the “second partial gesture” of a second head tilt towards the right direction of Chalmers set forth above, it would have been obvious to output a second detection sound [second audible tone] of Takahashi to indicate the completion of a multi-part gesture, as disclosed by Spencer-Harper).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to combine the known teachings of Chalmers, Takahashi, and Spencer-Harper, in view of well-known and conventional teachings in the art, to yield predictable results for at least the reasons set forth above with regard to claim 5.
13. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Chalmers in view of Takahashi, as applied to claim 1 above, as evidenced by Vleugels et al. (U.S. Pub. 2021/0177306).
Regarding claim 11, neither Chalmers nor Takahashi explicitly discloses: wherein determining the gesture based on the motion data comprises: obtaining a motion classification probability based on the motion data; and in accordance with a determination that the motion classification probability exceeds a motion classification probability threshold, determining the gesture based on a respective motion classification.
However, it was well-known and conventional in the art before the effective filing date of Applicant’s claimed invention to use confidence values and probability with regard to gesture inputs and predetermined gesture commands to ensure that a user’s gesture input is accurately detected and processed.
Please see at least paragraph [0036] of Vleugels for evidence of this well-known and conventional teaching.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of Applicant’s claimed invention to modify the known teachings of Chalmers with well-known and conventional teachings in the art to yield predictable results. Specifically, it would have been obvious to utilize the well-known and conventional concepts of confidence values and probability with regard to matching gesture inputs with predetermined gesture commands to ensure that a user’s gesture input is accurately detected and processed. While not explicitly disclosed in Chalmers, it is suggested, based on well-known and conventional teachings in the art, that confidence values and probability values are incorporated into the gesture detection thereof.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN A LUBIT whose telephone number is (571)270-3389. The examiner can normally be reached M - F, ~6am - 3pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Temesghen Ghebretinsae can be reached at 571-272-3017. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN A LUBIT/Primary Examiner, Art Unit 2626