Prosecution Insights
Last updated: April 19, 2026
Application No. 18/638,399

Devices, Methods, and User Interfaces for Providing Audio Notifications

Final Rejection — §103, §112

Filed: Apr 17, 2024
Examiner: STORK, KYLE R
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 4 (Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 0m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 64% — grants 64% of resolved cases (554 granted / 865 resolved; +9.0% vs TC avg)
Interview Lift: +28.3% — strong lift for resolved cases with an interview vs. without
Typical Timeline: 4y 0m average prosecution; 51 applications currently pending
Career History: 916 total applications across all art units

Statute-Specific Performance

§101: 14.9% (-25.1% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)

Comparisons are against the Tech Center average estimate • Based on career data from 865 resolved cases

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This final office action is in response to the amendment filed 19 December 2025. Claims 1-3, 5-16, 18-20, and 22-24 are pending. Claims 1, 14, and 20 are independent claims.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-3, 5-16, 18-20, and 22-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With respect to independent claims 1, 14, and 20, the claims recite: "in response to detecting the first movement of the one or more wearable audio output devices, and in accordance with a determination that the first movement of the one or more wearable audio output devices meets first movement criteria, outputting, via the one or more wearable audio output devices, additional audio content corresponding to the one or more events; detecting second movement of the one or more wearable audio output devices, corresponding to head movement of the user of the one or more wearable audio output devices; and in response to detecting the second movement of the one or more wearable audio output devices, and in accordance with a determination that the second movement of the one or more wearable audio output devices meets second movement criteria different from the first movement criteria, forgoing outputting, via the one or more wearable audio output devices, additional audio content corresponding to the one or more events" (claim 1, lines 11-22; emphasis added).

The claims appear to recite, in sequence, receiving a first input to output the additional audio content corresponding to the one or more events, then receiving a second input to forgo output of the additional audio content corresponding to the one or more events. It is unclear how the additional audio content can be output and then, after this has occurred, suppressed. For this reason, independent claims 1, 14, and 20 are indefinite. Dependent claims 2-3, 5-13, 15-16, 18-19, and 22-24 fail to cure the deficiencies of independent claims 1, 14, and 20 and are rejected under similar rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 5-7, 12-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Paek et al. (US 2013/0155237, published 20 June 2013, hereafter Paek) in view of Hardi (US 2018/0048750, published 15 February 2018), further in view of VanBlon et al. (US 2015/0177841, published 25 June 2015, hereafter VanBlon), further in view of Binder et al. (US 10282057, filed 28 July 2015, hereafter Binder), and further in view of Chesluk et al. (US 2016/0065641, published 3 March 2016, hereafter Chesluk).

As per independent claim 1, Paek discloses a method comprising, at an electronic device (Figure 4, item 104: Here, a handheld device operates in gesture capture mode (paragraph 0138)): detecting an occurrence of one or more first events (Figures 24-25; paragraphs 0138-0139: Here, a device is placed into a mount (first event) and gesture capture mode is triggered); detecting movement (Figure 27; paragraph 0141: Here, a camera is used to detect user gestures by the device in gesture capture mode); and, in response to detecting the movement and in accordance with a determination that a first movement meets first movement criteria, outputting, via the one or more audio output devices, additional audio content corresponding to one or more events (Figures 12-13; paragraph 0122: Here, a gesture is detected to change playback of media to a user. This includes gestures to stop playback (Figure 10), begin playback (Figure 11), return to a previous item (Figure 12), and advance to a next item (Figure 13)).

Paek fails to specifically disclose: an electronic device that is in communication with one or more wearable audio output devices; after outputting the audio content corresponding to the one or more first events, detecting movement of the one or more wearable audio output devices; and determining that movement of the one or more wearable audio output devices meets a first criteria.

However, Hardi, which is analogous to the claimed invention because it is directed toward a wearable audio output device capturing gesture events to control audio data, discloses: an electronic device that is in communication with one or more wearable audio output devices (Figures 8-9; paragraphs 0075-0076: Here, a camera is included in the earcups of a set of headphones. The headphones are wearable audio output devices); and, after outputting the audio content corresponding to the one or more first events, detecting movement of the one or more wearable audio output devices (paragraph 0196: Here, a user performs a gesture input, such as a forward swipe, to advance the audio track).

It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined the wearable audio device including a camera of Hardi with the camera capturing gestures of Paek, with a reasonable expectation of success, as it would have allowed a user to capture gestures using a wearable audio device. This would have provided a user with the advantage of using an integrated forward-facing camera to capture gesture inputs instead of using a handheld/mounted device while allowing a user to interact with their mobile device (Hardi: paragraph 0002).

Further, VanBlon, which is analogous to the claimed invention because it is directed toward moving a wearable device to indicate a gesture, discloses movement of the one or more wearable audio devices and determining that movement of the audio device meets a first criteria (paragraph 0006: Here, an initial position of a wearable device, such as a watch, is determined and a plurality of sensors are used to determine gestures of the device (paragraph 0046)). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined VanBlon with Paek-Hardi, with a reasonable expectation of success, as it would have allowed for determining gestures based upon device movement. This would have allowed for more precise tracking, as it would have enabled identifying gestures outside of a field of view of a camera.

Paek fails to specifically disclose detecting movement of the one or more wearable audio output devices corresponding to head movement of a user of the one or more wearable audio output devices.
However, Binder, which is analogous to the claimed invention because it is directed toward detecting gesture movements, discloses detecting movement of the one or more wearable audio output devices corresponding to head movement of a user of the one or more wearable audio output devices (column 6, lines 16-28: Here, a wearable device is equipped with one or more gyroscopes/accelerometers that sense head gestures). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Binder with Paek-Hardi-VanBlon, with a reasonable expectation of success, as it would have allowed for capturing head movements via a wearable device (Binder: column 6, lines 16-28). This would have provided the user with the advantage of performing hands-free gestures.

While Paek discloses detecting a second movement of a plurality of movements of one or more wearable audio devices (Figures 12-13; paragraph 0122: Here, a gesture is detected to change playback of media to a user. This includes gestures to stop playback (Figure 10), begin playback (Figure 11), return to a previous item (Figure 12), and advance to a next item (Figure 13)), and Binder discloses wherein the movement is a head movement (column 6, lines 16-28), the combination of Paek-Hardi-VanBlon-Binder fails to specifically disclose: in response to detecting the second movement, and in accordance with a determination that the second movement meets a second movement criteria different from the first movement criteria, forgoing outputting additional content corresponding to the one or more events.
However, Chesluk, which is analogous to the claimed invention because it is directed toward receiving input to either play or forgo playing content after receiving a preview, discloses: in accordance with a determination that the second movement meets a second movement criteria different from the first movement criteria, forgoing outputting additional content corresponding to the one or more events (Figure 2f; paragraph 0302: Here, a playback preview is provided (item 276) and user input is received (item 277). If negative feedback to skip or forgo playback is received, the feedback is provided and content progresses to the next item (item 280). If non-negative feedback is received, the content item plays to completion (item 279)). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Chesluk with Paek-Hardi-VanBlon-Binder, with a reasonable expectation of success, as it would have allowed a user to either confirm or skip playback of items (Chesluk: Figure 2f).

As per dependent claim 2, Paek, Hardi, VanBlon, Binder, and Chesluk disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Paek discloses wherein the additional audio content comprises additional audio content for the one or more first events (paragraphs 0120-0122: Here, a user performs a gesture to initiate playback of a media content item (paragraph 0120). Based upon another gesture, the user may advance to the next item (additional audio content)).

As per dependent claim 3, Paek, Hardi, VanBlon, Binder, and Chesluk disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein.
Paek discloses wherein the audio content corresponding to the one or more first events includes a simulated spatial location that is associated with the audio content corresponding to the one or more first events, and wherein the first movement criteria include a criterion with respect to movement of the one or more wearable audio output devices toward the simulated spatial location (Figure 4; paragraph 0052: Here, an interaction space is a simulated spatial location that allows for interacting with the device. The gesture recognition engine identifies gestures within the interaction space to control the device (paragraph 0088). This includes controlling playback of media content items (paragraph 0120)).

As per dependent claim 5, Paek, Hardi, VanBlon, Binder, and Chesluk disclose the limitations similar to those in claim 4, and the same rejection is incorporated herein. Paek discloses wherein the audio content corresponding to the one or more first events includes a simulated spatial location that is associated with the audio content corresponding to the one or more first events, and wherein the second movement criteria include a criterion with respect to movement away from the simulated spatial location (Figures 12-13; paragraphs 0120 and 0122-0123).

As per dependent claim 6, Paek, Hardi, VanBlon, and Binder disclose the limitations similar to those in claim 5, and the same rejection is incorporated herein. Paek discloses wherein the second movement criteria include a criterion with respect to movement away from the simulated spatial location during output of the audio content corresponding to the one or more first events (Figures 12-13; paragraphs 0120 and 0122-0123).

As per dependent claim 7, Paek, Hardi, VanBlon, Binder, and Chesluk disclose the limitations similar to those in claim 4, and the same rejection is incorporated herein.
VanBlon discloses wherein the movement of the one or more wearable output devices that meets the second movement criteria includes backward movement, backward tilting, or movement away from a simulated spatial location (paragraph 0046: Here, a gesture detector determines a movement away from an initial position of the electronic device). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined VanBlon with Paek-Hardi, with a reasonable expectation of success, as it would have allowed for determining gestures based upon device movement. This would have allowed for more precise tracking, as it would have enabled identifying gestures outside of a field of view of a camera.

As per dependent claim 12, Paek, Hardi, VanBlon, Binder, and Chesluk disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Paek discloses wherein the first movement of the one or more wearable audio output devices includes forward movement, forward tilting, or movement toward a simulated spatial location associated with the audio content corresponding to the one or more first events (Figure 10; paragraph 0120: Here, a user "extends his or her hand 1002 such that its palm generally faces the front surface of the mobile device." This gesture includes a forward movement or forward tilting).

As per dependent claim 13, Paek, Hardi, VanBlon, Binder, and Chesluk disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Paek discloses wherein the one or more devices includes one or more accelerometers, and wherein detecting movement of the one or more devices is based on movement detected by the one or more accelerometers (paragraph 0063: Here, the handheld device includes an accelerometer to detect movement).
With respect to claims 14-16 and 18-19, the applicant discloses the limitations substantially similar to those in claims 1-3 and 5-6, respectively. Paek further discloses one or more processors (paragraph 0144) and memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors (paragraph 0145). Claims 14-16 and 18-19 are similarly rejected.

With respect to claim 20, the applicant discloses the limitations substantially similar to those in claim 1. Paek further discloses a computer readable storage medium storing one or more programs, the one or more programs comprising instructions (paragraph 0145). Claim 20 is similarly rejected.

Claims 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Paek, Hardi, VanBlon, Binder, and Chesluk and further in view of MacArthur (US 2015/0347403, published 3 December 2015).

As per dependent claim 8, Paek, Hardi, VanBlon, Binder, and Chesluk disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Paek fails to specifically disclose: in response to detecting a second movement of the one or more wearable audio output devices, and in accordance with a determination that the second movement of the one or more wearable audio output devices meets second movement criteria, reducing a verbosity of subsequent audio content corresponding to one or more second events.

However, MacArthur, which is analogous to the claimed invention because it is directed toward receiving a gesture and reducing verbosity of content, discloses: in response to detecting a movement via a gesture, and in accordance with a determination that the movement meets movement criteria, reducing a verbosity of subsequent content corresponding to one or more events (Figure 4; paragraphs 0048-0051: Here, a gesture input is detected (item 410) and interpreted as correlating to a summarization command (item 420).
A summary is a less verbose content item than the original content. This summary content is then presented to a user (item 430)). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined MacArthur's teaching of summarizing contents based upon a received gesture with Paek-Hardi-VanBlon, as it would have allowed a user to perform gestures to receive summarized audio contents. This would have allowed a user to save time in reviewing/receiving audio contents.

As per dependent claim 9, Paek, Hardi, VanBlon, Binder, Chesluk, and MacArthur disclose the limitations similar to those in claim 8, and the same rejection is incorporated herein. Paek discloses wherein the audio content corresponding to the one or more first events includes a simulated spatial location that is associated with the audio content corresponding to the one or more first events, and wherein the second movement criteria include a criterion with respect to movement away from the simulated spatial location (Figures 12-13; paragraphs 0120 and 0122-0123).

As per dependent claim 10, Paek, Hardi, VanBlon, Binder, Chesluk, and MacArthur disclose the limitations similar to those in claim 9, and the same rejection is incorporated herein. Paek discloses wherein the second movement criteria include a criterion with respect to movement away from the simulated spatial location during output of the audio content corresponding to the one or more first events (Figures 12-13; paragraphs 0120 and 0122-0123).

As per dependent claim 11, Paek, Hardi, VanBlon, Binder, Chesluk, and MacArthur disclose the limitations similar to those in claim 8, and the same rejection is incorporated herein.
VanBlon discloses wherein the movement of the one or more wearable output devices that meets the second movement criteria includes backward movement, backward tilting, or movement away from a simulated spatial location (paragraph 0046: Here, a gesture detector determines a movement away from an initial position of the electronic device). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined VanBlon with Paek-Hardi, with a reasonable expectation of success, as it would have allowed for determining gestures based upon device movement. This would have allowed for more precise tracking, as it would have enabled identifying gestures outside of a field of view of a camera.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Paek, Hardi, VanBlon, Binder, and Chesluk and further in view of Gruber et al. (US 2013/0275138, published 17 October 2013, hereafter Gruber).

As per dependent claim 22, Paek, Hardi, VanBlon, Binder, and Chesluk disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Paek fails to specifically disclose wherein the audio content corresponding to the one or more first events comprises a verbal summary of the one or more inputs.

However, Gruber, which is analogous to the claimed invention because it is directed toward a verbal summary, discloses wherein the audio content corresponding to the one or more first events comprises a verbal summary of the one or more inputs (paragraphs 0199 and 0270: Here, verbal outputs are provided to a user to allow for interacting with the system. This includes providing a verbal summary to a user, wherein the summary is read aloud, to allow the user to interact with the summary (paragraph 0270)).
It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Gruber with Paek-Hardi-VanBlon-Binder, with a reasonable expectation of success, as it would have allowed for providing summarized information to a user in a hands-free environment (Gruber: paragraph 0270). This would have allowed a user to interact in this hands-free environment in order to improve the safety of the user while performing tasks, such as driving.

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Paek, Hardi, VanBlon, Binder, and Chesluk and further in view of Yang et al. (US 8250493, patented 21 August 2012, hereafter Yang).

As per dependent claim 23, Paek, Hardi, VanBlon, Binder, and Chesluk disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Paek fails to specifically disclose wherein the movement criteria include a requirement that the movement is detected within a predefined time period of the outputting of the content corresponding to the one or more first events.

However, Yang, which is analogous to the claimed invention because it is directed toward detecting a gesture within a predefined time period, discloses wherein the movement criteria include a requirement that the movement is detected within a predefined time period of the outputting of the audio content corresponding to the one or more first events (claim 1: Here, a gesture is received within a time period). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Yang with Paek-Hardi-VanBlon-Binder, with a reasonable expectation of success, as it would have allowed for determining the association of a gesture based upon the time period in which it is received (Yang: claim 1).

Response to Arguments

Applicant's arguments have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.
However, upon further consideration, a new ground(s) of rejection is made in view of Paek, Hardi, VanBlon, Binder, and Chesluk.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Marko et al. (US 2013/0287212) discloses playing a preview audio and, responsive to user input, playing the entirety of the audio (paragraph 0065) or skipping the content (paragraph 0120).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK whose telephone number is (571) 272-4130. The examiner can normally be reached 8am - 2pm; 4pm - 6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R STORK/
Primary Examiner, Art Unit 2128

Prosecution Timeline

Apr 17, 2024: Application Filed
Jan 04, 2025: Non-Final Rejection — §103, §112
Apr 04, 2025: Applicant Interview (Telephonic)
Apr 04, 2025: Examiner Interview Summary
Apr 08, 2025: Response Filed
Jun 03, 2025: Final Rejection — §103, §112
Jul 22, 2025: Interview Requested
Aug 05, 2025: Applicant Interview (Telephonic)
Aug 09, 2025: Examiner Interview Summary
Aug 27, 2025: Request for Continued Examination
Sep 04, 2025: Response after Non-Final Action
Sep 18, 2025: Non-Final Rejection — §103, §112
Nov 12, 2025: Applicant Interview (Telephonic)
Nov 14, 2025: Examiner Interview Summary
Dec 19, 2025: Response Filed
Jan 09, 2026: Final Rejection — §103, §112
Mar 16, 2026: Applicant Interview (Telephonic)
Mar 16, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585935: EXECUTION BEHAVIOR ANALYSIS TEXT-BASED ENSEMBLE MALWARE DETECTOR
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12585937: SYSTEMS AND METHODS FOR DEEP LEARNING ENHANCED GARBAGE COLLECTION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12585869: RECOMMENDATION PLATFORM FOR SKILL DEVELOPMENT
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579454: PROVIDING EXPLAINABLE MACHINE LEARNING MODEL RESULTS USING DISTRIBUTED LEDGERS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579412: SPIKE NEURAL NETWORK CIRCUIT INCLUDING SELF-CORRECTING CONTROL CIRCUIT AND METHOD OF OPERATION THEREOF
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 92% (+28.3%)
Median Time to Grant: 4y 0m
PTA Risk: High

Based on 865 resolved cases by this examiner. Grant probability derived from career allow rate.
