Prosecution Insights
Last updated: April 19, 2026
Application No. 18/770,207

Real-time Adaptation of Audio Playback

Non-Final OA · §103 · §DP
Filed: Jul 11, 2024
Examiner: YU, NORMAN
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (525 granted / 598 resolved; +25.8% vs TC avg, above average)
Interview Lift: +13.5% among resolved cases with interview (moderate, ~+14%)
Avg Prosecution: 2y 1m (fast prosecutor); 35 applications currently pending
Career History: 633 total applications across all art units
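The tiles above are internally consistent: the 88% career allow rate follows from the 525/598 counts, and the 35 pending applications equal the 633 total minus the 598 resolved. A quick check using only the figures quoted in this report:

```python
# Sanity-check the examiner figures shown above; every number comes
# from this report, nothing is fetched from PAIR or Patent Center.
granted, resolved, total = 525, 598, 633

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 87.8%, displayed as 88%

pending = total - resolved
print(f"Currently pending: {pending}")          # 35
```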

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 51.8% (+11.8% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 598 resolved cases
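If the deltas are read as simple percentage-point differences (an assumption; the report does not define them), each row implies the Tech Center baseline directly:

```python
# Implied TC 2600 baselines from the per-statute rates and deltas
# shown above, assuming delta = examiner rate - TC average.
stats = {
    "§101": (2.2, -37.8),
    "§103": (51.8, +11.8),
    "§102": (17.2, -22.8),
    "§112": (16.8, -23.2),
}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg {rate - delta:.1f}%")
```

Every statute resolves to the same ~40% baseline, which suggests the deltas were computed against a single blended Tech Center average rather than per-statute averages.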

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 21-40 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-30 of Patent 12041424 in view of Hartung (US 2021/0112354). Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1 and 28-29 of Patent 12041424 teach all the limitations of claims 21 and 39-40 of the instant application except for “detecting a change from the first media playback application to a second media playback application; responsive to the detecting, calibrating a second streaming media content provided by the second media playback application; and playing, by the audio output component to the environment, the calibrated second streaming media content.”

Hartung teaches detecting a change from the first media playback application to a second media playback application (Hartung ¶0155, “the playback device may apply a certain calibration based on the audio content that the playback device is playing back (or that it has been instructed to play back). For instance, the playback device may detect that it is playing back media content that consists of only audio (e.g., music). In such cases, the playback device may apply a particular calibration, such as a spectral calibration” and ¶0156, “the playback device may receive media content that is associated with both audio and video (e.g., a television show or movie). When playing back such content, the playback device may apply a particular calibration.
In some cases, the playback device may apply a spatial calibration,” with BRI, playing back audio only vs audio and video is considered first and second media playback applications. In addition, ¶0157 discusses receiving content via different sources and applying different calibrations in response, and figure 14 and ¶0161 discuss selecting a particular calibration); responsive to the detecting, calibrating a second streaming media content provided by the second media playback application (Hartung ¶0155, “apply a certain calibration based on the audio content that the playback device is playing back”); and playing, by the audio output component to the environment, the calibrated second streaming media content (Hartung ¶0156, “when playing back such content”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Hartung to improve the known device of Patent 12041424 to achieve the predictable result of optimizing audio reproduction by adapting to the audio source.

Dependent claims 22-24 and 27-38 are also rejected because they are obvious variants of the patented claims. Dependent claims of the instant application map to claims of Patent 12041424 as follows:

Claim 22 : Claim 23
Claim 23 : Claim 24
Claim 24 : Claim 25
Claim 25 : Claims 1, 28-29
Claim 26 : Claim 27
Claim 27 : Claim 20
Claim 28 : Claim 21
Claim 29 : Claim 2
Claim 30 : Claim 3
Claim 31 : Claim 4
Claim 32 : Claim 5
Claim 33 : Claim 6
Claim 34 : Claim 7
Claim 35 : Claim 8
Claim 36 : Claim 10
Claim 37 : Claim 11
Claim 38 : Claim 13

The compared claims are from Patent 12041424 (claims 1, 28, 29) and Instant Application 18/770207 (claims 21, 39, 40).

Patent 12041424, Claim 1:
A device comprising: an audio output controller communicatively linked to: an audio output component, an audio input component, and one or more media playback applications; and one or more processors operable to perform operations, the operations comprising: determining, by the audio output controller during a dynamic audio calibration process and based on a first digital signal representing a first portion of streaming media content provided by a media playback application to the audio output component, one or more audio characteristics of the streaming media content; playing, by the audio output component into an environment, the first portion of the streaming media content; capturing, by the audio input component, a second digital signal representing the first portion of the streaming media content as played by the audio output component into the environment; determining, based on the captured second digital signal, one or more detected audio characteristics of the streaming media content as played into the environment; determining a difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content; calibrating a second portion of the streaming media content based on the difference and based on a determination that the second portion is to be calibrated, wherein the calibrating comprises applying the difference as an input to a machine learning model in order to determine an output audio setting; and playing, by the audio output component to the environment, the second portion as calibrated. 28. 
A computer-implemented method comprising: determining, during a dynamic audio calibration process and based on a first digital signal representing a first portion of streaming media content provided by a media playback application to an audio output component, one or more audio characteristics of the streaming media content; playing, by the audio output component into an environment, the first portion of the streaming media content; capturing, by an audio input component, a second digital signal representing the first portion of the streaming media content as played by the audio output component into the environment; determining, based on the captured second digital signal, one or more detected audio characteristics of the streaming media content as played into the environment; determining a difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content; calibrating a second portion of the streaming media content based on the difference and based on a determination that the second portion is to be calibrated, wherein the calibrating comprises applying the difference as an input to a machine learning model in order to determine an output audio setting; and playing, by the audio output component to the environment, the second portion as calibrated. 29. 
An article of manufacture including a non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by one or more processors of a computing device, cause the computing device to carry out operations comprising: determining, during a dynamic audio calibration process and based on a first digital signal representing a first portion of streaming media content provided by a media playback application to an audio output component, one or more audio characteristics of the streaming media content; playing, by the audio output component into an environment, the first portion of the streaming media content; capturing, by an audio input component, a second digital signal representing the first portion of the streaming media content as played by the audio output component into the environment; determining, based on the captured second digital signal, one or more detected audio characteristics of the streaming media content as played into the environment; determining a difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content; calibrating a second portion of the streaming media content based on the difference and based on a determination that the second portion is to be calibrated, wherein the calibrating comprises applying the difference as an input to a machine learning model in order to determine an output audio setting; and playing, by the audio output component to the environment, the second portion as calibrated. 21. 
A device comprising: an audio output controller communicatively linked to: an audio output component, an audio input component, and one or more media playback applications; and one or more processors operable to perform operations, the operations comprising: determining, by the audio output controller during a dynamic audio calibration process and based on a first digital signal representing a first portion of streaming media content provided by a first media playback application to the audio output component, one or more audio characteristics of the streaming media content; playing, by the audio output component into an environment, the first portion of the streaming media content; capturing, by the audio input component, a second digital signal representing the first portion of the streaming media content as played by the audio output component into the environment; determining, based on the captured second digital signal, one or more detected audio characteristics of the streaming media content as played into the environment; determining a difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content; calibrating a second portion of the streaming media content based on the difference and based on a determination that the second portion is to be calibrated; playing, by the audio output component to the environment, the second portion as calibrated; detecting a change from the first media playback application to a second media playback application; responsive to the detecting, calibrating a second streaming media content provided by the second media playback application; and playing, by the audio output component to the environment, the calibrated second streaming media content. 39. 
A computer-implemented method comprising: determining, during a dynamic audio calibration process and based on a first digital signal representing a first portion of streaming media content provided by a first media playback application to an audio output component, one or more audio characteristics of the streaming media content; playing, by the audio output component into an environment, the first portion of the streaming media content; capturing, by an audio input component, a second digital signal representing the first portion of the streaming media content as played by the audio output component into the environment; determining, based on the captured second digital signal, one or more detected audio characteristics of the streaming media content as played into the environment; determining a difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content; calibrating a second portion of the streaming media content based on the difference and based on a determination that the second portion is to be calibrated; playing, by the audio output component to the environment, the second portion as calibrated; detecting a change from the first media playback application to a second media playback application; responsive to the detecting, calibrating a second streaming media content provided by the second media playback application; and playing, by the audio output component to the environment, the calibrated second streaming media content. 40. 
An article of manufacture including a non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by one or more processors of a computing device, cause the computing device to carry out operations comprising: determining, during a dynamic audio calibration process and based on a first digital signal representing a first portion of streaming media content provided by a first media playback application to an audio output component, one or more audio characteristics of the streaming media content; playing, by the audio output component into an environment, the first portion of the streaming media content; capturing, by an audio input component, a second digital signal representing the first portion of the streaming media content as played by the audio output component into the environment; determining, based on the captured second digital signal, one or more detected audio characteristics of the streaming media content as played into the environment; determining a difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content; calibrating a second portion of the streaming media content based on the difference and based on a determination that the second portion is to be calibrated; playing, by the audio output component to the environment, the second portion as calibrated; detecting a change from the first media playback application to a second media playback application; responsive to the detecting, calibrating a second streaming media content provided by the second media playback application; and playing, by the audio output component to the environment, the calibrated second streaming media content.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 21-24, 29, 36, and 38-40 is/are rejected under 35 U.S.C. 103 as being unpatentable over Master (US 2017/0041724) in view of Hartung (US 2021/0112354).

Regarding claim 21, Master teaches A device comprising: an audio output controller (Master figure 1, Calibration 112) communicatively linked to: an audio output component (Master figure 1, speakers 118), an audio input component (Master figure 1, microphone 122), and one or more media playback applications (Master figure 1, GUI 130 and ¶0046); and one or more processors operable to perform operations (Master figure 2B, processing 214), the operations comprising: determining, by the audio output controller during a dynamic audio calibration process and based on a first digital signal representing a first portion of streaming media content provided by a first media playback application to the audio output component (Master figure 2B, and ¶0030, Original audio signal A), one or more audio characteristics of the streaming media content (Master figure 2B, and ¶0030, “The original audio signal A sent to the speakers 212 is compared to the received signal B (after digital conversion) in a comparator circuit 216.
Due to operating conditions, device characteristics, ambient effects and other possible distortion or interference, the transmitted signal A may differ to some extent from the received signal B” and ¶0028 “overall requirements for the system 100 are that the device has both original signal and speaker output signal knowledge…information about a desired sound signal being output”); playing, by the audio output component into an environment, the first portion of the streaming media content (Master figure 2B, “an original digital audio signal 211 is played through the output and playback stages 212 of a device (e.g., amp and speakers) as an original signal A (denoted “digital 1”)”); capturing, by the audio input component, a second digital signal representing the first portion of the streaming media content as played by the audio output component into the environment (Master figure 2B, “The output from the speaker is heard by the user and received back through the microphone as an analog signal B”); determining, based on the captured second digital signal, one or more detected audio characteristics of the streaming media content as played into the environment (Master figure 2B, and ¶0030, “The original audio signal A sent to the speakers 212 is compared to the received signal B (after digital conversion) in a comparator circuit 216. Due to operating conditions, device characteristics, ambient effects and other possible distortion or interference, the transmitted signal A may differ to some extent from the received signal B”); determining a difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content (Master figure 2B, and ¶0030, “The original audio signal A sent to the speakers 212 is compared to the received signal B (after digital conversion) in a comparator circuit 216. 
Due to operating conditions, device characteristics, ambient effects and other possible distortion or interference, the transmitted signal A may differ to some extent from the received signal B”); calibrating a second portion (Master figure 2B, and ¶0030, “so that the user first hears signal A and then the corrected signal B′ thereafter”) of the streaming media content based on the difference and based on a determination that the second portion is to be calibrated (Master figure 2B, and ¶0030, “A tuning/calibration circuit 217 is used to correct the effects of any such distortion by filtering, equalization, phase shift or other similar means”); playing, by the audio output component to the environment, the second portion as calibrated (Master figure 2B, and ¶0030, “so that the user first hears signal A and then the corrected signal B′ thereafter”); however does not explicitly teach detecting a change from the first media playback application to a second media playback application; responsive to the detecting, calibrating a second streaming media content provided by the second media playback application; and playing, by the audio output component to the environment, the calibrated second streaming media content. Hartung teaches detecting a change from the first media playback application to a second media playback application (Hartung ¶0155, “the playback device may apply a certain calibration based on the audio content that the playback device is playing back (or that it has been instructed to play back). For instance, the playback device may detect that it is playing back media content that consists of only audio (e.g., music). In such cases, the playback device may apply a particular calibration, such as a spectral calibration” and ¶0156, “the playback device may receive media content that is associated with both audio and video (e.g., a television show or movie). When playing back such content, the playback device may apply a particular calibration. 
In some cases, the playback device may apply a spatial calibration,” with BRI, playing back audio only vs audio and video is considered first and second media playback applications. In addition, ¶0157 discusses receiving content via different sources and applying different calibrations in response, and figure 14 and ¶0161 discuss selecting a particular calibration); responsive to the detecting, calibrating a second streaming media content provided by the second media playback application (Hartung ¶0155, “apply a certain calibration based on the audio content that the playback device is playing back”); and playing, by the audio output component to the environment, the calibrated second streaming media content (Hartung ¶0156, “when playing back such content”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Hartung to improve the known device of Master to achieve the predictable result of optimizing audio reproduction by adapting to the audio source.

Regarding claim 22, Master in view of Hartung teaches determining the first digital signal representing the first portion of the streaming media content; and providing the first portion of the streaming media content to the audio output component of the device after the determining of the first digital signal (Master figure 2B, and ¶0030, Original audio signal A, with BRI, it has to be determined that audio signal A is to be outputted before it is sent to the speaker).

Regarding claim 23, Master in view of Hartung teaches receiving the streaming media content from a third party content provider over a communications network (Hartung ¶0157, “receiving content via a network interface”).
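The feedback loop that both claim 21 and the Master citations describe (compare the source signal's characteristics against what the microphone actually captured, then correct subsequent playback by the difference) reduces to a few lines. A minimal sketch, modeling signals as per-band levels in dB with hypothetical helper names; this is an illustration, not the claimed or cited implementation:

```python
# Minimal sketch of the calibration loop recited in claim 21.
# Signals are modeled as lists of per-band levels in dB; all names
# (band_difference, calibrate, etc.) are hypothetical.

def band_difference(reference_levels, captured_levels):
    """Per-band difference between the source signal's characteristics
    and the characteristics detected in the environment (claim 21's
    'determining a difference' step)."""
    return [r - c for r, c in zip(reference_levels, captured_levels)]

def calibrate(next_portion_levels, difference, max_correction_db=10.0):
    """Apply the difference as a correction to the next portion,
    capped per band (cf. the 10 dB deviation limit from Master
    ¶0048 quoted in the claim 29 analysis)."""
    capped = [max(-max_correction_db, min(max_correction_db, d))
              for d in difference]
    return [p + d for p, d in zip(next_portion_levels, capped)]

# First portion: what the app provided vs. what the microphone heard.
reference = [0.0, -3.0, -6.0]   # e.g. low/mid/high band levels, dB
captured  = [-4.0, -3.0, -1.0]  # room cut the lows, boosted the highs

diff = band_difference(reference, captured)      # [4.0, 0.0, -5.0]
second_portion = [0.0, -3.0, -6.0]
print(calibrate(second_portion, diff))           # [4.0, -3.0, -11.0]
```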
Regarding claim 24, Master in view of Hartung teaches wherein the operations for the determining of the one or more audio characteristics of the streaming media content further comprises: determining one or more artist-intended audio characteristics of the streaming media content by tapping into the first portion of the streaming media content, wherein the tapping occurs subsequent to the first portion being provided by the first media playback application, and prior to the first portion being received by the audio output component (Master ¶0030 “uses sound that a system user chooses to listen to,” it is known in the art to use tapping commands to make selections in a touch screen GUI).

Regarding claim 29, Master in view of Hartung teaches determining a target output signal by determining a value offset for an audio output associated with the device, and wherein the calibrating of the second portion further comprising: calibrating the second portion to be within a threshold of the target output signal (Master ¶0048, “It will modify player output in addition to the default tuning, but limit how much it modifies the sound. For example, the system shall not deviate from the default tuning of a given speaker device by more than 10 dB, or some other threshold”).

Regarding claim 36, Master in view of Hartung teaches wherein the audio output component comprises a plurality of speakers, and the operations further comprising: detecting one or more active speakers of the plurality of speakers, and wherein the calibrating of the second portion is performed to calibrate the second portion to be played from the one or more active speakers (Master ¶0021, “wireless speakers 119 that are connected to the device over a wireless link,” “Bluetooth”).
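The rejection's reading of Hartung maps “first and second media playback applications” onto content types (audio-only vs. audio-plus-video), each with its own calibration. That selection logic, sketched with hypothetical profile and function names:

```python
# Per-content calibration selection as the rejection reads Hartung
# ¶¶0155-0156: audio-only content gets a spectral calibration,
# audio-plus-video content gets a spatial one. Names are hypothetical.
PROFILES = {"audio_only": "spectral", "audio_video": "spatial"}

def select_calibration(content_kind):
    """Pick a calibration profile for the current content kind."""
    return PROFILES.get(content_kind, "default")

def playback_source_changed(previous, current):
    """Claim 21's 'detecting a change from the first media playback
    application to a second', reduced to a comparison."""
    return previous != current

# Switching from a music app to a video app triggers recalibration.
if playback_source_changed("music_app", "video_app"):
    print(select_calibration("audio_video"))  # spatial
```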
Regarding claim 38, Master in view of Hartung teaches wherein the one or more media playback applications are installed on the device, and wherein the audio output controller is configured to securely access streaming media content played by the one or more media playback applications (Master ¶0021, “The device 104 is generally a portable audio or music player or small computer or mobile telecommunication device that runs applications that allow for the playback of audio content”). Regarding claim 39, Master teaches A computer-implemented method comprising: determining, during a dynamic audio calibration process and based on a first digital signal representing a first portion of streaming media content provided by a first media playback application to an audio output component (Master figure 2B, and ¶0030, Original audio signal A), one or more audio characteristics of the streaming media content (Master figure 2B, and ¶0030, “The original audio signal A sent to the speakers 212 is compared to the received signal B (after digital conversion) in a comparator circuit 216. 
Due to operating conditions, device characteristics, ambient effects and other possible distortion or interference, the transmitted signal A may differ to some extent from the received signal B” and ¶0028 “overall requirements for the system 100 are that the device has both original signal and speaker output signal knowledge…information about a desired sound signal being output”); playing, by the audio output component into an environment, the first portion of the streaming media content (Master figure 2B, “an original digital audio signal 211 is played through the output and playback stages 212 of a device (e.g., amp and speakers) as an original signal A (denoted “digital 1”)”); capturing, by an audio input component, a second digital signal representing the first portion of the streaming media content as played by the audio output component into the environment (Master figure 2B, “The output from the speaker is heard by the user and received back through the microphone as an analog signal B”); determining, based on the captured second digital signal, one or more detected audio characteristics of the streaming media content as played into the environment (Master figure 2B, and ¶0030, “The original audio signal A sent to the speakers 212 is compared to the received signal B (after digital conversion) in a comparator circuit 216. Due to operating conditions, device characteristics, ambient effects and other possible distortion or interference, the transmitted signal A may differ to some extent from the received signal B”); determining a difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content (Master figure 2B, and ¶0030, “The original audio signal A sent to the speakers 212 is compared to the received signal B (after digital conversion) in a comparator circuit 216. 
Due to operating conditions, device characteristics, ambient effects and other possible distortion or interference, the transmitted signal A may differ to some extent from the received signal B”); calibrating a second portion (Master figure 2B, and ¶0030, “so that the user first hears signal A and then the corrected signal B′ thereafter”) of the streaming media content based on the difference and based on a determination that the second portion is to be calibrated (Master figure 2B, and ¶0030, “A tuning/calibration circuit 217 is used to correct the effects of any such distortion by filtering, equalization, phase shift or other similar means”); playing, by the audio output component to the environment, the second portion as calibrated (Master figure 2B, and ¶0030, “so that the user first hears signal A and then the corrected signal B′ thereafter”), however does not explicitly teach detecting a change from the first media playback application to a second media playback application; responsive to the detecting, calibrating a second streaming media content provided by the second media playback application; and playing, by the audio output component to the environment, the calibrated second streaming media content. Hartung teaches detecting a change from the first media playback application to a second media playback application (Hartung ¶0155, “the playback device may apply a certain calibration based on the audio content that the playback device is playing back (or that it has been instructed to play back). For instance, the playback device may detect that it is playing back media content that consists of only audio (e.g., music). In such cases, the playback device may apply a particular calibration, such as a spectral calibration” and ¶0156, “the playback device may receive media content that is associated with both audio and video (e.g., a television show or movie). When playing back such content, the playback device may apply a particular calibration. 
In some cases, the playback device may apply a spatial calibration,” with BRI, playing back audio only vs audio and video is considered first and second media playback applications. In addition ¶0157 discuss receiving content via different sources and applying different calibration in response, figure 14 and ¶0161 discusses selecting a particular calibration ); responsive to the detecting, calibrating a second streaming media content provided by the second media playback application (Hartung ¶0155, “apply a certain calibration based on the audio content that the playback device is playing back”); and playing, by the audio output component to the environment, the calibrated second streaming media content (Hartung ¶0156, “when playing back such content”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Hartung to improve the known device of Master to achieve the predictable result of optimizing audio reproduction by adapting to the audio source. Regarding claim 40, Master teaches An article of manufacture including a non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by one or more processors of a computing device (Master figure 2B), cause the computing device to carry out operations comprising: determining, during a dynamic audio calibration process and based on a first digital signal representing a first portion of streaming media content provided by a first media playback application to an audio output component (Master figure 2B, and ¶0030, Original audio signal A), one or more audio characteristics of the streaming media content (Master figure 2B, and ¶0030, “The original audio signal A sent to the speakers 212 is compared to the received signal B (after digital conversion) in a comparator circuit 216. 
Due to operating conditions, device characteristics, ambient effects and other possible distortion or interference, the transmitted signal A may differ to some extent from the received signal B” and ¶0028 “overall requirements for the system 100 are that the device has both original signal and speaker output signal knowledge…information about a desired sound signal being output”); playing, by the audio output component into an environment, the first portion of the streaming media content (Master figure 2B, “an original digital audio signal 211 is played through the output and playback stages 212 of a device (e.g., amp and speakers) as an original signal A (denoted “digital 1”)”); capturing, by an audio input component, a second digital signal representing the first portion of the streaming media content as played by the audio output component into the environment (Master figure 2B, “The output from the speaker is heard by the user and received back through the microphone as an analog signal B”); determining, based on the captured second digital signal, one or more detected audio characteristics of the streaming media content as played into the environment (Master figure 2B, and ¶0030, “The original audio signal A sent to the speakers 212 is compared to the received signal B (after digital conversion) in a comparator circuit 216. Due to operating conditions, device characteristics, ambient effects and other possible distortion or interference, the transmitted signal A may differ to some extent from the received signal B”); determining a difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content (Master figure 2B, and ¶0030, “The original audio signal A sent to the speakers 212 is compared to the received signal B (after digital conversion) in a comparator circuit 216. 
Due to operating conditions, device characteristics, ambient effects and other possible distortion or interference, the transmitted signal A may differ to some extent from the received signal B”); calibrating a second portion (Master figure 2B, and ¶0030, “so that the user first hears signal A and then the corrected signal B′ thereafter”) of the streaming media content based on the difference and based on a determination that the second portion is to be calibrated (Master figure 2B, and ¶0030, “A tuning/calibration circuit 217 is used to correct the effects of any such distortion by filtering, equalization, phase shift or other similar means”); playing, by the audio output component to the environment, the second portion as calibrated (Master figure 2B, and ¶0030, “so that the user first hears signal A and then the corrected signal B′ thereafter”); however, Master does not explicitly teach detecting a change from the first media playback application to a second media playback application; responsive to the detecting, calibrating a second streaming media content provided by the second media playback application; and playing, by the audio output component to the environment, the calibrated second streaming media content. Hartung teaches detecting a change from the first media playback application to a second media playback application (Hartung ¶0155, “the playback device may apply a certain calibration based on the audio content that the playback device is playing back (or that it has been instructed to play back). For instance, the playback device may detect that it is playing back media content that consists of only audio (e.g., music). In such cases, the playback device may apply a particular calibration, such as a spectral calibration” and ¶0156, “the playback device may receive media content that is associated with both audio and video (e.g., a television show or movie). When playing back such content, the playback device may apply a particular calibration.
In some cases, the playback device may apply a spatial calibration,” where, under the broadest reasonable interpretation (BRI), playing back audio only versus audio and video is considered the first and second media playback applications. In addition, ¶0157 discusses receiving content via different sources and applying different calibrations in response, and figure 14 and ¶0161 discuss selecting a particular calibration); responsive to the detecting, calibrating a second streaming media content provided by the second media playback application (Hartung ¶0155, “apply a certain calibration based on the audio content that the playback device is playing back”); and playing, by the audio output component to the environment, the calibrated second streaming media content (Hartung ¶0156, “when playing back such content”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Hartung to improve the known device of Master to achieve the predictable result of optimizing audio reproduction by adapting to the audio source.

Claim(s) 27-28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Master (US 2017/0041724) in view of Hartung (US 2021/0112354) in further view of Naylor (US 2021/0110812).

Regarding claim 27, Master in view of Hartung does not explicitly teach applying a trained machine learning model to optimize the captured second digital signal prior to the determining of the difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content. Naylor teaches applying a trained machine learning model to optimize the captured second digital signal prior to the determining of the difference between the one or more audio characteristics of the streaming media content and the one or more detected audio characteristics of the streaming media content (Naylor ¶0044, “used as training data for a machine learning model.
This training data may additionally include information about the input audio data (e.g., input microphone type/quality, gain/amplitude/volume, background noise etc.) and/or information about the input speech (e.g., pitch, language, accent etc.) as well as information about the type and arrangement of the relevant sound system. In this way, a machine learning model may be developed to accurately predict the best configuration for a particular sound system for a particular type of input speech (e.g., a particular category of human speaker). The model may then be used to intelligently and/or concurrently adjust multiple configurable parameters (e.g., relating to the same and/or different audio components) of the sound system for optimized output speech quality”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Naylor to improve the known device of Master in view of Hartung to achieve the predictable result of adaptively processing with machine learning to achieve optimal results.

Regarding claim 28, Master in view of Hartung in further view of Naylor teaches applying a trained machine learning model to automatically respond to user preferences (Naylor ¶0044, “In this way, a machine learning model may be developed to accurately predict the best configuration for a particular sound system for a particular type of input speech (e.g., a particular category of human speaker)”).

Claim(s) 30-32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Master (US 2017/0041724) in view of Hartung (US 2021/0112354) in further view of Jarvis (US 2017/0289718).
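As background for the claim mappings above, the closed-loop correction that the rejections attribute to Master (play an original signal A, capture the received signal B, compare the two, and equalize subsequent playback) might be sketched as follows. This is only an illustrative approximation in Python; the FFT-based approach, the function names, and the toy half-amplitude distortion are assumptions, not code from the Master reference or the application.

```python
import numpy as np

def derive_correction(played: np.ndarray, captured: np.ndarray) -> np.ndarray:
    """Per-bin gain that maps the captured spectrum back toward the played one."""
    A = np.fft.rfft(played)                  # "original signal A"
    B = np.fft.rfft(captured)                # "received signal B"
    return np.abs(A) / (np.abs(B) + 1e-12)   # tiny epsilon avoids division by zero

def apply_correction(portion: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Equalize the next portion of the stream with the derived per-bin gains."""
    return np.fft.irfft(np.fft.rfft(portion) * gains, n=len(portion))

# Toy distortion: the playback chain attenuates everything by half.
rng = np.random.default_rng(0)
a = rng.standard_normal(1024)                # portion as provided to the speaker
b = 0.5 * a                                  # portion as captured by the microphone
gains = derive_correction(a, b)
restored = apply_correction(b, gains)        # corrected signal, close to a
```

A real implementation would smooth and bound the per-bin gains to avoid amplifying noise; Master's ¶0030 leaves the choice among filtering, equalization, and phase shift open.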
Regarding claim 30, Master in view of Hartung teaches playing, by the audio output component to a test environment of the device, a signal (Master ¶0030, “an original digital audio signal 211 is played through the output”); and capturing, by the audio input component, the test signal as played into the test environment (Master ¶0030, “The output from the speaker is heard by the user and received back through the microphone”), and wherein the value offset is based on a difference between respective spectral shapes of the test signal as played and the test signal as captured (Master ¶0030 “A tuning/calibration circuit 217 is used to correct the effects of any such distortion by filtering, equalization, phase shift or other similar means,” wherein equalization and phase shift are considered changes to spectral shape); however, the combination does not explicitly teach that the signal is a test signal. Jarvis teaches using a test signal (Jarvis ¶0071 “test or calibration tone”), and wherein the value offset is based on a difference between respective spectral shapes of the test signal as played and the test signal as captured (Jarvis ¶0071 “spectral profile”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Jarvis to improve the known device of Master in view of Hartung to achieve the predictable result of adaptively reproducing audio based on the environment.
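The “value offset” of claims 30-31, a difference between respective spectral shapes of the test signal as played and as captured, might be illustrated as a per-band level comparison. The sample rate, band edges, and tone frequencies below are hypothetical choices for the sketch, not drawn from Jarvis or the record.

```python
import numpy as np

def band_levels_db(signal: np.ndarray, rate: int, edges) -> np.ndarray:
    """Mean spectral magnitude (dB) of the signal inside each frequency band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return np.array([20 * np.log10(spectrum[(freqs >= lo) & (freqs < hi)].mean() + 1e-12)
                     for lo, hi in edges])

RATE = 48_000
EDGES = [(125, 250), (250, 500), (500, 1000), (1000, 2000)]  # Hz, illustrative

t = np.arange(RATE) / RATE                   # one second of audio
played = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 1500 * t)
captured = np.sin(2 * np.pi * 300 * t) + 0.25 * np.sin(2 * np.pi * 1500 * t)

# Per-band value offset: near 0 dB where the environment is transparent, about
# 12 dB where the 1.5 kHz component was attenuated to a quarter of its amplitude.
offset_db = band_levels_db(played, RATE, EDGES) - band_levels_db(captured, RATE, EDGES)
```

An equalizer could then apply the negated offsets as boost, paralleling the “filtering, equalization” correction quoted from Master ¶0030.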
Regarding claim 31, Master in view of Hartung in further view of Jarvis teaches generating a test signal based on at least one of the environment, the device (Master ¶0030, “an original digital audio signal 211 is played through the output”), or the first media playback application; playing, by the audio output component to the environment of the device (Master ¶0030, “an original digital audio signal 211 is played through the output”), the test signal as generated (Jarvis ¶0071 “test or calibration tone”); and capturing, by the audio input component, the test signal as played into the environment (Master ¶0030, “The output from the speaker is heard by the user and received back through the microphone”), and wherein the value offset is based on a difference between respective spectral shapes (Jarvis ¶0071 “spectral profile”) of the test signal as played and the test signal as captured (Master ¶0030 “A tuning/calibration circuit 217 is used to correct the effects of any such distortion by filtering, equalization, phase shift or other similar means,” wherein equalization and phase shift are considered changes to spectral shape).

Regarding claim 32, Master in view of Hartung in further view of Jarvis teaches wherein determining the target output signal is based on a user profile, wherein the user profile is based on a user indication (Jarvis ¶0096 and ¶0107) via a display component of the device (Jarvis ¶0056-0057 and ¶0051 teach using a touch screen for a user interface), or a history of user preferences, or both.

Claim(s) 33-35 is/are rejected under 35 U.S.C. 103 as being unpatentable over Master (US 2017/0041724) in view of Hartung (US 2021/0112354) in further view of Nerriec (US 2016/0212535).

Regarding claim 33, Master in view of Hartung does not explicitly teach triggering the capturing of the second digital signal.
Nerriec teaches triggering the capturing of the second digital signal (Nerriec ¶0089, “an event or condition is detected which initiates a change in the channel configuration and or other selections (e.g., selection of the particular leader device, or motive implementation etc.) (750). By way of example, the event or condition can correspond to… movement by the user sufficient to trigger calibration actions”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Nerriec to improve the known device of Master in view of Hartung to achieve the predictable result of conveniently activating calibration without manual input on the device.

Regarding claim 34, Master in view of Hartung in further view of Nerriec teaches detecting a movement of the device, and wherein the triggering is performed in response to a determination that a movement measurement exceeds a threshold value (Nerriec ¶0089, “an event or condition is detected which initiates a change in the channel configuration and or other selections (e.g., selection of the particular leader device, or motive implementation etc.) (750). By way of example, the event or condition can correspond to… movement by the user sufficient to trigger calibration actions”).

Regarding claim 35, Master in view of Hartung in further view of Nerriec teaches wherein the triggering is based on one or more of: (i) a change in the streaming media content (Nerriec ¶0089 “a change in content being outputted”), (ii) an amount of elapsed time since a prior calibration was performed, or (iii) detecting a change from the first media playback application to the second media playback application.

Claim(s) 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Master (US 2017/0041724) in view of Hartung (US 2021/0112354) in further view of Sheen (US 2016/0011850).
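The recalibration triggers recited for claims 33-35 (device movement above a threshold, a change in content, elapsed time since the prior calibration, or a change of playback application) amount to a simple predicate. The names and threshold values in this sketch are hypothetical, not taken from Nerriec or the application.

```python
from dataclasses import dataclass

@dataclass
class CalibrationState:
    last_calibrated_at: float   # seconds on a monotonic clock
    content_id: str
    app_id: str

MOVEMENT_THRESHOLD = 1.5        # m/s^2 above rest, illustrative value
RECALIBRATION_INTERVAL = 3600.0 # seconds, illustrative value

def should_recalibrate(state: CalibrationState, now: float,
                       movement: float, content_id: str, app_id: str) -> bool:
    """True when any of the recited trigger conditions is met."""
    return (movement > MOVEMENT_THRESHOLD                               # claim 34
            or content_id != state.content_id                           # claim 35(i)
            or now - state.last_calibrated_at > RECALIBRATION_INTERVAL  # claim 35(ii)
            or app_id != state.app_id)                                  # claim 35(iii)

state = CalibrationState(last_calibrated_at=0.0, content_id="track-1", app_id="app-A")
unchanged = should_recalibrate(state, now=10.0, movement=0.2,
                               content_id="track-1", app_id="app-A")    # False
moved = should_recalibrate(state, now=10.0, movement=2.0,
                           content_id="track-1", app_id="app-A")        # True
```

Under claim 33, a True result would simply initiate capture of the second digital signal for a fresh calibration pass.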
Regarding claim 37, Master in view of Hartung does not explicitly teach determining a type of the environment; and determining that the second portion of the streaming media content is not to be calibrated based on the type of the environment. Sheen teaches determining a type of the environment (Sheen ¶0143 “when the ambient noise is at an appropriate level for calibration”); and determining that the second portion of the streaming media content is not to be calibrated based on the type of the environment (Sheen ¶0144 “steps backward in the calibration procedure”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the known technique of Sheen to improve the known device of Master in view of Hartung to achieve the predictable result of avoiding incorrect calibration in a high-noise environment.

Allowable Subject Matter

Claims 25-26 are objected to as being dependent upon a rejected base claim, but would be allowable if 1) a terminal disclaimer is filed to overcome the double patenting rejection(s) set forth in this office action and 2) rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the closest prior art, either alone or in combination, fails to anticipate or render obvious the claimed limitation of “wherein the calibrating of the second portion of the streaming media content comprises applying the difference as an input to a machine learning model in order to determine an output audio setting” in combination with all other limitations in the claim(s) as defined by the applicant.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NORMAN YU whose telephone number is (571)270-7436. The examiner can normally be reached on Mon - Fri 11am-7pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar, can be reached on 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Any response to this action should be mailed to:
Commissioner of Patents and Trademarks
P.O. Box 1450
Alexandria, VA 22313-1450

Or faxed to: (571) 273-8300 for formal communications intended for entry; for informal or draft communications, please label them “PROPOSED” or “DRAFT”.

Hand-delivered responses should be brought to:
Customer Service Window
Randolph Building
401 Dulany Street
Arlington, VA 22314

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NORMAN YU/
Primary Examiner, Art Unit 2693

Prosecution Timeline

Jul 11, 2024
Application Filed
Feb 14, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604123
APPARATUS AND VEHICULAR APPARATUS INCLUDING THE SAME
2y 5m to grant Granted Apr 14, 2026
Patent 12598409
IN-EAR WEARABLE DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12594882
AUTOMOTIVE SOUND AMPLIFICATION
2y 5m to grant Granted Apr 07, 2026
Patent 12593165
ACOUSTIC INPUT-OUTPUT DEVICES
2y 5m to grant Granted Mar 31, 2026
Patent 12581238
BINDING BAND ASSEMBLY FOR HEADSET AND HEADSET
2y 5m to grant Granted Mar 17, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
88%
Grant Probability
99%
With Interview (+13.5%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 598 resolved cases by this examiner. Grant probability derived from career allow rate.
