Prosecution Insights
Last updated: April 19, 2026
Application No. 18/789,199

SMART ROUTING FOR AUDIO OUTPUT DEVICES

Non-Final OA: §102, §103, Double Patenting
Filed: Jul 30, 2024
Examiner: SAUNDERS JR, JOSEPH
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 73% (above average; 538 granted / 740 resolved; +10.7% vs TC avg)
Interview Lift: +20.6% (strong) higher allowance rate for resolved cases with interview vs. without
Typical Timeline: 2y 9m average prosecution; 27 currently pending
Career History: 767 total applications across all art units

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 29.6% (-10.4% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 740 resolved cases.

Office Action

Rejections: §102, §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is based on the communications filed July 30, 2024. Claims 1 – 20 are currently pending and considered below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on July 30, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1 – 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 – 20 of U.S. Patent No. 12,075,220 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because, while obvious variations in wording are present, claims 1 – 20 of U.S. Patent No.
12,075,220 B2 are narrower and therefore anticipate all of the required limitations of claims 1 – 20 of the instant application.

In regards to claim 1, U.S. Patent No. 12,075,220 B2 discloses a method, comprising (see, “A method executed by two or more audio signal source devices for routing communication to a common audio output device connected to each of the audio signal source devices, the method comprising,” claim 1 of U.S. Patent No. 12,075,220 B2): assessing, for a first audio signal source device of a plurality of audio signal source devices, an input comprising at least one of an operational state of the first audio signal source device, a user interaction with the first audio signal source device within a time duration threshold, an audio-producing application executing on the first audio signal source device, or a degree of user interaction with the audio-producing application (see, “assessing, for each particular audio signal source device of the two or more audio signal source devices, a set of inputs comprising an operational state of the particular audio signal source device, a user activity with the particular audio signal source device within a time duration threshold, an audio-producing application being executed by the particular audio signal source device, and a degree of user activity with the audio-producing application,” claim 1 of U.S. Patent No. 12,075,220 B2); and determining an audio signal routing decision for routing an audio signal from one of the plurality of audio signal source devices to an audio output device based at least in part on the assessment of the input (see, “determining an audio signal routing decision to route an audio signal from one of the two or more audio signal source devices to the audio output device, based on the audio routing score for each of the two or more audio signal source devices,” claim 1 of U.S. Patent No. 12,075,220 B2).

In regards to claim 2, U.S. Patent No.
12,075,220 B2 discloses the method of claim 1, wherein the audio-producing application is associated with one of an alert, a sound effect, an audio stream, a ringtone, an alarm, or a call notification (see, “The method in accordance with claim 1, wherein the audio-producing application is one of an alert or sound effect, an audio streaming application, a ringtone or alarm, or a call notification,” claim 2 of US 12,075,220 B2).

In regards to claim 3, U.S. Patent No. 12,075,220 B2 discloses the method of claim 1, wherein the degree of user interaction with the audio-producing application is determined with reference to a second time duration threshold (see, “The method in accordance with claim 1, wherein the degree of user activity with the audio-producing application being executed by the particular audio signal source device includes a second time duration threshold,” claim 3 of US 12,075,220 B2).

In regards to claim 4, U.S. Patent No. 12,075,220 B2 discloses the method of claim 1, further comprising routing the audio signal from a selected audio signal source device of the plurality of audio signal source devices to the audio output device based at least in part on the audio signal routing decision (see, “The method in accordance with claim 1, further comprising routing the audio signal from a selected audio signal source device of the two or more audio signal source devices,” claim 4 of US 12,075,220 B2).

In regards to claim 5, U.S. Patent No.
12,075,220 B2 discloses the method of claim 4, wherein routing the audio signal further comprises: disconnecting a second audio signal source device of the plurality of audio signal source devices from the audio output device; and connecting the selected audio signal source device of the plurality of audio signal source devices to the audio output device (see, “The method in accordance with claim 4, wherein routing the audio signal further comprises: disconnecting a first of the two or more audio signal source devices from the audio output device; and connecting the selected audio signal source device of the two or more audio signal source devices to the audio output device,” claim 5 of US 12,075,220 B2).

In regards to claim 6, U.S. Patent No. 12,075,220 B2 discloses the method of claim 1, wherein the operational state of the audio signal source device comprises a display-off operational state (see, “The method in accordance with claim 1, wherein the operational state of the audio signal source device includes a display-off operational state and a display-on operational state,” claim 6 of US 12,075,220 B2).

In regards to claim 7, U.S. Patent No. 12,075,220 B2 discloses the method of claim 1, wherein the operational state of the audio signal source device comprises a standby operational state (see, “The method in accordance with claim 1, wherein the operational state of the audio signal source device includes a standby operational state and an active operational state,” claim 7 of US 12,075,220 B2).

In regards to claim 8, U.S. Patent No.
12,075,220 B2 discloses a system for routing communication to a common audio output device connected to each of a plurality of audio signal source devices, the system comprising: computer hardware configured to perform operations comprising (see, “A system for routing communication to a common audio output device connected to each of two or more audio signal source devices, the system comprising: computer hardware configured to perform operations comprising,” claim 8 of US 12,075,220 B2): assessing, for a first audio signal source device of a plurality of audio signal source devices, an input comprising at least one of an operational state of the first audio signal source device, a user interaction with the first audio signal source device within a time duration threshold, an audio-producing application executing on the first audio signal source device, or a degree of user interaction with the audio-producing application (see, “assess, for each particular audio signal source device of the two or more audio signal source devices, a set of inputs comprising an operational state of the particular audio signal source device, a user activity with the particular audio signal source device within a time duration threshold, an audio-producing application being executed by the particular audio signal source device, and a degree of user activity with the audio-producing application,” claim 8 of US 12,075,220 B2); and determining an audio signal routing decision for routing an audio signal from one of the plurality of audio signal source devices to an audio output device based at least in part on the assessment of the input (see, “determine an audio signal routing decision to route an audio signal from one of the two or more audio signal source devices to the audio output device, based on the audio routing score for each of the two or more audio signal source devices,” claim 8 of US 12,075,220 B2).

In regards to claim 9, U.S. Patent No.
12,075,220 B2 discloses the system of claim 8, wherein the assessing and determining are performed by an operating system or firmware of at least one of the plurality of audio signal source devices (see, “The system in accordance with claim 8, wherein the assessing, generating, and determining are performed by an operating system or firmware of at least one of the two or more audio signal source devices,” claim 9 of US 12,075,220 B2).

In regards to claim 10, U.S. Patent No. 12,075,220 B2 discloses the system of claim 8, wherein the audio-producing application is associated with one of an alert, a sound effect, an audio stream, a ringtone, an alarm, or a call notification (see, “The system in accordance with claim 8, wherein the audio-producing application is one of an alert or sound effect, an audio streaming application, a ringtone or alarm, or a call notification,” claim 10 of US 12,075,220 B2).

In regards to claim 11, U.S. Patent No. 12,075,220 B2 discloses the system of claim 8, wherein the degree of user interaction with the audio-producing application is determined with reference to a second time duration threshold (see, “The system in accordance with claim 8, wherein the degree of user activity with the audio-producing application being executed by the particular audio signal source device includes a second time duration threshold,” claim 11 of US 12,075,220 B2).

In regards to claim 12, U.S. Patent No. 12,075,220 B2 discloses the system of claim 8, wherein the operations further comprise routing the audio signal from a selected audio signal source device of the plurality of audio signal source devices to the audio output device based at least in part on the audio signal routing decision (see, “The system in accordance with claim 8, wherein the operations further comprise routing the audio signal from a selected audio signal source device of the two or more audio signal source devices,” claim 12 of US 12,075,220 B2).

In regards to claim 13, U.S.
Patent No. 12,075,220 B2 discloses the system of claim 12, wherein routing the audio signal further comprises: disconnecting a second audio signal source device of the plurality of audio signal source devices from the audio output device; and connecting the selected audio signal source device of the plurality of audio signal source devices to the audio output device (see, “The system in accordance with claim 12, wherein routing the audio signal further comprises: disconnecting a first of the two or more audio signal source devices from the audio output device; and connecting the selected audio signal source device of the two or more audio signal source devices to the audio output device,” claim 13 of US 12,075,220 B2).

In regards to claim 14, U.S. Patent No. 12,075,220 B2 discloses the system of claim 8, wherein the operational state of the audio signal source device comprises a display-off operational state (see, “A method executed by two or more audio signal source devices for routing communication to a common audio output device connected to each of the audio signal source devices, the method comprising:,” claim 1 of US 12,075,220 B2, “The method in accordance with claim 1, wherein the operational state of the audio signal source device includes a display-off operational state and a display-on operational state,” claim 6 of US 12,075,220 B2).

In regards to claim 15, U.S. Patent No.
12,075,220 B2 discloses the system of claim 8, wherein the operational state of the audio signal source device comprises a standby operational state (see, “A method executed by two or more audio signal source devices for routing communication to a common audio output device connected to each of the audio signal source devices, the method comprising:,” claim 1 of US 12,075,220 B2, “The method in accordance with claim 1, wherein the operational state of the audio signal source device includes a standby operational state and an active operational state,” claim 7 of US 12,075,220 B2).

In regards to claim 16, U.S. Patent No. 12,075,220 B2 discloses a non-transitory computer-readable medium for routing communication signals to a common audio output device connected to each of a plurality of audio signal source devices that, when executed by one or more processors, causes the one or more processors to perform operations comprising (see, “computer program product for routing communication to a common audio output device connected to each of two or more audio signal source devices, the computer program product comprising: at least one programmable processor; and a non-transitory machine-readable medium storing instructions that, when executed by the processor, cause the at least one programmable processor to perform operations comprising:,” claim 15 of US 12,075,220 B2): assessing, for a first audio signal source device of a plurality of audio signal source devices, an input comprising at least one of an operational state of the first audio signal source device, a user interaction with the first audio signal source device within a time duration threshold, an audio-producing application executing on the first audio signal source device, or a degree of user interaction with the audio-producing application (see, “assessing, for each particular audio signal source device of two or more audio signal source devices, a set of inputs comprising an operational state of the
particular audio signal source device, a user activity with the audio signal source device within a time duration threshold, an audio-producing application being executed by the particular audio signal source device, and a degree of user activity with the audio-producing application,” claim 15 of US 12,075,220 B2); and determining an audio signal routing decision for routing an audio signal from one of the plurality of audio signal source devices to an audio output device based at least in part on the assessment of the input (see, “determining an audio signal routing decision to route an audio signal from one of the two or more audio signal source devices to the audio output device, based on the audio routing score for each of the two or more audio signal source devices,” claim 15 of US 12,075,220 B2).

In regards to claim 17, U.S. Patent No. 12,075,220 B2 discloses the non-transitory computer-readable medium of claim 16, wherein the audio-producing application is associated with one of an alert, a sound effect, an audio stream, a ringtone, an alarm, or a call notification (see, “The computer program product in accordance with claim 15, wherein the audio-producing application is one of an alert or sound effect, an audio streaming application, a ringtone or alarm, or a call notification,” claim 17 of US 12,075,220 B2).

In regards to claim 18, U.S. Patent No. 12,075,220 B2 discloses the non-transitory computer-readable medium of claim 16, wherein the degree of user interaction with the audio-producing application is determined with reference to a second time duration threshold (see, “The computer program product in accordance with claim 15, wherein the degree of user activity with the audio-producing application being executed by the particular audio signal source device includes a second time duration threshold,” claim 18 of US 12,075,220 B2).

In regards to claim 19, U.S. Patent No.
12,075,220 B2 discloses the non-transitory computer-readable medium of claim 16, further comprising routing the audio signal from a selected audio signal source device of the plurality of audio signal source devices to the audio output device based at least in part on the audio signal routing decision (see, “The computer program product in accordance with claim 15, further comprising routing the audio signal from a selected audio signal source device of the two or more audio signal source devices,” claim 19 of US 12,075,220 B2).

In regards to claim 20, U.S. Patent No. 12,075,220 B2 discloses the non-transitory computer-readable medium of claim 19, wherein routing the audio signal further comprises: disconnecting a second audio signal source device of the plurality of audio signal source devices from the audio output device; and connecting the selected audio signal source device of the plurality of audio signal source devices to the audio output device (see, “The computer program product in accordance with claim 19, wherein routing the audio signal further comprises: disconnecting a first of the two or more audio signal source devices from the audio output device; and connecting the selected audio signal source device of the two or more audio signal source devices to the audio output device,” claim 20 of US 12,075,220 B2).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 2, 4, 5, 8 – 10, 12, 13, 16, 17, 19, and 20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Denis et al. (US 11,080,011 B1), hereinafter Denis.

Claim 1: Denis discloses a method, comprising: assessing, for a first audio signal source device of a plurality of audio signal source devices (see at least, “It should be noted that the audio configurator device 20 may be integrated in an audio stream source device, in which case it can both configure the audio rendering device 10 and transmit audio streams to the audio rendering device 10,” Denis Column 17 Lines 11 – 14), an input comprising at least one of an operational state of the first audio signal source device (see at least, “The method 50 for playing audio streams comprises also a step 53 of identifying, by the processing circuit 11, audio streams that are available via wireless links. An audio stream is referred to as "available" when the corresponding audio stream source device has a wireless link with the wireless communication unit 12 of the audio rendering device 10, and has audio stream content available for transmission over its wireless link with the audio rendering device 10.
For instance, the audio rendering device 10 may monitor the list of available connected Audio/Video Distribution Transport Protocol, AVDTP, links and their respective statuses,” Denis Column 12 Lines 11 – 22), a user interaction with the first audio signal source device within a time duration threshold, an audio-producing application executing on the first audio signal source device (see at least, “In the present disclosure, an audio stream corresponds to application-level data representing an audio signal. For instance, it can be a bit stream coming from an audio streaming application, a music file stored locally read by an application, the output of an audio server, etc. The audio stream content corresponds to the useful data (vs. metadata information) representing the audio data that is actually played by the audio rendering device 10. The audio stream content can comprise one or more audio channels (e.g., mono or stereo music, etc.),” Denis Column 10 Lines 10 – 19, “According to another example, an audio stream attribute can be an identifier of an audio stream content provider. For instance, a first audio stream content provider can be Spotify®, a second audio stream content provider can be Youtube®, a third specific audio stream content provider can be Facebook®, etc. In that case, it is possible to e.g., prioritize an audio stream from Spotify® over an audio stream from Youtube®, etc.,” Denis Column 3 Lines 50 – 57), or a degree of user interaction with the audio-producing application (see at least, “In some embodiments, the audio configurator device 20 may receive, from the audio rendering device 10, a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10.
In that case, the audio configurator device 20 configures the audio stream prioritization policy based on the user's input, but also based on the received list and/or based on the local audio stream prioritization policy received from the audio rendering device 10. For instance, the received list and/or the received local audio stream prioritization policy may be presented to the user before receiving the user's input. Whether or not the audio configurator device 20 receives from the audio rendering device 10 a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10 might depend on the elected implementation and/or on a configuration phase. For instance, if the configuration phase corresponds to the first configuration of the audio rendering device 10 (i.e., initialization), then the audio stream prioritization policy might be configured only by using the user's input or it can use also a received list of audio stream attributes associated to available audio streams. Subsequently, the audio configurator device 20 might receive, on a regular basis and/or when the audio rendering device 10 is switched ON, the local audio stream prioritization policy of the audio rendering device 10 in order to prompt the user for any required modification,” Denis Column 17 Line 50 – Column 18 Line 9, “In some embodiments, the audio rendering device 10 may also be configured to determine autonomously updated priority values of audio stream attributes. 
For instance, the audio rendering device 10 may determine an updated priority value based on user habits and machine learning,” Denis Column 19 Lines 14 – 18); and determining an audio signal routing decision for routing an audio signal from one of the plurality of audio signal source devices to an audio output device based at least in part on the assessment of the input (see at least, “According to other examples, determining a priority score for an available audio stream having at least two audio stream attributes listed in the local audio stream prioritization policy may comprise combining the priority values retrieved from the local audio stream prioritization policy. For instance, it is possible to compute the priority score as the mean value of the retrieved priority values, or as a weighted sum of said retrieved priority values, etc. When the priority values are retrieved from different lists of associations having different respective priority levels, then the priority score may be computed as a weighted sum of the retrieved priority values, wherein the weighting coefficients depend on the priority levels of the different priority values, etc.,” Denis Column 16 Lines 9 – 22, “The selected available audio stream is then played by the audio rendering unit 13 of the audio rendering device 10, during a playing step 56,” Denis Column 16 Lines 23 – 25).

Claim 2: Denis discloses the method of claim 1, wherein the audio-producing application is associated with one of an alert, a sound effect, an audio stream, a ringtone, an alarm, or a call notification (see at least, “identifier of a type of audio stream contents (e.g., alarms, VoIP calls, phone calls, music streams, video streams, etc.), etc.,” Denis Column 11 Lines 6 – 8).
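To make the mapped logic easier to follow, the method of claim 1 as read onto Denis's priority-scoring scheme can be sketched as follows. This is a minimal illustration only: all class names, weights, and threshold values below are hypothetical and appear in neither the claims nor Denis.

```python
# Hypothetical sketch of the claimed routing logic: assess a set of inputs
# per source device, combine them into a single routing score (a weighted
# sum, in the manner Denis describes for priority scores), and route from
# the highest-scoring source. Weights and thresholds are illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SourceDevice:
    name: str
    display_on: bool                   # operational state
    seconds_since_user_input: float    # user interaction recency
    audio_app: Optional[str]           # audio-producing application, if any
    app_interaction_level: float       # degree of interaction, 0.0 to 1.0

def routing_score(dev: SourceDevice, time_threshold: float = 30.0) -> float:
    """Combine the assessed inputs into one routing score per device."""
    score = 0.0
    if dev.display_on:
        score += 1.0                   # active operational state
    if dev.seconds_since_user_input <= time_threshold:
        score += 2.0                   # recent interaction within threshold
    if dev.audio_app is not None:
        score += 1.5                   # an audio-producing app is running
    score += dev.app_interaction_level # degree of interaction with that app
    return score

def route_audio(devices: List[SourceDevice]) -> SourceDevice:
    """Routing decision: select the highest-scoring source device."""
    return max(devices, key=routing_score)
```

Under this sketch, a phone with its display on and a recently used music app would outscore an idle laptop with no audio app, so the routing decision would select the phone.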
Claim 4: Denis discloses the method of claim 1, further comprising routing the audio signal from a selected audio signal source device of the plurality of audio signal source devices to the audio output device based at least in part on the audio signal routing decision (see at least, “According to other examples, determining a priority score for an available audio stream having at least two audio stream attributes listed in the local audio stream prioritization policy may comprise combining the priority values retrieved from the local audio stream prioritization policy. For instance, it is possible to compute the priority score as the mean value of the retrieved priority values, or as a weighted sum of said retrieved priority values, etc. When the priority values are retrieved from different lists of associations having different respective priority levels, then the priority score may be computed as a weighted sum of the retrieved priority values, wherein the weighting coefficients depend on the priority levels of the different priority values, etc.,” Denis Column 16 Lines 9 – 22, “The selected available audio stream is then played by the audio rendering unit 13 of the audio rendering device 10, during a playing step 56,” Denis Column 16 Lines 23 – 25).

Claim 5: Denis discloses the method of claim 4, wherein routing the audio signal further comprises: disconnecting a second audio signal source device of the plurality of audio signal source devices from the audio output device; and connecting the selected audio signal source device of the plurality of audio signal source devices to the audio output device (see at least, “For instance, it is possible to have, in some embodiments, at most one active audio stream. In that case, the first audio stream needs to be deactivated before the second audio stream is activated, in a kind of "break before make" approach.
Hence, the audio rendering device 10 stops the reception of the first audio stream content before starting the reception of the second audio stream content (see e.g., FIGS. 6 to 8),” Denis Column 23 Lines 14 – 20).

Claim 8: Denis discloses a system for routing communication to a common audio output device connected to each of a plurality of audio signal source devices, the system comprising: computer hardware (see at least, “As illustrated in FIG. 1, the audio configurator device 20 comprises a processing circuit 21 and a wireless communication unit 22. For example, the processing circuit 21 comprises one or more processors and storage means (magnetic hard disk, solid-state disk, optical disk, electronic memory, etc.) in which a computer program product is stored, in the form of a set of program-code instructions to be executed in order to implement all or a part of the steps of a method 60 for configuring the audio rendering device 10. Alternatively, or in combination thereof, the processing circuit 21 can comprise one or more programmable logic circuits (FPGA, PLD, etc.), and/or one or more specialized integrated circuits (ASIC), and/or a set of discrete electronic components, etc., adapted for implementing all or part of said steps,” Denis Column 16 Lines 39 – 53, “In other words, the processing circuit 21, the wireless communication unit 22 and the user interface 23 of the audio configurator device 20 form a set of means configured by software (specific computer program product) and/or by hardware (processor, FPGA, PLD, ASIC, discrete electronic components, radiofrequency circuit, etc.) to implement all or part of the steps of a method 60 for configuring the audio rendering device 10.
It should be noted that the audio configurator device 20 may be integrated in an audio stream source device, in which case it can both configure the audio rendering device 10 and transmit audio streams to the audio rendering device 10,” Denis Column 17 Lines 3 – 14) configured to perform operations comprising: assessing, for a first audio signal source device of a plurality of audio signal source devices (see at least, “It should be noted that the audio configurator device 20 may be integrated in an audio stream source device, in which case it can both configure the audio rendering device 10 and transmit audio streams to the audio rendering device 10,” Denis Column 17 Lines 11 – 14), an input comprising at least one of an operational state of the first audio signal source device (see at least, “The method 50 for playing audio streams comprises also a step 53 of identifying, by the processing circuit 11, audio streams that are available via wireless links. An audio stream is referred to as "available" when the corresponding audio stream source device has a wireless link with the wireless communication unit 12 of the audio rendering device 10, and has audio stream content available for transmission over its wireless link with the audio rendering device 10. For instance, the audio rendering device 10 may monitor the list of available connected Audio/Video Distribution Transport Protocol, AVDTP, links and their respective statuses,” Denis Column 12 Lines 11 – 22), a user interaction with the first audio signal source device within a time duration threshold, an audio-producing application executing on the first audio signal source device (see at least, “In the present disclosure, an audio stream corresponds to application-level data representing an audio signal. For instance, it can be a bit stream coming from an audio streaming application, a music file stored locally read by an application, the output of an audio server, etc. 
The audio stream content corresponds to the useful data (vs. metadata information) representing the audio data that is actually played by the audio rendering device 10. The audio stream content can comprise one or more audio channels (e.g., mono or stereo music, etc.),” Denis Column 10 Lines 10 – 19, “According to another example, an audio stream attribute can be an identifier of an audio stream content provider. For instance, a first audio stream content provider can be Spotify®, a second audio stream content provider can be Youtube®, a third specific audio stream content provider can be Facebook®, etc. In that case, it is possible to e.g., prioritize an audio stream from Spotify® over an audio stream from Youtube®, etc.,” Denis Column 3 Lines 50 – 57), or a degree of user interaction with the audio-producing application (see at least, “In some embodiments, the audio configurator device 20 may receive, from the audio rendering device 10, a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10. In that case, the audio configurator device 20 configures the audio stream prioritization policy based on the user's input, but also based on the received list and/or based on the local audio stream prioritization policy received from the audio rendering device 10. For instance, the received list and/or the received local audio stream prioritization policy may be presented to the user before receiving the user's input. Whether or not the audio configurator device 20 receives from the audio rendering device 10 a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10 might depend on the elected implementation and/or on a configuration phase. 
For instance, if the configuration phase corresponds to the first configuration of the audio rendering device 10 (i.e., initialization), then the audio stream prioritization policy might be configured only by using the user's input or it can use also a received list of audio stream attributes associated to available audio streams. Subsequently, the audio configurator device 20 might receive, on a regular basis and/or when the audio rendering device 10 is switched ON, the local audio stream prioritization policy of the audio rendering device 10 in order to prompt the user for any required modification,” Denis Column 17 Line 50 – Column 18 Line 9, “In some embodiments, the audio rendering device 10 may also be configured to determine autonomously updated priority values of audio stream attributes. For instance, the audio rendering device 10 may determine an updated priority value based on user habits and machine learning,” Denis Column 19 Lines 14 – 18); and determining an audio signal routing decision for routing an audio signal from one of the plurality of audio signal source devices to an audio output device based at least in part on the assessment of the input (see at least, “According to other examples, determining a priority score for an available audio stream having at least two audio stream attributes listed in the local audio stream prioritization policy may comprise combining the priority values retrieved from the local audio stream prioritization policy. For instance, it is possible to compute the priority score as the mean value of the retrieved priority values, or as a weighted sum of said retrieved priority values, etc. 
When the priority values are retrieved from different lists of associations having different respective priority levels, then the priority score may be computed as a weighted sum of the retrieved priority values, wherein the weighting coefficients depend on the priority levels of the different priority values, etc.,” Denis Column 16 Lines 9 – 22, “The selected available audio stream is then played by the audio rendering unit 13 of the audio rendering device 10, during a playing step 56,” Denis Column 16 Lines 23 – 25). Claim 9: Denis discloses the system of claim 8, wherein the assessing and determining are performed by an operating system or firmware of at least one of the plurality of audio signal source devices (see at least, “As illustrated in FIG. 1, the audio configurator device 20 comprises a processing circuit 21 and a wireless communication unit 22. For example, the processing circuit 21 comprises one or more processors and storage means (magnetic hard disk, solid-state disk, optical disk, electronic memory, etc.) in which a computer program product is stored, in the form of a set of program-code instructions to be executed in order to implement all or a part of the steps of a method 60 for configuring the audio rendering device 10. Alternatively, or in combination thereof, the processing circuit 21 can comprise one or more programmable logic circuits (FPGA, PLD, etc.), and/or one or more specialized integrated circuits (ASIC), and/or a set of discrete electronic components, etc., adapted for implementing all or part of said steps,” Denis Column 16 Lines 39 – 53, “In other words, the processing circuit 21, the wireless communication unit 22 and the user interface 23 of the audio configurator device 20 form a set of means configured by software (specific computer program product) and/or by hardware (processor, FPGA, PLD, ASIC, discrete electronic components, radiofrequency circuit, etc.) 
to implement all or part of the steps of a method 60 for configuring the audio rendering device 10. It should be noted that the audio configurator device 20 may be integrated in an audio stream source device, in which case it can both configure the audio rendering device 10 and transmit audio streams to the audio rendering device 10,” Denis Column 17 Lines 3 – 14). Claim 10: Denis discloses the system of claim 8, wherein the audio-producing application is associated with one of an alert, a sound effect, an audio stream, a ringtone, an alarm, or a call notification (see at least, “identifier of a type of audio stream contents (e.g., alarms, VoIP calls, phone calls, music streams, video streams, etc.), etc.,” Denis Column 11 Lines 6 – 8). Claim 12: Denis discloses the system of claim 8, wherein the operations further comprise routing the audio signal from a selected audio signal source device of the plurality of audio signal source devices to the audio output device based at least in part on the audio signal routing decision (see at least, “According to other examples, determining a priority score for an available audio stream having at least two audio stream attributes listed in the local audio stream prioritization policy may comprise combining the priority values retrieved from the local audio stream prioritization policy. For instance, it is possible to compute the priority score as the mean value of the retrieved priority values, or as a weighted sum of said retrieved priority values, etc. 
When the priority values are retrieved from different lists of associations having different respective priority levels, then the priority score may be computed as a weighted sum of the retrieved priority values, wherein the weighting coefficients depend on the priority levels of the different priority values, etc.,” Denis Column 16 Lines 9 – 22, “The selected available audio stream is then played by the audio rendering unit 13 of the audio rendering device 10, during a playing step 56,” Denis Column 16 Lines 23 – 25). Claim 13: Denis discloses the system of claim 12, wherein routing the audio signal further comprises: disconnecting a second audio signal source device of the plurality of audio signal source devices from the audio output device; and connecting the selected audio signal source device of the plurality of audio signal source devices to the audio output device (see at least, “For instance, it is possible to have, in some embodiments, at most one active audio stream. In that case, the first audio stream needs to be deactivated before the second audio stream is activated, in a kind of "break before make" approach. Hence, the audio rendering device 10 stops the reception of the first audio stream content before starting the reception of the second audio stream content (see e.g., FIGS. 6 to 8),” Denis Column 23 Lines 14 – 20). Claim 16: Denis discloses a non-transitory computer-readable medium for routing communication signals to a common audio output device connected to each of a plurality of audio signal source devices that, when executed by one or more processors, causes the one or more processors (see at least, “As illustrated in FIG. 1, the audio configurator device 20 comprises a processing circuit 21 and a wireless communication unit 22. For example, the processing circuit 21 comprises one or more processors and storage means (magnetic hard disk, solid-state disk, optical disk, electronic memory, etc.) 
in which a computer program product is stored, in the form of a set of program-code instructions to be executed in order to implement all or a part of the steps of a method 60 for configuring the audio rendering device 10. Alternatively, or in combination thereof, the processing circuit 21 can comprise one or more programmable logic circuits (FPGA, PLD, etc.), and/or one or more specialized integrated circuits (ASIC), and/or a set of discrete electronic components, etc., adapted for implementing all or part of said steps,” Denis Column 16 Lines 39 – 53, “In other words, the processing circuit 21, the wireless communication unit 22 and the user interface 23 of the audio configurator device 20 form a set of means configured by software (specific computer program product) and/or by hardware (processor, FPGA, PLD, ASIC, discrete electronic components, radiofrequency circuit, etc.) to implement all or part of the steps of a method 60 for configuring the audio rendering device 10. It should be noted that the audio configurator device 20 may be integrated in an audio stream source device, in which case it can both configure the audio rendering device 10 and transmit audio streams to the audio rendering device 10,” Denis Column 17 Lines 3 – 14) to perform operations comprising: assessing, for a first audio signal source device of a plurality of audio signal source devices (see at least, “It should be noted that the audio configurator device 20 may be integrated in an audio stream source device, in which case it can both configure the audio rendering device 10 and transmit audio streams to the audio rendering device 10,” Denis Column 17 Lines 11 – 14), an input comprising at least one of an operational state of the first audio signal source device (see at least, “The method 50 for playing audio streams comprises also a step 53 of identifying, by the processing circuit 11, audio streams that are available via wireless links. 
An audio stream is referred to as "available" when the corresponding audio stream source device has a wireless link with the wireless communication unit 12 of the audio rendering device 10, and has audio stream content available for transmission over its wireless link with the audio rendering device 10. For instance, the audio rendering device 10 may monitor the list of available connected Audio/Video Distribution Transport Protocol, AVDTP, links and their respective statuses,” Denis Column 12 Lines 11 – 22), a user interaction with the first audio signal source device within a time duration threshold, an audio-producing application executing on the first audio signal source device (see at least, “In the present disclosure, an audio stream corresponds to application-level data representing an audio signal. For instance, it can be a bit stream coming from an audio streaming application, a music file stored locally read by an application, the output of an audio server, etc. The audio stream content corresponds to the useful data (vs. metadata information) representing the audio data that is actually played by the audio rendering device 10. The audio stream content can comprise one or more audio channels (e.g., mono or stereo music, etc.),” Denis Column 10 Lines 10 – 19, “According to another example, an audio stream attribute can be an identifier of an audio stream content provider. For instance, a first audio stream content provider can be Spotify®, a second audio stream content provider can be Youtube®, a third specific audio stream content provider can be Facebook®, etc. 
In that case, it is possible to e.g., prioritize an audio stream from Spotify® over an audio stream from Youtube®, etc.,” Denis Column 3 Lines 50 – 57), or a degree of user interaction with the audio-producing application (see at least, “In some embodiments, the audio configurator device 20 may receive, from the audio rendering device 10, a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10. In that case, the audio configurator device 20 configures the audio stream prioritization policy based on the user's input, but also based on the received list and/or based on the local audio stream prioritization policy received from the audio rendering device 10. For instance, the received list and/or the received local audio stream prioritization policy may be presented to the user before receiving the user's input. Whether or not the audio configurator device 20 receives from the audio rendering device 10 a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10 might depend on the elected implementation and/or on a configuration phase. For instance, if the configuration phase corresponds to the first configuration of the audio rendering device 10 (i.e., initialization), then the audio stream prioritization policy might be configured only by using the user's input or it can use also a received list of audio stream attributes associated to available audio streams. 
Subsequently, the audio configurator device 20 might receive, on a regular basis and/or when the audio rendering device 10 is switched ON, the local audio stream prioritization policy of the audio rendering device 10 in order to prompt the user for any required modification,” Denis Column 17 Line 50 – Column 18 Line 9, “In some embodiments, the audio rendering device 10 may also be configured to determine autonomously updated priority values of audio stream attributes. For instance, the audio rendering device 10 may determine an updated priority value based on user habits and machine learning,” Denis Column 19 Lines 14 – 18); and determining an audio signal routing decision for routing an audio signal from one of the plurality of audio signal source devices to an audio output device based at least in part on the assessment of the input (see at least, “According to other examples, determining a priority score for an available audio stream having at least two audio stream attributes listed in the local audio stream prioritization policy may comprise combining the priority values retrieved from the local audio stream prioritization policy. For instance, it is possible to compute the priority score as the mean value of the retrieved priority values, or as a weighted sum of said retrieved priority values, etc. When the priority values are retrieved from different lists of associations having different respective priority levels, then the priority score may be computed as a weighted sum of the retrieved priority values, wherein the weighting coefficients depend on the priority levels of the different priority values, etc.,” Denis Column 16 Lines 9 – 22, “The selected available audio stream is then played by the audio rendering unit 13 of the audio rendering device 10, during a playing step 56,” Denis Column 16 Lines 23 – 25). 
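The priority-score combination Denis describes (Column 16, Lines 9 – 25) — per-attribute priority values combined as a mean or as a weighted sum whose coefficients depend on the priority levels of the lists the values come from, with the highest-scoring available stream selected for playing — can be sketched in a few lines. This is a minimal illustrative sketch, not code from the reference; the policy layout, the `priority_score` and `select_stream` names, and all example values are assumptions:

```python
# Minimal sketch of the weighted-sum priority scoring Denis describes.
# The policy layout and every identifier here are illustrative assumptions.

def priority_score(attributes, policy, level_weights):
    """Combine the priority values retrieved for a stream's attributes.

    policy maps attribute name -> attribute value -> (priority value,
    priority level); level_weights maps a priority level to its weighting
    coefficient, so values drawn from lists having different priority
    levels contribute with different weights.
    """
    total = 0.0
    weight_sum = 0.0
    for name, value in attributes.items():
        entry = policy.get(name, {}).get(value)
        if entry is None:
            continue  # attribute not listed in the local prioritization policy
        priority_value, level = entry
        w = level_weights.get(level, 1.0)
        total += w * priority_value
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

def select_stream(available_streams, policy, level_weights):
    """Select the available audio stream with the highest priority score."""
    return max(available_streams,
               key=lambda s: priority_score(s["attributes"], policy,
                                            level_weights))

# Example: a phone call (priority level 1) outranks a music stream (level 2).
policy = {
    "content_type": {"phone_call": (10, 1), "music": (3, 2)},
    "provider": {"Spotify": (5, 2), "Youtube": (2, 2)},
}
level_weights = {1: 2.0, 2: 1.0}
streams = [
    {"id": "A", "attributes": {"content_type": "music", "provider": "Spotify"}},
    {"id": "B", "attributes": {"content_type": "phone_call"}},
]
selected = select_stream(streams, policy, level_weights)  # stream "B"
```

Dividing by the sum of weights makes the weighted sum reduce to the mean value when all weighting coefficients are equal, matching the two combination examples Denis gives.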
Claim 17: Denis discloses the non-transitory computer-readable medium of claim 16, wherein the audio-producing application is associated with one of an alert, a sound effect, an audio stream, a ringtone, an alarm, or a call notification (see at least, “identifier of a type of audio stream contents (e.g., alarms, VoIP calls, phone calls, music streams, video streams, etc.), etc.,” Denis Column 11 Lines 6 – 8). Claim 19: Denis discloses the non-transitory computer-readable medium of claim 16, further comprising routing the audio signal from a selected audio signal source device of the plurality of audio signal source devices to the audio output device based at least in part on the audio signal routing decision (see at least, “According to other examples, determining a priority score for an available audio stream having at least two audio stream attributes listed in the local audio stream prioritization policy may comprise combining the priority values retrieved from the local audio stream prioritization policy. For instance, it is possible to compute the priority score as the mean value of the retrieved priority values, or as a weighted sum of said retrieved priority values, etc. When the priority values are retrieved from different lists of associations having different respective priority levels, then the priority score may be computed as a weighted sum of the retrieved priority values, wherein the weighting coefficients depend on the priority levels of the different priority values, etc.,” Denis Column 16 Lines 9 – 22, “The selected available audio stream is then played by the audio rendering unit 13 of the audio rendering device 10, during a playing step 56,” Denis Column 16 Lines 23 – 25). 
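The "break before make" switching Denis describes for Claim 13 above (Column 23, Lines 14 – 20) amounts to stopping reception from the currently connected source device before starting reception from the newly selected one, so that at most one audio stream is ever active. A minimal sketch, with every class and method name assumed for illustration:

```python
# Hedged sketch of "break before make" routing: the common output device
# keeps at most one active source, disconnecting the old source before
# connecting the newly selected one. All names here are illustrative.

class Source:
    """Stand-in for an audio signal source device."""
    def __init__(self, name):
        self.name = name
        self.streaming = False

    def start_stream(self):
        self.streaming = True

    def stop_stream(self):
        self.streaming = False

class CommonOutput:
    """Stand-in for the common audio output device (e.g., a headset)."""
    def __init__(self):
        self.active_source = None

    def route(self, selected):
        # "Break": stop reception of the first audio stream content ...
        if self.active_source is not None:
            self.active_source.stop_stream()
        # ... "make": only then start reception of the second stream content.
        selected.start_stream()
        self.active_source = selected

phone, laptop = Source("phone"), Source("laptop")
output = CommonOutput()
output.route(phone)
output.route(laptop)  # phone is disconnected before laptop starts streaming
```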
Claim 20: Denis discloses the non-transitory computer-readable medium of claim 19, wherein routing the audio signal further comprises: disconnecting a second audio signal source device of the plurality of audio signal source devices from the audio output device; and connecting the selected audio signal source device of the plurality of audio signal source devices to the audio output device (see at least, “For instance, it is possible to have, in some embodiments, at most one active audio stream. In that case, the first audio stream needs to be deactivated before the second audio stream is activated, in a kind of "break before make" approach. Hence, the audio rendering device 10 stops the reception of the first audio stream content before starting the reception of the second audio stream content (see e.g., FIGS. 6 to 8),” Denis Column 23 Lines 14 – 20). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 3, 6, 7, 11, 14, 15, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Denis in view of Belt et al. (US 6,193,422 B1), hereinafter Belt. 
Claim 3: Denis discloses the method of claim 1, wherein the degree of user interaction with the audio-producing application is determined (see at least, “In some embodiments, the audio configurator device 20 may receive, from the audio rendering device 10, a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10. In that case, the audio configurator device 20 configures the audio stream prioritization policy based on the user's input, but also based on the received list and/or based on the local audio stream prioritization policy received from the audio rendering device 10. For instance, the received list and/or the received local audio stream prioritization policy may be presented to the user before receiving the user's input. Whether or not the audio configurator device 20 receives from the audio rendering device 10 a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10 might depend on the elected implementation and/or on a configuration phase. For instance, if the configuration phase corresponds to the first configuration of the audio rendering device 10 (i.e., initialization), then the audio stream prioritization policy might be configured only by using the user's input or it can use also a received list of audio stream attributes associated to available audio streams. Subsequently, the audio configurator device 20 might receive, on a regular basis and/or when the audio rendering device 10 is switched ON, the local audio stream prioritization policy of the audio rendering device 10 in order to prompt the user for any required modification,” Denis Column 17 Line 50 – Column 18 Line 9, “In some embodiments, the audio rendering device 10 may also be configured to determine autonomously updated priority values of audio stream attributes. 
For instance, the audio rendering device 10 may determine an updated priority value based on user habits and machine learning,” Denis Column 19 Lines 14 – 18) but does not disclose that the determination is made with reference to a second time duration threshold. However, Belt discloses, in regard to user activity and power consumption, that user interaction is assessed with reference to a first time duration threshold and a second time duration threshold (see at least, “In the preferred embodiment, the preset for the idle timer at 44 in FIG. 1 is set to a value representing 8 seconds. So long as there is system activity generating input signals to the system event selector 36, the selector 36 will be producing periodic pulses on the SYSTEM EVENT line, which periodically restart the idle timer 42 and prevent it from expiring,” Belt Column 5 Lines 14 – 20, “Following eight seconds of inactivity, the computer system 10 enters the idle mode, in which the system continues to operate but with certain power saving factors, such as having the processor 11 run at a slower clock speed. The idle mode is entirely transparent to the user, in that there is no visible sign to the user that the system has entered or exited idle mode. If certain events occur while the system is in idle mode, for example if the user presses a key or the executing program reaches a portion where it updates the video display, the system will initiate an exit from the idle mode. On the other hand, if the system remains in the idle mode for a predetermined period of time, then the system will automatically transition to a standby mode, which in and of itself is conventional. In the standby mode, various system peripherals are shifted to a low power state, the backlight for the video display is turned off, and the processor 11 is halted,” Belt Column 5 Lines 30 – 46). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned power reduction features of Belt based on the degree of user interaction in the invention of Denis thereby offering an additional technique that “is advantageous in that it may reduce the power consumption and the processing of the audio rendering device 10,” Denis Column 23 Lines 22 – 24. Claim 6: Denis discloses the method of claim 1, but does not disclose wherein the operational state of the audio signal source device comprises a display-off operational state. However, Belt discloses, in regard to operational state and power consumption, that the operational state of the device comprises a display-off operational state (see at least, “In the preferred embodiment, the preset for the idle timer at 44 in FIG. 1 is set to a value representing 8 seconds. So long as there is system activity generating input signals to the system event selector 36, the selector 36 will be producing periodic pulses on the SYSTEM EVENT line, which periodically restart the idle timer 42 and prevent it from expiring,” Belt Column 5 Lines 14 – 20, “Following eight seconds of inactivity, the computer system 10 enters the idle mode, in which the system continues to operate but with certain power saving factors, such as having the processor 11 run at a slower clock speed. The idle mode is entirely transparent to the user, in that there is no visible sign to the user that the system has entered or exited idle mode. If certain events occur while the system is in idle mode, for example if the user presses a key or the executing program reaches a portion where it updates the video display, the system will initiate an exit from the idle mode. On the other hand, if the system remains in the idle mode for a predetermined period of time, then the system will automatically transition to a standby mode, which in and of itself is conventional. 
In the standby mode, various system peripherals are shifted to a low power state, the backlight for the video display is turned off, and the processor 11 is halted,” Belt Column 5 Lines 30 – 46). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned power reduction features of Belt in the audio signal source device of Denis thereby offering an additional technique that “is advantageous in that it may reduce the power consumption and the processing of the audio rendering device 10,” Denis Column 23 Lines 22 – 24. Claim 7: Denis discloses the method of claim 1, but does not disclose wherein the operational state of the audio signal source device comprises a standby operational state. However, Belt discloses, in regard to operational state and power consumption, that the operational state of the device comprises a standby operational state (see at least, “In the preferred embodiment, the preset for the idle timer at 44 in FIG. 1 is set to a value representing 8 seconds. So long as there is system activity generating input signals to the system event selector 36, the selector 36 will be producing periodic pulses on the SYSTEM EVENT line, which periodically restart the idle timer 42 and prevent it from expiring,” Belt Column 5 Lines 14 – 20, “Following eight seconds of inactivity, the computer system 10 enters the idle mode, in which the system continues to operate but with certain power saving factors, such as having the processor 11 run at a slower clock speed. The idle mode is entirely transparent to the user, in that there is no visible sign to the user that the system has entered or exited idle mode. If certain events occur while the system is in idle mode, for example if the user presses a key or the executing program reaches a portion where it updates the video display, the system will initiate an exit from the idle mode. 
On the other hand, if the system remains in the idle mode for a predetermined period of time, then the system will automatically transition to a standby mode, which in and of itself is conventional. In the standby mode, various system peripherals are shifted to a low power state, the backlight for the video display is turned off, and the processor 11 is halted,” Belt Column 5 Lines 30 – 46). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned power reduction features of Belt in the audio signal source device of Denis thereby offering an additional technique that “is advantageous in that it may reduce the power consumption and the processing of the audio rendering device 10,” Denis Column 23 Lines 22 – 24. Claim 11: Denis discloses the system of claim 8, wherein the degree of user interaction with the audio-producing application is determined (see at least, “In some embodiments, the audio configurator device 20 may receive, from the audio rendering device 10, a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10. In that case, the audio configurator device 20 configures the audio stream prioritization policy based on the user's input, but also based on the received list and/or based on the local audio stream prioritization policy received from the audio rendering device 10. For instance, the received list and/or the received local audio stream prioritization policy may be presented to the user before receiving the user's input. Whether or not the audio configurator device 20 receives from the audio rendering device 10 a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10 might depend on the elected implementation and/or on a configuration phase. 
For instance, if the configuration phase corresponds to the first configuration of the audio rendering device 10 (i.e., initialization), then the audio stream prioritization policy might be configured only by using the user's input or it can use also a received list of audio stream attributes associated to available audio streams. Subsequently, the audio configurator device 20 might receive, on a regular basis and/or when the audio rendering device 10 is switched ON, the local audio stream prioritization policy of the audio rendering device 10 in order to prompt the user for any required modification,” Denis Column 17 Line 50 – Column 18 Line 9, “In some embodiments, the audio rendering device 10 may also be configured to determine autonomously updated priority values of audio stream attributes. For instance, the audio rendering device 10 may determine an updated priority value based on user habits and machine learning,” Denis Column 19 Lines 14 – 18) but does not disclose that the determination is made with reference to a second time duration threshold. However, Belt discloses, in regard to user activity and power consumption, that user interaction is assessed with reference to a first time duration threshold and a second time duration threshold (see at least, “In the preferred embodiment, the preset for the idle timer at 44 in FIG. 1 is set to a value representing 8 seconds. So long as there is system activity generating input signals to the system event selector 36, the selector 36 will be producing periodic pulses on the SYSTEM EVENT line, which periodically restart the idle timer 42 and prevent it from expiring,” Belt Column 5 Lines 14 – 20, “Following eight seconds of inactivity, the computer system 10 enters the idle mode, in which the system continues to operate but with certain power saving factors, such as having the processor 11 run at a slower clock speed. 
The idle mode is entirely transparent to the user, in that there is no visible sign to the user that the system has entered or exited idle mode. If certain events occur while the system is in idle mode, for example if the user presses a key or the executing program reaches a portion where it updates the video display, the system will initiate an exit from the idle mode. On the other hand, if the system remains in the idle mode for a predetermined period of time, then the system will automatically transition to a standby mode, which in and of itself is conventional. In the standby mode, various system peripherals are shifted to a low power state, the backlight for the video display is turned off, and the processor 11 is halted,” Belt Column 5 Lines 30 – 46). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned power reduction features of Belt based on the degree of user interaction in the invention of Denis thereby offering an additional technique that “is advantageous in that it may reduce the power consumption and the processing of the audio rendering device 10,” Denis Column 23 Lines 22 – 24. Claim 14: Denis discloses the system of claim 8, but does not disclose wherein the operational state of the audio signal source device comprises a display-off operational state. However, Belt discloses, in regard to operational state and power consumption, that the operational state of the device comprises a display-off operational state (see at least, “In the preferred embodiment, the preset for the idle timer at 44 in FIG. 1 is set to a value representing 8 seconds. 
So long as there is system activity generating input signals to the system event selector 36, the selector 36 will be producing periodic pulses on the SYSTEM EVENT line, which periodically restart the idle timer 42 and prevent it from expiring,” Belt Column 5 Lines 14 – 20, “Following eight seconds of inactivity, the computer system 10 enters the idle mode, in which the system continues to operate but with certain power saving factors, such as having the processor 11 run at a slower clock speed. The idle mode is entirely transparent to the user, in that there is no visible sign to the user that the system has entered or exited idle mode. If certain events occur while the system is in idle mode, for example if the user presses a key or the executing program reaches a portion where it updates the video display, the system will initiate an exit from the idle mode. On the other hand, if the system remains in the idle mode for a predetermined period of time, then the system will automatically transition to a standby mode, which in and of itself is conventional. In the standby mode, various system peripherals are shifted to a low power state, the backlight for the video display is turned off, and the processor 11 is halted,” Belt Column 5 Lines 30 – 46). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned power reduction features of Belt in the audio signal source device of Denis, thereby offering an additional technique that “is advantageous in that it may reduce the power consumption and the processing of the audio rendering device 10,” Denis Column 23 Lines 22 – 24.

Claim 15: Denis discloses the system of claim 8, but does not disclose wherein the operational state of the audio signal source device comprises a standby operational state.
However, Belt discloses, in regard to operational state and power consumption, that the operational state of the device comprises a standby operational state (see at least, “In the preferred embodiment, the preset for the idle timer at 44 in FIG. 1 is set to a value representing 8 seconds. So long as there is system activity generating input signals to the system event selector 36, the selector 36 will be producing periodic pulses on the SYSTEM EVENT line, which periodically restart the idle timer 42 and prevent it from expiring,” Belt Column 5 Lines 14 – 20, “Following eight seconds of inactivity, the computer system 10 enters the idle mode, in which the system continues to operate but with certain power saving factors, such as having the processor 11 run at a slower clock speed. The idle mode is entirely transparent to the user, in that there is no visible sign to the user that the system has entered or exited idle mode. If certain events occur while the system is in idle mode, for example if the user presses a key or the executing program reaches a portion where it updates the video display, the system will initiate an exit from the idle mode. On the other hand, if the system remains in the idle mode for a predetermined period of time, then the system will automatically transition to a standby mode, which in and of itself is conventional. In the standby mode, various system peripherals are shifted to a low power state, the backlight for the video display is turned off, and the processor 11 is halted,” Belt Column 5 Lines 30 – 46). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned power reduction features of Belt in the audio signal source device of Denis, thereby offering an additional technique that “is advantageous in that it may reduce the power consumption and the processing of the audio rendering device 10,” Denis Column 23 Lines 22 – 24.
Claim 18: Denis discloses the non-transitory computer-readable medium of claim 16, wherein the degree of user interaction with the audio-producing application is determined (see at least, “In some embodiments, the audio configurator device 20 may receive, from the audio rendering device 10, a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10. In that case, the audio configurator device 20 configures the audio stream prioritization policy based on the user's input, but also based on the received list and/or based on the local audio stream prioritization policy received from the audio rendering device 10. For instance, the received list and/or the received local audio stream prioritization policy may be presented to the user before receiving the user's input. Whether or not the audio configurator device 20 receives from the audio rendering device 10 a list of at least one audio stream attribute and/or the local audio stream prioritization policy stored by the audio rendering device 10 might depend on the elected implementation and/or on a configuration phase. For instance, if the configuration phase corresponds to the first configuration of the audio rendering device 10 (i.e., initialization), then the audio stream prioritization policy might be configured only by using the user's input or it can use also a received list of audio stream attributes associated to available audio streams. Subsequently, the audio configurator device 20 might receive, on a regular basis and/or when the audio rendering device 10 is switched ON, the local audio stream prioritization policy of the audio rendering device 10 in order to prompt the user for any required modification,” Denis Column 17 Line 50 – Column 18 Line 9, “In some embodiments, the audio rendering device 10 may also be configured to determine autonomously updated priority values of audio stream attributes. 
For instance, the audio rendering device 10 may determine an updated priority value based on user habits and machine learning,” Denis Column 19 Lines 14 – 18) but does not disclose doing so with reference to a second time duration threshold. However, Belt discloses, in regard to user activity and power consumption, user interaction evaluated with reference to a first time duration threshold and a second time duration threshold (see at least, “In the preferred embodiment, the preset for the idle timer at 44 in FIG. 1 is set to a value representing 8 seconds. So long as there is system activity generating input signals to the system event selector 36, the selector 36 will be producing periodic pulses on the SYSTEM EVENT line, which periodically restart the idle timer 42 and prevent it from expiring,” Belt Column 5 Lines 14 – 20, “Following eight seconds of inactivity, the computer system 10 enters the idle mode, in which the system continues to operate but with certain power saving factors, such as having the processor 11 run at a slower clock speed. The idle mode is entirely transparent to the user, in that there is no visible sign to the user that the system has entered or exited idle mode. If certain events occur while the system is in idle mode, for example if the user presses a key or the executing program reaches a portion where it updates the video display, the system will initiate an exit from the idle mode. On the other hand, if the system remains in the idle mode for a predetermined period of time, then the system will automatically transition to a standby mode, which in and of itself is conventional. In the standby mode, various system peripherals are shifted to a low power state, the backlight for the video display is turned off, and the processor 11 is halted,” Belt Column 5 Lines 30 – 46).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the aforementioned power reduction features of Belt based on the degree of user interaction in the invention of Denis, thereby offering an additional technique that “is advantageous in that it may reduce the power consumption and the processing of the audio rendering device 10,” Denis Column 23 Lines 22 – 24.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH SAUNDERS whose telephone number is (571)270-1063. The examiner can normally be reached Monday-Thursday, 9:00 a.m. - 4 p.m., EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn R Edwards, can be reached at (571)270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH SAUNDERS JR/Primary Examiner, Art Unit 2692 /CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692
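The two-threshold scheme the examiner cites from Belt (Column 5) is the core of the §103 combination for claims 13, 14, 15, and 18: inactivity beyond a first time duration threshold (8 seconds in Belt's preferred embodiment) puts the system into idle mode, and inactivity beyond a second, longer threshold puts it into standby, with any user event restarting the timer. A minimal sketch of that state machine, for orientation only: the 60-second standby value and all identifiers are assumptions, since Belt specifies only "a predetermined period of time" for the second threshold.

```python
# Illustrative sketch of Belt's two-threshold power-state scheme (Col. 5).
# Threshold names and the standby value are assumptions for illustration;
# Belt gives 8 s for the first threshold and leaves the second unspecified.

IDLE_THRESHOLD_S = 8.0      # "first time duration threshold" (Belt: 8 seconds)
STANDBY_THRESHOLD_S = 60.0  # "second time duration threshold" (value assumed)

def power_state(seconds_inactive: float) -> str:
    """Map elapsed inactivity to the power state Belt describes."""
    if seconds_inactive < IDLE_THRESHOLD_S:
        return "active"    # normal operation
    if seconds_inactive < STANDBY_THRESHOLD_S:
        return "idle"      # slower clock; transparent to the user
    return "standby"       # peripherals low power, backlight off, CPU halted

def on_system_event() -> float:
    """A keypress or display update restarts the idle timer."""
    return 0.0  # elapsed inactivity resets to zero
```

Under this reading, the claimed "degree of user interaction" maps onto which threshold the inactivity timer has crossed, which is how the rejection bridges Denis's interaction-based prioritization to Belt's timers.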

Prosecution Timeline

Jul 30, 2024
Application Filed
Mar 14, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596883
Audio Analysis for Text Generation
2y 5m to grant Granted Apr 07, 2026
Patent 12598420
AUDIO DEVICE WITH ELECTROSTATIC DISCHARGE PROTECTION
2y 5m to grant Granted Apr 07, 2026
Patent 12593190
User Experience Localizing Binaural Sound During a Telephone Call
2y 5m to grant Granted Mar 31, 2026
Patent 12585425
Light-function audio parameters
2y 5m to grant Granted Mar 24, 2026
Patent 12585422
DATA PROCESSING METHOD OF PROCESSING MULTITRACK AUDIO DATA AND DATA PROCESSING APPARATUS
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
93%
With Interview (+20.6%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 740 resolved cases by this examiner. Grant probability derived from career allow rate.
