Detailed Action
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. See 35 U.S.C. § 100 (note).
Art Rejections
Anticipation
The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1–20 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by US 2025/0362865 (filed 24 May 2024) (“Nash”).
Claim 1 is drawn to “a system.” The following table illustrates the correspondence between the claimed system and the Nash reference.
Claim 1
The Nash Reference
“1. A system comprising:
The Nash reference similarly describes an audio arbitration system 100 and method that adjusts audio playback of different types of audio based on the changing state of a vehicle 102. Nash at Abs., ¶¶ 6, 7, FIG.4.
“one or more processors; and
“one or more non-transitory computer-readable media storing instructions executable by the one or more processors, wherein the instructions, when executed, cause the system to perform operations comprising:
Nash’s system includes a computer 104 that includes a processor and memory with instructions that are executed by the processor to control the output of audio by audio devices 110. Id. at ¶¶ 29, 31, 33, FIG.1.
“determining a first state of a vehicle…
“determining that the vehicle has changed from the first state to a second state;
In order to control audio output by audio devices 110, Nash’s computer 104 monitors sensors 106 to determine the state of vehicle 102. Id. at ¶¶ 32, 36, 43, 74, 79–82. The vehicle’s state is monitored over time to detect changes from a first state to a second state. See id. For example, Nash detects changes in the vehicle’s motion state. Id.
“receiving first audio data for playback by the vehicle, the first audio data having a first audio priority;
“receiving second audio data for playback by the vehicle, the second audio data having a second audio priority…
Likewise, Nash’s computer 104 receives first and second content data 300, 302, including visual and audio data. Id. at ¶¶ 37–41, 52. The first content data 300 may correspond to content reproduced by one of displays 108-1 and 108-2 and audio devices 110. Id. Computer 104 assigns a priority to each audio data. Id. at ¶¶ 41, 59.
“determining, based at least in part on the first audio priority, the second audio priority, and that the vehicle has changed state, to modify a playback volume of the first audio data relative to the second audio data to create a modified playback volume; and
Based on the vehicle’s state, Nash’s computer 104 adjusts the volume of audio associated with first and second content data 300, 302. Id. at ¶¶ 59–72, FIG.4. For example, if a vehicle is in a first state (e.g., not moving), musical content is prioritized and amplified more than other audio. However, the music may be muted to give priority to a warning when the vehicle is in a second state (e.g., moving abnormally). Id. at ¶¶ 80–83.
“causing the first audio data to be played by the vehicle based at least in part on the modified playback volume.”
Similarly, Nash’s computer 104 outputs volume-modulated audio using audio devices 110. Id.
Table 1
For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
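For illustration only, the priority-based arbitration described in both the claim and the cited portions of Nash can be sketched as follows. The function, class, state names, and muting rule below are hypothetical and appear in neither the claims nor the Nash reference; the sketch merely shows one way two prioritized audio streams could be volume-arbitrated on a vehicle state change.

```python
# Hypothetical sketch of priority-based audio arbitration; not drawn
# from the claims or the Nash reference.

from dataclasses import dataclass

@dataclass
class AudioStream:
    name: str
    priority: int   # higher value = higher priority
    volume: float   # 0.0 (muted) .. 1.0 (full)

def arbitrate(streams, vehicle_state):
    """Mute lower-priority streams; elevate warnings in an abnormal-motion state."""
    # Hypothetical rule: a warning takes top priority when the vehicle
    # is detected moving abnormally.
    for s in streams:
        if vehicle_state == "abnormal_motion" and s.name == "warning":
            s.priority = max(st.priority for st in streams) + 1
    top = max(s.priority for s in streams)
    for s in streams:
        # Play only the highest-priority stream at full volume.
        s.volume = 1.0 if s.priority == top else 0.0
    return streams

streams = [AudioStream("music", priority=2, volume=1.0),
           AudioStream("warning", priority=1, volume=1.0)]
arbitrate(streams, "abnormal_motion")
```

Under this sketch, the state change alone flips which stream is audible, mirroring the claimed determination "based at least in part on ... that the vehicle has changed state."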
Claim 2 depends on claim 1, and further requires the following:
“wherein the first state is one or more states of a set of states comprising a driving state, a parked state, an idling state, a stopped state, a starting state, an ingress state, an egress state, an approaching destination state, a takeoff state, a seatbelt unfastened state, a reversing state, a proximal to other vehicles state, a proximal to pedestrians state, a proximal to emergency vehicles state, and the second state is one or more different states, or one more fewer states, of the set of states than the first state.”
Similarly, the Nash reference describes detecting the state of a vehicle and transitions from a first state to a second state including normal (e.g., driving) movement, abnormal movement (e.g., lane violations) and parking. Nash at ¶¶ 79–83. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 3 depends on claim 1, and further requires the following:
“wherein the instructions further cause the system to perform actions comprising:
“determining, based at least in part on the first audio data, a first audio category indicating a category of the first audio data;
“determining, based at least in part on the second audio data, a second audio category indicating a type of the second audio data;
“determining, based at least in part on the first audio category, the first audio priority; and
“determining, based at least in part on the second audio category, the second audio priority.”
The Nash reference similarly determines and uses the category of audio content 300, 302 to determine audio priorities. Nash at ¶¶ 59, 64–74. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 4 depends on claim 1, and further requires the following:
“wherein the instructions further cause the system to perform actions comprising:
“determining, based at least in part on the first audio data, a first semantic content indicating information communicated in the first audio data;
“determining, based at least in part on the second audio data, a second semantic content indicating information communicated f [sic, in] the second audio data;
“determining, based at least in part on the first content type, the first audio priority; and
“determining, based at least in part on the second content type, the second audio priority.”
The Nash reference similarly determines and uses the category of audio content 300, 302 to determine audio priorities. Nash at ¶¶ 59, 64–74. The categories include sub-categories recognized through audio analysis or speech analysis. Id. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 5 is drawn to “a method.” The following table illustrates the correspondence between the claimed method and the Nash reference.
Claim 5
The Nash Reference
“5. A method comprising:
The Nash reference similarly describes an audio arbitration system 100 and method that adjusts audio playback of different types of audio based on the changing state of a vehicle 102. Nash at Abs., ¶¶ 6, 7, FIG.4.
“determining first vehicle data;
In order to control audio output by audio devices 110, Nash’s computer 104 monitors sensors 106 to determine the state of vehicle 102. Id. at ¶¶ 32, 36, 43, 74, 79–82. The vehicle’s state is monitored over time to detect changes from a first state to a second state. See id. For example, Nash detects changes in the vehicle’s motion state. Id.
“obtaining first audio data;
“obtaining second audio data;
Likewise, Nash’s computer 104 receives first and second content data 300, 302, including visual and audio data. Id. at ¶¶ 37–41, 52. The first content data 300 may correspond to content reproduced by one of displays 108-1 and 108-2 and audio devices 110. Id. Computer 104 assigns a priority to each audio data. Id. at ¶¶ 41, 59.
“determining, based at least in part on the first vehicle data, a first audio priority associated with the first audio data;
“determining a second audio priority associated with the second audio data;
“determining, based at least in part on data the first audio priority and the second audio priority, a relative volume of the first audio data to the second audio; and
Based on the vehicle’s state, Nash’s computer 104 adjusts the volume of audio associated with first and second content data 300, 302. Id. at ¶¶ 59–72, FIG.4. For example, if a vehicle is in a first state (e.g., not moving), musical content is prioritized and amplified more than other audio. However, the music may be muted to give priority to a warning when the vehicle is in a second state (e.g., moving abnormally). Id. at ¶¶ 80–83.
“causing the vehicle to play the first audio data at the relative volume to the second audio data.”
Similarly, Nash’s computer 104 outputs volume-modulated audio using audio devices 110. Id.
Table 2
For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 6 depends on claim 5, and further requires the following:
“further comprising:
“determining, based at least in part on the first audio data, a first audio category associated with the first audio data;
“determining, based at least in part on the second audio data, a second audio category associated with the second audio data;
“determining, based at least in part on the first audio category, the first audio priority; and
“determining, based at least in part on the second audio category, the second audio priority.”
The Nash reference similarly determines and uses the category of audio content 300, 302 to determine audio priorities. Nash at ¶¶ 59, 64–74. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 7 depends on claim 5, and further requires the following:
“further comprising:
“determining, based at least in part on the first audio data, a first semantic content indicating information communicated in the first audio data;
“determining, based at least in part on the second audio data, a second semantic content indicating information communicated in the second audio data;
“determining, based at least in part on the first semantic content, the first audio priority; and
“determining, based at least in part on the second semantic content, the second audio priority.”
The Nash reference similarly determines and uses the category of audio content 300, 302 to determine audio priorities. Nash at ¶¶ 59, 64–74. The categories include sub-categories recognized through audio analysis or speech analysis. Id. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 8 depends on claim 5, and further requires the following:
“wherein causing the vehicle to play the first audio data at the relative volume to the second audio data is based at least in part on:
“determining a location of a user of the vehicle;
“determining, for a plurality of speakers of the vehicle and based at least in part on the relative volume, a plurality of gains; and
“causing the plurality of speakers to emit the first and second audio data based at least in part on the plurality of gains.”
Audio priorities are set for different zones in a vehicle, such that the priority for each piece of content 300, 302 for each audio device 110 (e.g., speakers in different cabin locations) is based on the user’s location (i.e., determined by a camera) relative to the audio devices. Nash at ¶¶ 36, 42–44, 55, 69. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
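For illustration only, the per-speaker gain determination recited in the claim can be sketched as below. The function name, the inverse-distance weighting, and the coordinate layout are hypothetical assumptions, not features taken from the claims or the Nash reference; the sketch shows one plausible mapping from a user's cabin location to a plurality of speaker gains.

```python
# Hypothetical sketch of location-based per-speaker gain selection;
# not drawn from the claims or the Nash reference.

import math

def speaker_gains(user_xy, speaker_positions, base_volume=1.0):
    """Weight each speaker's gain by proximity to the user (closer = louder)."""
    weights = []
    for x, y in speaker_positions:
        dist = math.hypot(x - user_xy[0], y - user_xy[1])
        weights.append(1.0 / (1.0 + dist))  # simple inverse-distance weighting
    peak = max(weights)
    # Normalize so the nearest speaker plays at the base volume.
    return [base_volume * w / peak for w in weights]

# User at the origin; one near speaker and one far speaker.
gains = speaker_gains((0.0, 0.0), [(0.0, 0.5), (2.0, 0.5)])
```

In this sketch the nearest speaker receives the full base volume and more distant speakers are attenuated, consistent with determining "a plurality of gains" based on the user's location.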
Claim 9 depends on claim 5, and further requires the following:
“further comprising:
“determining that the first or second audio data is intended for a specific user of the vehicle;
“determining, using a sensor of the vehicle, a first location of the specific user in relation to the vehicle; and
“controlling, the vehicle to play the first or second audio data by controlling an audio speaker based at least in part on the first location.”
Audio priorities are set for different zones in a vehicle, such that the priority for each piece of content 300, 302 for each audio device 110 (e.g., speakers in different cabin locations) is based on the user who selected the content and the user’s location. Nash at ¶¶ 42–44, 55, 69. Users are located using sensors 106, such as cameras. Id. at ¶ 36. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 10 depends on claim 5, and further requires the following:
“wherein the first vehicle data comprises a first state of the vehicle.”
Claim 11 depends on claim 10, and further requires the following:
“wherein the first state is one or more states of a set of states comprising a driving state, a parked state, an idling state, a stopped state, a starting state, an ingress state, an egress state, an approaching destination state, a takeoff state, a reversing state, a proximal to other vehicles state, a proximal to pedestrians state, a proximal to emergency vehicles state.”
Claim 12 depends on claim 10, and further requires the following:
“further comprising:
“determining that the vehicle has changed from the first state to a second state wherein the second state is different from the first state; and
“determining, based at least in part on the second state of the vehicle, the first audio priority, whereby the relative volume of the first audio data to the second audio is different at the second vehicle state than at the first vehicle state.”
Claims 10–12 are treated together because they commonly relate to detecting a vehicle state and changes in vehicle state. Similarly, the Nash reference describes adjusting priorities assigned to content 300 and 302 by detecting the state of a vehicle and transitions from a first state to a second state including normal (e.g., driving) movement, abnormal movement (e.g., lane violations) and parking. Nash at ¶¶ 79–83. For the foregoing reasons, the Nash reference anticipates all limitations of the claims.
Claim 13 depends on claim 5, and further requires the following:
“further comprising:
“determining the first audio priority associated with the first audio data, based at least in part on one or more of: a presence of a user of the vehicle, or a location of the user in relation to the vehicle.”
Audio priorities are set for different zones in a vehicle, such that the priority for each piece of content 300, 302 is based on the presence of a user in a vehicle and the user’s location. Nash at ¶¶ 42–44, 69. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 14 depends on claim 5, and further requires the following:
“further comprising:
“determining, using a sensor of the vehicle, an ambient sound level proximate the vehicle; and
“controlling, based at least in part on the ambient sound level, a playback volume for the first audio data or the second audio data.”
Similarly, Nash describes using a microphone sensor 106 to detect speech near vehicle 102. Nash at ¶ 74. If the speech is identified as a desired conversation, Nash then adjusts content 300, 302 accordingly to prevent the content audio from interfering with the conversation. Id. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 15 depends on claim 5, and further requires the following:
“further comprising:
“determining, based at least in part on at least in part of the higher audio priority of the first and second audio priorities, to one or more of attenuate or mute a playback volume of the audio data associated with lower audio priority.”
Nash similarly describes lowering the volume or muting lower priority audio associated with content 300, 302. Nash at ¶¶ 57, 63, 72, 74, 83, 86–88. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 16 is drawn to “one or more non-transitory computer-readable media.” The following table illustrates the correspondence between the claimed media and the Nash reference.
Claim 16
The Nash Reference
“16. One or more non-transitory computer-readable media storing instructions executable by one or more processors, wherein the instructions, when executed, cause the one or more processors to perform operations comprising:
The Nash reference similarly describes an audio arbitration system 100 and method that adjusts audio playback of different types of audio based on the changing state of a vehicle 102. Nash at Abs., ¶¶ 6, 7, FIG.4. Nash’s system includes a computer 104 that includes a processor and memory with instructions that are executed by the processor to control the output of audio by audio devices 110. Id. at ¶¶ 29, 31, 33, FIG.1.
Nash’s computer 104 receives first and second content data 300, 302, including visual and audio data. Id. at ¶¶ 37–41, 52. The first content data 300 may correspond to content reproduced by one of displays 108-1 and 108-2 and audio devices 110. Id. Computer 104 assigns a priority to each audio data. Id. at ¶¶ 41, 59.
“determining, based at least in part on the first audio data, a first audio category associated with the first audio data;
“determining, based at least in part on the second audio data, a second audio category associated with the second audio data;
“determining, based at least in part on the first audio category, the first audio priority; and
“determining, based at least in part on the second audio category, the second audio priority.”
The Nash reference similarly determines and uses the category of audio content 300, 302 to determine audio priorities. Id. at ¶¶ 59, 64–74. The categories include sub-categories recognized through audio analysis or speech analysis. Id.
Table 3
For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 17 depends on claim 16, and further requires the following:
“wherein the instructions, when executed, cause the one or more processors to further perform operations comprising:
“determining, based at least in part on the first audio data, a first audio category associated with the first audio data;
“determining, based at least in part on the second audio data, a second audio category associated with the second audio data;
“determining, based at least in part on the first audio category, the first audio priority; and
“determining, based at least in part on the second audio category, the second audio priority.”
The Nash reference similarly determines and uses the category of audio content 300, 302 to determine audio priorities. Nash at ¶¶ 59, 64–74. The categories include sub-categories recognized through audio analysis or speech analysis. Id. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 18 depends on claim 16, and further requires the following:
“wherein the instructions, when executed, cause the one or more processors to further perform operations comprising:
“determining a location of a user of the vehicle;
“determining, for a plurality of speakers of the vehicle and based at least in part on the relative volume, a plurality of gains; and
“causing the plurality of speakers to emit the first and second audio data based at least in part on the plurality of gains.”
Audio priorities are set for different zones in a vehicle, such that the priority for each piece of content 300, 302 for each audio device 110 (e.g., speakers in different cabin locations) is based on the user’s location (i.e., determined by a camera) relative to the audio devices. Nash at ¶¶ 36, 42–44, 55, 69. For the foregoing reasons, the Nash reference anticipates all limitations of the claim.
Claim 19 depends on claim 16, and further requires the following:
“wherein the instructions, when executed, cause the one or more processors to further perform operations comprising:
“determining, based on the vehicle data, a first state of the vehicle.”
Claim 20 depends on claim 19, and further requires the following:
“wherein the instructions, when executed, cause the one or more processors to further perform operations comprising:
“determining that the vehicle has changed from the first state to a second state wherein the second state is different from the first state; and
“determining, based at least in part on the second state of the vehicle, the first audio priority, whereby the relative volume of the first audio data to the second audio is different at the second vehicle state than at the first vehicle state.”
Claims 19–20 are treated together because they commonly relate to detecting a vehicle state and changes in vehicle state. Similarly, the Nash reference describes adjusting priorities assigned to content 300 and 302 by detecting the state of a vehicle and transitions from a first state to a second state including normal (e.g., driving) movement, abnormal movement (e.g., lane violations) and parking. Nash at ¶¶ 79–83. For the foregoing reasons, the Nash reference anticipates all limitations of the claims.
Summary
Claims 1–20 are rejected under at least one of 35 U.S.C. §§ 102 and 103 as being unpatentable over the cited prior art. In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.
Claim Objections
Claim 4 includes a minor typographical error highlighted below:
“determining, based at least in part on the second audio data, a second semantic content indicating information communicated f [sic, in] the second audio data.”
Claim 17 is objected to because it does not materially limit the scope of its parent claim 16. 37 C.F.R. § 1.75(c); MPEP § 608.01(n). Instead, claim 17 repeats verbatim limitations of claim 16. Appropriate correction is required. No new matter may be entered.
Additional Citations
The following table lists additional references that were identified during a search of this Application. While this Office action does not rely on these references, they are relevant to the subject matter disclosed and claimed. The Examiner advises careful consideration of these references in preparing a reply.
Citation
Relevance
US 2023/0118803
Priority set based on audio content type and semantics. Adjusts volume based on priority.
US 2021/0256952
Determines priority based on sound level of ambient audio.
Table 4
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WALTER F BRINEY III whose telephone number is (571)272-7513. The examiner can normally be reached M-F 8 am-4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Walter F Briney III/
/CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692
Walter F Briney III
Primary Examiner, Art Unit 2692
2/24/2026