DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/07/2024 has been considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 8-14, and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Anders et al. (US Pub. No. 2019/0349683, hereinafter Anders).
Regarding claim 1, Anders teaches a vehicle audio system (Fig 1, volume adjustment computing environment 100), comprising: at least one memory (Fig 4, computer readable storage media 908); and at least one processor coupled with the at least one memory (Fig 4, processor 902 coupled to computer readable storage media 908) and configured to cause the vehicle audio system in a vehicle to: detect a location of a first person in the vehicle (Fig 2 & ¶ [0034], step 202 detect internet of things devices associated with a user within established zones), the first person located in proximity of a first speaker device configured for audio output (Fig 3, users located in proximity to zone speakers 302-318); detect an additional location of a second person in the vehicle (Fig 2 & ¶ [0034], step 202 detect one or more internet of things devices associated with one or more users within established zones), the second person located closer to a second speaker device than the first speaker device (Fig 3, each user located closer to their specific zone speaker and further from the other zone speakers), the second speaker device configured for the audio output (¶ [0023], each area has a specific speaker defined for output to that specific area); determine a preference for the first person related to the audio output (¶ [0015], identify customized volume preference of a user (first user)) and an additional preference for the second person related to the audio output (Abstract & ¶ [0015], capable of detecting action state of one or more users including associated volume settings of one or more users (second user)); and adjust a volume of at least one of the first speaker device or the second speaker device (Fig 2 & ¶ [0048], step 206 adjust audio volume of zones based on volume settings associated with detected internet of things devices associated with a user).
Regarding claim 2, Anders teaches the vehicle audio system of claim 1, wherein the at least one processor is configured to cause the vehicle audio system to identify a type of the audio output and determine whether the preference of the first person or the additional preference of the second person is favorable toward the type of the audio output (¶ [0056-0058], audio sources can be selected and audio output is based on user preferences for that specific audio source).
Regarding claim 3, Anders teaches the vehicle audio system of claim 1, wherein the at least one processor is configured to cause the vehicle audio system to determine whether the first person is asleep (¶ [0054], audio volume adjustment program 120 can detect if an infant (user) has fallen asleep).
Regarding claim 4, Anders teaches the vehicle audio system of claim 3, wherein the at least one processor is configured to cause the vehicle audio system to lower the volume of the first speaker device based on the first person being asleep (¶ [0054], volume adjusted based on pre-configured preferences after detecting an infant (user) has fallen asleep).
Regarding claim 5, Anders teaches the vehicle audio system of claim 1, wherein the at least one processor is configured to cause the vehicle audio system to use facial recognition to detect the location of the first person in the vehicle or the additional location of the second person in the vehicle (¶ [0040], user association module 124 can use image recognition).
Regarding claim 6, Anders teaches the vehicle audio system of claim 1, wherein the preference of the first person or the additional preference of the second person is predetermined based on a user input (¶ [0031], audio volume adjustment program 120 may receive input from a user).
Regarding claim 8, Anders teaches the vehicle audio system of claim 1, wherein the at least one processor is configured to cause the vehicle audio system to further adjust the volume of at least one of the first speaker device or the second speaker device in response to a change in the audio output (¶ [0058], audio volume adjustment program 120 allows users to control output volume based on output type such as navigation instructions or radio).
Regarding claim 9, Anders teaches a method (Abstract), comprising: determining a preference related to audio output for a first person located in a vehicle (¶ [0015], identify customized volume preference of a user (first user)); determining an additional preference related to the audio output for a second person located in the vehicle (Abstract & ¶ [0015], capable of detecting action state of one or more users including associated volume settings of one or more users (second user)); detecting a location of the first person and the second person relative to a plurality of speaker devices in the vehicle (Fig 2 & ¶ [0034], step 202 detect internet of things devices associated with a user within established zones); and adjusting a volume of a first speaker device located in proximity of the first person (Fig 2 & ¶ [0048], step 206 adjust audio volume of zones based on volume settings associated with detected internet of things devices associated with a user).
Regarding claim 10, Anders teaches the method of claim 9, further comprising identifying a type of the audio output and determining whether the preference of the first person or the additional preference of the second person is favorable toward the type of the audio output (¶ [0056-0058], audio sources can be selected and audio output is based on user preferences for that specific audio source).
Regarding claim 11, Anders teaches the method of claim 9, further comprising determining whether the first person is asleep (¶ [0054], audio volume adjustment program 120 can detect if an infant (user) has fallen asleep).
Regarding claim 12, Anders teaches the method of claim 11, further comprising lowering the volume of the first speaker device based on the first person being asleep (¶ [0054], volume adjusted based on pre-configured preferences after detecting an infant (user) has fallen asleep).
Regarding claim 13, Anders teaches the method of claim 9, further comprising using facial recognition to detect the location of one or more of the first person in the vehicle or the location of the second person in the vehicle (¶ [0040], user association module 124 can use image recognition).
Regarding claim 14, Anders teaches the method of claim 9, wherein at least one of the preference of the first person or the additional preference of the second person is predetermined based on a user input (¶ [0031], audio volume adjustment program 120 may receive input from a user).
Regarding claim 16, Anders teaches the method of claim 9, further comprising adjusting the volume of at least one of the first speaker device or the second speaker device in response to a change in the audio output (¶ [0058], audio volume adjustment program 120 allows users to control output volume based on output type such as navigation instructions or radio).
Regarding claim 17, Anders teaches a system (Fig 1, volume adjustment computing environment 100), comprising: one or more speaker devices in a vehicle (Fig 3, zone speakers 302-318), the one or more speaker devices configured for audio output (¶ [0023], each area has a specific speaker defined for output to that specific area); and a processor configured to implement an audio playback manager (Fig 4, processor 902) to: detect a location of a person in the vehicle (Fig 2 & ¶ [0034], step 202 detect internet of things devices associated with a user within established zones), the person located in proximity of a speaker device of the one or more speaker devices (Fig 3, users located in proximity to zone speakers 302-318); determine a preference for the person related to the audio output (¶ [0015], identify customized volume preference of a user); and adjust a volume of the speaker device based on the preference of the person for the audio output (Fig 2 & ¶ [0048], step 206 adjust audio volume of zones based on volume settings associated with detected internet of things devices associated with a user).
Regarding claim 18, Anders teaches the system of claim 17, wherein the audio playback manager is configured to identify a type of the audio output and determine whether the preference of the person is favorable toward the type of the audio output (¶ [0056-0058], audio sources can be selected and audio output is based on user preferences for that specific audio source).
Regarding claim 19, Anders teaches the system of claim 17, wherein the audio playback manager is configured to determine whether the person is asleep (¶ [0054], audio volume adjustment program 120 can detect if an infant (user) has fallen asleep) and to lower the volume of the speaker device based on the person being asleep (¶ [0054], volume adjusted based on pre-configured preferences after detecting an infant (user) has fallen asleep).
Regarding claim 20, Anders teaches the system of claim 19, wherein the audio playback manager is configured to further adjust the volume of the speaker device in response to a change in the audio output (¶ [0058], audio volume adjustment program 120 allows users to control output volume based on output type such as navigation instructions or radio).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Anders et al. (US Pub. No. 2019/0349683, hereinafter Anders) as applied to the claims above, and further in view of Bharitkar (US Pub. No. 2024/0276143, hereinafter Bharitkar).
Regarding claim 7, Anders teaches the vehicle audio system of claim 1.
Anders does not explicitly teach a user preference determined by a machine learning model based on prior audio playback.
Bharitkar teaches a user preference determined by a machine learning model based on prior audio playback (See Bharitkar ¶ [0028], machine learning model used to estimate peak-level amplitude to determine content-adaptive gain of the input audio signal).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have implemented a machine learning model as taught by Bharitkar with the vehicle audio system taught by Anders. Doing so allows for cost-effective, real-time customization of a user’s audio preferences, thereby improving the user experience.
Regarding claim 15, Anders teaches the method of claim 9.
Anders does not explicitly teach a user preference determined by a machine learning model based on prior audio playback.
Bharitkar teaches a user preference determined by a machine learning model based on prior audio playback (See Bharitkar ¶ [0028], machine learning model used to estimate peak-level amplitude to determine content-adaptive gain of the input audio signal).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have implemented a machine learning model as taught by Bharitkar with the method taught by Anders. Doing so allows for cost-effective, real-time customization of a user’s audio preferences, thereby improving the user experience.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Golsch (US Patent No. 11,548,517) teaches vehicle function activation based on vehicle occupancy location.
Sanji et al. (US Pub. No. 2020/0269809) teaches a vehicle occupant information acquisition system.
Gean et al. (US Pub. No. 2023/0074058) teaches an audio entertainment system using an adaptive audio profile.
Gu et al. (US Pub. No. 2023/0356729) teaches customized vehicle settings for occupants based on identities and locations.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TYLER LIEBGOTT whose telephone number is (703)756-1818. The examiner can normally be reached Mon-Fri 10-6:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan Tsang, can be reached at (571)272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.M.L./Examiner, Art Unit 2694
/FAN S TSANG/Supervisory Patent Examiner, Art Unit 2694