DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to because the drawings contain illegible numbers and/or letters (see 37 CFR 1.84(l) and 1.84(p)). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-3, 7, 9-12, 16, and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US 2015/0023521 A1 and hereafter Jones) in view of Kobayashi (US 2017/0280261 A1).
Regarding claim 1, Jones teaches
“A non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations” (see Jones, figure 1, units 62 and 64 and ¶ 0056) “comprising:
receiving, while a front speaker and a rear speaker that are in communication with an amplifier output audio from a first application, a request to output audio from a second application” (see Jones, figure 1, units 11, 60a-b, 62, 64, 100’, 200’, and 300’, and ¶ 0055-0059, 0091, and 0100, where an audio system includes front speakers and rear speakers that are in communication with an amplifier, such that the speakers output audio from a first application, such as any one of the plurality of audio sources taught by Jones; and see Jones, ¶ 0055, 0077-0079, and 0091, where a signal is currently being played on the speakers and a request for a new audio signal is received from another application, such as any other one of the plurality of audio sources taught by Jones);
“retrieving, for each of the first application and the second application, information representing a resource requirement” (see Jones, ¶ 0079-0084, where the new signal is identified such that the audio system will be configured for its processing and output);
“determining whether a head unit and an amplifier have sufficient resources to meet the resource requirement of each application for simultaneous audio output” (see Jones, figure 1, units 10’, 20’, 30’, and 60a-b, and ¶ 0057-0059 and 0084, where the audio system checks the availability to play a new audio signal, where the availability is based on the audio system’s number of processing channels, the priority rating of an audio signal, and/or the number of available speakers, because Jones teaches an exemplary embodiment with three processing channels and various speakers);
“instructing, in response to determining that the head unit and the amplifier have sufficient resources for simultaneous audio output, the amplifier to modify a first channel to remove one of a front speaker input or a rear speaker input” (see Jones, ¶ 0055, 0059, 0066, 0079, 0081, 0084, 0091, and 0097, where Jones teaches audio signals are issued to all speakers in the vehicle, only front speakers, only rear speakers, etc.; Jones teaches the audio system determines that a new or available channel exists, such that Jones teaches a scenario where different audio signals are output to the front speakers and rear speakers, and therefore the system is configurable with instructions to change the audio output for a first application from all speakers to only the front speakers while adding the new audio output for the second application to only the rear speakers); [and]
“instructing the amplifier to associate a second channel with the second application for output to the one of the front speaker input or the rear speaker input that was removed from the first channel” (see Jones, ¶ 0079, 0081-0082, 0084, 0091, and 0097, where the audio system is configured to output different audio to the front speakers and the rear speakers, such that the system outputs the second application audio with the rear speakers).
However, Jones does not appear to teach the feature for “instructing the second application to activate audio transmission”.
Kobayashi discloses an audio reproduction apparatus and system, such as a car audio system including inter-equipment communication with wired and wireless interfaces and/or protocols (see Kobayashi, abstract and ¶ 0002-0003 and 0008). Kobayashi teaches a smartphone connected to the car audio apparatus using a wired or wireless connection (see Kobayashi, figure 1, units 6 and 8, figure 2, units 32 and 34, and ¶ 0031 and 0055-0057), and the car audio apparatus ranks the different wired and wireless protocols according to playback quality, such that the communication between the smartphone and the car audio apparatus is performed with the best available playback quality (see Kobayashi, figure 6, units 53 and 55-56, and ¶ 0078-0080 and 0085-0087). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jones with the teachings of Kobayashi to provide the best audio playback quality when playing audio from a smartphone or similar device (see Jones, ¶ 0055 in view of Kobayashi, ¶ 0008-0009 and 0013 and figure 1, units 6 and 8).
Herein, the combination of Jones and Kobayashi makes it obvious to perform the operation of
“instructing the second application to activate audio transmission” (see Jones, ¶ 0078-0082, 0084, 0091, and 0097 in view of Kobayashi, figure 1, units 6, 8, and signal B, figure 10, and ¶ 0115 and 0123, because Kobayashi makes it obvious to use a smartphone as an audio source, where the system instructs the smartphone to start sound playback by sending a remote control command signal, and it is obvious to start playback after the car audio system has been appropriately configured for the new application audio).
Regarding claim 2, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “computer-readable medium of claim 1, wherein the resource requirement includes an interface resource requirement and a decoding resource requirement” (see Jones, ¶ 0088 in view of Kobayashi, ¶ 0104, where a request to communicate audio is rejected when a higher quality interface exists).
Regarding claim 3, see the preceding rejection with respect to claim 2 above. The combination makes obvious the “computer-readable medium of claim 2, wherein the decoding resource requirement includes a hardware decoder requirement” (see Jones, ¶ 0066, 0071, and 0080-0084, where the system determines settings based on the type or format of a supported digital audio signal, and makes it obvious that a resource requirement includes a file format or decoder requirement for playing the audio signal in order to avoid processing incompatible audio files).
Regarding claim 7, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “computer-readable medium of claim 1, wherein the operations further comprise:
retrieving user preference information” (see Jones, ¶ 0079-0081 and 0088, where users dynamically configure the processing of an audio source, such that stored settings for an audio source are user preferred settings); and
“determining whether the user preference information prohibits simultaneous audio output” (see Jones, ¶ 0084, where the system checks the availability to play the new audio signal, such that there are some instances where the new audio signal will not play because there is no availability in the system for playback).
Regarding claim 9, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “computer-readable medium of claim 1, wherein the second channel has at least one of an audio bitrate, a sampling rate, a buffer size, or an equalizer setting corresponding to the second application” (see Jones, ¶ 0066, 0071, 0080-0081, and 0088, where at least an equalizer setting is stored for a second application, and it is clear that uncompressed and compressed digital audio signals have an associated bitrate and/or sampling rate).
Regarding claim 10, see the preceding rejection with respect to claim 1 above. As stated above with respect to claim 1, Jones does not appear to teach the feature for “instructing the second application to activate audio transmission”. Kobayashi makes this feature obvious, and it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jones with the teachings of Kobayashi to provide the best audio playback quality when playing audio from a smartphone or similar device (see Jones, ¶ 0055 in view of Kobayashi, ¶ 0008-0009 and 0013 and figure 1, units 6 and 8).
Therefore, the combination of Jones and Kobayashi makes obvious:
“A method comprising:
receiving, while a front speaker and a rear speaker that are in communication with an amplifier output audio from a first application, a request to output audio from a second application” (see Jones, figure 1, units 11, 60a-b, 62, 64, 100’, 200’, and 300’, and ¶ 0055-0059, 0091, and 0100, where an audio system includes front speakers and rear speakers that are in communication with an amplifier, such that the speakers output audio from a first application, such as any one of the plurality of audio sources taught by Jones; and see Jones, ¶ 0055, 0077-0079, and 0091, where a signal is currently being played on the speakers and a request for a new audio signal is received from another application, such as any other one of the plurality of audio sources taught by Jones);
“retrieving, for each of the first application and the second application, information representing a resource requirement” (see Jones, ¶ 0079-0084, where the new signal is identified such that the audio system will be configured for its processing and output);
“determining whether a head unit and an amplifier have sufficient resources to meet the resource requirement of each application for simultaneous audio output” (see Jones, figure 1, units 10’, 20’, 30’, and 60a-b, and ¶ 0057-0059 and 0084, where the audio system checks the availability to play a new audio signal, where the availability is based on the audio system’s number of processing channels, the priority rating of an audio signal, and/or the number of available speakers, because Jones teaches an exemplary embodiment with three processing channels and various speakers);
“instructing, in response to determining that the head unit and the amplifier have sufficient resources for simultaneous audio output, the amplifier to modify a first channel to remove one of a front speaker input or a rear speaker input” (see Jones, ¶ 0055, 0059, 0066, 0079, 0081, 0084, 0091, and 0097, where Jones teaches audio signals are issued to all speakers in the vehicle, only front speakers, only rear speakers, etc.; Jones teaches the audio system determines that a new or available channel exists, such that Jones teaches a scenario where different audio signals are output to the front speakers and rear speakers, and therefore the system is configurable with instructions to change the audio output for a first application from all speakers to only the front speakers while adding the new audio output for the second application to only the rear speakers);
“instructing the amplifier to associate a second channel with the second application for output to the one of the front speaker input or the rear speaker input that was removed from the first channel” (see Jones, ¶ 0079, 0081-0082, 0084, 0091, and 0097, where the audio system is configured to output different audio to the front speakers and the rear speakers, such that the system outputs the second application audio with the rear speakers); and
“instructing the second application to activate audio transmission” (see Jones, ¶ 0078-0082, 0084, 0091, and 0097 in view of Kobayashi, figure 1, units 6, 8, and signal B, figure 10, and ¶ 0115 and 0123, because Kobayashi makes it obvious to use a smartphone as an audio source, where the system instructs the smartphone to start sound playback by sending a remote control command signal, and it is obvious to start playback after the car audio system has been appropriately configured for the new application audio).
Regarding claim 11, see the preceding rejection with respect to claim 10 above. The combination makes obvious the “method of claim 10, wherein the resource requirement includes an interface resource requirement and a decoding resource requirement” (see Jones, ¶ 0088 in view of Kobayashi, ¶ 0104, where a request to communicate audio is rejected when a higher quality interface exists).
Regarding claim 12, see the preceding rejection with respect to claim 11 above. The combination makes obvious the “method of claim 11, wherein the decoding resource requirement includes a hardware decoder requirement” (see Jones, ¶ 0066, 0071, and 0080-0084, where the system determines settings based on the type or format of a supported digital audio signal, and makes it obvious that a resource requirement includes a file format or decoder requirement for playing the audio signal in order to avoid processing incompatible audio files).
Regarding claim 16, see the preceding rejection with respect to claim 10 above. The combination makes obvious the “method of claim 10, further comprising:
retrieving user preference information” (see Jones, ¶ 0079-0081 and 0088, where users dynamically configure the processing of an audio source, such that stored settings for an audio source are user preferred settings); and
“determining whether the user preference information prohibits simultaneous audio output” (see Jones, ¶ 0084, where the system checks the availability to play the new audio signal, such that there are some instances where the new audio signal will not play because there is no availability in the system for playback).
Regarding claim 18, see the preceding rejection with respect to claim 10 above. The combination makes obvious the “method of claim 10, wherein the second channel has at least one of an audio bitrate, a sampling rate, a buffer size, or an equalizer setting corresponding to the second application” (see Jones, ¶ 0066, 0071, 0080-0081, and 0088, where at least an equalizer setting is stored for a second application, and it is clear that uncompressed and compressed digital audio signals have an associated bitrate and/or sampling rate).
Regarding claim 19, see the preceding rejection with respect to claim 1 above. As stated above with respect to claim 1, Jones does not appear to teach the feature for “instructing the second application to activate audio transmission”. Kobayashi makes this feature obvious, and it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jones with the teachings of Kobayashi to provide the best audio playback quality when playing audio from a smartphone or similar device (see Jones, ¶ 0055 in view of Kobayashi, ¶ 0008-0009 and 0013 and figure 1, units 6 and 8).
Therefore, the combination of Jones and Kobayashi makes obvious:
“A device comprising:
a controller including circuitry configured to perform operations” (see Jones, figure 1, units 62 and 64 and ¶ 0056) “including:
“receiving, while a front speaker and a rear speaker that are in communication with an amplifier output audio from a first application, a request to output audio from a second application” (see Jones, figure 1, units 11, 60a-b, 62, 64, 100’, 200’, and 300’, and ¶ 0055-0059, 0091, and 0100, where an audio system includes front speakers and rear speakers that are in communication with an amplifier, such that the speakers output audio from a first application, such as any one of the plurality of audio sources taught by Jones; and see Jones, ¶ 0055, 0077-0079, and 0091, where a signal is currently being played on the speakers and a request for a new audio signal is received from another application, such as any other one of the plurality of audio sources taught by Jones),
“retrieving, for each of the first application and the second application, information representing a resource requirement” (see Jones, ¶ 0079-0084, where the new signal is identified such that the audio system will be configured for its processing and output),
“determining whether a head unit and an amplifier have sufficient resources to meet the resource requirement of each application for simultaneous audio output” (see Jones, figure 1, units 10’, 20’, 30’, and 60a-b, and ¶ 0057-0059 and 0084, where the audio system checks the availability to play a new audio signal, where the availability is based on the audio system’s number of processing channels, the priority rating of an audio signal, and/or the number of available speakers, because Jones teaches an exemplary embodiment with three processing channels and various speakers),
“instructing, in response to determining that the head unit and the amplifier have sufficient resources for simultaneous audio output, the amplifier to modify a first channel to remove one of a front speaker input or a rear speaker input” (see Jones, ¶ 0055, 0059, 0066, 0079, 0081, 0084, 0091, and 0097, where Jones teaches audio signals are issued to all speakers in the vehicle, only front speakers, only rear speakers, etc.; Jones teaches the audio system determines that a new or available channel exists, such that Jones teaches a scenario where different audio signals are output to the front speakers and rear speakers, and therefore the system is configurable with instructions to change the audio output for a first application from all speakers to only the front speakers while adding the new audio output for the second application to only the rear speakers),
“instructing the amplifier to associate a second channel with the second application for output to the one of the front speaker input or the rear speaker input that was removed from the first channel” (see Jones, ¶ 0079, 0081-0082, 0084, 0091, and 0097, where the audio system is configured to output different audio to the front speakers and the rear speakers, such that the system outputs the second application audio with the rear speakers), and
“instructing the second application to activate audio transmission” (see Jones, ¶ 0078-0082, 0084, 0091, and 0097 in view of Kobayashi, figure 1, units 6, 8, and signal B, figure 10, and ¶ 0115 and 0123, because Kobayashi makes it obvious to use a smartphone as an audio source, where the system instructs the smartphone to start sound playback by sending a remote control command signal, and it is obvious to start playback after the car audio system has been appropriately configured for the new application audio).
Regarding claim 20, see the preceding rejection with respect to claim 19 above. The combination makes obvious the “device of claim 19, wherein the resource requirement includes an interface resource requirement and a decoding resource requirement” (see Jones, ¶ 0088 in view of Kobayashi, ¶ 0104, where a request to communicate audio is rejected when a higher quality interface exists).
Claim(s) 4-5 and 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Jones and Kobayashi as applied to claims 2 and 11 above, and further in view of Rathi et al. (US 2013/0332632 A1 and hereafter Rathi).
Regarding claim 4, see the preceding rejection with respect to claim 2 above. The combination of Jones and Kobayashi makes obvious the computer-readable medium of claim 2, but does not appear to teach or reasonably suggest the features “wherein the interface resource requirement includes a touchscreen requirement”.
Rathi teaches a holistic identification of an electronic device to facilitate interoperability between accessory devices and host devices, such as providing better interoperability between accessory devices and a vehicle’s console (see Rathi, abstract and ¶ 0002-0004). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jones and Kobayashi with the teachings of Rathi to improve interoperability between the devices by sharing device capabilities (see Rathi, ¶ 0005-0007 and 0038).
Therefore, the combination of Jones, Kobayashi, and Rathi makes obvious the “computer-readable medium of claim 2, wherein the interface resource requirement includes a touchscreen requirement” (see Jones, ¶ 0079-0084 in view of Rathi, figure 10, and ¶ 0101, 0110, 0125, and 0131-0134, where Rathi makes it obvious to share capabilities and determine if the host can provide the appropriate functionality to the accessory device, such that it is obvious to determine if a video display and/or touchscreen is available for interoperability).
Regarding claim 5, see the preceding rejection with respect to claims 2 and 4 above. The combination of Jones and Kobayashi makes obvious the computer-readable medium of claim 2, but does not appear to teach or reasonably suggest the features “wherein the interface resource requirement includes a front speaker requirement”. For the same reasons as stated above with claim 4, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jones and Kobayashi with the teachings of Rathi to improve interoperability between the devices by sharing device capabilities (see Rathi, ¶ 0005-0007 and 0038).
Therefore, the combination of Jones, Kobayashi, and Rathi makes obvious the “computer-readable medium of claim 2, wherein the interface resource requirement includes a front speaker requirement” (see Jones, ¶ 0079-0084 in view of Rathi, figure 10, and ¶ 0038, 0102-0103, 0125, and 0131-0134, where Rathi makes it obvious to share capabilities and determine if the host can provide the appropriate functionality to the accessory device, such that it is obvious to determine if front speakers in the vehicle are available when required).
Regarding claim 13, see the preceding rejection with respect to claims 4 and 11 above. The combination of Jones and Kobayashi makes obvious the method of claim 11, but as noted above with respect to claim 4, the combination does not appear to teach or reasonably suggest the features “wherein the interface resource requirement includes a touchscreen requirement”. For the same reasons as stated above with claim 4, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jones and Kobayashi with the teachings of Rathi to improve interoperability between the devices by sharing device capabilities (see Rathi, ¶ 0005-0007 and 0038).
Therefore, the combination of Jones, Kobayashi, and Rathi makes obvious the “method of claim 11, wherein the interface resource requirement includes a touchscreen requirement” (see Jones, ¶ 0079-0084 in view of Rathi, figure 10, and ¶ 0101, 0110, 0125, and 0131-0134, where Rathi makes it obvious to share capabilities and determine if the host can provide the appropriate functionality to the accessory device, such that it is obvious to determine if a video display and/or touchscreen is available for interoperability).
Regarding claim 14, see the preceding rejection with respect to claims 5 and 11 above. The combination of Jones and Kobayashi makes obvious the method of claim 11, but does not appear to teach or reasonably suggest the features “wherein the interface resource requirement includes a front speaker requirement”. For the same reasons as stated above with claim 4, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jones and Kobayashi with the teachings of Rathi to improve interoperability between the devices by sharing device capabilities (see Rathi, ¶ 0005-0007 and 0038).
Therefore, the combination of Jones, Kobayashi, and Rathi makes obvious the “method of claim 11, wherein the interface resource requirement includes a front speaker requirement” (see Jones, ¶ 0079-0084 in view of Rathi, figure 10, and ¶ 0038, 0102-0103, 0125, and 0131-0134, where Rathi makes it obvious to share capabilities and determine if the host can provide the appropriate functionality to the accessory device, such that it is obvious to determine if front speakers in the vehicle are available when required).
Claim(s) 6, 8, 15, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Jones and Kobayashi as applied to claims 1 and 10 above, and further in view of Tzirkel-Hancock et al. (US 2017/0323639 A1 and hereafter Tzirkel-Hancock).
Regarding claim 6, see the preceding rejection with respect to claim 1 above. The combination of Jones and Kobayashi makes obvious the computer-readable medium of claim 1, but does not appear to teach or reasonably suggest the features “wherein the resource requirement includes one of a privacy requirement or a security requirement”.
Tzirkel-Hancock teaches a system for providing occupant-specific acoustic functions in vehicles including large vans, taxicabs, and buses (see Tzirkel-Hancock, abstract and ¶ 0003-0005 and 0007-0010). In particular, Tzirkel-Hancock teaches that passengers in taxicabs, or other shared rides, are not known to each other, e.g., they are not related and/or friends (see Tzirkel-Hancock, ¶ 0007 and 0137), and further teaches that private communications (e.g., phone calls) are directed towards a specific identified individual, so that the private communications are generally not perceivable by other occupants (see Tzirkel-Hancock, ¶ 0018, 0145-0146, and 0173). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jones and Kobayashi with the teachings of Tzirkel-Hancock for the purpose of providing private communications to only one or more specific individuals in a shared vehicle (see Jones, ¶ 0096-0097 in view of Tzirkel-Hancock, ¶ 0145-0146 and 0173).
Therefore, the combination of Jones, Kobayashi, and Tzirkel-Hancock makes obvious the “computer-readable medium of claim 1, wherein the resource requirement includes one of a privacy requirement or a security requirement” (see Jones, ¶ 0079-0084, and Kobayashi, figure 1, units 6, 8, and signal B, figure 10, and ¶ 0115 and 0123, in view of Tzirkel-Hancock, ¶ 0007, 0018, 0137, 0145-0146, and 0173, where it is obvious to check a privacy, or security, requirement to determine where to send the second audio so that private communications are not overheard by other passengers).
Regarding claim 8, see the preceding rejection with respect to claims 1 and 6 above. The combination of Jones and Kobayashi makes obvious the computer-readable medium of claim 1, but does not appear to teach or reasonably suggest the features of “muting audio output in response to receiving the request”. For the same reasons as stated above with claim 6, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jones and Kobayashi with the teachings of Tzirkel-Hancock for the purpose of providing private communications to only one or more specific individuals in a shared vehicle (see Jones, ¶ 0096-0097 in view of Tzirkel-Hancock, ¶ 0145-0146 and 0173).
Herein, the combination of Jones, Kobayashi, and Tzirkel-Hancock makes obvious the “computer-readable medium of claim 1, wherein the operations further comprise:
muting audio output in response to receiving the request” (see Tzirkel-Hancock, ¶ 0004-0006 and 0124-0126, wherein it is obvious to mute audio for particular circumstances based on a user’s desired preferences, such as muting first audio to allow a user to hear an important message and/or muting based on learned responses from a user); and
“unmuting the audio output in response to determining that the second application is transmitting audio data” (see Jones, ¶ 0078-0082, 0084, 0091, and 0097 in view of Kobayashi, figure 1, units 6, 8, and signal B, figure 10, and ¶ 0115 and 0123, and further in view of Tzirkel-Hancock, ¶ 0004-0006 and 0124-0126, where it is obvious to start playback after the car audio system has been appropriately configured for the new application audio and to use preferred volume settings based on user preferences, such that it is obvious to unmute the speakers for the user to hear the requested audio).
Regarding claim 15, see the preceding rejection with respect to claims 6 and 10 above. The combination of Jones and Kobayashi makes obvious the method of claim 10, but as noted above with respect to claim 6, the combination does not appear to teach or reasonably suggest the features “wherein the resource requirement includes one of a privacy requirement or a security requirement”. For the same reasons as stated above with claim 6, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jones and Kobayashi with the teachings of Tzirkel-Hancock for the purpose of providing private communications to only one or more specific individuals in a shared vehicle (see Jones, ¶ 0096-0097 in view of Tzirkel-Hancock, ¶ 0145-0146 and 0173).
Therefore, the combination of Jones, Kobayashi, and Tzirkel-Hancock makes obvious the “method of claim 10, wherein the resource requirement includes one of a privacy requirement or a security requirement” (see Jones, ¶ 0079-0084, and Kobayashi, figure 1, units 6, 8, and signal B, figure 10, and ¶ 0115 and 0123, in view of Tzirkel-Hancock, ¶ 0007, 0018, 0137, 0145-0146, and 0173, where it is obvious to check a privacy, or security, requirement to determine where to send the second audio so that private communications are not overheard by other passengers).
Regarding claim 17, see the preceding rejection with respect to claims 8 and 10 above. The combination of Jones and Kobayashi makes obvious the method of claim 10, but as noted above with respect to claim 8, the combination does not appear to teach or reasonably suggest the features “muting audio output in response to receiving the request”. For the same reasons as stated above with claim 6, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Jones and Kobayashi with the teachings of Tzirkel-Hancock for the purpose of providing private communications to only one or more specific individuals in a shared vehicle (see Jones, ¶ 0096-0097 in view of Tzirkel-Hancock, ¶ 0145-0146 and 0173).
Therefore, the combination of Jones, Kobayashi, and Tzirkel-Hancock makes obvious the “method of claim 10, further comprising:
muting audio output in response to receiving the request” (see Tzirkel-Hancock, ¶ 0004-0006 and 0124-0126, wherein it is obvious to mute audio for particular circumstances based on a user’s desired preferences, such as muting first audio to allow a user to hear an important message and/or muting based on learned responses from a user); and
“unmuting the audio output in response to determining that the second application is transmitting audio data” (see Jones, ¶ 0078-0082, 0084, 0091, and 0097 in view of Kobayashi, figure 1, units 6, 8, and signal B, figure 10, and ¶ 0115 and 0123, and further in view of Tzirkel-Hancock, ¶ 0004-0006 and 0124-0126, where it is obvious to start playback after the car audio system has been appropriately configured for the new application audio and to use preferred volume settings based on user preferences, such that it is obvious to unmute the speakers for the user to hear the requested audio).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Guba et al. (US 2014/0335902 A1 and hereafter Guba) teaches methods, systems, and devices to improve vehicular safety by determining a location of a mobile device in a vehicle, and subsequently determining if the use of the mobile device and/or features of the mobile device should be blocked and/or limited (see Guba, abstract, figures 1-6, and ¶ 0002-0009 and 0013-0021);
Stankoulov (US 2021/0200501 A1) teaches projection, control, and management of user device applications using a connected resource, such as placing operating restrictions on device applications while in a moving vehicle (see Stankoulov, abstract, figures 1-3 and 10, and ¶ 0042-0048, 0291-0292, and 0298-0301); and
Nagisetty et al. (US 12,169,663 B1 and hereafter Nagisetty) teaches multi-zone content output controls (see Nagisetty, abstract and figures 1-15).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniel R Sellers whose telephone number is (571)272-7528. The examiner can normally be reached Mon - Fri 10:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan S Tsang, can be reached at (571)272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Daniel R Sellers/Primary Examiner, Art Unit 2694