Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is in response to the amendment filed 12/19/2025, in which Claims 1-15 and 17-21 are pending.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 8-11, 13, 14, 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication 2020/0090508 to Baker et al. (“Baker”) in view of U.S. Patent Publication 2015/0381706 to Wohlert et al. (“Wohlert”), in further view of U.S. Patent 12,353,790 to Pinto et al. (“Pinto”), and in further view of U.S. Patent Publication 2007/0296575 to Eisold et al. (“Eisold”).
As to Claim 1, Baker teaches a system for synchronizing audio on emergency vehicles (the set of vehicles can present a unified or synchronized alert system, whereby individual alert systems (e.g. visible and/or audible) for each vehicle can be controlled, assigned, enabled, or otherwise operable under a common synchronized control schema, see ¶ 0032), comprising:
Baker does not expressly teach receivers configured to accept a synchronization signal sent from a signal source that is remote from the emergency vehicles, each of the receivers being associated with a different one of the emergency vehicles; and a processor configured to: use the synchronization signal to determine a correct time to start playback of the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles.
However, Baker teaches at least one pattern cycle 64, illustrated as three pattern cycles 64a, 64b, 64c, can be stored in a memory 66 for the master controller 12. The at least one pattern cycle 64 can be an alert pattern cycle 64a in the form of a flashing sequence, a sound sequence, or a combination of a flashing and sound sequence, or the like. The at least one pattern cycle 64 can therefore be a repeatable sequence or alert pattern that can be communicated to the at least one set of alert devices 50 [the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles] described herein (see ¶ 0029), but fails to disclose each of the receivers being associated with a different one of the emergency vehicles; and a processor configured to: use the synchronization signal to determine a correct time to start playback of the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles. Wohlert, in the same field of endeavor related to audio synchronization, teaches a media playback node can receive information for use in determining a start time for playback of media content so that playback of the media content is in sync with playback of the media content by a further playback node. The media playback node also can determine, based at least in part upon the information, the start time for playback of the media content. The media playback node also can initiate playback of the media content at the start time (see Abstract); operating environment 100 includes a plurality of mobile playback nodes 102A-102D (referred to herein in the singular as mobile playback node 102, or collectively as mobile playback nodes 102) (see ¶ 0023); The media content to be played by the media playback nodes 102 can be synchronized. Several timing synchronization methods can be used. 
In some embodiments, playback of the media content by the media playback nodes 102 is synchronized using timing synchronization signals provided by a base station (best shown in FIG. 3) operating within the access network 112 and a common time reference provided by a time reference server 116 [accept a synchronization signal sent from a signal source that is remote]…the common time reference provided by the time reference server 116 is used to estimate network delay and to determine a media playback start time used to synchronize playback of the media content among the media playback nodes 102 [use the synchronization signal to determine a correct time to start playback]…an application server 118 can coordinate media playback across the media playback nodes 102 using the common time reference received from the time reference server 116 (¶ 0034). Therefore, combining Wohlert’s audio synchronization, which uses a remote time synchronization signal to determine the correct playback start time for a media playback node, with Baker’s repeatable alert pattern or sequence communicated between alert devices of emergency vehicles would reasonably construe each of the receivers being associated with a different one of the emergency vehicles; and a processor configured to: use the synchronization signal to determine a correct time to start playback of the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker with Wohlert to teach receivers configured to accept a synchronization signal sent from a signal source that is remote from the emergency vehicles, each of the receivers being associated with a different one of the emergency vehicles; and a processor configured to: use the synchronization signal to determine a correct time to start playback of the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles. The suggestion/motivation would have been so that a media playback node can receive information for use in determining a start time for playback of media content so that playback of the media content is in sync with playback of the media content by a further media playback node (see ¶ 0003).
Baker and Wohlert do not expressly disclose a first playback of the repeating message played by the one of the emergency vehicles is synchronized with a second playback of the repeating message by another of the emergency vehicles that receives the repeating audio signals, wherein the first playback and the second playback start at different times.
Pinto teaches a first playback of the message played by the one of the emergency vehicles is synchronized with a second playback of the message by another of the emergency vehicles that receives the audio signals, wherein the first playback and the second playback start at different times (The first device may transmit a signal to the second device to share a clock (e.g., an internal clock)…The first device generates timebase information that is arranged to define timing relationships between playback states of audio content. Specifically, the generated information includes (e.g., as timing data) a first timebase that defines a relationship between the shared clock and the internal clock of the first device and a second timebase that defines a relationship between the first timebase and a playback state of the audio content. For instance, the playback state may be to initiate playback of the audio content and the second timebase may indicate that the audio content is to be played back at a playback time after a current time of the first timebase and at a particular playback rate [a first playback of the repeating message played by the one of the emergency vehicles is synchronized with a second playback of the repeating message by another of the emergency vehicles that receives the repeating audio signals, wherein the first playback and the second playback start at different times], see Col. 1, lines 36-37, 46-57; the timeline 26 (e.g., the black line) may correspond to (or be) the time of the internal clock 29. The timeline shows the playback time and duration of the pieces of audio content that are associated with timebases 21-24 (as illustrated by corresponding hatches) with respect to time (e.g., the time of the internal clock 29). Thus, as shown, a first piece of audio content associated with timebase 21 starts first, followed immediately by a second piece of audio content associated with timebase 22, see Col. 11, line 60 – Col. 12, line 2).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker and Wohlert with Pinto to teach a first playback of the repeating message played by the one of the emergency vehicles is synchronized with a second playback of the repeating message by another of the emergency vehicles that receives the repeating audio signals, wherein the first playback and the second playback start at different times. The suggestion/motivation would have been so that a media playback node can receive information for use in determining a start time for playback of media content so that playback of the media content is in sync with playback of the media content by a further media playback node (see ¶ 0003).
Baker, Wohlert and Pinto do not expressly disclose a processor configured to: stream an audio stream of repeating audio signals corresponding to a repeating voice message to each of the emergency vehicles; a repeating voice message.
Eisold teaches a processor configured to: stream an audio stream of repeating audio signals corresponding to a repeating voice message to each of the emergency vehicles (Each disaster alert device includes a radio receiver, and a processor programmed to monitor radio transmissions from one or more central stations for disaster alerts directed to the location of the disaster alert device. Each alert device also includes an audio unit to alert personnel located at the site of the device to the precise nature of the disaster. The disaster alert devices are pre-programmed with information identifying the precise use location of the warning device, see Abstract; The voice message will preferably describe the nature of the warning and provide instructions as to a proper response. A specific example of such a message is provided below in a Section entitled "Disaster Example", see ¶ 0058; the central office to activate emergency crews. To do so the central office would program its computers with the latitude and longitude of the residences of members of various types of crews such as special police units, and special fire fighting units…directing a message to the disaster alert device of each crew member [repeating voice message] (by specifying their precise latitude and longitude) the central station personnel could immediately issue a request to these personnel to report to duty in case of a severe emergency, see ¶ 0082).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker, Wohlert and Pinto with Eisold to teach a processor configured to: stream an audio stream of repeating audio signals corresponding to a repeating voice message to each of the emergency vehicles. The suggestion/motivation would have been in order to monitor radio transmissions from one or more central stations for disaster alerts directed to the location of the disaster alert device (see Abstract).
As to Claim 8, Baker, Wohlert, Pinto and Eisold teach the system of Claim 1 as set forth above. Baker further teaches wherein the processor defines a common time base as part of the synchronization signal (The master clock 18 can provide a timing reference signal 20 for the synchronization signaling system 10. By way of non-limiting example, the timing reference signal 20 can include a global master clock signal 22, a regional master clock signal 24, a local master clock signal 26, or a cellular master clock signal 28, all or any capable of providing the timing reference signal 20 to be received by the master controller 12, see ¶ 0022; global master clock signal 22 can be provided by a typical global navigational satellite system (GNSS) 30 where timing information is received in a chipset 32 via a satellite 34 that is precise on the order of nanosecond resolution, see ¶ 0023; a common alert pattern timing (e.g. wherein all alert patterns have a predetermined common period), or based on the timing reference signal 20. In one example, the full alert pattern repeating cycle 78 can be aligned referentially with the start of the master clock time (Tb, shown as a first master clock synchronization time 400) and define a synchronized alert period, see ¶ 0039).
As to Claim 9, Baker, Wohlert, Pinto and Eisold teach the system of Claim 1 as set forth above. Baker further teaches wherein the processor defines a zone for the emergency vehicles, and wherein another emergency vehicle entering the zone will receive the synchronization signal and playback the message (emergency situations mobile and stationary vehicle applications can include an emergency signaling system to draw the attention of motorists and pedestrians to the emergency situation in order to avoid the area in which the emergency situation has occurred, see ¶ 0003; The synchronized signaling system 10 can enable the coordination of controllers 12 within the two vehicles 56a, 56b adapted to independently receive the timing reference signal 20, then independently operate their respective synchronized signaling system 10 to operate the set of alert devices in sync with each other, see ¶ 0046; the non-emergency vehicle 156 can become a temporary emergency vehicle in order to create larger presence of synchronized patterns in a given area. It should be understood that while a single emergency vehicle 56a is illustrated, multiple emergency vehicles 56a are contemplated, see ¶ 0050).
As to Claim 10, Baker, Wohlert, Pinto and Eisold teach the system of Claim 9 as set forth above. Baker further teaches wherein a first vehicle of the emergency vehicles to playback the message becomes a synchronizer, and wherein other of the emergency vehicles synchronize playback of the message with the first vehicle (A separate interface 62 can be coupled to the remote set of alert devices 50b for receiving signals from the master controller 12. The separate interface 62 can command and operate the remote set of alert devices 50b. A local clock 21 can also be associated with the separate interface 62, see ¶ 0028; At least one pattern cycle 64, illustrated as three pattern cycles 64a, 64b, 64c, can be stored in a memory 66 for the master controller 12. The at least one pattern cycle 64 can be an alert pattern cycle 64a in the form of a flashing sequence, a sound sequence, or a combination of a flashing and sound sequence, or the like. The at least one pattern cycle 64 can therefore be a repeatable sequence or alert pattern that can be communicated to the at least one set of alert devices 50 described herein, see ¶ 0029; Each control device disclosed, including but not limited to the master controller 12, user interface 14, and separate interface 62, can keep a relative timer frame for example in 1.0 millisecond frames utilizing the local clock 21. The timing reference signal 20 can be periodically received at each control device in order to adjust the local time (t). The adjustment of the local time (t) at each control device can provide a precise frame response for multiple control devices, see ¶ 0030; the set of vehicles can present a unified or synchronized alert system, whereby individual alert systems (e.g. visible and/or audible) for each vehicle can be controlled, assigned, enabled, or otherwise operable under a common synchronized control schema, see ¶ 0032).
As to Claim 11, Baker teaches a method for synchronizing audio on emergency vehicles (the set of vehicles can present a unified or synchronized alert system, whereby individual alert systems (e.g. visible and/or audible) for each vehicle can be controlled, assigned, enabled, or otherwise operable under a common synchronized control schema, see ¶ 0032),
Baker does not expressly teach accepting a synchronization signal at receivers, the synchronization signal being sent from a signal source that is remote from the emergency vehicles, each of the receivers being associated with a different one of the emergency vehicles; the audio stream being sent from a computing device that is remote from the emergency vehicles; and using the synchronization signal to determine a correct time to start playback of the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles.
However, Baker teaches at least one pattern cycle 64, illustrated as three pattern cycles 64a, 64b, 64c, can be stored in a memory 66 for the master controller 12. The at least one pattern cycle 64 can be an alert pattern cycle 64a in the form of a flashing sequence, a sound sequence, or a combination of a flashing and sound sequence, or the like. The at least one pattern cycle 64 can therefore be a repeatable sequence or alert pattern that can be communicated to the at least one set of alert devices 50 [the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles] described herein (see ¶ 0029), and that the portable synchronization signaling system 110 can be in communication with the remote set of alert devices 50b. By way of non-limiting example, the portable interface 162 can send and receive signals to the separate interface 62 of the remote set of alert devices 50b [the audio stream being sent from a computing device that is remote from the emergency vehicles] (see ¶ 0051). Baker nonetheless fails to disclose each of the receivers being associated with a different one of the emergency vehicles; using the synchronization signal to determine a correct time to start playback of the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles. Wohlert, in the same field of endeavor related to audio synchronization, teaches a media playback node can receive information for use in determining a start time for playback of media content so that playback of the media content is in sync with playback of the media content by a further playback node. The media playback node also can determine, based at least in part upon the information, the start time for playback of the media content.
The media playback node also can initiate playback of the media content at the start time (see Abstract); operating environment 100 includes a plurality of mobile playback nodes 102A-102D (referred to herein in the singular as mobile playback node 102, or collectively as mobile playback nodes 102) (see ¶ 0023); The media content to be played by the media playback nodes 102 can be synchronized. Several timing synchronization methods can be used. In some embodiments, playback of the media content by the media playback nodes 102 is synchronized using timing synchronization signals provided by a base station (best shown in FIG. 3) operating within the access network 112 and a common time reference provided by a time reference server 116 [accept a synchronization signal sent from a signal source that is remote]…the common time reference provided by the time reference server 116 is used to estimate network delay and to determine a media playback start time used to synchronize playback of the media content among the media playback nodes 102 [use the synchronization signal to determine a correct time to start playback]…an application server 118 can coordinate media playback across the media playback nodes 102 using the common time reference received from the time reference server 116 (¶ 0034). Therefore, combining Wohlert’s audio synchronization, which uses a remote time synchronization signal to determine the correct playback start time for a media playback node, with Baker’s repeatable alert pattern or sequence communicated between alert devices of emergency vehicles would reasonably construe each of the receivers being associated with a different one of the emergency vehicles; and using the synchronization signal to determine a correct time to start playback of the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker with Wohlert to teach accepting a synchronization signal at receivers, the synchronization signal being sent from a signal source that is remote from the emergency vehicles, each of the receivers being associated with a different one of the emergency vehicles; the audio stream being sent from a computing device that is remote from the emergency vehicles; and using the synchronization signal to determine a correct time to start playback of the repeating message by one of the emergency vehicles using the repeating audio signals sent to the one of the emergency vehicles. The suggestion/motivation would have been so that a media playback node can receive information for use in determining a start time for playback of media content so that playback of the media content is in sync with playback of the media content by a further media playback node (see ¶ 0003).
Baker and Wohlert do not expressly disclose a first playback of the repeating message played by the one of the emergency vehicles is synchronized with a second playback of the repeating message by another of the emergency vehicles that receives the repeating audio signals, wherein the first playback and the second playback start at different times.
Pinto teaches a first playback of the repeating message played by the one of the emergency vehicles is synchronized with a second playback of the repeating message by another of the emergency vehicles that receives the repeating audio signals, wherein the first playback and the second playback start at different times (The first device may transmit a signal to the second device to share a clock (e.g., an internal clock)…The first device generates timebase information that is arranged to define timing relationships between playback states of audio content. Specifically, the generated information includes (e.g., as timing data) a first timebase that defines a relationship between the shared clock and the internal clock of the first device and a second timebase that defines a relationship between the first timebase and a playback state of the audio content. For instance, the playback state may be to initiate playback of the audio content and the second timebase may indicate that the audio content is to be played back at a playback time after a current time of the first timebase and at a particular playback rate, see Col. 1, lines 36-37, 46-57; the timeline 26 (e.g., the black line) may correspond to (or be) the time of the internal clock 29. The timeline shows the playback time and duration of the pieces of audio content that are associated with timebases 21-24 (as illustrated by corresponding hatches) with respect to time (e.g., the time of the internal clock 29). Thus, as shown, a first piece of audio content associated with timebase 21 starts first, followed immediately by a second piece of audio content associated with timebase 22 [a first playback of the repeating message played by the one of the emergency vehicles is synchronized with a second playback of the repeating message by another of the emergency vehicles that receives the repeating audio signals, wherein the first playback and the second playback start at different times], see Col. 11, line 60 – Col. 12, line 2).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker and Wohlert with Pinto to teach a first playback of the repeating message played by the one of the emergency vehicles is synchronized with a second playback of the repeating message by another of the emergency vehicles that receives the repeating audio signals, wherein the first playback and the second playback start at different times. The suggestion/motivation would have been so that a media playback node can receive information for use in determining a start time for playback of media content so that playback of the media content is in sync with playback of the media content by a further media playback node (see ¶ 0003).
Baker, Wohlert and Pinto do not expressly disclose receiving, at each of the emergency vehicles, an audio stream of repeating audio signals corresponding to a repeating voice message to each of the emergency vehicles; a repeating voice message.
Eisold teaches receiving, at each of the emergency vehicles, an audio stream of repeating audio signals corresponding to a repeating voice message (Each disaster alert device includes a radio receiver, and a processor programmed to monitor radio transmissions from one or more central stations for disaster alerts directed to the location of the disaster alert device. Each alert device also includes an audio unit to alert personnel located at the site of the device to the precise nature of the disaster. The disaster alert devices are pre-programmed with information identifying the precise use location of the warning device, see Abstract; The voice message will preferably describe the nature of the warning and provide instructions as to a proper response. A specific example of such a message is provided below in a Section entitled "Disaster Example", see ¶ 0058; the central office to activate emergency crews. To do so the central office would program its computers with the latitude and longitude of the residences of members of various types of crews such as special police units, and special fire fighting units…directing a message to the disaster alert device of each crew member [repeating voice message] (by specifying their precise latitude and longitude) the central station personnel could immediately issue a request to these personnel to report to duty in case of a severe emergency, see ¶ 0082).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker, Wohlert and Pinto with Eisold to teach receiving, at each of the emergency vehicles, an audio stream of repeating audio signals corresponding to a repeating voice message to each of the emergency vehicles; a repeating voice message. The suggestion/motivation would have been in order to monitor radio transmissions from one or more central stations for disaster alerts directed to the location of the disaster alert device (see Abstract).
As to Claim 13, Baker, Wohlert, Pinto and Eisold teach the method of Claim 12 as set forth above. Baker further teaches receiving an audio signal with the message from a nearby emergency vehicle and playing the message over the high and low frequency speakers (The synchronization signaling system 10 further includes at least one set of alert devices 50, illustrated as a local set of alert devices 50a and a remote set of alert devices 50b. The at least one set of alert devices 50 can include visual outputs 52 such as flashing lights, strobe lights, different color lights, LED lights, along with audio outputs 54 such as sirens, speakers, horns, and any other type of alert device, see ¶ 0027; A separate interface 62 can be coupled to the remote set of alert devices 50b for receiving signals from the master controller 12. The separate interface 62 can command and operate the remote set of alert devices 50b, see ¶ 0028).
As to Claim 14, Baker, Wohlert, Pinto and Eisold teach the method of Claim 13 as set forth above. Baker further teaches a graphical user interface that allows new audio messages to be transferred (The synchronization signaling system 10 includes a master controller 12 for receiving a user input 13, by way of non-limiting example via a user interface 14. The user interface 14 can be directly mounted to the master controller 12 or be separate from the master controller 12, see ¶ 0021; A separate interface 62 can be coupled to the remote set of alert devices 50b for receiving signals from the master controller 12. The separate interface 62 can command and operate the remote set of alert devices 50b, see ¶ 0028).
As to Claim 18, Baker, Wohlert, Pinto and Eisold teach the method of Claim 11 as set forth above. Wohlert further teaches defining a common time base as part of the synchronization signal (the common time reference [common time base] provided by the time reference server 116 is used to estimate network delay and to determine a media playback start time used to synchronize playback of the media content among the media playback nodes 102, ¶ 0034).
As to Claim 19, Baker, Wohlert, Pinto and Eisold teach the method of Claim 11 as set forth above. Baker further teaches defining a zone for the emergency vehicles, and when another emergency vehicle enters the zone, receiving the synchronization signal for playback of the message (emergency situations mobile and stationary vehicle applications can include an emergency signaling system to draw the attention of motorists and pedestrians to the emergency situation in order to avoid the area in which the emergency situation has occurred, see ¶ 0003; The synchronized signaling system 10 can enable the coordination of controllers 12 within the two vehicles 56a, 56b adapted to independently receive the timing reference signal 20, then independently operate their respective synchronized signaling system 10 to operate the set of alert devices in sync with each other, see ¶ 0046; the non-emergency vehicle 156 can become a temporary emergency vehicle in order to create larger presence of synchronized patterns in a given area. It should be understood that while a single emergency vehicle 56a is illustrated, multiple emergency vehicles 56a are contemplated, see ¶ 0050).
As to Claim 20, which depends from Claim 19, Baker and Wohlert do not expressly teach wherein a first vehicle of the emergency vehicles to playback the message becomes a synchronizer, and wherein other of the emergency vehicles synchronize playback of the message with the first vehicle. However, Baker teaches the set of vehicles can present a unified or synchronized alert system, whereby individual alert systems (e.g. visible and/or audible) for each vehicle can be controlled, assigned, enabled, or otherwise operable under a common synchronized control schema (see ¶ 0032). Wohlert teaches a media playback node can receive information for use in determining a start time for playback of media content so that playback of the media content is in sync with playback of the media content by a further media playback node. The media playback node also can determine, based at least in part upon the information, the start time for playback of the media content, and can initiate playback of the media content at the start time (see ¶ 0003). Therefore, combining Wohlert’s audio synchronization, in which a media playback node synchronizes to a further media playback node, with Baker’s synchronization between alert devices of emergency vehicles would reasonably render obvious wherein a first vehicle of the emergency vehicles to playback the message becomes a synchronizer, and wherein other of the emergency vehicles synchronize playback of the message with the first vehicle. The suggestion/motivation would have been in order for a media playback node to receive information for use in determining a start time for playback of media content so that playback of the media content is in sync with playback of the media content by a further media playback node (see ¶ 0003).
As to Claim 21, which depends from Claim 1, Baker teaches wherein the synchronization signal is a GPS signal (global master clock signal 22 can be provided by a typical global navigational satellite system (GNSS) 30 where timing information is received in a chipset 32 via a satellite 34 that is precise on the order of nanosecond resolution, see ¶ 0023).
Eisold teaches wherein the computing device is associated with another vehicle that is not one of the emergency vehicles (the central office to activate emergency crews. To do so the central office would program its computers with the latitude and longitude of the residences of members of various types of crews such as special police units, and special fire fighting units…directing a message to the disaster alert device of each crew member [repeating voice message] (by specifying their precise latitude and longitude) the central station personnel could immediately issue a request to these personnel to report to duty in case of a severe emergency, see ¶ 0082).
Claim(s) 2-4, 6, 7, 12 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication 2020/0090508 to Baker et al (“Baker”) in view of U.S. Patent Publication 2015/0381706 to Wohlert et al (“Wohlert”) in further view of U.S. Patent 12,353,790 to Pinto et al (“Pinto”) in further view of U.S. Patent Publication 2007/0296575 to Eisold et al (“Eisold”) and in further view of U.S. Patent Publication 2021/0043081 to Garrett et al (“Garrett”).
As to Claim 2, which depends from Claim 1, Eisold teaches a repeating voice message (the central office to activate emergency crews. To do so the central office would program its computers with the latitude and longitude of the residences of members of various types of crews such as special police units, and special fire fighting units…directing a message to the disaster alert device of each crew member [repeating voice message] (by specifying their precise latitude and longitude) the central station personnel could immediately issue a request to these personnel to report to duty in case of a severe emergency, see ¶ 0082).
Baker, Wohlert, Pinto and Eisold do not expressly disclose further comprising high and low frequency speakers and amplifiers that cover a wide range of audible frequencies, with the high and low frequency speakers being configured to playback the repeating message.
Garrett teaches further comprising high and low frequency speakers and amplifiers that cover a wide range of audible frequencies, with the high and low frequency speakers being configured to playback the repeating message (The output sequence is a sequence upon which the emergency sound generation node 220 is supposed to play the emergency sound… a processor (not shown) of the emergency sound generation node 220 reads the output sequence 2000 from a memory (not shown) and generates a tone signal based on the output sequence 2000. The tone signal may be amplified and played through a speaker (not shown). In this case, the tone signal may include volume level information and/or frequency information [high and low frequencies], see ¶ 0076).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker, Wohlert, Pinto and Eisold with Garrett to teach high and low frequency speakers and amplifiers that cover a wide range of audible frequencies, with the high and low frequency speakers being configured to playback the repeating message. The suggestion/motivation would have been in order for the tone signal to be amplified and played through a speaker (see ¶ 0076).
As to Claim 3, which depends from Claim 2, Eisold teaches a repeating voice message (The audible tone or voice alert is set to repeat mode to remind the user that they still be in an area where an emergency service vehicle or vehicles are within the systems operational range, so that the need of the safe and traffic free route to attend an incident is achieved, see Abstract).
Baker teaches further comprising a radio receiver capable of receiving an audio signal with the repeating message from a nearby emergency vehicle and playing the repeating message over the high and low frequency speakers (The synchronization signaling system 10 further includes at least one set of alert devices 50, illustrated as a local set of alert devices 50a and a remote set of alert devices 50b. The at least one set of alert devices 50 can include visual outputs 52 such as flashing lights, strobe lights, different color lights, LED lights, along with audio outputs 54 such as sirens, speakers, horns, and any other type of alert device, see ¶ 0027; A separate interface 62 can be coupled to the remote set of alert devices 50b for receiving signals from the master controller 12. The separate interface 62 can command and operate the remote set of alert devices 50b, see ¶ 0028).
As to Claim 4, which depends from Claim 3, Baker teaches a graphical user interface that allows new audio messages to be transferred to the system (The synchronization signaling system 10 includes a master controller 12 for receiving a user input 13, by way of non-limiting example via a user interface 14. The user interface 14 can be directly mounted to the master controller 12 or be separate from the master controller 12, see ¶ 0021; A separate interface 62 can be coupled to the remote set of alert devices 50b for receiving signals from the master controller 12. The separate interface 62 can command and operate the remote set of alert devices 50b, see ¶ 0028).
As to Claim 6, which depends from Claim 1, Eisold teaches a repeating voice message (the central office to activate emergency crews. To do so the central office would program its computers with the latitude and longitude of the residences of members of various types of crews such as special police units, and special fire fighting units…directing a message to the disaster alert device of each crew member [repeating voice message] (by specifying their precise latitude and longitude) the central station personnel could immediately issue a request to these personnel to report to duty in case of a severe emergency, see ¶ 0082).
Baker, Wohlert, Pinto and Eisold do not expressly disclose wherein the repeating message is a series of tones or prerecorded.
Garrett teaches wherein the repeating message is a series of tones or prerecorded (emergency sound generation node 220 reads the output sequence 2000 from a memory (not shown) and generates a tone signal based on the output sequence 2000. The tone signal may be amplified and played through a speaker (not shown), see ¶ 0076; The another emergency sound generation node may also generate another tone signal based on another output sequence, and the another tone signal generated by the another emergency sound generation node can be synchronized to the tone signal generated by the emergency sound generation node 220 to generate a combined emergency sound, see ¶ 0077).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker, Wohlert, Pinto and Eisold with Garrett to teach wherein the repeating message is a series of tones or prerecorded. The suggestion/motivation would have been in order for the tone signal to be amplified and played through a speaker (see ¶ 0076).
As to Claim 7, which depends from Claim 1, Eisold teaches a repeating voice message (the central office to activate emergency crews. To do so the central office would program its computers with the latitude and longitude of the residences of members of various types of crews such as special police units, and special fire fighting units…directing a message to the disaster alert device of each crew member [repeating voice message] (by specifying their precise latitude and longitude) the central station personnel could immediately issue a request to these personnel to report to duty in case of a severe emergency, see ¶ 0082).
Baker, Wohlert, Pinto and Eisold do not expressly disclose an outdoor warning speaker, wherein the repeating message is synchronized with the outdoor warning speaker.
Garrett teaches an outdoor warning speaker, wherein the repeating message is synchronized with the outdoor warning speaker (an emergency sound generation node 220 (e.g., siren) [outdoor warning speaker], see ¶ 0057; each peripheral 220 to 240 (e.g., processor thereof) [processor] receives the first synchronization message from the main controller 210, determines a rx time (using the local timer 2200) at which the first synchronization message is received and store a rx timestamp corresponding to the rx time, see ¶ 0060; the main controller 210 reads the tx timestamp from the memory, generates a second synchronization message containing the tx timestamp, and transmits the same to each peripheral 220 to 240. Next, each peripheral 220 to 240 receives the second synchronization message and retrieves the tx timestamp from the received second synchronization message. Also, each peripheral 220 to 240 determines a current local time at which the second synchronization message is received, see ¶ 0061; each peripheral 220 to 240 compares the tx timestamp contained in the second synchronization message with the rx timestamp to determine a time difference ΔT between the tx timestamp and the rx timestamp. In addition, each peripheral 220 to 240 determines a global time by adding the time difference ΔT to the determined current local time, so that a local time of each peripheral 220 to 240 can be synchronized to the global time of the main controller 210, see ¶ 0062).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker, Wohlert, Pinto and Eisold with Garrett to teach an outdoor warning speaker, wherein the repeating message is synchronized with the outdoor warning speaker. The suggestion/motivation would have been in order for the tone signal to be amplified and played through a speaker (see ¶ 0076).
As to Claim 12, which depends from Claim 11, Eisold teaches a repeating voice message (the central office to activate emergency crews. To do so the central office would program its computers with the latitude and longitude of the residences of members of various types of crews such as special police units, and special fire fighting units…directing a message to the disaster alert device of each crew member [repeating voice message] (by specifying their precise latitude and longitude) the central station personnel could immediately issue a request to these personnel to report to duty in case of a severe emergency, see ¶ 0082).
Baker, Wohlert, Pinto and Eisold do not expressly disclose further comprising high and low frequency speakers and amplifiers that cover a wide range of audible frequencies, with the high and low frequency speakers being configured to playback the repeating message.
Garrett teaches further comprising high and low frequency speakers and amplifiers that cover a wide range of audible frequencies, with the high and low frequency speakers being configured to playback the repeating message (The output sequence is a sequence upon which the emergency sound generation node 220 is supposed to play the emergency sound… a processor (not shown) of the emergency sound generation node 220 reads the output sequence 2000 from a memory (not shown) and generates a tone signal based on the output sequence 2000. The tone signal may be amplified and played through a speaker (not shown). In this case, the tone signal may include volume level information and/or frequency information [high and low frequencies], see ¶ 0076).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker, Wohlert, Pinto and Eisold with Garrett to teach high and low frequency speakers and amplifiers that cover a wide range of audible frequencies, with the high and low frequency speakers being configured to playback the repeating message. The suggestion/motivation would have been in order for the tone signal to be amplified and played through a speaker (see ¶ 0076).
Claim(s) 5, 15, 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication 2020/0090508 to Baker et al (“Baker”) in view of U.S. Patent Publication 2015/0381706 to Wohlert et al (“Wohlert”) in further view of U.S. Patent 12,353,790 to Pinto et al (“Pinto”) in further view of U.S. Patent Publication 2007/0296575 to Eisold et al (“Eisold”) and in further view of U.S. Patent Publication 2020/0294401 to Kerecsen.
As to Claim 5, which depends from Claim 1, Baker, Wohlert, Pinto and Eisold do not expressly disclose wherein the computing device has a microphone that allows for recording of the message and transfer of the message directly to a plurality of the other emergency vehicles. Kerecsen teaches wherein the computing device has a microphone that allows for recording of the message and transfer of the message directly to a plurality of the other emergency vehicles (the traffic management may be in the form of variable speed limits, adaptable traffic lights, traffic intersection control, and accommodating emergency vehicles such as ambulances, fire trucks and police cars, see ¶ 0086; a voice recorder, see ¶ 0142; the audio content stored may be either pre-recorded or using a synthesizer. Few digital audio files may be stored, selected by a control logic. Alternatively or in addition, the source of the digital audio may be a microphone serving as a sensor. In another example, the system uses the sounder for simulating the voice of a human being or generates music…A talking human voice may be played by the sounder, either pre-recorded or using human voice synthesizer, and the sound may be a syllable, a word, a phrase, a sentence, a short story or a long story, and can be based on speech synthesis or pre-recorded, using male or female voice, see ¶ 0501).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker, Wohlert, Pinto and Eisold with Kerecsen to teach wherein the computing device has a microphone that allows for recording of the message and transfer of the message directly to a plurality of the other emergency vehicles. The suggestion/motivation would have been in order to communicate and exchange information with other vehicles and with roadside units, may allow for cooperation and may be effective in increasing safety such as sharing safety information, safety warnings, as well as traffic information (see ¶ 0086).
As to Claim 15, which depends from Claim 11, Baker, Wohlert, Pinto and Eisold do not expressly disclose recording of the message at the computing device and transferring the message directly to a plurality of the other emergency vehicles.
Kerecsen teaches recording of the message at the computing device and transferring the message directly to a plurality of the other emergency vehicles (the traffic management may be in the form of variable speed limits, adaptable traffic lights, traffic intersection control, and accommodating emergency vehicles such as ambulances, fire trucks and police cars, see ¶ 0086; a voice recorder, see ¶ 0142; the audio content stored may be either pre-recorded or using a synthesizer. Few digital audio files may be stored, selected by a control logic. Alternatively or in addition, the source of the digital audio may be a microphone serving as a sensor. In another example, the system uses the sounder for simulating the voice of a human being or generates music…A talking human voice may be played by the sounder, either pre-recorded or using human voice synthesizer, and the sound may be a syllable, a word, a phrase, a sentence, a short story or a long story, and can be based on speech synthesis or pre-recorded, using male or female voice, see ¶ 0501).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker, Wohlert, Pinto and Eisold with Kerecsen to teach recording of the message at the computing device and transferring the message directly to a plurality of the other emergency vehicles. The suggestion/motivation would have been in order to communicate and exchange information with other vehicles and with roadside units, may allow for cooperation and may be effective in increasing safety such as sharing safety information, safety warnings, as well as traffic information (see ¶ 0086).
As to Claim 17, which depends from Claim 11, Baker, Wohlert, Pinto and Eisold do not expressly disclose wherein the message is prerecorded or custom. Kerecsen teaches wherein the message is prerecorded or custom (the audio content stored may be either pre-recorded or using a synthesizer. Few digital audio files may be stored, selected by a control logic. Alternatively or in addition, the source of the digital audio may be a microphone serving as a sensor. In another example, the system uses the sounder for simulating the voice of a human being or generates music…A talking human voice may be played by the sounder, either pre-recorded or using human voice synthesizer, and the sound may be a syllable, a word, a phrase, a sentence, a short story or a long story, and can be based on speech synthesis or pre-recorded, using male or female voice, see ¶ 0501).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Baker, Wohlert, Pinto and Eisold with Kerecsen to teach wherein the message is prerecorded or custom. The suggestion/motivation would have been in order to communicate and exchange information with other vehicles and with roadside units, may allow for cooperation and may be effective in increasing safety such as sharing safety information, safety warnings, as well as traffic information (see ¶ 0086).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EBONI N GILES whose telephone number is (571)270-7453. The examiner can normally be reached Monday - Friday 9 am - 6 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patrick Edouard can be reached on (571)272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EBONI N GILES/Examiner, Art Unit 2622
/PATRICK N EDOUARD/Supervisory Patent Examiner, Art Unit 2622