DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 11, and 18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 11, and 18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 11, and 19 of copending Application No. 18/330,436 (hereafter ‘436) in view of Iliev et al. (US 2002/0059059 A1, hereafter Iliev).
Regarding claim 1, claim 1 of ‘436 recites the same limitations as the instant claim. The limitation of “payload data” in claim 1 of ‘436 reads on the claimed “information”. However, the instant claim differs in that it includes the additional limitation:
“wherein the audio-rendering-directive metadata includes spatial-audio specifications over time, and wherein varying the audio-rendering-directive metadata over time in a manner that represents the information comprises varying the spatial-audio specifications over time in a manner that represents the information”.
Iliev discloses a method for embedding data into an audio signal, where the embedded data is imperceptible to a listener and the method embeds the data by modifying the phase component of the audio signal (see Iliev, abstract). Iliev teaches that prior-art data embedding includes transparent watermarking that leverages psychoacoustic techniques, such as frequency masking and temporal masking (see Iliev, ¶ 0010-0012), and teaches an improvement in which the signal processing takes advantage of binaural hearing principles, such as the minimum audible angle (MAA) (see Iliev, ¶ 0024). Iliev teaches that the interaural phase difference (IPD) is varied with respect to the MAA such that the embedded data is imperceptible to a listener (see Iliev, ¶ 0030-0034 and 0054). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify claim 1 of ‘436 with the teaching of Iliev to embed data in an audio signal in an imperceptible manner for improved security and authentication of distributed audio (see Iliev, ¶ 0005 and 0010). Therefore, the combination of claim 1 of ‘436 and Iliev makes obvious the features of the instant claim.
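For illustration only, the IPD-based embedding mechanism summarized above can be sketched as follows. The function names, the uniform phase rotation across all bins, and the 0.05-radian offset are hypothetical examples chosen for this sketch, not Iliev's actual implementation:

```python
import numpy as np

def embed_bit_ipd(left, right, bit, phase_shift=0.05):
    """Encode one bit by shifting the right channel's phase relative to
    the left by a small interaural phase difference (IPD).

    The sign of the shift carries the bit; keeping the shift small
    (below the IPD corresponding to the minimum audible angle) is what
    would keep the change imperceptible.  Sketch values only.
    """
    spectrum = np.fft.rfft(right)
    shift = phase_shift if bit else -phase_shift
    spectrum *= np.exp(1j * shift)  # uniform phase rotation of right channel
    return left, np.fft.irfft(spectrum, n=len(right))

def detect_bit_ipd(left, right):
    """Recover the bit from the sign of the aggregate cross-channel phase."""
    cross = np.fft.rfft(right) * np.conj(np.fft.rfft(left))
    return bool(np.angle(np.sum(cross)) > 0)
```

A practical system would confine the rotation to selected frequency bands and derive the maximum offset from the MAA rather than using a fixed constant.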
Regarding claim 11, claim 11 of ‘436 recites the same limitations as the instant claim. The limitation of “payload data” in claim 11 of ‘436 reads on the claimed “information”. However, the instant claim differs in that it includes the additional limitation:
“wherein the audio-rendering-directive metadata includes spatial-audio specifications over time, and wherein varying the audio-rendering-directive metadata over time in a manner that represents the information comprises varying the spatial-audio specifications over time in a manner that represents the information”.
For the same reasons as stated above with respect to claim 1, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify claim 11 of ‘436 with the teaching of Iliev to embed data in an audio signal in an imperceptible manner for improved security and authentication of distributed audio (see Iliev, ¶ 0005 and 0010). Therefore, the combination of claim 11 of ‘436 and Iliev makes obvious the features of the instant claim.
Regarding claim 18, claim 19 of ‘436 recites the same limitations as the instant claim. The limitation of “payload data” in claim 19 of ‘436 reads on the claimed “information”. However, the instant claim differs in that it includes the additional limitation:
“wherein the audio-rendering-directive metadata includes spatial-audio specifications over time, and wherein varying the audio-rendering-directive metadata over time in a manner that represents the information comprises varying the spatial-audio specifications over time in a manner that represents the information”.
For the same reasons as stated above with respect to claim 1, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify claim 19 of ‘436 with the teaching of Iliev to embed data in an audio signal in an imperceptible manner for improved security and authentication of distributed audio (see Iliev, ¶ 0005 and 0010). Therefore, the combination of claim 19 of ‘436 and Iliev makes obvious the features of the instant claim.
This is a provisional nonstatutory double patenting rejection.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-2, 5, 7-12, 15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Tracey et al. (US 2017/0098452 A1, previously cited, and hereafter Tracey) in view of Nurmukhanov et al. (US 2016/0210972 A1, previously cited as pertinent, and hereafter Nurmukhanov).
Regarding claim 1, Tracey discloses a method to facilitate communicating metadata and audio payload for different audio objects in a bitstream to an object-based audio decoder over a network or via computer-readable storage media (see Tracey, ¶ 0037). Tracey teaches:
“A method to facilitate communicating information to a destination, the method comprising:
varying, by a computing system, audio-rendering-directive metadata over time in a manner …” by teaching a bitstream composed of object-based audio and associated object-specific metadata, where the metadata provides controls over the output levels or amplitudes of different audio objects (see Tracey, ¶ 0031 and 0045).
Tracey does not appear to teach the “information” as the instant application discloses. The instant application discloses “a technological advance that leverages this high level of metadata granularity as a basis to convey information that may otherwise be conveyed by watermarking the audio itself, i.e., payload data” (see instant specification, p.1, ¶ 0004).
Nurmukhanov teaches selective watermarking of channels of multichannel audio (see Nurmukhanov, abstract), wherein watermarking is employed to prevent piracy and allow forensic tracking (see Nurmukhanov, ¶ 0003). Nurmukhanov further teaches that an encoder provides metadata that identifies a hierarchy of appropriate channels and/or portions of channels so that the playback device can selectively watermark only some of the audio channels to save processing power (see Nurmukhanov, ¶ 0006, 0012-0013, 0056, and 0060, and figure 1). In particular, Nurmukhanov teaches an object processing subsystem that receives decoded speaker channels, object channels, and metadata (see Nurmukhanov, ¶ 0065 and figure 1, units 7-9), where the metadata includes object-related metadata and watermark suitability values for the object channels, so that the rendering subsystem renders the object channels with the object-related metadata that includes spatial positioning and trajectories (see Nurmukhanov, ¶ 0068-0069 and figure 1, units 9 and 11). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tracey with the teachings of Nurmukhanov for the purpose of preventing piracy of media while saving processing power by selectively watermarking only some of the audio channels and objects (see Tracey, ¶ 0031 in view of Nurmukhanov, ¶ 0003, 0006, and 0012-0013).
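By way of a non-authoritative sketch, the metadata-guided selection on which this rationale relies can be illustrated as follows. The dictionary fields ("suitability", "samples"), the toy gain-tweak watermark, and the two-channel budget are assumptions made for illustration, not Nurmukhanov's actual data model:

```python
def selectively_watermark(objects, payload_bits, max_channels=2):
    """Watermark only the channels that the metadata ranks most suitable.

    `objects` maps channel name -> {"suitability": float, "samples": [...]}.
    Only the `max_channels` best-ranked channels are touched, saving the
    processing that full watermarking of every channel would require.
    The per-bit gain tweak is a placeholder for a real watermark.
    """
    ranked = sorted(objects, key=lambda k: objects[k]["suitability"], reverse=True)
    chosen = set(ranked[:max_channels])
    out = {}
    for name, obj in objects.items():
        samples = list(obj["samples"])
        if name in chosen:
            for i, bit in enumerate(payload_bits):
                if i < len(samples):
                    # imperceptibly small gain change per payload bit
                    samples[i] *= 1.001 if bit else 0.999
        out[name] = samples
    return out, chosen
```

Channels outside the chosen set pass through untouched, which is the claimed processing-power saving.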
Therefore, the combination of Tracey and Nurmukhanov makes obvious the features for:
“varying, by a computing system, audio-rendering-directive metadata over time in a manner that represents the information” by teaching a bitstream composed of object-based audio and associated object-specific metadata, where the metadata provides controls over the output levels or amplitudes of different audio objects (see Tracey, ¶ 0031 and 0045), and making obvious to place information, such as watermarks, in the rendering metadata, such as the object-specific metadata and watermark suitability values, so that a rendering system can vary the levels of audio objects with watermark embedding (see Nurmukhanov, ¶ 0068-0069, 0075, and 0091-0099),
“outputting, by the computing system, the varied audio-rendering-directive metadata over time for communication along with an audio stream to the destination, to facilitate rendering of the audio stream at the destination in accordance with the varied audio-rendering-directive metadata over time” by teaching an object post-processor to apply the object-specific metadata (see Tracey, ¶ 0045-0047, and see Nurmukhanov, ¶ 0068-0069),
“wherein the rendering of the audio stream at the destination conveys the information by being in accordance with the varied audio-rendering-directive metadata over time that represents the information” by making obvious that the rendered audio is output based on the metadata such that changing positions, velocity, etc., and/or relative spatial positions of the audio object are rendered based on that metadata (see Tracey, figure 6, steps 624-626, and ¶ 0057 and 0059), and it is rendered to convey the information, or watermarks, by varying amplitude values of audio objects as a function of time and frequency (see Nurmukhanov, ¶ 0069 and 0095-0099), and
“wherein the audio-rendering-directive metadata includes spatial-audio specifications over time, and wherein varying the audio-rendering-directive metadata over time in a manner that represents the information comprises varying the spatial-audio specifications over time in a manner that represents the information” by making obvious metadata associated with static and dynamic (audio) objects, where the metadata specifies attributes such as changing positions, velocity, relative spatial positions, and watermarking suitability values (see Tracey, ¶ 0035, 0038, 0048, and 0057, in view of Nurmukhanov, ¶ 0068-0069).
Regarding claim 2, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “method of claim 1, wherein each spatial-audio specification of the spatial-audio specifications defines a respective perceptual-audio-direction for multi-speaker audio rendering, and wherein varying the spatial-audio specifications comprises varying the perceptual-audio-direction defined by each spatial-audio specification” by teaching metadata associated with static and dynamic (audio) objects, where the metadata specifies attributes, such as relative spatial positions in three-dimensional space, and the metadata specifies these with dynamic attributes, such as changing positions, velocity, relative spatial positions, and watermarking suitability values (see Tracey, ¶ 0035, 0038, 0048, and 0057, and see Nurmukhanov, ¶ 0068-0069).
Regarding claim 5, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “method of claim 1, wherein the audio stream defines a sequence of audio frames, wherein the audio-rendering-directive metadata defines a spatial-audio specification respectively per frame for each audio frame of the sequence of audio frames, and wherein varying the spatial-audio specifications over time comprises varying the spatial-audio specifications from audio frame to audio frame” by teaching the bitstream comprised, in part, by one or more audio objects, and each audio object includes an audio payload and a header with object-specific metadata (see Tracey, ¶ 0031), and the metadata includes dynamic object attributes, such as changing positions, velocity, relative spatial positions, and watermarking suitability values (see Tracey, ¶ 0035, and see Nurmukhanov, ¶ 0068-0069).
Regarding claim 7, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “method of claim 1, wherein varying the spatial-audio specifications over time comprises varying the spatial-audio specifications to an extent that, when the audio stream is rendered in accordance with the varied spatial-audio specifications, resulting changes in spatial audio are not human perceptible but are machine perceptible” where Nurmukhanov makes it obvious to embed a security or authentication watermark in audio data, where the watermark is imperceptible (see Nurmukhanov, ¶ 0003, 0097, and 0121).
Regarding claim 8, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “method of claim 1, wherein the information comprises at least a portion of an identifier of the audio stream” because Nurmukhanov makes it obvious to embed a watermark in the audio data, where the watermark includes an identifier in order to prevent piracy and allow forensic tracking (see Nurmukhanov, ¶ 0003, 0013, and 0075).
Regarding claim 9, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “method of claim 1, further comprising receiving the audio-rendering-directive metadata, wherein varying the audio-rendering-directive metadata comprises modifying the received audio-rendering-directive metadata” because Nurmukhanov makes it obvious to vary the metadata to indicate watermark suitability values, such that it is obvious to modify the rendering metadata (see Tracey, ¶ 0031 in view of Nurmukhanov, ¶ 0068-0069 and 0076-0080).
Regarding claim 10, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “method of claim 1, wherein communicating to the destination the varied audio-rendering-directive metadata over time along with the audio stream comprises transmitting the varied audio-rendering-directive metadata over time in a virtual channel along with audio data of the audio stream” by teaching the object-based audio bitstream comprising multiple virtual channels of data, such as the channel object, dialog object, music object, effect object, height object, the program specific metadata, and the object specific metadata, and teaches that the audio-rendering-directive metadata is transmitted as the object specific metadata (see Tracey, figure 2 and ¶ 0031-0032 and 0035, and see Nurmukhanov, ¶ 0068-0070).
Regarding claim 11, Tracey teaches a “computing system comprising: at least one processor; at least one non-transitory data storage” (see Tracey, ¶ 0019); and “program instructions stored in the at least one non-transitory data storage and executable by the at least one processor to carry out operations to facilitate communicating information to a destination” (see Tracey, ¶ 0022-0025), “the operations including: varying audio-rendering-directive metadata over time …” by teaching a bitstream composed of object-based audio and associated object-specific metadata, where the metadata provides controls over the output levels or amplitudes of different audio objects (see Tracey, ¶ 0031 and 0045).
Tracey does not appear to teach the “information” as the instant application discloses. The instant application discloses “a technological advance that leverages this high level of metadata granularity as a basis to convey information that may otherwise be conveyed by watermarking the audio itself, i.e., payload data” (see instant specification, p.1, ¶ 0004).
As stated above with respect to claim 1, Nurmukhanov teaches selective watermarking of channels of multichannel audio (see Nurmukhanov, abstract), wherein watermarking is employed to prevent piracy and allow forensic tracking (see Nurmukhanov, ¶ 0003). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tracey with the teachings of Nurmukhanov for the purpose of preventing piracy of media while saving processing power by selectively watermarking only some of the audio channels and objects (see Tracey, ¶ 0031 in view of Nurmukhanov, ¶ 0003, 0006, and 0012-0013).
Therefore, the combination of Tracey and Nurmukhanov makes obvious:
“A computing system comprising:
at least one processor” by teaching a consumer electronic device with a CPU or DSP (see Tracey, ¶ 0019);
“at least one non-transitory data storage” by teaching storage devices with the consumer electronic device (see Tracey, ¶ 0019); and
“program instructions stored in the at least one non-transitory data storage and executable by the at least one processor to carry out operations to facilitate communicating information to a destination” by teaching machine accessible medium storing data for performing the following operations (see Tracey, ¶ 0022-0025) and making obvious to communicate the information, such as watermarks, to a decoder (see Nurmukhanov, ¶ 0068-0069 and 0075), “the operations including:
varying audio-rendering-directive metadata over time in a manner that represents the information” by teaching a bitstream composed of object-based audio and associated object-specific metadata, where the metadata provides controls over the output levels or amplitudes of different audio objects (see Tracey, ¶ 0031 and 0045), and making obvious to place information, such as watermarks, in the rendering metadata, such as the object-specific metadata and watermark suitability values, so that a rendering system can vary the levels of audio objects with watermark embedding (see Nurmukhanov, ¶ 0068-0069, 0075, and 0091-0099), and
“outputting the varied audio-rendering-directive metadata over time for communication along with an audio stream to the destination, to facilitate rendering of the audio stream at the destination in accordance with the varied audio-rendering-directive metadata over time” by teaching an object post-processor to apply the object-specific metadata (see Tracey, ¶ 0045-0047, and see Nurmukhanov, ¶ 0068-0069),
“wherein the rendering of the audio stream at the destination conveys the information by being in accordance with the varied audio-rendering-directive metadata over time that represents the information” by making obvious that the rendered audio is output based on the metadata such that changing positions, velocity, etc., and/or relative spatial positions of the audio object are rendered based on that metadata (see Tracey, figure 6, steps 624-626, and ¶ 0057 and 0059), and it is rendered to convey the information, or watermarks, by varying amplitude values of audio objects as a function of time and frequency (see Nurmukhanov, ¶ 0069 and 0095-0099), and
“wherein the audio-rendering-directive metadata includes spatial-audio specifications over time, and wherein varying the audio-rendering-directive metadata over time in a manner that represents the information comprises varying the spatial-audio specifications over time in a manner that represents the information” by making obvious metadata associated with static and dynamic (audio) objects, where the metadata specifies attributes such as changing positions, velocity, relative spatial positions, and watermarking suitability values (see Tracey, ¶ 0035, 0038, 0048, and 0057, in view of Nurmukhanov, ¶ 0068-0069).
Regarding claim 12, see the preceding rejection with respect to claim 11 above. The combination makes obvious the “computing system of claim 11, wherein each spatial-audio specification of the spatial-audio specifications defines a respective perceptual-audio-direction for multi-speaker audio rendering, and wherein varying the spatial-audio specifications comprises varying the perceptual-audio-direction defined by each spatial-audio specification” by teaching metadata associated with static and dynamic (audio) objects, where the metadata specifies attributes, such as relative spatial positions in three-dimensional space, and the metadata specifies these with dynamic attributes, such as changing positions, velocity, relative spatial positions, and watermarking suitability values (see Tracey, ¶ 0035, 0038, 0048, and 0057, and see Nurmukhanov, ¶ 0068-0069).
Regarding claim 15, see the preceding rejection with respect to claim 11 above. The combination makes obvious the “computing system of claim 11, wherein the audio stream defines a sequence of audio frames, wherein the audio-rendering-directive metadata defines a spatial-audio specification respectively per frame for each audio frame of the sequence of audio frames, and wherein varying the spatial-audio specifications over time comprises varying the spatial-audio specifications from audio frame to audio frame” by teaching the bitstream comprised, in part, by one or more audio objects, and each audio object includes an audio payload and a header with object-specific metadata (see Tracey, ¶ 0031), and the metadata includes dynamic object attributes, such as changing positions, velocity, relative spatial positions, and watermarking suitability values (see Tracey, ¶ 0035, and see Nurmukhanov, ¶ 0068-0069).
Regarding claim 17, see the preceding rejection with respect to claim 11 above. The combination makes obvious the “computing system of claim 11, wherein varying the spatial-audio specifications over time comprises varying the spatial-audio specifications to an extent that, when the audio stream is rendered in accordance with the varied spatial-audio specifications, resulting changes in spatial audio are not human perceptible but are machine perceptible” where Nurmukhanov makes it obvious to embed a security or authentication watermark in audio data, where the watermark is imperceptible (see Nurmukhanov, ¶ 0003, 0097, and 0121).
Regarding claim 18, Tracey teaches at “least one non-transitory computer-readable medium storing program instructions executable to carry out operations to facilitate communicating information to a destination” (see Tracey, ¶ 0019 and 0022-0025), “the operations including: varying audio-rendering-directive metadata over time …” by teaching a bitstream composed of object-based audio and associated object-specific metadata, where the metadata provides controls over the output levels or amplitudes of different audio objects (see Tracey, ¶ 0031 and 0045).
Tracey does not appear to teach the “information” as the instant application discloses. The instant application discloses “a technological advance that leverages this high level of metadata granularity as a basis to convey information that may otherwise be conveyed by watermarking the audio itself, i.e., payload data” (see instant specification, p.1, ¶ 0004).
As stated above with respect to claim 1, Nurmukhanov teaches selective watermarking of channels of multichannel audio (see Nurmukhanov, abstract), wherein watermarking is employed to prevent piracy and allow forensic tracking (see Nurmukhanov, ¶ 0003). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tracey with the teachings of Nurmukhanov for the purpose of preventing piracy of media while saving processing power by selectively watermarking only some of the audio channels and objects (see Tracey, ¶ 0031 in view of Nurmukhanov, ¶ 0003, 0006, and 0012-0013).
Therefore, the combination of Tracey and Nurmukhanov makes obvious:
“At least one non-transitory computer-readable medium storing program instructions executable to carry out operations to facilitate communicating information to a destination” by teaching storage devices, such as a hard drive, that store data for performing the following operations (see Tracey, ¶ 0019 and 0022-0025), “the operations including:
varying audio-rendering-directive metadata over time in a manner that represents the information” by teaching a bitstream composed of object-based audio and associated object-specific metadata, where the metadata provides controls over the output levels or amplitudes of different audio objects (see Tracey, ¶ 0031 and 0045), and making obvious to place information, such as watermarks, in the rendering metadata, such as the object-specific metadata and watermark suitability values, so that a rendering system can vary the levels of audio objects with watermark embedding (see Nurmukhanov, ¶ 0068-0069, 0075, and 0091-0099); and
“outputting the varied audio-rendering-directive metadata over time for communication along with an audio stream to the destination, to facilitate rendering of the audio stream at the destination in accordance with the varied audio-rendering-directive metadata over time” by teaching an object post-processor to apply the object-specific metadata (see Tracey, ¶ 0045-0047, and see Nurmukhanov, ¶ 0068-0069),
“wherein the rendering of the audio stream at the destination conveys the information by being in accordance with the varied audio-rendering-directive metadata over time that represents the information” by making obvious that the rendered audio is output based on the metadata such that changing positions, velocity, etc., and/or relative spatial positions of the audio object are rendered based on that metadata (see Tracey, figure 6, steps 624-626, and ¶ 0057 and 0059), and it is rendered to convey the information, or watermarks, by varying amplitude values of audio objects as a function of time and frequency (see Nurmukhanov, ¶ 0069 and 0095-0099), and
“wherein the audio-rendering-directive metadata includes spatial-audio specifications over time, and wherein varying the audio-rendering-directive metadata over time in a manner that represents the information comprises varying the spatial-audio specifications over time in a manner that represents the information” by making obvious metadata associated with static and dynamic (audio) objects, where the metadata specifies attributes such as changing positions, velocity, relative spatial positions, and watermarking suitability values (see Tracey, ¶ 0035, 0038, 0048, and 0057, in view of Nurmukhanov, ¶ 0068-0069).
Regarding claim 19, see the preceding rejection with respect to claim 18 above. The combination makes obvious the “at least one non-transitory computer-readable medium of claim 18, wherein each spatial-audio specification of the spatial-audio specifications defines a respective perceptual-audio-direction for multi-speaker audio rendering, and wherein varying the spatial-audio specifications comprises varying the perceptual-audio-direction defined respectively by each spatial-audio specification” by teaching metadata associated with static and dynamic (audio) objects, where the metadata specifies attributes, such as relative spatial positions in three-dimensional space, and the metadata specifies these with dynamic attributes, such as changing positions, velocity, relative spatial positions, and watermarking suitability values (see Tracey, ¶ 0035, 0038, 0048, and 0057, and see Nurmukhanov, ¶ 0068-0069).
Claims 3, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Tracey and Nurmukhanov as applied to claims 1, 11, and 18 above, and further in view of Iliev et al. (US 2002/0059059 A1, previously cited and hereafter Iliev).
Regarding claim 3, see the preceding rejection with respect to claim 1 above. The combination of Tracey and Nurmukhanov makes obvious the method of claim 1, but does not appear to teach the features “wherein the information comprises a bit sequence including zero-bits and one-bits, and wherein varying the spatial-audio specifications over time in a manner that represents the information comprises varying the spatial-audio specifications over time with a first variation in spatial-audio specification for each zero-bit and a second variation in spatial-audio specification for each one-bit, the first variation differing from the second variation”.
Iliev discloses a method for embedding data into an audio signal, where the embedded data is imperceptible to a listener and the method embeds the data by modifying the phase component of the audio signal (see Iliev, abstract). Iliev teaches that prior-art data embedding includes transparent watermarking that leverages psychoacoustic techniques, such as frequency masking and temporal masking (see Iliev, ¶ 0010-0012), and teaches an improvement in which the signal processing takes advantage of binaural hearing principles, such as the minimum audible angle (MAA) (see Iliev, ¶ 0024). Iliev teaches that the interaural phase difference (IPD) is varied with respect to the MAA such that the embedded data is imperceptible to a listener (see Iliev, ¶ 0030-0034 and 0054). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Tracey and Nurmukhanov with the teaching of Iliev to embed data in an audio signal in an imperceptible manner for improved security and authentication of distributed audio (see Nurmukhanov, ¶ 0003 in view of Iliev, ¶ 0005 and 0010).
Therefore, the combination of Tracey, Nurmukhanov, and Iliev makes obvious the “method of claim 1, wherein the information comprises a bit sequence including zero-bits and one-bits, and wherein varying the spatial-audio specifications over time in a manner that represents the information comprises varying the spatial-audio specifications over time with a first variation in spatial-audio specification for each zero-bit and a second variation in spatial-audio specification for each one-bit, the first variation differing from the second variation” where Iliev makes it obvious to embed a security or authentication watermark in audio data by varying the IPD to encode a logical zero or one (see Nurmukhanov, ¶ 0006, 0011-0012, and 0097 in view of Iliev, ¶ 0010 and 0054 and equations 7.1-7.2).
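As an illustrative aside, the claimed first/second-variation bit mapping, read together with Iliev's two IPD states, can be sketched as follows. The azimuth field and the ±0.5 degree offsets are hypothetical values chosen for this sketch, not drawn from the claims or the references:

```python
def encode_bits_as_azimuth(base_azimuths, bits, delta=0.5):
    """Apply a first variation (-delta) for each zero-bit and a different
    second variation (+delta) for each one-bit to per-frame azimuths."""
    return [az + (delta if bit else -delta)
            for az, bit in zip(base_azimuths, bits)]

def decode_bits_from_azimuth(base_azimuths, varied_azimuths):
    """Recover the bit sequence from the sign of each frame's deviation."""
    return [1 if varied > base else 0
            for base, varied in zip(base_azimuths, varied_azimuths)]
```

The point of the sketch is only that two distinct variations in a spatial-audio specification suffice to carry a binary payload; any perceptually small, sign-distinguishable pair of variations would serve.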
Regarding claim 13, see the preceding rejection with respect to claims 11 and 3 above. The combination of Tracey and Nurmukhanov makes obvious the computing system of claim 11, but does not appear to teach the features “wherein the information comprises a bit sequence including zero-bits and one-bits, and wherein varying the spatial-audio specifications over time in a manner that represents the information comprises varying the spatial-audio specifications over time with a first variation in spatial-audio specification for each zero-bit and a second variation in spatial-audio specification for each one-bit, the first variation differing from the second variation”.
For the same reasons as cited above with respect to claim 3, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Tracey and Nurmukhanov with the teaching of Iliev to embed data in an audio signal in an imperceptible manner for improved security and authentication of distributed audio (see Nurmukhanov, ¶ 0003 in view of Iliev, ¶ 0005 and 0010).
Therefore, the combination of Tracey, Nurmukhanov, and Iliev makes obvious the “computing system of claim 11, wherein the information comprises a bit sequence including zero-bits and one-bits, and wherein varying the spatial-audio specifications over time in a manner that represents the information comprises varying the spatial-audio specifications over time with a first variation in spatial-audio specification for each zero-bit and a second variation in spatial-audio specification for each one-bit, the first variation differing from the second variation” because Iliev makes it obvious to embed a security or authentication watermark in audio data by varying the IPD to encode a logical zero or one (see Nurmukhanov, ¶ 0006, 0011-0012, and 0097 in view of Iliev, ¶ 0010 and 0054 and equations 7.1-7.2).
Regarding claim 20, see the preceding rejection with respect to claims 18 and 3 above. The combination of Tracey and Nurmukhanov makes obvious the at least one non-transitory computer-readable medium of claim 18, but does not appear to teach the features “wherein the information comprises a bit sequence including zero-bits and one-bits, and wherein varying the spatial-audio specifications over time in a manner that represents the information comprises varying the spatial-audio specifications over time with a first variation in spatial-audio specification for each zero-bit and a second variation in spatial-audio specification for each one-bit, the first variation differing from the second variation”.
For the same reasons as cited above with respect to claim 3, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Tracey and Nurmukhanov with the teaching of Iliev to embed data in an audio signal in an imperceptible manner for improved security and authentication of distributed audio (see Nurmukhanov, ¶ 0003 in view of Iliev, ¶ 0005 and 0010).
Therefore, the combination of Tracey, Nurmukhanov, and Iliev makes obvious the “at least one non-transitory computer-readable medium of claim 18, wherein the information comprises a bit sequence including zero-bits and one-bits, and wherein varying the spatial-audio specifications over time in a manner that represents the information comprises varying the spatial-audio specifications over time with a first variation in spatial-audio specification for each zero-bit and a second variation in spatial-audio specification for each one-bit, the first variation differing from the second variation” because Iliev makes it obvious to embed a security or authentication watermark in audio data by varying the IPD to encode a logical zero or one (see Nurmukhanov, ¶ 0006, 0011-0012, and 0097 in view of Iliev, ¶ 0010 and 0054 and equations 7.1-7.2).
Allowable Subject Matter
Claims 4, 6, 14, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniel R Sellers whose telephone number is (571)272-7528. The examiner can normally be reached Mon - Fri 10:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan S Tsang can be reached at (571)272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Daniel R Sellers/ Primary Examiner, Art Unit 2694