DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-2, 5-13, and 16-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
The declaration under 37 CFR 1.132 filed 11/25/2025 is insufficient to overcome the rejection of claims 1-2, 5-13, and 16-21 based upon 35 U.S.C. 103 as set forth in the last Office action because:
It includes statements which amount to an affirmation that the affiant has never seen the claimed subject matter before. This is not relevant to the issue of nonobviousness of the claimed subject matter and provides no objective evidence thereof. See MPEP § 716.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 5, 7, 11-12, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hess (US 2012/0008806 A1, previously cited), in view of Tammam et al. (US 2017/0354196 A1, previously cited as pertinent and hereafter Tammam), and further in view of Oswald et al. (US 2014/0334637 A1, previously cited and hereafter Oswald).
Regarding claim 1, the combination of Hess, Tammam, and Oswald makes obvious a system with these features as shown below.
Hess teaches “A system for providing augmented spatialized audio in a vehicle” (see Hess, abstract). The system comprises a headtracking device, such as a camera or other suitable image sensor, which detects the position of the first user's head in the vehicle cabin (see Hess, figure 2, unit 270a and ¶ 0022). Hess further teaches that the system comprises a plurality of loudspeakers disposed in a vehicle cabin, such as pairs of loudspeakers disposed in each headrest of a convertible car’s cabin (see Hess, figure 2, units 110L, 110R, 210L, and 210R and ¶ 0018-0020), and at least one additional loudspeaker (e.g., a woofer) for transducing lower frequencies (see Hess, ¶ 0018). Of note, Hess also teaches the use of this sound system in closed-roof vehicles, such as trucks, motor boats, or airplanes (see Hess, ¶ 0019).
Next, Hess teaches “a controller configured to output to a first binaural device, according to the position of the first user's head in the vehicle, a first spatial audio signal, such that the first binaural device produces a first spatial acoustic signal perceived by the first user as originating from a first virtual source location within the vehicle cabin.” In Hess, a signal processor receives a position signal from the headtracking camera or sensor and outputs a user-specific binaural sound output to the first binaural device (i.e., the headrest loudspeakers), such that the first binaural device outputs the first spatial acoustic signal perceived as originating from the first virtual source location in the vehicle cabin. Specifically, the system receives at least one audio signal from the audio signal source and processes it with the BRIR filters and the crosstalk cancellation filters to spatialize the received audio signal at an intended location, as if the user were listening to loudspeakers corresponding to the format of the multichannel audio signal source. For example, the system spatializes, with the BRIR and crosstalk cancellation filters, two audio signals to the typical locations of stereo loudspeakers (or spatializes six channels of a 5.1 audio signal to the typical locations of loudspeakers in a 5.1 audio system, etc.), such that when the audio is output from the headrest loudspeakers and processed based on the user’s head position, the user perceives sound from virtual locations in fixed positions relative to the cabin of the vehicle (i.e., if the user moves their head, the sound is still perceived as coming from a fixed virtual position, such as a front left position corresponding to a front left loudspeaker) (see Hess, figure 2, units 110L, 110R, 260, and 270a, figure 3, units 260-261 and 270, and ¶ 0020-0023), and
Hess also teaches the feature “wherein the first spatial audio signal comprises at least a first upper range of a first content signal” in that lower frequencies are not output by the loudspeakers in the headrest (see Hess, ¶ 0018 and 0022).
However, Hess does not explicitly teach that the headtracking device (i.e., the camera or other image sensor) is “a time-of-flight sensor”.
Tammam teaches an augmented-audio enhanced perception system for improving a driver’s performance in various vehicles (see Tammam, abstract and ¶ 0003). Tammam teaches the system for use with a driver located in the cabin of a vehicle, wherein the angular orientation of the operator’s head is tracked by one or more time-of-flight cameras mounted on the dashboard of the vehicle (see Tammam, ¶ 0099). Hess teaches head tracking using a camera (see Hess, ¶ 0022-0023, figure 2, units 270a-270b, and figure 3, unit 270) and notes that other suitable image sensors are usable to track the head positions of the user (see Hess, ¶ 0022). One of ordinary skill in the art (OOSITA) would therefore have found it obvious to track the head positions of the user with a time-of-flight camera, because Tammam teaches that a time-of-flight camera is a suitable image sensor for tracking the angular orientation of the operator’s head (see Hess, ¶ 0022-0023 in view of Tammam, ¶ 0099). It would have been obvious to OOSITA at the time of the effective filing date to modify Hess with the teachings of Tammam for the purpose of providing an alternate method of tracking the user’s head with a different suitable image sensor, such as a time-of-flight camera, with an expectation of results similar to tracking the head position with a camera (see Hess, ¶ 0022 in view of Tammam, ¶ 0099). OOSITA would expect similar results because Tammam teaches that the time-of-flight camera provides the angular orientation of the operator’s head by tracking the operator’s head; OOSITA would expect that detecting the head position of a user using other suitable image sensors, as taught by Hess (see Hess, ¶ 0022), extends to detecting the head position of a user from time-of-flight camera signals (see Tammam, ¶ 0099).
Therefore, the combination makes obvious the system comprising features of
“a headtracking device, comprising a time-of-flight sensor, configured to detect a position of a first user's head in the vehicle cabin, the position of the first user's head in the vehicle cabin including an orientation of the first user's head”, by making obvious the use of a time-of-flight camera to track the first user’s head in the vehicle cabin, wherein the tracking provides the angular orientation of the first user’s head (see Hess, ¶ 0022 in view of Tammam, ¶ 0099).
However, the combination of Hess and Tammam does not appear to teach the feature wherein the system comprises “a plurality of speakers disposed in a perimeter of a cabin of the vehicle”.
Oswald discloses an automobile system with near-field speakers close to a listener’s head to produce sound that locates a sound source at a position other than the actual speaker positions (see Oswald, abstract). Herein, Oswald teaches “a plurality of speakers disposed in a perimeter of a cabin of the vehicle” (see Oswald, figure 1, units 104, 106, 108, 110, 112, and 114, and ¶ 0003 and 0020) and, similar to Hess, teaches near-field speakers in a driver’s headrest for generating virtual sound sources from the near-field speakers (see Oswald, figure 1, units 122 and 124, figure 4, units 122, 124, 224-1, and 224-n, and ¶ 0021 and 0027-0030). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Hess and Tammam with the teachings of Oswald for the purpose of improving the sound staging of sound coming from front speakers when using the headrest speakers for virtual positioning (see Hess, ¶ 0020-0022 in view of Oswald, ¶ 0024 and 0027-0028).
Therefore, the combination of Hess, Tammam, and Oswald makes obvious:
“A system for providing augmented spatialized audio in a vehicle” (see Hess, abstract), “comprising:
a headtracking device, comprising a time-of-flight sensor, configured to detect a position of a first user's head in the vehicle cabin, the position of the first user's head in the vehicle cabin including an orientation of the first user's head” by making obvious the use of a time-of-flight camera to track the first user’s head in the vehicle cabin, wherein the tracking provides the angular orientation of the first user’s head (see Hess, ¶ 0022 in view of Tammam, ¶ 0099);
“a plurality of speakers disposed in a perimeter of a cabin of the vehicle” to improve the sound staging of sound coming from front speakers when using the headrest speakers for virtual positioning (see Hess, ¶ 0018-0021 in view of Oswald, figure 1, units 104, 106, 108, 110, 112, and 114 and ¶ 0003, 0020, 0024, and 0027-0028); and
“a controller configured to output to a first binaural device, according to the position of the first user's head in the vehicle, a first spatial audio signal, such that the first binaural device produces a first spatial acoustic signal perceived by the first user as originating from a first virtual source location within the vehicle cabin” because the system receives at least one audio signal from the audio signal source and processes it with the BRIR filters and the crosstalk cancellation filters to spatialize the received audio signal at an intended location, as if the user were listening to loudspeakers corresponding to the format of the multichannel audio signal source. For example, the system spatializes, with the BRIR and crosstalk cancellation filters, two audio signals to the typical locations of stereo loudspeakers (or spatializes six channels of a 5.1 audio signal to the typical locations of loudspeakers in a 5.1 audio system, etc.), such that when the audio is processed based on the user’s head position and output from the first binaural device (i.e., a pair of headrest loudspeakers), the user perceives sound from virtual locations in fixed positions relative to the cabin of the vehicle (i.e., if the user moves their head, the sound is still perceived as coming from a fixed virtual position, such as a front left position corresponding to a front left loudspeaker) (see Hess, figure 2, units 110L, 110R, 260, and 270a, figure 3, units 260-261 and 270, and ¶ 0020-0023),
“wherein the first spatial audio signal comprises at least a first upper range of a first content signal” such that the headrest speakers output at least a first upper range of the first content signal, while lower frequencies are not output by the headrest speakers and the loudspeakers arranged around the cabin provide improved sound staging (see Hess, ¶ 0018 and 0022-0023, in view of Oswald, figure 4 and ¶ 0020-0022 and 0028-0029), and
“wherein the controller is further configured to drive the plurality of speakers with a driving signal such that a first bass content of the first content signal is produced in the vehicle cabin” because the combination makes it obvious to use the additional pairs of speakers to improve the sound heard by the user, such as improving the sound quality below about 100 Hz (see Hess, figure 2, units 110L, 110R, 230, and 260 and ¶ 0018, in view of Oswald, figure 4, units 104 and 106 and ¶ 0021, 0027-0029, and 0032-0033).
Regarding claim 5, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system of claim 1, wherein the headtracking device further comprises a plurality of two-dimensional cameras” where Hess teaches at least two cameras for tracking two different users and Tammam makes it obvious to use a plurality of time-of-flight, or 2D, cameras (see Hess, ¶ 0022 in view of Tammam, ¶ 0099).
Regarding claim 7, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system of claim 1, wherein the controller is further configured to output to a second binaural device, according to the position of a second user's head in the vehicle, a second spatial audio signal, such that the second binaural device produces a second spatial acoustic signal perceived by the second user as originating from either the first virtual source location or a second virtual source location within the vehicle cabin” because Hess teaches a second binaural device and second spatial audio signal with a second headtracking camera, where the first listener’s and second listener’s spatial audio signals, or generated soundfields, are different (see Hess, figure 5, units 110L, 110R, 210L, 210R, 270a, and 270b and ¶ 0026, and also see Oswald, figure 1, units 128 and 130 and ¶ 0022).
Regarding claim 11, see the preceding rejection with respect to claim 7 above. The combination makes obvious the “system of claim 7, wherein the first binaural device and the second binaural device are each selected from one of a set of speakers disposed in a headrest or an open-ear wearable” because Hess teaches the speakers are disposed in the headrests (see Hess, figure 2, units 110L, 110R, 210L, and 210R and ¶ 0020, and also see Oswald, figure 1, units 122, 124, 128, and 130 and ¶ 0021-0022).
Regarding claim 12, see the preceding rejection with respect to claim 1 above. The combination of Hess, Tammam, and Oswald makes obvious the system of claim 1, and likewise makes obvious:
“A method for providing augmented spatialized audio in a vehicle cabin” (see Hess, abstract), “comprising the steps of:
determining, according to a headtracking signal output from a headtracking device comprising a time-of-flight sensor, a position of a first user's head in the vehicle cabin, the position of the first user's head in the vehicle cabin including an orientation of the first user's head” by making obvious the use of a time-of-flight camera to track the first user’s head in the vehicle cabin, wherein the tracking provides the angular orientation of the first user’s head (see Hess, ¶ 0022 in view of Tammam, ¶ 0099);
“outputting to a first binaural device, according to the position of the first user's head in the vehicle cabin, a first spatial audio signal, such that the first binaural device produces a first spatial acoustic signal perceived by the first user as originating from a first virtual source location within the vehicle cabin” because the system receives at least one audio signal from the audio signal source and processes it with the BRIR filters and the crosstalk cancellation filters to spatialize the received audio signal at an intended location, as if the user were listening to loudspeakers corresponding to the format of the multichannel audio signal source. For example, the system spatializes, with the BRIR and crosstalk cancellation filters, two audio signals to the typical locations of stereo loudspeakers (or spatializes six channels of a 5.1 audio signal to the typical locations of loudspeakers in a 5.1 audio system, etc.), such that when the audio is processed based on the user’s head position and output from the first binaural device (i.e., a pair of headrest loudspeakers), the user perceives sound from virtual locations in fixed positions relative to the cabin of the vehicle (i.e., if the user moves their head, the sound is still perceived as coming from a fixed virtual position, such as a front left position corresponding to a front left loudspeaker) (see Hess, figure 2, units 110L, 110R, 260, and 270a, figure 3, units 260-261 and 270, and ¶ 0020-0023),
“wherein the first spatial audio signal comprises at least an upper range of a first content signal” such that the headrest speakers output at least a first upper range of the first content signal, while lower frequencies are not output by the headrest speakers and the loudspeakers arranged around the cabin provide improved sound staging (see Hess, ¶ 0018 and 0022-0023, in view of Oswald, ¶ 0020-0022 and 0028-0029, and figure 4); and
“driving a plurality of speakers with a driving signal such that a first bass content of the first content signal is produced in the vehicle cabin” because the combination makes it obvious to use the additional pairs of speakers to improve the sound heard by the user, such as improving the sound quality below about 100 Hz (see Hess, ¶ 0018 and figure 2, units 110L, 110R, 230, and 260, in view of Oswald, ¶ 0021, 0027-0029, and 0032-0033, and figure 4, units 104 and 106).
Regarding claim 16, see the preceding rejection with respect to claim 12 above. The combination makes obvious the “method of claim 12, wherein the headtracking device further comprises a plurality of two-dimensional cameras” where Hess teaches at least two cameras for tracking two different users and Tammam makes it obvious to use a plurality of time-of-flight, or 2D, cameras (see Hess, ¶ 0022 in view of Tammam, ¶ 0099).
Regarding claim 18, see the preceding rejection with respect to claim 12 above. The combination makes obvious the “method of claim 12, further comprising the steps of outputting to a second binaural device, according to a position of a second user's head in the vehicle cabin, a second spatial audio signal, such that the second binaural device produces a second spatial acoustic signal perceived by the second user as originating from a second virtual source location within the vehicle cabin” because Hess teaches a second binaural device and second spatial audio signal with a second headtracking camera, where the first listener’s and second listener’s spatial audio signals, or generated soundfields, are different (see Hess, figure 5, units 110L, 110R, 210L, 210R, 270a, and 270b and ¶ 0026, and also see Oswald, figure 1, units 128 and 130 and ¶ 0022).
Claims 2, 8-10, 13, and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Hess, Tammam, and Oswald as applied to claims 1 and 12 above, and further in view of Christoph et al. (US 2018/0146290 A1, previously cited and hereafter Christoph).
Regarding claim 2, see the preceding rejection with respect to claim 1 above. The combination of Hess, Tammam, and Oswald makes obvious the system of claim 1, but the combination does not appear to teach a time-alignment feature.
Christoph discloses individual delay compensation for personal sound zones (see Christoph, abstract). Christoph teaches a listening space, such as inside a vehicle, where the listening space is divided into multiple sound zones having different reproduced sound material (see Christoph, figure 1, unit 100 and ¶ 0024 and 0028-0029). Herein, Christoph teaches the system having plural speakers disposed in the doors, dashboard, and/or hat shelf of the vehicle and speakers in the headrests (see Christoph, figure 4, units 108A-108R and ¶ 0035 and 0039). Specifically, Christoph teaches the use of a pressure matching technique to match the complex pressures of the wavefronts generated by the speakers in the cabin, such that sound pressure is created in the bright zone (e.g., the driver’s position) and zero pressure in the dark zones (e.g., the other seating positions) (see Christoph, figure 5 and ¶ 0036 and 0040). Additionally, Christoph teaches delaying the different loudspeaker outputs to reduce audible artifacts (see Christoph, figure 8 and ¶ 0033, 0043-0045, and 0054). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Hess, Tammam, and Oswald with the teachings of Christoph for the purpose of generating a plurality of sound zones that reproduce different sound content with limited interference from adjacent sound zones (see Christoph, ¶ 0002-0003).
Therefore the combination of Hess, Tammam, Oswald, and Christoph makes obvious the “system of claim 1, wherein the controller is configured to time-align the production of the first bass content with the production of the first spatial acoustic signal” where Christoph makes obvious the time-alignment of the signals in the outputs of the plural loudspeakers for a particular zone of audio (see Christoph, figure 8 and ¶ 0033, 0043-0044, 0054-0055, and 0064-0065).
Regarding claim 8, see the preceding rejection with respect to claims 2 and 7 above. The combination of Hess, Tammam, and Oswald makes obvious the system of claim 7, and for similar reasons as claim 2 above, the combination of Hess, Tammam, Oswald, and Christoph makes obvious the “system of claim 7, wherein the second spatial audio signal comprises at least an upper range of a second content signal, wherein the controller is further configured to drive the plurality of speakers in accordance with a first array configuration such that the first bass content is produced in a first listening zone within the vehicle cabin and in accordance with a second array configuration such that a second bass content of the second content signal produced in a second listening zone within the vehicle cabin, wherein in the first listening zone a magnitude of the first bass content is greater than a magnitude of the second bass content and in the second listening zone the magnitude of the second bass content is greater than the magnitude of the first bass content” because Hess teaches the headrest speakers output sound above 100 Hz and a cabin loudspeaker is configured to output for lower frequencies, Oswald teaches similar near-field loudspeakers and further teaches plural loudspeakers in the cabin for improving the sound imaging, and Christoph makes obvious the separate zones of audio where the bass signal for a first zone does not interfere with a bass signal for an adjacent zone (see Hess, ¶ 0018, in view of Oswald, ¶ 0020-0021 and 0027-0028, and further in view of Christoph, figure 5 and ¶ 0002, 0036, and 0039-0040).
Regarding claim 9, see the preceding rejection with respect to claim 8 above. The combination makes obvious the “system of claim 8, wherein the controller is configured to time-align, in the first listening zone, the production of the first bass content with the production of the first spatial acoustic signal and to time-align, in the second listening zone, the production of the second bass content with the second spatial acoustic signal” where Christoph makes obvious the time-alignment of the signals in the outputs of the plural loudspeakers for a particular zone of audio (see Christoph, figure 8 and ¶ 0033, 0043-0044, 0054-0055, and 0064-0065).
Regarding claim 10, see the preceding rejection with respect to claim 8 above. The combination makes obvious the “system of claim 8, wherein, in the first listening zone, the magnitude of the first bass content exceeds the magnitude of the second bass content by three decibels, wherein, in the second listening zone, the magnitude of the second bass content exceeds the magnitude of the first bass content by three decibels” where Christoph makes obvious this separation in the audio zones in low frequencies (see Christoph, figure 5).
Regarding claim 13, see the preceding rejection with respect to claims 2 and 12 above. The combination of Hess, Tammam, and Oswald makes obvious the method of claim 12, and for similar reasons as stated above with respect to claim 2, the combination of Hess, Tammam, Oswald, and Christoph makes obvious the “method of claim 12, wherein the production of the first bass content is time-aligned with the production of the first spatial acoustic signal” where Christoph makes obvious the time-alignment of the signals in the outputs of the plural loudspeakers for a particular zone of audio (see Christoph, figure 8 and ¶ 0033, 0043-0044, 0054-0055, and 0064-0065).
Regarding claim 19, see the preceding rejection with respect to claims 2 and 18 above. The combination of Hess, Tammam, and Oswald makes obvious the method of claim 18, and for similar reasons as claim 2 above, the combination of Hess, Tammam, Oswald, and Christoph makes obvious the “method of claim 18, wherein the plurality of speakers are driven in accordance with a first array configuration such that the first bass content is produced in a first listening zone within the vehicle cabin and in accordance with a second array configuration such that a second bass content of a second content signal is produced in a second listening zone within the vehicle cabin, wherein in the first listening zone a magnitude of the first bass content is greater than a magnitude of the second bass content and in the second listening zone the magnitude of the second bass content is greater than the magnitude of the first bass content, wherein the second spatial audio signal comprises at least an upper range of a second content signal” because Hess teaches the headrest speakers output sound above 100 Hz and a cabin loudspeaker is configured to output for lower frequencies, Oswald teaches similar near-field loudspeakers and further teaches plural loudspeakers in the cabin for improving the sound imaging, and Christoph makes obvious the separate zones of audio where the bass signal for a first zone does not interfere with a bass signal for an adjacent zone (see Hess, ¶ 0018, in view of Oswald, ¶ 0020-0021 and 0027-0028, and further in view of Christoph, figure 5 and ¶ 0002, 0036, and 0039-0040).
Regarding claim 20, see the preceding rejection with respect to claim 19 above. The combination makes obvious the “method of claim 19, wherein in the first listening zone, the production of the first bass content is time-aligned with the production of the first spatial acoustic signal and in the second listening zone, the production of the second bass content is time-aligned with the second spatial acoustic signal” where Christoph makes obvious the time-alignment of the signals in the outputs of the plural loudspeakers for a particular zone of audio (see Christoph, figure 8 and ¶ 0033, 0043-0044, 0054-0055, and 0064-0065).
Regarding claim 21, see the preceding rejection with respect to claim 19 above. The combination makes obvious the “method of claim 19, wherein, in the first listening zone, the magnitude of the first bass content exceeds the magnitude of the second bass content by three decibels, wherein, in the second listening zone, the magnitude of the second bass content exceeds the magnitude of the first bass content by three decibels” where Christoph makes obvious this separation in the audio zones in low frequencies (see Christoph, figure 5).
Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Hess, Tammam, and Oswald as applied to claims 1 and 12 above, and further in view of Karkkainen et al. (US 2019/0357000 A1, previously cited and hereafter Karkkainen).
Regarding claim 6, see the preceding rejection with respect to claim 1 above. The combination of Hess, Tammam, and Oswald makes obvious the system of claim 1, wherein the headtracking device is a camera (see Hess, ¶ 0022). However, the combination does not appear to teach a trained neural network for headtracking.
Karkkainen discloses methods and apparatuses for implementing a head-tracking headset (see Karkkainen, abstract). In particular, Karkkainen teaches a machine learning based 3D audio rendering (MLB3DAR) system, where the system includes machine learning circuitry to improve tracking of the user’s head position in order to output binaural audio via HRTFs (see Karkkainen, ¶ 0022, 0031, and 0042-0043, and figure 2, units 200, 210, and 212). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Hess, Tammam, and Oswald with the teachings of Karkkainen by applying machine learning/artificial intelligence to the camera system of the combination, for the purpose of achieving similar or improved headtracking ability (see Hess, ¶ 0022 in view of Karkkainen, ¶ 0003, 0018, and 0042). Therefore, the combination of Hess, Tammam, Oswald, and Karkkainen makes obvious the “system of claim 1, further comprising a neural network trained to determine the position of the first user's head according to headtracking signal output from the headtracking device” (see Hess, ¶ 0022, in view of Karkkainen, ¶ 0018, 0022, 0031, and 0042-0043, and figure 2, units 200, 210, and 212).
Regarding claim 17, see the preceding rejection with respect to claims 6 and 12 above. The combination of Hess, Tammam, and Oswald makes obvious the method of claim 12, and for the same reasons as claim 6 above, the combination of Hess, Tammam, Oswald, and Karkkainen makes obvious the “method of claim 12, wherein determining the position of the first user's head in the vehicle cabin comprises inputting the headtracking signal to a neural network trained to determine the position of the first user's head according to the headtracking signal” (see Hess, ¶ 0022, in view of Karkkainen, ¶ 0018, 0022, 0031, and 0042-0043, and figure 2, units 200, 210, and 212).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 12 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 3 and 10 of U.S. Patent No. 11,700,497 B2 (hereafter ‘497) in view of Tammam (US 2017/0354196 A1, cited above with respect to 35 USC 103 rejections).
Regarding instant claim 1, claim 3 of ‘497 reads on the broader features of the instant claim, such as that the headtracking device comprises a time-of-flight sensor, but does not read on the feature wherein “the position of the first user's head in the vehicle cabin including an orientation of the first user's head”.
Tammam, as shown above with respect to the 35 USC 103 rejection of claim 1, teaches a time-of-flight sensor that tracks a driver’s head by providing an angular orientation of the driver’s head (see Tammam, ¶ 0099). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify claim 3 of ‘497 with the teachings of Tammam for the purpose of providing improved headtracking, such as providing the angular orientation of the user’s head (see Tammam, ¶ 0099).
Regarding instant claim 12, see the preceding nonstatutory double patenting rejection with respect to claim 1 above. Claim 10 of ‘497 reads on the broader features of the instant claim, such as that the headtracking device comprises a time-of-flight sensor, but does not read on the feature wherein “the position of the first user's head in the vehicle cabin including an orientation of the first user's head”. For the same reasons, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify claim 10 of ‘497 with the teachings of Tammam for the purpose of providing improved headtracking, such as providing the angular orientation of the user’s head (see Tammam, ¶ 0099).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Aguirre-Valencia (US 2014/0225887 A1, previously cited) teaches a computer system that provides stereoscopic images by tracking the user’s head motion to update the stereoscopic images (see Aguirre-Valencia, abstract and ¶ 0048 and 0052); Aguirre-Valencia teaches the use of a time-of-flight technique to track the head position of the user, such as using a KINECT from MICROSOFT CORPORATION (see Aguirre-Valencia, ¶ 0055); and
Ziraknejad et al., “The Effect of Time-of-Flight Camera Integration Time on Vehicle Driver Head Pose Tracking Accuracy” discloses an experimental study to determine the accuracy of a 3D depth map as dependent on camera integration time, where the 3D depth map is used to estimate a vehicle driver’s head orientation (see Ziraknejad, p. 247, abstract).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniel R Sellers whose telephone number is (571)272-7528. The examiner can normally be reached Mon - Fri 10:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan S Tsang can be reached at (571)272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Daniel R Sellers/ Primary Examiner, Art Unit 2694