Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This Office action is in response to the application filed 04/19/2023.
Priority
2. The Examiner notes that a foreign priority claim is asserted; however, the foreign priority application was not available for review, and therefore the priority claim could not be verified by the Examiner.
Claim Rejections - 35 USC § 112
3. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 13-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Dependent claim 13 depends on the apparatus of claim 11; however, it is believed that the intention was to have claim 13 depend from independent claim 12 (an apparatus claim), since claim 11 is a dependent method claim that depends from claim 1 (a method claim).
Claims 14-19 are rejected for the same reasons addressed with respect to claim 13, as they depend from claim 13.
Dependent claim 20 depends on the apparatus of claim 11; however, it is believed that the intention was to have claim 20 depend from independent claim 12 (an apparatus claim), since claim 11 is a dependent method claim that depends from claim 1 (a method claim).
Claim Rejections - 35 USC § 102
4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by SPITTLE (Pub. No. US 2023/0300532 A1).
Regarding claims 1 and 12, SPITTLE teaches a spatial audio processing method and apparatus (reads on spatial/binaural rendering and spatial audio processing applied to audio streams, see [0217], [0218], [0959] and [0966]) comprising:
obtaining first movement information of a video reproduction device (reads on receiving motion data from external devices and sensors, including head movement, orientation, and motion sensors external to the ear device (e.g., head-tracked synthetic heads, remote devices), see external sensor triggers as discussed in [0251]-[0254] and external/synthetic head and remote device motion as discussed in [0965]);
obtaining second movement information of an audio reproduction device (reads on motion data obtained from ear devices/earbuds (audio reproduction devices) including orientation and placement information, see [0233], [0240] and [0254]);
obtaining an audio signal (reads on receiving audio data streams from microphones, external sources, and audio I/O interfaces for processing, see [0240], [0242] and [0258]); and
performing spatial audio processing on the audio signal based on whether the first movement information satisfies a predetermined condition (reads on evaluating motion data and using it to control spatial rendering and signal processing behavior, including condition-based determination of correct placement, orientation, and consistency, see [0233], [0221] and [0222]),
wherein, when the first movement information satisfies the predetermined condition, the spatial audio processing is performed based on the second movement information (note that SPITTLE teaches that when correct placement or orientation is detected, processing parameters and plugins may be swapped or applied based on data from the corresponding ear/audio device, see [0221] and [0233]), and
wherein, when the first movement information does not satisfy the predetermined condition, the spatial audio processing is performed based on the first movement information and the second movement information (reads on sharing motion and sensor data between ear devices to compensate for incorrect placement, inconsistency, or mismatch, thereby using data from multiple devices jointly for spatial processing, see [0221], [0222] and [0242]).
The claimed apparatus is read as the audio system (or chip/device) that includes a processor configured to perform spatial audio processing, as disclosed in SPITTLE, and the claimed “processor” in the apparatus recited in independent claim 12 reads on the main processing core, DSP, or microDSP configured to perform the same operations recited in method claim 1 (see [0238]-[0242]).
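For illustration of the mapped claim logic only, the following is a hypothetical Python sketch of the conditional processing recited in claims 1 and 12 (the Examiner's illustration, not SPITTLE's implementation; the names, the yaw counter-rotation, and the averaging step are all assumptions):

from dataclasses import dataclass

@dataclass
class Movement:
    yaw: float    # degrees, from the device's IMU
    pitch: float  # degrees, from the device's IMU

def render(source_azimuth: float, movement: Movement) -> float:
    # Toy "spatial audio processing": counter-rotate the source azimuth
    # by the tracked yaw so the source stays fixed in world space.
    return source_azimuth - movement.yaw

def spatialize(source_azimuth: float,
               first: Movement,   # video reproduction device
               second: Movement,  # audio reproduction device
               condition) -> float:
    if condition(first):
        # Condition satisfied: process based on the second movement
        # information (the audio reproduction device) alone.
        return render(source_azimuth, second)
    # Condition not satisfied: process based on both the first and the
    # second movement information (here, a simple average).
    combined = Movement(yaw=(first.yaw + second.yaw) / 2,
                        pitch=(first.pitch + second.pitch) / 2)
    return render(source_azimuth, combined)

# Example: a simple yaw-threshold condition on the video device's movement.
condition = lambda m: abs(m.yaw) > 30.0
print(spatialize(45.0, Movement(40.0, 0.0), Movement(10.0, 0.0), condition))  # 35.0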
Regarding claims 2 and 13, SPITTLE teaches,
wherein the first movement information comprises at least one of spatial information and direction information of the video reproduction device (reads on using motion data, spatial information, orientation, and directional data derived from sensors such as an IMU, gyroscope, or magnetometer, associated with devices involved in audio/visual experiences, including head-tracked and video-related contexts that affect spatial rendering, see [0233], [0959], [0965], [0680], [0915] and [0932]), and
wherein the second movement information comprises at least one of spatial information and direction information of the audio reproduction device (reads on ear-worn audio devices (e.g., earbuds, synthetic ears, audio devices) including motion sensors that provide spatial position, orientation, and direction information, which is used to control spatial audio processing and rendering, see [0233], [0218] and [0965]).
Regarding claims 3 and 14, SPITTLE teaches,
wherein the first movement information is obtained by an inertial measurement unit (IMU) of the video reproduction device (reads on movement information obtained using motion sensors, including IMUs, providing orientation, motion, and spatial data for devices involved in immersive and audio-visual experiences; such motion sensing applies to devices participating in spatial rendering, including video-related devices, see [0233], [0959] and [0965]), wherein the second movement information is obtained by an IMU of the audio reproduction device (reads on ear-worn audio devices including motion sensor IMUs used to detect the orientation, motion, and position of the audio reproduction device for spatial audio processing, see [0233], [0218] and [0965]), and
wherein each of the IMU of the video reproduction device and the IMU of the audio reproduction device comprises at least one of an acceleration sensor (see [0233] and [0965]), an angular velocity sensor (gyroscope), and a geomagnetic sensor (magnetometer).
Regarding claim 4, SPITTLE teaches,
wherein the predetermined condition is a case in which a quaternion value of the video reproduction device is greater than a predetermined value (reads on evaluating motion/orientation values derived from IMU data against thresholds/conditions to determine how spatial audio processing is performed. Orientation and movement states are assessed relative to predetermined criteria to control processing behavior, see [0233] and [0254]), and
wherein the quaternion value (note that determining a device's orientation using orientation data derived from motion sensors inherently includes quaternion-based representations used in IMU-based spatial rendering and orientation tracking, see [0233] and [0959]) is obtained based on a value obtained from at least one of the acceleration sensor (reads on motion sensing using acceleration sensors as part of IMU-based orientation determination for spatial audio processing, see [0233] and [0295]), the angular velocity sensor, and the geomagnetic sensor of the video reproduction device.
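For illustration of the mapped threshold comparison only, a hypothetical Python sketch follows (the angle extraction and the 45-degree threshold are illustrative assumptions, not drawn from SPITTLE):

import math

def rotation_angle(q):
    # Rotation angle in radians encoded by a unit quaternion (w, x, y, z).
    w = max(-1.0, min(1.0, q[0]))
    return 2.0 * math.acos(abs(w))

def condition_met(q, predetermined_value_rad):
    # Claim 4 as mapped: the quaternion-derived value of the video
    # reproduction device exceeds a predetermined value.
    return rotation_angle(q) > predetermined_value_rad

# Example: a 90-degree rotation about the z-axis exceeds a 45-degree threshold.
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(condition_met(q, math.radians(45)))  # True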
Regarding claims 5 and 16, SPITTLE teaches,
wherein the predetermined condition (reads on evaluating predefined conditions based on sensor data to determine whether and how spatial audio processing is performed, see [0233] and [0254]) satisfies at least one of a case (reads on one or more sensor-based conditions that may independently satisfy a trigger condition for spatial audio processing, see [0253] and [0254]) in which an acceleration obtained via the acceleration sensor of the video reproduction device (reads on motion sensors, including acceleration sensors, in video reproduction devices for detecting movement and spatial position, see [0218] and [0233]) is greater than a first value (reads on comparing sensor outputs against threshold values to detect significant movement events, see [0251] and [0254]), a case in which a variation in an angular velocity obtained via the angular velocity sensor (SPITTLE teaches gyroscopes (angular velocity sensors) for detecting rotational movement and changes in orientation of the video reproduction device, see [0218] and [0233]) of the video reproduction device is greater than a second value (reads on threshold-based evaluation of motion sensor data, including rotational changes, to trigger processing decisions, see [0251] and [0254]), and a case in which a variation in a magnetic field direction obtained via the geomagnetic sensor is greater than a third value (reads on detecting changes in sensor signals exceeding predefined thresholds as qualifying conditions for control logic, see [0251] and [0254]).
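For illustration only, the alternative conditions of claims 5 and 16 can be sketched as follows in Python, where exceeding any one threshold suffices (the variable names and example values are assumptions, not SPITTLE's disclosure):

def predetermined_condition(acceleration, angular_velocity_delta, magnetic_delta,
                            first_value, second_value, third_value):
    # Satisfied if ANY one of the three sensor readings of the video
    # reproduction device exceeds its respective predetermined value.
    return (acceleration > first_value
            or angular_velocity_delta > second_value
            or magnetic_delta > third_value)

# Example: only the angular velocity variation exceeds its threshold.
print(predetermined_condition(0.5, 3.2, 0.1,
                              first_value=9.8, second_value=2.0, third_value=1.0))  # True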
Regarding claims 6 and 17, SPITTLE teaches,
wherein the predetermined condition is a case in which the video reproduction device is determined as moving in a predetermined pattern repeatedly based on the first movement information (reads on quaternion-based orientation and motion modeling, see [0229]-[0233] and [0240]).
Regarding claim 7, SPITTLE teaches,
wherein the first value, the second value, and the third value (reads on multiple configurable parameters and values used in spatial audio processing that control rendering behavior and processing modes, see [0958] and [0961]) are values configured by learning information associated with a movement of a user (reads on learning behavior and movement patterns to personalize spatial audio processing, including adapting parameters based on how a user moves and interacts with devices, see [0959] and [0961]) of the video reproduction device and the audio reproduction device (reads on capturing user movement via motion sensors in devices worn or used by the user to influence spatial rendering and processing decisions, see [0233] and [0965]), and
wherein the learning is performed via machine learning (SPITTLE teaches the use of neural networks and machine learning engines to perform learning and adaptation of audio processing parameters, see [0234] and [0257]).
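For illustration only, configuring such values by learning a user's movement could take a form like the hypothetical Python sketch below (the mean-plus-deviation rule and sample values are the Examiner's illustrative assumptions; SPITTLE's machine learning engines are not limited to any particular rule):

import statistics

def learn_threshold(samples, k=2.0):
    # Hypothetical learning rule: set the threshold to the mean plus k
    # standard deviations of the user's recorded movement, so that only
    # atypical motion satisfies the predetermined condition.
    return statistics.mean(samples) + k * statistics.stdev(samples)

# Example: accelerations recorded during a user's ordinary use of the devices.
accel_samples = [0.1, 0.2, 0.15, 0.3, 0.25, 0.2, 0.22, 0.18, 0.27, 0.24]
first_value = learn_threshold(accel_samples)
print(round(first_value, 2))  # about 0.33 for this sample set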
Regarding claims 8 and 18, SPITTLE teaches,
wherein the predetermined condition (reads on evaluating spatial and positional conditions to determine when different spatial audio processing modes should be applied, see [0233] and [0254]) is a case in which the video reproduction device is located beyond a range of an angle of field of the user of the video reproduction device and the audio reproduction device (reads on determining whether a device is within or outside a user’s effective spatial/field-of-view range based on orientation and spatial positioning data, see [0233] and [0966]), and
wherein the range of the angle of field of the user is determined based on the audio reproduction device (reads on spatial perception and angle-based determinations derived from audio reproduction devices (e.g., earbuds/headphones) using motion, orientation, and spatial audio processing, see [0233], [0959] and [0966]).
Regarding claim 9, SPITTLE teaches,
wherein the predetermined pattern (reads on identifying motion patterns and behavioral patterns derived from sensor data over time to control spatial audio processing behavior, see [0233] and [0959]) corresponds to the first movement information (reads on motion information obtained from devices (e.g., head movement, orientation, spatial position) as input data for determining processing behavior, see [0218] and [0233]) repeated (reads on detecting recurring user behavior and repeated motion characteristics for learning and system adaptation purposes, see [0959] and [0961]) during a predetermined period of time (reads on analyzing sensor data over time windows and periods to derive behavioral and movement-based determinations, see [0234] and [0959]).
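For illustration only, a hypothetical Python sketch of testing whether movement information repeats a pattern during a predetermined period of time follows (the period, tolerance, and sample values are illustrative assumptions, not drawn from SPITTLE):

def repeats(samples, period, tolerance=0.05):
    # Hypothetical test: the movement is treated as a repeated pattern if
    # each sample matches the sample one period earlier, within a
    # tolerance, across the whole observation window.
    return all(abs(samples[i] - samples[i - period]) <= tolerance
               for i in range(period, len(samples)))

# Example: yaw readings captured over a predetermined period of time.
yaw_window = [0.0, 10.0, 0.0, -10.0] * 3  # side-to-side head motion
print(repeats(yaw_window, period=4))       # True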
Regarding claims 10 and 19, SPITTLE teaches,
wherein the video reproduction device comprises at least one of a sensor (reads on video devices including cameras and sensors for detecting a user's position, orientation, and visual characteristics, see [0218] and [0965]) that recognizes a face of a user of the video reproduction device and the audio reproduction device (reads on detecting user presence and facial characteristics using imaging and sensor data associated with video reproduction devices, see [0965] and [0966]), a sensor that recognizes a direction of a line of sight of the user (reads on determining where a user is looking and whether the user's attention is directed toward a display or elsewhere, see [0233] and [0966]), and a sensor that recognizes a direction of the face of the user (reads on detecting head orientation and face direction using sensor and motion data to control spatial audio rendering, see [0233] and [0218]), and
wherein the predetermined condition satisfies at least one of a case in which the face of the user is not recognized (reads on the user's presence or facial detection being unavailable or lost, see [0965] and [0966]), a case in which the direction of the face of the user is not toward a display of the video reproduction device that displays a video, and a case in which the direction of the line of sight of the user is not toward the display.
Regarding claims 11 and 20, SPITTLE teaches,
obtaining information associated with whether the display of the video reproduction device that displays a video is activated (reads on monitoring operational states of the video reproduction device, including whether video content is being displayed and whether the display is active, for purposes of controlling spatial audio processing, see [0218] and [0233]), wherein the predetermined condition is a case in which the display is deactivated (reads on changing spatial audio behavior when video presentation is unavailable or inactive, including cases where the display is turned off or not actively presenting video, see [0233] and [0966]).
Conclusion
5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rasha S. AL-Aubaidi whose telephone number is (571) 272-7481. The examiner can normally be reached on Monday-Friday from 8:30 am to 5:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached on (571) 272-7488.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/RASHA S AL AUBAIDI/Primary Examiner, Art Unit 2693