DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/20/2026 has been entered.
Response to Arguments
Applicant’s arguments, see the remarks filed 09/22/2025, with respect to amended claims 1, 17, and 19 have been fully considered but are moot in view of the new grounds of rejection relying on the teachings of Moles et al. (US 20220386025 A1) in view of Baum et al. (US 20230400574 A1), and Nielsen et al. (US 20220095052 A1) in view of Baum et al. (US 20230400574 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-9, and 11-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Moles et al. (US 20220386025 A1) in view of Baum et al. (US 20230400574 A1).
Regarding claims 1, 17, and 19, Moles discloses a video bar (figs. 1-3), comprising:
a processor (118 and 120 of fig. 1, more details 200 of fig. 2, and 300 of fig. 3); and
a memory coupled to the processor ([0035] For example, program code stored in a memory of the computing device 118 may be executed by a processor of the computing device 118, by the digital signal processor 120 itself, or a separate audio processor in order to carry out one or more of the operations shown in FIGS. 3 and 4. In some embodiments, the program code may be a computer program stored on a non-transitory computer readable medium that is executable by a processor of the device, [0044] As an example, the instructions can reside completely, or at least partially, within any one or more of the memory 234, the computer readable medium, and/or within the processor 232 during execution of the instructions),
the memory having program instructions stored thereon that, upon execution by the processor ([0035 and 0044]), cause the video bar to:
determine a location and an orientation of an Information Handling System (IHS) in a conference room ([0046] position information and orientation information, [0060] The acoustic measurements may include determining a location of the speakers 208 in the room and calculating a distance between the speaker 208 and each microphone 206, for example, [0061] determined position data, [0077] The controller 330 may also configure a setting of the AEC 346 for one or more of the microphone inputs 340 by determining a position of the speaker 208 within the conferencing environment 100 based on the audio response signals received in response to the test signal, and adjusting the AEC 346 settings for those inputs 340 based on the speaker position. For example, the AEC performance of the audio system 200 may be improved by placing an acoustic “null” in the region around the speaker 208, so that the microphone inputs 340 do not pick up audio being played by the speaker 208, thus reducing speaker interference in the incoming audio signals. In embodiments, the controller 330 can determine the speaker position by calculating a distance between the speaker 208 and each microphone 206 based on the audio response signal; [0071] the autoconfiguration component of the controller 330 may be further configured to calculate or estimate an expected distance between the given microphone 206 and a select audio source (e.g., talker), and use the expected distance to further adjust the input gain structure for that microphone 206, for example, based on the Inverse Square Law or the like, [0077] the controller 330 can determine the speaker position by calculating a distance between the speaker 208 and each microphone 206 based on the audio response signal; 402-405 of fig. 4, [0082-0086]); and
at least in part in response to the determined location and the determined orientation, operate an audio device in the conference room ([0046] communication interface 236 may enable the controller 230 to transmit information to and receive information from one or more of the microphone 206 and speaker 208, or other component(s) of the audio system 200. This can include device identifiers, device operation information, lobe or pick-up pattern information, position information, orientation information, commands to adjust one or more characteristics of the microphone or speaker, and more; [0009-0010] receive the audio response signal from each microphone; and adjust one or more settings of the DSP component based on the audio response signal; [0059-0060] For example, in embodiments, the autotuning component 240 can be configured to perform this comparison and other analysis of the captured audio signals (also referred to herein as an “audio response signal”) and use the results to automatically optimize or tune one or more settings of the DSP 220 accordingly, for example, to compensate for frequency resonance, unwanted gains, noise, and/or acoustic echo, and other concerns, as described herein. In such embodiments, the autotuning component 240 can be configured to analyze the captured audio by comparing the input signals to the original test signal and perform acoustic measurements for the conferencing environment 100 based on said analysis; [0070] the controller 330, or the autoconfiguration component included therein, can be configured to prescribe configurations, optimizations, or other adjustments for various aspects or settings of the DSP component 320 upon analyzing data obtained for the conferencing environment 100. 
Such data may include, for example, device information obtained from one or more networked devices (e.g., speakers 208, microphones 206, etc.); data obtained from analyzing audio signals captured by the networked microphones, including the audio response signals obtained in response to playing a test signal via a speaker in the room; and position or room data obtained for the room or conferencing environment (e.g., environment 100). The adjustments may relate to any configurable setting of the DSP component 320, including, for example, the audio processing blocks shown in FIG. 3 and described herein, as well as others. The controller 330 can be further configured to establish or carry out the adjustments at the DSP 320 by sending appropriate control signals to the corresponding blocks or components of the DSP 320, as shown in FIG. 3. The control signals may include, for example, parameters, values, and other information for configuring the corresponding DSP settings (e.g., EQ frequencies, gains, mutes, etc.); [0072] more example in figure 5, 504 and 506 of fig. 5, [0094-0096] receive audio response signal from microphone and adjust DSP setting(s) on audio response signal).
It is noted that Moles does not teach: transmit an electro-magnetic signal; in response to the transmission, receive an acknowledgement from the IHS; and determine the location comprising a distance between the video bar and the IHS based, at least in part, upon a Time-of-Flight (ToF) calculation, wherein the ToF calculation is based, at least in part, upon a difference between: (i) a time the acknowledgement is received, and (ii) a time of the transmission; to determine an orientation of the IHS in the conference room, triangulate a plurality of the ToF calculations from a plurality of antennas configured at a known distance apart on the IHS.
Baum discloses transmit an electro-magnetic signal; in response to the transmission, receive an acknowledgement from the IHS; and determine the location ([0101] The control circuitry may, for example, use antenna signals and motion data to determine the angle of arrival of signals from other electronic devices to thereby determine the locations of those electronic devices relative to the user's electronic device) comprising a distance between the video bar and the IHS based, at least in part, upon a Time-of-Flight (ToF) calculation, wherein the ToF calculation is based, at least in part, upon a difference between: (i) a time the acknowledgement is received, and (ii) a time of the transmission ([0006] The method may include calculating a time of flight based on the transmission time and the reception time. The method may include calculating a distance of the potential mobile target based on the time of flight. The method may include comparing the distance based on a library of stored distances for the room. The method may include confirming the potential mobile target is outside the room if the distance is outside one or more stored ranges for the room; [0054] The electronic device 202 can also measure a distance based on the time delay between the transmission of an ultrasonic signal 208 and the time the return is received at the electronic device 202; [0069] The one or more electromagnetics signals 206 can be transmitted using a wireless protocol. The electromagnetic signals 206 can be transmitted via a transmitter or a transceiver. The electromagnetic signal returns can be received via one or more antenna. The electronic device can measure the time of flight between when the electromagnetic signals 206 are transmitted and the one or more returns are received. The time of flight can be used to determine a range.
A brief description of the various protocols is described below; [0083] A mobile device or smart speaker can include circuitry for performing ranging measurements. Such circuitry can include one or more dedicated antennas (e.g., three antennas) and circuitry for processing measured signals. The ranging measurements can be performed using the time-of-flight of pulses between the mobile device and the smart speaker. In some implementations, a round-trip time (RTT) is used to determine distance information, e.g., for each of the antennas; [0092] At 303, the first electronic device 310 computes distance information 330, which can have various units, such as distance units (e.g., meters) or as a time (e.g., milliseconds). Time can be equivalent to a distance with a proportionality factor corresponding to the speed of light. In some embodiments, a distance can be computed from a total round-trip time, which may equal T.sub.2−T.sub.1+T.sub.4−T.sub.3; the calculation of distance in variant techniques [0103], [0131], [0121], [0201], [0208], [0215]);
to determine an orientation of the IHS in the conference room, triangulate a plurality of the ToF calculations ([0113] and [0208] time of flight) from a plurality of antennas configured at a known distance apart on the IHS ([0101] to determine the orientation of device 510 relative to nodes 578; [0102] electronic device 510 may include multiple antennas (e.g., a first antenna 548-1 and a second antenna 548-2) coupled to transceiver circuitry 576 by respective transmission lines 570 (e.g., a first transmission line 570-1 and a second transmission line 570-2). Antennas 548-1 and 548-2 may each receive a wireless signal 558 from node 578. Antennas 548-1 and 548-2 may be laterally separated by a distance d.sub.1, where antenna 548-1 is farther away from node 578 than 548-2 (in the example of FIG. 5);[0095] B. Triangulation to Determine Angle of Arrival; [0100] The separate measurements from different antennas can be used to determine a two-dimensional (2D) position, as opposed to a single distance value that could result from anywhere on a circle/sphere around the mobile device).
Taking the teachings of Moles and Baum together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the video bar of Moles with the distance and orientation calculation of Baum, in order to generate a more accurate relative position (distance/angle), and because the electromagnetic model can be accurate enough to detect the presence of a specific person based on the electromagnetic signal returns alone.
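For context only (not part of the record of this Office action), the round-trip Time-of-Flight distance computation described in Baum at paragraph [0092] — total round-trip time equal to T2−T1+T4−T3, converted to a one-way distance by the speed of light — can be sketched as follows. The function and variable names are hypothetical, chosen for illustration of the cited calculation.

```python
# Illustrative sketch of the two-way ranging distance calculation
# described in Baum [0092]: total round-trip time = (T2 - T1) + (T4 - T3),
# and one-way distance = (round-trip time / 2) * speed of light.

C = 299_792_458.0  # speed of light, meters per second

def tof_distance(t1, t2, t3, t4):
    """Distance (meters) from a two-way ranging exchange.

    t1: time the initiator transmits its signal
    t2: time the responder receives it
    t3: time the responder transmits the acknowledgement
    t4: time the initiator receives the acknowledgement
    """
    round_trip = (t2 - t1) + (t4 - t3)  # total flight time over both legs
    return (round_trip / 2.0) * C       # one-way flight time times c

# Example: 10 ns of one-way flight time corresponds to roughly 3 meters.
d = tof_distance(0.0, 10e-9, 50e-9, 60e-9)
```

Note that subtracting the responder's turnaround interval (t3 − t2) removes the processing delay at the far device, which is why the round-trip form is used rather than a single one-way timestamp difference.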
Regarding claim 3, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein the audio device comprises at least one of: a microphone, or a speaker (206 and 208 of fig. 2).
Regarding claims 4, 18, and 20, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein to operate the audio device, the program instructions, upon execution by the processor, cause the video bar to apply a setting to the audio device comprising at least one of: a power on setting, a power off setting, a mute setting, or an unmute setting ([0025] (3) a signal continuity aspect for checking or confirming that the devices are properly connected to the system (e.g., cables connected, test signals reaching intended destinations, etc.), have appropriate mute and gain settings (e.g., not muted), and AEC references are properly selected, [0040] the controller 230 may support one or more third-party controllers and in-room control panels (e.g., volume control, mute, etc.) for controlling the microphones 206 and speakers 208; [0070] The controller 330 can be further configured to establish or carry out the adjustments at the DSP 320 by sending appropriate control signals to the corresponding blocks or components of the DSP 320, as shown in FIG. 3. The control signals may include, for example, parameters, values, and other information for configuring the corresponding DSP settings (e.g., EQ frequencies, gains, mutes, etc.)).
Regarding claim 5, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein to operate the audio device, the program instructions, upon execution by the processor, cause the video bar to select content to be reproduced or not reproduced by the audio device during a remote meeting ([0030]).
Regarding claim 6, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein to operate the audio device, the program instructions, upon execution by the processor, cause the video bar to select, for the audio device, based, at least in part, upon the location, at least one of: a gain setting, or a volume setting ([0070] The controller 330 can be further configured to establish or carry out the adjustments at the DSP 320 by sending appropriate control signals to the corresponding blocks or components of the DSP 320, as shown in FIG. 3. The control signals may include, for example, parameters, values, and other information for configuring the corresponding DSP settings (e.g., EQ frequencies, gains, mutes, etc.); [0071] In some cases, the autoconfiguration component of the controller 330 may be further configured to calculate or estimate an expected distance between the given microphone 206 and a select audio source (e.g., talker), and use the expected distance to further adjust the input gain structure for that microphone 206, for example, based on the Inverse Square Law or the like).
Regarding claim 7, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein the conference room comprises a plurality of devices, and wherein to operate the audio device, the program instructions, upon execution by the processor, further cause the video bar to select the device among the plurality of devices, at least in part, in response to the locations ([0058 and 0086]).
Regarding claim 8, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein the conference room comprises a plurality of devices, and wherein to operate the audio device, the program instructions, upon execution by the processor, further cause the video bar to distribute different remote meeting content to each of the plurality of audio devices, at least in part, in response to the locations ([0058 and 0086]).
Regarding claim 9, Moles and Baum teach the video bar of claim 8, Moles further teaches wherein the remote meeting content comprises at least two of: an audio broadcast, an audio content share, or a live translation service ([0028] The conferencing environment 100 also includes one or more loudspeakers 108 for playing or broadcasting far-end audio signals received from audio sources that are not present in the conferencing environment 100 (e.g., remote conference participants connected to the conferencing event through third-party conferencing software) and other far-end audio signals associated with the conferencing event).
Regarding claim 11, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein to operate the audio device, the program instructions, upon execution by the processor, cause the video bar to: turn the audio device on in response to the IHS facing the audio device, or turn the audio device off in response to the IHS facing another audio device ([0073 and 0089]).
Regarding claim 12, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein the program instructions, upon execution by the processor, cause the video bar to determine an identity of a user of the IHS, and, at least in part in response to the identity, operate the audio device ([0045]).
Regarding claim 13, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein the conference room comprises a plurality of IHSs, and wherein the program instructions, upon execution by the processor, further cause the video bar to: identify a number of the plurality of IHSs in the conference room; and at least in part in response to the number of IHSs, operate the audio device (206, 208, and 239 of fig. 2, [0025] (1) a device discovery aspect for identifying the number and type of devices located in a given room, [0052 and 0055] identify devices based on IDs).
Regarding claim 14, Moles and Baum teach the video bar of claim 13, Moles further teaches wherein to identify the number of the plurality of IHSs in the conference room, the program instructions, upon execution by the processor, cause the video bar to transmit an ultrasonic signal (239 of fig. 2, [0052 and 0055]).
Regarding claim 15, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein the conference room comprises a plurality of IHSs, and wherein the program instructions, upon execution by the processor, further cause the video bar to: identify a distribution of the plurality of IHSs in the conference room (404 and 405 of fig. 4, establish audio routes based on device information); and at least in part in response to the number of IHSs, operate the audio device (406 of fig. 4).
Regarding claim 16, Moles and Baum teach the video bar of claim 1, Moles further teaches wherein the program instructions, upon execution by the processor, further cause the video bar to: determine a change in the location of the IHS in the conference room ([0006] The installer may also check signal levels and audio continuity by verifying that near-end audio signals captured by the microphones are reaching an output port (e.g., USB port) for transmission to remote participants, and that far-end audio signals are reaching the speaker outputs. After the initial configurations are complete, the installer may return to regularly update the audio system to adapt to changes in room layout, seated locations, audio connections, and other factors, as these changing circumstances may cause the audio system to become sub-optimal over time); and at least in part in response to the change, operate the audio device ([0006] the changing circumstances may cause the audio system to become sub-optimal over time, [0061] position data change detected by the cameras).
Claim(s) 1, 3, 13-17, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nielsen et al. (US 20220095052 A1) in view of Baum et al. (US 20230400574 A1).
Regarding claims 1, 17, and 19, Nielsen discloses a video bar (104 of fig. 2), comprising:
a processor (1010 of fig. 10, [0057] a processor); and
a memory (1030 of fig. 10) coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor ([0057]), cause the video bar to:
determine a location and an orientation of an Information Handling System (IHS) in a conference room ([0027] and [0028] determination of the spatial relationship between each of the microphone assemblies 120A-120C and the other components of the video conference endpoint 104 may be utilized to set up the video conference endpoint 104 and/or continuously verify the location and/or orientation of the microphone assemblies 120A-120C with respect to the video conference endpoint 104 by emitting or outputting audio by at least one of the loudspeakers 116(1), 116(2) and receiving the emitted audio by the microphone assemblies 120A-120C):
transmit an electro-magnetic signal ([0027] and [0037] as illustrated in FIG. 6A, θ(1) represents the elevation angle between the microphone assembly 120 and the loudspeaker 116(1) emitting the audio captured by the microphone assembly 120);
in response to the transmission, receive an acknowledgment from the IHS ([0027], [0037], and [0039] The video conference endpoint 104 may use acoustic propagation delay techniques, or time-of-flight techniques, to determine the time Δt it takes for the audio emitted from the loudspeaker 116(1) to be received by the directional microphones 310(1)-310(4) of the microphone assembly 120, so the system 104 would obviously acknowledge the delay signal from the IHS); and
determine the location comprising a distance between the video bar and the IHS based, at least in part, upon a Time-of-Flight (ToF) calculation ([0039] The video conference endpoint 104 may use acoustic propagation delay techniques, or time-of-flight techniques, to determine the time Δt it takes for the audio emitted from the loudspeaker 116(1) to be received by the directional microphones 310(1)-310(4) of the microphone assembly 120, to calculate the distance r(1) between the microphone assembly 120 and the loudspeaker 116(1)),
wherein the ToF calculation is based, at least in part, upon a difference between: (i) a time the acknowledgment is received, and (ii) a time of the transmission ([0039] The video conference endpoint 104 may use acoustic propagation delay techniques, or time-of-flight techniques, to determine the time Δt it takes for the audio emitted from the loudspeaker 116(1) to be received by the directional microphones 310(1)-310(4) of the microphone assembly 120. In other words, the video conference endpoint 104 may use the compiled acoustic impulse responses 530(1)-530(4) of the directional microphones 310(1)-310(4) to measure the time Δt between the loudspeaker 116(1) emitting the audio and the directional microphones 310(1)-310(4) receiving the emitted audio. One such technique is to detect the initial time delay from the impulse response which is already available, and correct for the known latency in the equipment);
to determine an orientation of the IHS in the conference room ([0039] Turning to FIG. 6B, and with continued reference to FIGS. 3A-3C, 4, 5A, 5B, and 6A, illustrated is a schematic representation 610 of the microphone assembly 120 in a spatial orientation with respect to the video conference endpoint 104, and more specifically, with respect to the first loudspeaker 116(1) of the video conference endpoint 104),
triangulate a plurality of the ToF calculations from a plurality of antennas configured at a known distance apart on the IHS ([0041] With the rotational angles φ(1), φ(2), the elevation angles θ(1), θ(2), and the distances r(1), r(2) between the microphone assembly 120 and the loudspeakers 116(1), 116(2) determined, and with the loudspeakers 116(1), 116(2) having a predetermined positional/spatial relationship with respect to a camera 112, the spatial relationship (i.e., spatial coordinates (x.sub.c, y.sub.c, z.sub.c) of the microphone assembly 120 in a coordinate system centered on the camera 112, and rotational angle δ.sub.c (i.e., orientation) of the microphone assembly 120 with respect to the axis of the camera 112) of the microphone assembly 120 may be determined through known triangulation techniques) ; and
at least in part in response to the determined location and the determined orientation, operate an audio device in the conference room ([0047] At 790, with the knowledge of the spatial coordinates (x.sub.c, y.sub.c, z.sub.c) and the rotational angle δ.sub.c of the microphone assembly 120 with respect to the axis of the camera 112, the video conference endpoint 104 can assign, route or mix the outputs of each of the directional microphones 310(1)-310(4) to the appropriate directional audio output channel (i.e., the left channel or the right channel) of the video conference endpoint 104 so that the audio outputs of the video conference endpoint 104 spatially match what is shown in the video output of the video conference endpoint 104).
It is noted that Nielsen does not teach an electro-magnetic signal and from a plurality of antennas.
Baum teaches an electro-magnetic signal and from a plurality of antennas ([0066] In various embodiments, the electronic device 202 can include multiple antennas. Various broadcast and reception schemes can be employed for sending and receiving electromagnetic signals 206. In various embodiments, the electronic device 202 can include pairs of transmitting and receiving antenna. In various embodiments, the transmission and reception from different antenna can be done simultaneously).
Taking the teachings of Nielsen and Baum together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the video bar of Nielsen with the electro-magnetic signal and plurality of antennas of Baum, in order to generate a more accurate relative position (distance/angle), and because the electromagnetic model can be accurate enough to detect the presence of a specific person based on the electromagnetic signal returns alone.
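For context only (not part of the record of this Office action), the multi-antenna angle-of-arrival determination described in Baum at paragraphs [0100]-[0102] — two antennas laterally separated by a known distance d1, with the path-length difference of the incoming wavefront indicating the arrival angle — can be sketched as follows under a far-field plane-wave assumption. The function and variable names are hypothetical, chosen for illustration of the cited technique.

```python
# Illustrative sketch of angle-of-arrival estimation from two antennas
# separated by a known spacing, per Baum [0100]-[0102]. Assumes a plane
# wave, so the extra path to the farther antenna is spacing * sin(theta).
import math

C = 299_792_458.0  # speed of light, meters per second

def angle_of_arrival(delta_t, antenna_spacing):
    """Arrival angle (radians, 0 = broadside) from the arrival-time
    difference delta_t (seconds) between two antennas separated by
    antenna_spacing (meters)."""
    path_diff = C * delta_t                                 # extra path length
    ratio = max(-1.0, min(1.0, path_diff / antenna_spacing))  # clamp for asin
    return math.asin(ratio)

# Example: equal arrival times at both antennas means broadside arrival.
theta = angle_of_arrival(0.0, 0.05)
```

Combining such an angle with a ToF-derived range yields a two-dimensional position rather than a single distance value, which is the benefit the cited paragraph [0100] describes.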
Regarding claim 3, Nielsen teaches the video bar of claim 1, wherein the audio device comprises at least one of: a microphone, or a speaker (116(1) and 116(2) of figs. 6B and 6C).
Regarding claim 13, Nielsen teaches the video bar of claim 1, wherein the conference room comprises a plurality of IHSs, and wherein the program instructions, upon execution by the processor, further cause the video bar to: identify a number of the plurality of IHSs in the conference room (120A-120E of fig. 8A); and at least in part in response to the number of IHSs, operate the audio device ([0048]).
Regarding claim 14, Nielsen teaches the video bar of claim 13, wherein to identify the number of the plurality of IHSs in the conference room (e.g. figs. 8A and 8B, microphones 120), the program instructions, upon execution by the processor, cause the video bar to transmit an ultrasonic signal ([0072]).
Regarding claim 15, Nielsen teaches the video bar of claim 1, wherein the conference room comprises a plurality of IHSs, and wherein the program instructions, upon execution by the processor, further cause the video bar to: identify a distribution of the plurality of IHSs in the conference room (310(1)A to 310(40)B of fig. 8B); and at least in part in response to the number of IHSs, operate the audio device (fig. 8B, [0049] left channel and right channel).
Regarding claim 16, Nielsen teaches the video bar of claim 1, wherein the program instructions, upon execution by the processor, further cause the video bar to: determine a change in the location of the IHS in the conference room; and at least in part in response to the change, operate the audio device (fig. 8C, change in the location of the IHS).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
LaBosco (US 11736876 B2) discloses a computer-implemented method and system for performing testing of audio equipment in a conference room, the method executed by one or more processors, comprising: (a) commissioning the conference room with a set of audio video equipment, the set of audio equipment comprising one or more loudspeakers, one or more microphones, and audio signal processing equipment that includes at least an acoustic echo cancellation function; (b) determining an initial audio performance level in the conference room, and storing the initial audio performance level (IAPL).
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUNG T VO whose telephone number is (571)272-7340. The examiner can normally be reached Monday-Friday 6:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
TUNG T. VO
Primary Examiner
Art Unit 2425
/TUNG T VO/Primary Examiner, Art Unit 2425