DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over DAZE et al. (Pub. No. US 2018/0088777 A1; hereinafter DAZE) in view of Kripalani et al. (Pub. No. US 2009/0254839 A1; hereinafter Kripalani).
Consider claims 1, 13, and 20: DAZE clearly shows and discloses one or more non-transitory computer-readable media, a system, and a computer-implemented method comprising: receiving a plurality of media signals from a plurality of consoles included in a vehicle, wherein each of the plurality of consoles is included in a common communication session (the user devices may also include one or more mobile terminals 108 associated with occupants of vehicle 100; the mobile terminals read on consoles. Occupant 221 may use user device 210-1 to conduct a video conference. Accordingly, user interface 220 may display an icon 227 representing the video conference. Occupant 221 may invite occupant 222 to the video conference by dragging icon 227 and dropping it on occupant 222. This way, occupant 222 may conveniently join the video conference through user device 210-2) (paragraphs: 0024, 0027, 0053, 0055, and 0061 and fig. 1, labels: 106 and 110); causing video signals associated with the plurality of media signals to be displayed at a first console in the plurality of consoles (user interface 220 may be configured to allow a user to share the same content among all user devices 210 by moving icon 227 to a predefined area of user interface 220, streaming the video or audio call data to user device 210-2 when user device 210-1 grants permission) (paragraphs: 0055-0056 and 0061 and figs. 1, 7, and 8); spatializing, based on a position, an audio signal associated with a first media signal of the plurality of media signals to generate a spatialized audio signal (pressure sensors 112 may be configured to generate signals only when the sensed pressure exceeds a preset threshold pressure that corresponds to the lower boundary of a typical weight of a non-infant human, for example, 20 lbs.; in another embodiment, pressure sensors 112 may be configured to accurately measure an occupant's weight) (paragraphs: 0038 and 0043), the first media signal being associated with a second console of the plurality of consoles, the position being either a position of a first window displaying the video signal associated with the first media signal or a position of an occupant of the vehicle associated with the second console (display on user interface 220 a map of vehicle 100; the map may show various types of information regarding user devices 210, such as the content presented by each user device 210 and the identities and/or seating positions of the occupants who are using user devices 210. FIG. 3 is a schematic diagram illustrating an exemplary user interface 220 for sharing content. Referring to FIG. 3, user interface 220 displays a map 225 of vehicle 100. Map 225 may show an interior layout of vehicle 100. In some embodiments, the occupants using user devices 210 may be shown on map 225 according to the seating positions of the occupants) (paragraphs: 0038, 0043, 0050, and 0057, and claim 4); and driving a set of loudspeakers to reproduce the spatialized audio signal (paragraphs: 0023, 0063, and 0067; claims 4-6; fig. 8). However, DAZE does not specifically teach another example of receiving a plurality of media signals from a plurality of consoles.
In the same field of endeavor, Kripalani clearly discloses another example of receiving a plurality of media signals from a plurality of consoles (Kripalani teaches that meeting console 110-1 may include various multimedia input devices arranged to capture media content from one or more participants 154-1-p and stream the media content to the multimedia conference server 130. Meeting console 110-1 includes various types of multimedia input equipment, such as a video camera 106 and an array of microphones 104-1-r) (paragraphs: 0015, 0018, 0027, and 0032-0033).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to incorporate the teaching of Kripalani into the teaching of DAZE for the purpose of using a plurality of consoles.
Consider claims 2 and 14, DAZE and Kripalani clearly show the method, wherein spatializing the audio signal comprises making the audio signal appear to originate from the position (DAZE: paragraphs: 0038, 0043, 0050, 0057, and claim 4).
Consider claim 3, DAZE and Kripalani clearly show the method, further comprising selecting the set of loudspeakers based on respective locations of the set of loudspeakers within the vehicle (DAZE: paragraphs: 0023, 0063, 0050, 0057).
Consider claims 4 and 16, DAZE and Kripalani clearly show the method, further comprising combining the video signals into a composite video signal for display in the first window (Kripalani: paragraphs: 0024, 0027, and 0032-0033, and fig. 1, label: 130, for combining the respective video signals coming from meeting consoles 108-1-s and 104-1-r).
Consider claims 5 and 15, DAZE and Kripalani clearly show the method, further comprising synchronizing the spatialized audio signal with the video signal associated with the first media signal (DAZE: paragraphs: 0055 and 0063).
Consider claims 6 and 17, DAZE and Kripalani clearly show the method, wherein causing the video signals to be displayed at the first console comprises causing a second video signal associated with a third console to be displayed in a second window of the first console (DAZE: paragraphs: 0038, 0043, 0050, and 0057, and claim 4).
Consider claims 7 and 18, DAZE and Kripalani clearly show the method, further comprising: spatializing, based on a second position, a second audio signal associated with the second video signal to generate a second spatialized audio signal, the second video signal being associated with a third console of the plurality of consoles, the second position being either a position of the second window or a position of an occupant of the vehicle associated with the third console; and driving the set of loudspeakers to reproduce the second spatialized audio signal (DAZE: paragraphs: 0038, 0043, 0050, and 0057, and claim 4).
Consider claims 8 and 19, DAZE and Kripalani clearly show the method, further comprising: receiving a second media signal from a source external to the vehicle; and causing the second media signal to be output at the first console (Kripalani: paragraphs: 0022, 0025, 0028, and 0030-0031).
Consider claim 9, DAZE and Kripalani clearly show the method, wherein the second media signal is a video signal (video streams from all or a subset of the participants may be displayed as a mosaic on the participant's display with a top window with video for the current active speaker, and a panoramic view of the other participants in other windows) (Kripalani: paragraphs: 0028 and 0030).
Consider claim 10, DAZE and Kripalani clearly show the method, wherein: the second media signal includes content from at least two separate sources; and the computer-implemented method further comprises separating the second media signal into: (i) a third media signal including first content from a first one of the separate sources (Kripalani: paragraphs: 0028), and (ii) a fourth media signal including second content from a second one of the separate sources (video streams from all or a subset of the participants may be displayed as a mosaic on the participant's display with a top window with video for the current active speaker, and a panoramic view of the other participants in other windows) (Kripalani: paragraphs: 0022, 0025, 0028 and 0030-0031).
Consider claim 11, DAZE and Kripalani clearly show the method, wherein the second media signal is received from a teleconference service operating at a remote location (Kripalani: paragraphs: 0028 and 0030-0033).
Consider claim 12, DAZE and Kripalani clearly show the one or more non-transitory computer-readable media, further comprising transmitting the plurality of media signals as a composite media signal to a teleconference service operating at a remote location (Kripalani: paragraphs: 0024, 0027, and 0032-0033, and fig. 1, labels: 108-1-s and 104-1-r).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Amal Zenati, whose telephone number is 571-270-1947. The examiner can normally be reached Monday through Friday, 8:00-5:00.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/AMAL S ZENATI/Primary Examiner, Art Unit 2693