Prosecution Insights
Last updated: April 17, 2026
Application No. 18/717,274

ADVANCED MULTIMEDIA SYSTEM FOR ANALYSIS AND ACCURATE EMULATION OF LIVE EVENTS

Non-Final OA §103, §112
Filed: Jun 06, 2024
Examiner: GE, JIN
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80% — above average (416 granted / 520 resolved; +18.0% vs TC avg)
Interview Lift: strong, +18.0% (allow rate in resolved cases with an interview vs. without)
Typical Timeline: 2y 9m average prosecution; 38 applications currently pending
Career History: 558 total applications across all art units
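The headline numbers above are simple ratios over the examiner's resolved cases. Below is a minimal Python sketch of the derivation, assuming a hypothetical per-case schema (CaseRecord, granted, and had_interview are illustrative names, not this product's actual data model):

    from dataclasses import dataclass

    @dataclass
    class CaseRecord:
        granted: bool        # resolved by grant (True) or abandonment (False)
        had_interview: bool  # at least one examiner interview on record

    def allow_rate(cases: list[CaseRecord]) -> float:
        # Career allow rate: granted cases over all resolved cases.
        return sum(c.granted for c in cases) / len(cases)

    def interview_lift(cases: list[CaseRecord]) -> float:
        # Lift: allow rate among cases with an interview minus the rate without.
        with_iv = [c for c in cases if c.had_interview]
        without_iv = [c for c in cases if not c.had_interview]
        return allow_rate(with_iv) - allow_rate(without_iv)

On this examiner's record, allow_rate() over 520 resolved cases with 416 grants returns 416 / 520 = 0.80, matching the 80% above; a +18.0% lift means the with-interview subset allows at a rate 18 points higher than the without-interview subset.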

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 520 resolved cases
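The per-statute deltas are plain differences between this examiner's rate and the Tech Center average. A short sketch with hypothetical variable names; the TC_AVG values are back-solved from the deltas shown above (each implies a 40.0% estimate), not published figures:

    EXAMINER_RATE = {"101": 0.090, "103": 0.602, "102": 0.120, "112": 0.110}
    TC_AVG = {"101": 0.400, "103": 0.400, "102": 0.400, "112": 0.400}  # estimates

    for statute, rate in EXAMINER_RATE.items():
        delta = rate - TC_AVG[statute]
        print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
    # -> §101: 9.0% (-31.0% vs TC avg), §103: 60.2% (+20.2% vs TC avg), ...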

Office Action

§103 §112
DETAILED ACTION

Claims 1-23 are pending in the present application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of Israel patent application number IL289178, filed on 12/20/2021, has been received and made of record.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/02/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informalities: “3D spatial environment” should be “three dimensional (3D) spatial environment” and “3-D” should be “3D”. Appropriate correction is required.

Claim 5 is objected to because of the following informalities: “a 2-D or 3-D projector” should be “a two dimensional (2D) or 3D projector”. Appropriate correction is required.

Claim 7 is objected to because of the following informalities: “the 3-D projector” should be “the 3D projector”. Appropriate correction is required.

Claim 8 is objected to because of the following informalities: “a 7-D projector” should be “a seven dimensional (7D) projector”. Appropriate correction is required.

Claim 23 is objected to because of the following informalities: “generate 4D spatial pixels” should be “generate four dimensional (4D) spatial pixels”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 2 recites the limitations “the audio-visual sensory projection and processing device” in line 1, “the edited output streams” in line 2, “the outputs” in line 6, and “the output” in line 7. There is insufficient antecedent basis for these limitations in the claim.

Claim 3 recites the limitation “wherein the emulated volume local space” in line 1. There is insufficient antecedent basis for this limitation in the claim.

Claim 5 recites the limitation “the light fixture device” in line 1. There is insufficient antecedent basis for this limitation in the claim.

Claim 19 recites the limitation “in which spatial grid” in line 1. There is insufficient antecedent basis for this limitation in the claim. (The Examiner believes claim 19 should depend from claim 18.)

Claim 21 recites the limitation “the audio-visual sensory projection and processing device” in line 1. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 9-16, 20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2018/0342106 to Rosado in view of U.S. PGPub 2020/0349672 to Cochran et al., further in view of U.S. PGPub 2005/0024488 to Borg.

Regarding claim 1, Rosado teaches a system for emulating a remote live event or a recorded event at a client side, while optimally preserving the original experience of the live or recorded event (par 0005, “the system and accompanying methods provide for a virtual reality system that enables users to actively participate in live events or activities occurring in locations remote from the users by rendering, in real-time, digital versions of the live events or activities that are readily accessible via applications executing on the computing devices of the users”; par 0059, “A virtual reality system 100 and methods for utilizing and operating the virtual reality system 100 are disclosed. In particular, the system 100 and accompanying methods provide for a virtual reality system that enables users to actively participate in live events or activities occurring in locations remote from the users by rendering, in real-time, digital versions of the live events or activities that are readily accessible via applications executing on the computing devices of the users.”; par 0126, “the method 800 may include utilizing the data streams to represent the real event (e.g. basketball game) by replicating real world athlete or performer movement in the virtual world thereby allowing for emulation of a live event.”), comprising:

a) a remote side being located at the live event 3D spatial environment and being adapted to collect and analyze multi-channels data from an array of sensors deployed at said remote side (Fig. 1; par 0064-0067, “The system 100 may include a location 115, which may be any type of location that an event, activity, and/or gathering may take place …. the first and/or second users 101, 110 may desire to communicate with others at the location 115, participate in an event at the location, and/or otherwise engage with the location 115 despite being located remotely from the location 115 …. the cameras 117 may have any functionality of any traditional digital or non-digital camera, and may be configured to capture, manipulate, and/or process audio, video, a combination of audio and video, motion capture content, augmented reality content, virtual reality content, any type of content, or any combination thereof …. the cameras 117 may detect deformations, distortions, alterations, or a combination thereof, in the light grid pattern to calculate the depth and surface information of the objects at the location 115. As the depth and surface information is obtained by cameras 117, processors of the cameras 117 can process the depth information to generate three-dimensional depth images corresponding to the objects at the location 115 and/or the location 115 itself”), said remote side comprises:

b) a publisher device for:

b.1) collecting data from all sensors' multi-channels at said live event (Fig. 1; par 0070-0071, “The sensors 125 may be configured to track the movements of the user 123 at the location, track the movements of any objects at the location, or any combination thereof. In certain embodiments, any measurements taken by the sensors 125 and/or tracked motion and/or depth information may be provided to any device of the system 100, such as, but not limited to, the cameras 117, the computing device 120, the first and/or second user devices 102, 111, the servers 140, 150, 160, and/or the database 155 for further processing …. multiple sensors 125 (e.g. depth sensors) may be placed around a performance area (e.g. location 115), which may be used to capture and record pointcloud data of anyone (e.g. user 123) standing within a performance zone at a live event. The pointcloud data may be sent to a computer on site (e.g. computing device 120), which then sends the data over the internet to the system event host server (e.g. servers 140, 150, 160)”);

b.2) generating dynamic spatial location and time map layers describing dynamic movements during said live or recorded event (par 0072, “the sensors 132 may be placed on various body parts of the user 123 and may be configured to retrieve, capture and/or record real time motion capture data associated with each body part of the user 123 as each body part is moved. The capture motion capture data may be transmitted to the device 130 for processing and/or handling, and the device 130 may transmit the captured data to the servers 140, 150 and/or to any other devices of system 100 for further processing”; par 0133, “The method 2000 may begin at step 2002, which may include having each event profile table have a recorded sequence timeline that determines when data streams associated with the event are called to be displayed to users using the client applications. At step 2004, the method 2000 may include, when the event begins or at another selected time, displaying all data streams in the same chronological order as the real world event”); synchronizing said live event using time, 3D geometric location and media content (par 0034, “positioning virtual lights, fixtures, point cloud display objects, video walls, and visual effects within the virtual/digital world, assigning values to such virtual objects to listen for their corresponding data streams, recalling assignment and performance data from the event identity in the database for playback, and creating a visualizing of a recorded event in a three-dimensional (3D) space in real-time”; par 0079, “when a user logs into the client application, the client application may enable the user to access interaction preferences, settings, inventory, avatar data, and all performances or livestream events that the user may or may not be interested in. In certain embodiments, the client application may interact with any of the devices of system 100, such as the servers 140, 150, 160 to gather required data and synchronize any events or performances being watched with other users and their corresponding client applications”; par 0135, “the method 2400 may include positioning, in the worlds developed for users, all placement of lights, fixtures, pointcloud display objects, rendered video walls, and effects properly. At step 2404, the method 2400 may include assigning values for each interactive or performance item (e.g. lighting, fixture, etc.) referenced in step 2402 to listen for their corresponding respective data streams”);

b.3) editing the data of the live event to optimally fit the user's personal space geometric structure (par 0080, “The client application may display a digital mailbox for accessing messages and other content (and to send digital electronic messages to other users), and can enable a user to customize his or her virtual house and property as desired. The client application may enable users to experience custom events that may be performed using online content provider systems, such as YouTube, Spotify, and SoundCloud, or by utilizing 3D cameras 117 in the system 100 and the user's computer (e.g. first user device 102) audio input of whom is hosting the event”; par 0082, “the client application may render a virtual home and/or property for each user of the client application. For example, every user may have a property (e.g. virtual home), in which they can create their own living space, customized with decorations, furniture, art, property, landscape, terrain, and items”);

b.4) generating an output stream that is ready for distribution with adaptation to each client (par 0071, “The pointcloud data may be sent to a computer on site (e.g. computing device 120), which then sends the data over the internet to the system event host server (e.g. servers 140, 150, 160). This data may be replicated across all clients connected to the live event. Within the virtual world that the users are connected to via client applications supported by the system 100, a custom bounds object may be designed to display the pointcloud data that is being received by the server(s)”; par 0098, “For large scale events, such as sports games or plays, sensors 125 at the real world event may retrieve, capture, and record real-time player or performer positions. The tracking data may be sent to a computer on site (e.g. computing device 120), which then sends the data over the internet (e.g. communications network 135) to be received and/or processed by the system 100. This data may be replicated across all clients connected to the event and the system 100 may display the appropriate data to each player or performer it is designated to”; par 0124, “the method 600 may include replicating the data streams across all client applications connected to the event. In certain embodiments, the replicating may be performed and/or facilitated by utilizing the first user device 102, the second user device 111, the server 140, the server 150, the server 160, the sensors 125, the sensors 132, the device 130, the cameras 117, the computing device 120, the communications network 135, any component of FIGS. 1-48, any combination thereof, or by utilizing any other appropriate program, network, system, or device”);

c) a client side located at the facility of a user (Fig. 1; par 0078, “The client application of the system 100 may be accessible via an internet connection established, such as by utilizing a browser program executing on the first or second user devices 102, 111”), said client side comprises:

c.1) a multimedia output device for generating for each client, multi-channel signals for executing an emulated local 3-D space with Personal/Public Space Enhancement (PSE) that mimics said live event with high accuracy and adaptation to said facility (par 0071, “multiple sensors 125 (e.g. depth sensors) may be placed around a performance area (e.g. location 115), which may be used to capture and record pointcloud data of anyone (e.g. user 123) standing within a performance zone at a live event. The pointcloud data may be sent to a computer on site (e.g. computing device 120), which then sends the data over the internet to the system event host server (e.g. servers 140, 150, 160). This data may be replicated across all clients connected to the live event. Within the virtual world that the users are connected to via client applications supported by the system 100, a custom bounds object may be designed to display the pointcloud data that is being received by the server(s)”; par 0098, “The tracking data may be sent to a computer on site (e.g. computing device 120), which then sends the data over the internet (e.g. communications network 135) to be received and/or processed by the system 100. This data may be replicated across all clients connected to the event and the system 100 may display the appropriate data to each player or performer it is designated to”; par 0120, “the method 200 may include transmitting the sensor data, the media content, and/or the aligned wraparound view of the performer 116 over the communications network 135 to be replicated and displayed across all client applications of users connected to the system 100 that want to experience the event remotely. The wraparound view of the performer 116 and a rendering of the concert venue may be displayed in real-time to each user via a graphical user interface of their client applications so that the users can experience the concert while not being physically at the physical location 115 of the concert”; par 0126, “the method 800 may include utilizing the data streams to represent the real event (e.g. basketball game) by replicating real world athlete or performer movement in the virtual world thereby allowing for emulation of a live event. In certain embodiments, the representing and/or replicating may be performed and/or facilitated by utilizing the first user device 102, the second user device 111, the server 140, the server 150, the server 160, the sensors 125, the sensors 132, the device 130, the cameras 117, the computing device 120, the communications network 135, any component of FIGS. 1-48, any combination thereof, or by utilizing any other appropriate program, network, system, or device”);

c.2) at least one server for processing live streamed wideband data received from said remote side and distributing edited content to each client at said client side (par 0124, “the method 600 may include replicating the data streams across all client applications connected to the event. In certain embodiments, the replicating may be performed and/or facilitated by utilizing the first user device 102, the second user device 111, the server 140, the server 150, the server 160, the sensors 125, the sensors 132, the device 130, the cameras 117, the computing device 120, the communications network 135, any component of FIGS. 1-48, any combination thereof, or by utilizing any other appropriate program, network, system, or device”).

Rosado, however, is silent regarding decoding the channels data to an editable format.

In a related endeavor, Cochran et al. teach decoding the channels data to an editable format (par 0085; par 0131, “the GPU 1005 performs RGB to YUV conversion to generate 6 splits (see, e.g., 1107 in FIG. 11). In one embodiment, an NV12 format is used, although the underlying principles of the invention are not limited to any particular format. In the illustrated implementation, a motion JPEG encoder 1007 encodes the image frames 1107 using motion JPEG (i.e., independently encoding each image frame without inter-frame data as used by other video compression algorithms such as MPEG-2).”; par 0088-0089, “an H.264 encoder 1016 encodes the video streams for transmission to end users and a muxer & file writer 1017 generates video files 1018 (e.g., in an MP4 file format) at different compression ratios and/or bitrates. The muxer & file writer 1017 combines the H.264 encoded video with the audio, which is captured and processed in parallel as described directly below. …. the stereo audio capture unit 1002 comprises one or more microphones, analog-to-digital converters, and audio compression units to compress the raw audio to generate the audio stream 1003 (e.g., using AAC, MP3 or other audio compression techniques). An audio decoder 1004 decodes the audio stream to a 16-bit PCM format 1021, although various other formats may also be used. An RTP packetizer generates RTP packets in an RTP buffer 1023 for transmission over a communication link/network. At the receiving end, an RTP depacketizer 1024 extracts the PCM audio data from the RTP packets and an AAC encoder 1024 encodes/compresses the PCM audio in accordance with the AAC audio protocol (although other encoding formats may be used)”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Rosado to include decoding the channels data to an editable format, as taught by Cochran et al., in order to communicate the encoded/decoded/compressed video frames and to process and distribute live virtual reality content between server and client.

Rosado as modified by Cochran et al., however, is silent regarding editing the data of the live event to comply with the displayed scenario.

In a related endeavor, Borg teaches editing the data of the live event to comply with the displayed scenario (par 0035-0036, “The multiple signals of audio and video can be then switched or mixed in the switcher (119), either automatically or manually by an editor or technical director. The completed composite signal ready for POD audience viewing can be then sent via any communication technology, such as a standard broadband delivery system, using the Transmission component (120) …. When the signal is received at the Point Of Display or POD (130), the signal can be decrypted (and/or the watermark authenticated) (128) and then sent through the POD projection system which can consist of one or more A Roll projectors or video displays (134) which present the A Roll environment video that can include, …. All video and audio signals, as well as laser and computer generated light shows as described in later Figures, can be controlled through the LightPiano.TM. (132), a system that provides a graphically based environment system controller.”) and to optimally fit the user's personal space geometric structure (par 0045, “To ‘compose’ the desired surround environment, the icons for the various inputs are dragged and dropped from the various sections of the interface onto the desired Screens in the Room Display (410). In this example, the LightPiano operator drags the icon for A Roll Set 1 (442) onto the position for Screen One (412), while applying Effect 3 (432) to the video signal. This is accomplished by dragging and dropping the Effect icon onto the video path”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Rosado as modified by Cochran et al. to include editing the data of the live event to comply with the displayed scenario, as taught by Borg, in order to integrate both remote and locally sourced content, creating a group-experienced “virtual” environment.

Regarding claim 2, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and Cochran et al. further teach wherein the audio-visual sensory projection and processing device is adapted to: a) decode the edited output streams received from the publisher device into multiple data layers running at the client side (par 0087-0089); b) synchronize the data and control commands (par 0041); c) process the visual and audio signals, lighting signals and signals from sensors (par 0051); d) rebuild the scenarios belonging to the live event and make them ready for execution (par 0038); e) rout the outputs to each designated device to perform a required task; and f) distribute outputs that refer to each client, the output being a visual output in all light frequencies, audio waves in all audio frequencies and sensors reflecting sense (par 0059).

Regarding claim 3, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and Borg further teaches wherein the emulated volume local space is executed by a multimedia output device, which receives the signals generated by an audio-visual-sensors, emulation, projection and processing device for a specific client and executes said signals to generate the emulated local space and PSE for said specific client (par 0016; par 0018, “The present invention can incorporate multi-camera switched high definition video capture, integrated on-the-fly with rich visual imagery, surround sound audio, and computer graphics to create a rich multi-sensory (surround audio, multi-dimensional visual, etc.) presentation using multiple projectors and/or display screens with multiple speaker configurations”; Fig. 4A; par 0048, “the LightPiano system (480) includes a video processor, an audio processor and a text processor. Each input processor (484) is connected to an appropriate output controller (488), which controls the output of the signals to the audio and video presentation output systems (490). Preferably, the LightPiano system (480) includes a video display controller, an audio system controller and lighting effects controller. The video display controller can be connected to a plurality of output video display systems (490), such as display screens and projectors, and can be adapted to control in real time or substantially in real time, the presentation of video on a given output display system. The audio system controller can be connected to a plurality of output audio systems, such as speaker systems and multidimensional or surround sound systems and can be adapted to control in real time or substantially in real time, the presentation of audio on a given sound system. The lighting and effect(s) controller can be connected to a plurality of output lighting and effect(s) systems, such as strobe lights, laser light systems and smoke effect systems and can be adapted to control in real time or substantially in real time, the presentation of the light show and effect(s) by a given lighting or effects system”).

Regarding claim 4, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and further teaches wherein the multimedia output device comprises: a) a video device (Borg: Fig. 4); b) a visual device (Borg: Fig. 4); c) an audio device (Borg: Fig. 4); d) a light fixture device (Borg: par 0048, light effect generator); e) one or more sensors (Rosado: par 0070-0071); f) power and communication components (Rosado: Fig. 1, par 0060); g) smoke generators; h) fog generators (Borg: par 0019, par 0048, smoke effect generator); i) robotic arms; j) hovering devices (Borg: par 0034, “using roving or robotic cameras (116)”); k) machine code devices (Rosado: computer systems); l) Internet of Things (IoT) devices (Rosado: par 0071; Borg: par 0030-0040).

Regarding claim 9, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and Rosado further teaches wherein the publisher device uses mathematical and physical values of a dynamic grid along with media channels telemetric data indicators and media content visual and audio (par 0070-0072, “The sensors 125 may be configured to track the movements of the user 123 at the location, track the movements of any objects at the location, or any combination thereof. In certain embodiments, any measurements taken by the sensors 125 and/or tracked motion and/or depth information may be provided to any device of the system 100, such as, but not limited to, the cameras 117, the computing device 120, the first and/or second user devices 102, 111, the servers 140, 150, 160, and/or the database 155 for further processing. In certain embodiments, the sensors 125 may be configured to interact with software of the system 100 and enable the sensors 125 to detect objects at the location 115 that are within a capture zone of the sensors 125 and/or cameras 117”).

Regarding claim 10, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and further teaches wherein the array of sensors is a sensor ball, which is an integrated standalone unit and is adapted to record, analyze and process an event in all aspects of video, visual, audio and sensory data (Rosado: par 0070-0072, the data from all kinds of sensors is integrated and used for recording and processing on the server side; Cochran et al.: par 0051; Borg: par 0016, “the sensory experience from the site of origination can be extended to the remote site by surrounding the remote site audience with sensory stimuli in up to 360 degrees including visual stimulus from video (for example, multi-display video) as well as computer graphic illustration, light show, and surround audio”; par 0018, “The present invention can incorporate multi-camera switched high definition video capture, integrated on-the-fly with rich visual imagery, surround sound audio, and computer graphics to create a rich multi-sensory (surround audio, multi-dimensional visual, etc.) presentation using multiple projectors and/or display screens with multiple speaker configurations. In addition, the present invention can provide for mixing temporally disparate content (live, pre-recorded, still, and synthesized) ‘on the fly’ at the remote location(s), allowing a local VJ to ‘play the room’, and provide for a truly compelling, spontaneous, unique, and deeply immersive sensory experience”).

Regarding claim 11, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and further teaches wherein data processing at the client and remote sides is done using software and hardware being digital and/or analog processing (Rosado: abstract, par 0063; Borg: abstract, par 0018; software applications run by computers on the server and client sides).

Regarding claim 12, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and further teaches wherein artificial intelligence and machine learning algorithms are used for making optimal adaptations to each client side and remote side (Rosado: par 0061, “The system 100 may be configured to support, but is not limited to supporting, data and content services, virtual reality services, augmented reality services, machine learning services, artificial intelligence services, computing applications and services, cloud computing services, internet services, satellite services, telephone services, software as a service (SaaS) applications, mobile applications and services, and any other computing applications and services”; par 0145, “the rendering of the virtual worlds may be performed on the graphics processors, and, in certain embodiments, as the system 100 learns over time various user preferences and/or actions conducted in the system 100, artificial intelligence and/or machine learning algorithms facilitating such learning may also be executed on graphics processors and/or application specific integrated processors”).

Regarding claim 13, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and further teaches wherein data in the remote and/or client sides is generated and delivered using communication protocols being to utilize high bandwidth for advanced functionality (Rosado: par 0134, “at a live event, the method 2200 may include having equipment send OSC (as shown in screenshot 2304), MIDI, ArtNET or other communication protocols and/or data to the server 100 via TCP, UDP or other network protocols over an internet connection. The data may be from fixtures (e.g. lighting and fixture equipment 2302), lighting, main audio outputs, performance instruments, disc jockey equipment and/or any other equipment at an event. At step 2204, all data from the equipment may be received from the event into the server 100, which relays audio, video, performers, and stage lighting plots and sequences of fixtures in real-time”; Borg: par 0026, “four different sets are described. LightPiano can be used to control: the POC satellite feed to screen one; three different video feeds to screens two, three and four; a computer graphic light show to screen five; and a laser light show already extant in the room, using the industry-standard ANSAI DMX 512-A protocol. The LightPiano can control each of the elements individually throughout each of the four sets”; par 0038, “The environmental surround video can be intermixed or merged through the LightPiano with live video from the POD captured from a roving camera (220) in the crowd. Already-existing special effects, such as a laser light show (240) can also be controlled by the LightPiano, using the industry-standard DMX digital lighting control protocol. The high quality POC audio signal can be sent to the POD surround audio system (250)”).

Regarding claim 14, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and Cochran et al. further teach being adapted to perform at least the following operations: process, merge and multiplex the data; synchronize the data between publishers, servers and a plurality of client sides; perform live streaming of multimedia data on broad high bandwidth networks; to make adaption to lower bandwidth (par 0066; par 0119; par 0142, “in certain embodiments (e.g., such as when bandwidth reduction is required) only the left (right) eye video may be captured and the right (left) stream may be reproduced by performing a transformation on the left (right) video stream (i.e., using the coordinate relationship between the left and right eyes of a user as well as the coordinates of the event).”; Fig. 10B; par 0088-0090, “an H.264 encoder 1016 encodes the video streams for transmission to end users and a muxer & file writer 1017 generates video files 1018 (e.g., in an MP4 file format) at different compression ratios and/or bitrates. The muxer & file writer 1017 combines the H.264 encoded video with the audio, which is captured and processed in parallel as described directly below”).

Regarding claim 15, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and Rosado further teaches wherein an event is selected from the group of: live or recorded events; virtual generated events; played-back events from a local or network source (abstract, live event; par 0077, recorded event; par 0086, virtual event; par 0134-0135, played-back event).

Regarding claim 16, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and further teaches wherein the audio-visual projection and processing device further comprises a plurality of sensors for exploring the 3D spatial environment of each local user and making optimal adaptation of the editing to said 3D spatial environmental volume (Rosado: par 0034, par 0066-0067, “As the depth and surface information is obtained by cameras 117, processors of the cameras 117 can process the depth information to generate three-dimensional depth images corresponding to the objects at the location 115 and/or the location 115 itself. The three-dimensional depth images can enable the cameras 117 to distinguish various objects in the environment from one another”; Borg: par 0035-0036, “The multiple signals of audio and video can be then switched or mixed in the switcher (119), either automatically or manually by an editor or technical director. The completed composite signal ready for POD audience viewing can be then sent via any communication technology, such as a standard broadband delivery system, using the Transmission component (120) …. When the signal is received at the Point Of Display or POD (130), the signal can be decrypted (and/or the watermark authenticated) (128) and then sent through the POD projection system which can consist of one or more A Roll projectors or video displays (134) which present the A Roll environment video that can include, …. All video and audio signals, as well as laser and computer generated light shows as described in later Figures, can be controlled through the LightPiano.TM. (132), a system that provides a graphically based environment system controller.”; par 0045, “compound Effects (or ‘filters’) can be stored in the Memory Bank locations (470). To ‘compose’ the desired surround environment, the icons for the various inputs are dragged and dropped from the various sections of the interface onto the desired Screens in the Room Display (410). In this example, the LightPiano operator drags the icon for A Roll Set 1 (442) onto the position for Screen One (412), while applying Effect 3 (432) to the video signal”).

Regarding claim 20, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and Borg further teaches in which the output stream is in a container format, being dedicated for multi-channels (par 0035-0037, “Multi-channel high quality audio direct from the POC facility's soundboard can be captured (118) and delivered to the switcher (119). The multiple signals of audio and video can be then switched or mixed in the switcher (119), either automatically or manually by an editor or technical director …. The Distribution component (140), can deliver the content downstream (141) through a multiplicity of distribution channels”).

Regarding claim 22, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, and further teaches being adapted to perform audio to video synchronization, audio to machine code/light synchronization and video to machine code/light synchronization (Rosado: par 0079, par 0100, “the private chat rooms can be used for personal chat, group chat, and also be used for meetings, consulting and more. Documents, webpages, videos, audio and images can be viewed in the meetings and synchronized with everyone participating”; Cochran et al.: par 0035, par 0038-0040, “Another embodiment of the time code synchronization mechanism 10 of FIG. 1 involves triggering the panoramic camera heads 12, 14 and 18 using a “hardware sync trigger” 42. The hardware trigger 42 is generated at specific time intervals based on the desired frame rate. This rate of hardware triggering has to match the rate of time codes being generated by the time code generator 20”; Borg: par 0036).

Claims 5-6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2018/0342106 to Rosado in view of U.S. PGPub 2020/0349672 to Cochran et al., further in view of U.S. PGPub 2005/0024488 to Borg, further in view of U.S. PGPub 2007/0159604 to Belliveau.

Regarding claim 5, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, but does not explicitly teach wherein the light fixture device is a 2-D or 3-D projector having at least pan, tilt, zoom, focus and keystone functions. In a related endeavor, Belliveau teaches wherein the light fixture device is a 2-D or 3-D projector having at least pan, tilt, zoom, focus and keystone functions (par 0016, par 0018, zoom and focus functions; par 0048, par 0051, the projector has pan and tilt functions; par 0049, keystone correction). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Rosado as modified by Cochran et al. and Borg such that the light fixture device is a 2-D or 3-D projector having at least pan, tilt, zoom, focus and keystone functions, as taught by Belliveau, in order to incorporate a video projector with a moveable mirror system that directs the images projected by the projector onto the stage or projection surface, providing a true projected image through variable control functions.

Regarding claim 6, Rosado as modified by Cochran et al., Borg, and Belliveau teaches all the limitations of claim 5, and Belliveau further teaches wherein the light fixture device is a PTZKF projector (see the rejection of claim 5; the projector has pan, tilt, zoom, focus and keystone functions).

Regarding claim 8, Rosado as modified by Cochran et al., Borg, and Belliveau teaches all the limitations of claim 6, and Cochran et al. further teach wherein the projector is a 7-D projector (Fig. 1, multiple projectors to display video).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2018/0342106 to Rosado in view of U.S. PGPub 2020/0349672 to Cochran et al., further in view of U.S. PGPub 2005/0024488 to Borg, further in view of U.S. PGPub 2007/0159604 to Belliveau, further in view of U.S. PGPub 2009/0027622 to Lalley et al.

Regarding claim 7, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 5, but does not explicitly teach wherein the 3-D projector is a sphere projector that projects a real complete sphere video and visual output to cover a geometric space from a single source point of projection. In a related endeavor, Lalley et al. teach wherein the 3-D projector is a sphere projector that projects a real complete sphere video and visual output to cover a geometric space from a single source point of projection (Fig. 1; par 0054; par 0062, “The three-dimensional (e.g., partial sphere) projection surface is coupled to a housing 212 which houses and protects various electrical and mechanical components of the system 200 as well as the output of audio and coherent visual data”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Rosado as modified by Cochran et al. and Borg to include wherein the 3-D projector is a sphere projector that projects a real complete sphere video and visual output to cover a geometric space from a single source point of projection, as taught by Lalley et al., in order to generate incident light beams for a sphere projector that form a clear, sharp, focused image on the projection surface, satisfying the need for variable-focal-length projection coverage of a three-dimensional screen.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2018/0342106 to Rosado in view of U.S. PGPub 2020/0349672 to Cochran et al., further in view of U.S. PGPub 2005/0024488 to Borg, further in view of U.S. PGPub 2013/0335407 to Reitan.

Regarding claim 17, Rosado as modified by Cochran et al. and Borg teaches all the limitations of claim 1, but is silent regarding the multimedia output device being used as a publisher, to thereby create an event of a local user to be emulated at the 3D space of other users or at the remote event. In a related endeavor, Reitan teaches in which the multimedia output device is used as a publisher, to thereby create an event of a local user to be emulated at the 3D space of other users or at the remote event (par 0163, “virtual control panel 203 is a set of controls embedded used to control what portion of the 3-D augmented-reality environment is presented to a user”; par 0244, “a plurality of users 301, 302, 303, and 304, both remote and local, may meet at an augmented reality environment 300 that appears to be a club. In this example a first user 301, 302, 303, and 304 may interact with a second user 301, 302, 303, and 304 regardless of whether either user 301, 302, 303, and 304 is a remote user 304 or a local user 301.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Rosado as modified by Cochran et al. and Borg to include using the multimedia output device as a publisher, to thereby create an event of a local user to be emulated at the 3D space of other users or at the remote event, as taught by Reitan, in order to allow television and movie viewers to step into the action, moving freely about landscapes, choosing which aspects of recorded events to view based on the viewer's interests and preferences, while interacting with characters and objects within the content, including the advertisers' products.

Allowable Subject Matter

Claims 18 and 23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim 21 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: the cited prior art fails to teach the combination of elements recited in claim 18, including “in which the multimedia output device comprises: a) an optical source for generating modulated circular waves at predetermined frequencies; b) two or more orthogonally positioned screw shapes tubes, for conveying said modulated circular waves to a conical prism; c) a conical prism for generating an output of complete spatial grid with high resolution geometrical shape; and d) a disk prism spinning at a predetermined rate, for producing transmitted optical waves that cause interference in desired points along said grid, while at each point, generating spatial pixels with colors and intensity that correspond to the image to be projected”.

The following is a statement of reasons for the indication of allowable subject matter: the cited prior art fails to teach the combination of elements recited in claim 21, including “in which the audio-visual sensory projection and processing device is configured to: a) sample the user local audio output at the user's local side; and b) performing real-time adaptive synchronization, based on the data extracted from said local audio output, to optimally match the user's local side to the streamed source data”.

The following is a statement of reasons for the indication of allowable subject matter: the cited prior art fails to teach the combination of elements recited in claim 23, including “wherein the 7D projector is implemented using a gun of particles that transmits energy waves and/or particles, as well as a predefined wave energy mass in predetermined frequencies using several transmitted modulation schemes, to generate a measured reaction between waves critical energy mass, used to generate 4D spatial pixels at any desired point along the generated spatial 4D grid, thereby creating a desired view in the surrounding volume”.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge, whose telephone number is (571) 272-5556. The examiner can normally be reached 8:00 to 5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN GE/
Primary Examiner, Art Unit 2619

Prosecution Timeline

Jun 06, 2024
Application Filed
Feb 22, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592024
QUANTIFICATION OF SENSOR COVERAGE USING SYNTHETIC MODELING AND USES OF THE QUANTIFICATION
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586296
METHODS AND PROCESSORS FOR RENDERING A 3D OBJECT USING MULTI-CAMERA IMAGE INPUTS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579704
VIDEO GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573164
DESIGN DEVICE, PRODUCTION METHOD, AND STORAGE MEDIUM STORING DESIGN PROGRAM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573151
PERSONALIZED DEFORMABLE MESH BY FINETUNING ON PERSONALIZED TEXTURE
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 98% (+18.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
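How these tiles combine is straightforward arithmetic. A minimal sketch, assuming (the page does not confirm this) that the with-interview figure is simply the base probability plus the interview lift, capped at 100%:

    def projected_grant_probability(granted: int, resolved: int,
                                    interview_lift: float = 0.0) -> float:
        # Base probability from the career allow rate, optionally adjusted
        # by the interview lift and capped at 1.0 (100%).
        base = granted / resolved
        return min(base + interview_lift, 1.0)

    print(projected_grant_probability(416, 520))        # 0.80 -> 80%
    print(projected_grant_probability(416, 520, 0.18))  # 0.98 -> 98% with interview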
