Prosecution Insights
Last updated: April 19, 2026
Application No. 18/807,283

SHARED EVENT RECORDING AND RENDERING

Non-Final OA §103

Filed: Aug 16, 2024
Examiner: GUO, XILIN
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82%, above average (374 granted / 456 resolved; +20.0% vs TC avg)
Interview Lift: +17.4% for resolved cases with interview (strong)
Typical Timeline: 2y 5m average prosecution; 18 applications currently pending
Career History: 474 total applications across all art units

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)

Deltas are measured against estimated Tech Center averages • Based on career data from 456 resolved cases
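As a quick sanity check, the implied Tech Center averages can be recovered from the per-statute rates and deltas above. The sketch below is a back-of-envelope reconstruction that assumes each delta is a simple difference (examiner rate minus TC average); the tool's actual methodology is not disclosed here.

```python
# Back-of-envelope check (assumption: delta = examiner_rate - tc_average).
# Numbers are copied from the Statute-Specific Performance panel above.
rates = {
    "§101": (7.6, -32.4),
    "§103": (56.3, +16.3),
    "§102": (12.8, -27.2),
    "§112": (19.0, -21.0),
}
for statute, (examiner_rate, delta) in rates.items():
    tc_average = examiner_rate - delta  # e.g. §103: 56.3 - 16.3 = 40.0
    print(f"{statute}: examiner {examiner_rate:.1f}% vs TC average ~{tc_average:.1f}%")
```

Running this yields 40.0% for all four statutes, which suggests the deltas are measured against a single overall baseline rather than separate per-statute averages.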

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over ICHIKAWA et al. (U.S. Patent Application Publication 2020/0051336 A1) in view of Filip et al. (U.S. Patent Application Publication 2025/0086915 A1).

Regarding claim 1, ICHIKAWA discloses a method comprising: at a first device (FIG. 1; paragraph [0089], a server 10-1; paragraph [0088], servers 10-1 to 10-7 according to the first to seventh embodiments may collectively be referred to as a server 10 in the specification and the drawings ...; paragraph [0579], a hardware configuration of the server 10 common in each of the present embodiments will be described with reference to FIG. 71. As illustrated in FIG. 71, the server 10 includes a CPU 900; FIG. 7 shows server 10-1) comprising one or more processors (Paragraph [0580], the CPU 900 includes a processor such as a microprocessor): determining that a second device (Paragraph [0089], client 20; paragraph [0170], the client 20 may be configured as a single device. As shown in FIG. 1, client 20b is arranged in real space B; FIG. 2 shows client 20) is currently at an event at a physical environment based on one or more event criterion (Paragraph [0090], a client 20 is arranged in each of the plurality of real spaces 2. Here, the real spaces 2 may be rooms ... As shown in FIG. 1, client 20b is arranged in real space B. FIG. 7; paragraph [0141], the event recognizing unit 106 generates event information on the basis of chronological information transmitted from the recognizing unit 104. For example, in a case in which the user is participating in the generated shared space, and the user points at a desk located in the shared space, the event recognizing unit 106 generates information indicating that the desk is pointed at as the event information); providing a notification to the second device based on determining that the second device is currently at the event (FIG. 50; paragraph [0394], ... to start space sharing together by setting the real space 2a as a base space ... the server 10 first provides a notification of the input invitation message to the user 4b), the notification providing an option to authorize use of sensor data obtained by the second device during the event (Paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220. Here, the 2D image correlation information is information indicating a position in the captured 2D image corresponding to each object) to generate a temporal-based three-dimensional (3D) representation of the event (Paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1; paragraph [0123], FIG. 6 is an explanatory diagram illustrating a generation example of the shared space. As illustrated in FIG. 6, for example, a shared space generating unit 102 generates a shared space 40 by arranging ... the 3D data of the object 42c and the object 42d included in the stream received from the input unit 22 of the input unit 22 in the shared space 40 as shared objects); receiving authorization to use the sensor data to generate the temporal-based 3D representation of the event (Paragraph [0396], the shared space synthesizing unit 152 of the server 10 decides the real space 2a as a base space (S2557); paragraph [0373], as illustrated in FIG. 47, the shared space synthesizing unit 152 may process the content of the free viewpoint so that an animation in which all the objects displayed together before and after the base space is changed (for example, the user remaining in the shared space after the base space is changed or the like) fly and move from the base space before the change to the base space after the change is displayed), wherein the authorization is based on user input received at the second device in response to the notification (Paragraph [0396], in a case in which the user 4b accepts the invitation message (S2555: Yes)); obtaining the sensor data from the second device in accordance with the authorization (Paragraph [0120], the transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1 ...; paragraph [0126], the client connecting unit 120 transmits information received from the client 20 of the connection destination to the shared space managing unit 100-1); and generating the temporal-based 3D representation of the event (Paragraph [0123], a shared space generating unit 102 generates a shared space 40 by arranging the 3D data of the object 42a and the object 42b included in the stream received from the input unit 22 of the real space 2a and the real space 2b and the 3D data of the object 42c and the object 42d included in the stream received from the input unit 22 of the input unit 22 in the shared space 40 as shared objects) based on the sensor data obtained from the second device (As shown in FIG. 1, client 20b; paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1) and additional sensor data obtained from one or more other devices at the event (As shown in FIG. 1, client 20a; paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1).
However, ICHIKAWA does not specifically disclose generating a temporal-based three-dimensional (3D) representation of the event. In addition, Filip discloses (Abstract, a method for integrating media content with a three-dimensional (3D) scene to provide an immersive view of a location includes obtaining a 3D scene of the location which is generated based on a plurality of images, receiving media content temporally associated with the location, integrating at least a portion of the media content with the 3D scene of the location, and providing the integrated 3D scene of the location having the at least the portion of the media content integrated with the 3D scene of the location to represent a state of the location based on the temporal association of the media content with the location) generating a temporal-based three-dimensional (3D) representation of the event (FIGS. 1 and 2; paragraph [0060], the media content may be captured by a camera (e.g., image capturer 182) of a computing device ... For example, an image may include information including a date the image was captured, a time of day the image was captured ...; paragraph [0070], the 3D scene integrator 338 may be configured to integrate the user-generated content and/or machine-generated content with the initial 3D scene generated by 3D scene generator 336, for example, according to temporal information associated with the media content. For example, a first integrated 3D scene of a location may be associated with a first time (e.g., a first time of day, first time of year, etc.) based on media content captured at the first time or relating to the first time and a second integrated 3D scene of the location may be associated with a second time (e.g., a second time of day, second time of year, etc.) based on media content captured at the second time or relating to the second time). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for generating a shared space taught by ICHIKAWA to incorporate the teachings of Filip, applying the method for integrating media content with a three-dimensional (3D) scene taught by Filip to provide the temporal information associated with the captured images and generate the temporal-based three-dimensional (3D) representation of the event. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify ICHIKAWA according to the relied-upon teachings of Filip to obtain the invention as specified in the claim.

Regarding claim 2, the combination of ICHIKAWA in view of Filip discloses everything claimed as applied above (see claim 1), and the combination of ICHIKAWA in view of Filip discloses generating a temporal-based three-dimensional (3D) representation of the event. ICHIKAWA further discloses wherein access to use the temporal-based 3D representation of the event is based on the authorization (Paragraph [0125], FIG. 7 is a functional block diagram illustrating a configuration example of the server 10-1. As illustrated in FIG. 7, the server 10-1 has a shared space managing unit 100-1 and a plurality of client connecting units 120; paragraph [0138], FIG. 8 is a functional block diagram illustrating a further detailed configuration example of the shared space managing unit 100-1. As illustrated in FIG. 8, the shared space managing unit 100-1 includes a shared space generating unit 102 ...; paragraph [0144], the shared space generating unit 102 has a synchronizing unit 150, a shared space synthesizing unit 152 ...; paragraph [0238], the shared space synthesizing unit 152 can grant authority to access the object in the base space to the user 4b located in the real space other than the base space among one or more users 4 participating in the shared space. For example, the shared space synthesizing unit 152 grants (unconditionally) authority to access a device in the base space to the user 4b).

Regarding claim 3, the combination of ICHIKAWA in view of Filip discloses everything claimed as applied above (see claim 1), and the combination of ICHIKAWA in view of Filip discloses generating a temporal-based three-dimensional (3D) representation of the event. ICHIKAWA further discloses wherein users who contributed to the temporal-based 3D representation are provided access to use the temporal-based 3D representation of the event based on the authorization (Paragraph [0125], FIG. 7 is a functional block diagram illustrating a configuration example of the server 10-1. As illustrated in FIG. 7, the server 10-1 has a shared space managing unit 100-1 and a plurality of client connecting units 120; paragraph [0138], FIG. 8 is a functional block diagram illustrating a further detailed configuration example of the shared space managing unit 100-1. As illustrated in FIG. 8, the shared space managing unit 100-1 includes a shared space generating unit 102 ...; paragraph [0144], the shared space generating unit 102 has a synchronizing unit 150, a shared space synthesizing unit 152 ...; paragraph [0238], the shared space synthesizing unit 152 can grant authority to access the object in the base space to the user 4b located in the real space other than the base space among one or more users 4 participating in the shared space. For example, the shared space synthesizing unit 152 grants (unconditionally) authority to access a device in the base space to the user 4b).

Regarding claim 4, the combination of ICHIKAWA in view of Filip discloses everything claimed as applied above (see claim 1), and ICHIKAWA further discloses wherein the additional sensor data is obtained from the one or more other devices at the event (FIG. 1; paragraph [0170], the client 20 may be configured as a single device. As shown in FIG. 1, client 20a is arranged in real space A; FIG. 2 shows client 20) based on receiving additional authorization from the one or more other devices at the event to use the additional sensor data (Paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1) to generate the temporal-based 3D representation of the event (Paragraph [0123], a shared space generating unit 102 generates a shared space 40 by arranging the 3D data of the object 42a and the object 42b included in the stream received from the input unit 22 of the real space 2a and the real space 2b and the 3D data of the object 42c and the object 42d included in the stream received from the input unit 22 of the input unit 22 in the shared space 40 as shared objects).
Regarding claim 14, the combination of ICHIKAWA in view of Filip discloses everything claimed as applied above (see claim 1), and the combination of ICHIKAWA in view of Filip discloses generating a temporal-based three-dimensional (3D) representation of the event (see claim 1). ICHIKAWA discloses further comprising updating a portion of the temporal-based 3D representation of the event by: obtaining user-based image content from a user database (Paragraphs [0138]-[0139], FIG. 8 is a functional block diagram illustrating a further detailed configuration example of the shared space managing unit 100-1. As illustrated in FIG. 8, the shared space managing unit 100-1 includes a shared space generating unit 102, a recognizing unit 104, an event recognizing unit 106, and a control unit 108 ... the recognizing unit 104 first acquires the shared space frame data from a shared space frame data DB 156 in a frame order. Then, the recognizing unit 104 performs various types of recognition processes on the basis of the acquired shared space frame data, and transmits the recognized result to the event recognizing unit 106); and augmenting the portion of the temporal-based 3D representation of the event based on the user-based image content (Paragraphs [0140]-[0152], the event recognizing unit 106 transmits the generated event information to the control unit 108 ... The shared space generating unit 102 generates the shared space frame data on the basis of the frame data and the meta information obtained from the streams received from a plurality of clients 20 ... the delivering unit 154 transmits each piece of generated frame data to the client connecting unit 120 corresponding to the real space of the transmission destination of the frame data).

Regarding claim 15, the combination of ICHIKAWA in view of Filip discloses everything claimed as applied above (see claim 1), and the combination of ICHIKAWA in view of Filip discloses wherein generating the temporal-based 3D representation of the event based on the sensor data obtained from the second device and the additional sensor data obtained from the one or more other devices at the event (see claim 1) comprises: determining synchronized data based on the sensor data obtained from the second device and the additional sensor data obtained from the one or more other devices based on one or more synchronization algorithms (ICHIKAWA: paragraphs [0138]-[0139], FIG. 8 is a functional block diagram illustrating a further detailed configuration example of the shared space managing unit 100-1. As illustrated in FIG. 8, the shared space managing unit 100-1 includes a shared space generating unit 102, a recognizing unit 104, an event recognizing unit 106, and a control unit 108; paragraph [0144], the shared space generating unit 102 generates the shared space frame data on the basis of the frame data and the meta information obtained from the streams received from a plurality of clients 20. Further, as illustrated in FIG. 8, the shared space generating unit 102 has a synchronizing unit 150, a shared space synthesizing unit 152, a delivering unit 154, and a shared space frame data DB 156; paragraph [0145], the synchronizing unit 150 sequentially transmits the frame data and the second control information received from each of a plurality of clients 20 to the shared space synthesizing unit 152 together for each piece of information (for example, each frame) having the same timing); and generating the temporal-based 3D representation of the event based on the determined synchronized data (Paragraphs [0146]-[0152], the shared space synthesizing unit 152 generates the shared space frame data on the basis of the frame data of each of real spaces transmitted from the synchronizing unit 150 ... On the basis of the shared space frame data generated by the shared space synthesizing unit 152, the delivering unit 154 generates frame data to be transmitted to the output unit 24 in the real space for each real space).

Regarding claim 16, the combination of ICHIKAWA in view of Filip discloses everything claimed as applied above (see claim 1), and ICHIKAWA further discloses wherein the one or more synchronization algorithms are based on: a common clock synchronization associated with the second device and the one or more other devices (FIG. 1; paragraph [0093], the server 10-1 generates content of a free viewpoint by synthesizing 3D data of substantially all of each real space 2 in which each user performing communication is located. Further, the respective users can freely communicate while having an experience as if they were located within the same space by viewing the content of the free viewpoint at the same time); image content obtained from the second device corresponding to additional image content obtained from the one or more other devices (Paragraph [0170], the client 20 may be configured as a single device. As shown in FIG. 1, client 20a is arranged in real space A; FIG. 2 shows client 20; paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1); audio content obtained from the second device corresponding to additional audio content obtained from the one or more other devices (FIG. 4; paragraph [0116], sound data at the time of the corresponding frame that is recorded by the sensor unit 220 is stored as the audio data 304); or a combination thereof.

Regarding claim 17, the combination of ICHIKAWA in view of Filip discloses everything claimed as applied above (see claim 1), and the combination of ICHIKAWA in view of Filip discloses wherein generating the temporal-based 3D representation of the event based on the sensor data obtained from the second device and the additional sensor data obtained from the one or more other devices at the event (see claim 1) comprises: determining refined image data based on the sensor data obtained from the second device (As shown in FIG. 1, client 20b; FIG. 2; paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1) and the additional sensor data obtained from the one or more other devices (As shown in FIG. 1, client 20a; paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1) based on one or more transformation algorithms (Paragraph [0110], the recognizing unit 224 performs various types of recognition processes on the basis of the frame data transmitted from the sensor unit 220 ...; paragraph [0111], the recognizing unit 224 recognizes the type of the object on the basis of the frame data ...; paragraph [0114], the recognizing unit 224 adds the result of the recognition process to the transmitted frame data, and transmits the resulting frame data to the stream generating unit 226; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1); and generating the temporal-based 3D representation of the event (Paragraph [0123], a shared space generating unit 102 generates a shared space 40 by arranging the 3D data of the object 42a and the object 42b included in the stream received from the input unit 22 of the real space 2a and the real space 2b and the 3D data of the object 42c and the object 42d included in the stream received from the input unit 22 of the input unit 22 in the shared space 40 as shared objects) based on the determined refined image data (Paragraph [0120], the transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1 ...; paragraph [0126], the client connecting unit 120 transmits information received from the client 20 of the connection destination to the shared space managing unit 100-1).

Regarding claim 18, ICHIKAWA discloses a first device (FIG. 1; paragraph [0089], a server 10-1; paragraph [0088], servers 10-1 to 10-7 according to the first to seventh embodiments may collectively be referred to as a server 10 in the specification and the drawings ...; paragraph [0579], a hardware configuration of the server 10 common in each of the present embodiments will be described with reference to FIG. 71. As illustrated in FIG. 71, the server 10 includes a CPU 900; FIG. 7 shows server 10-1) comprising: one or more sensors (Paragraph [0474], the free viewpoint live content server 52 is a device that distributes free viewpoint live content to the server 10-4 or the like, for example. Here, the live content of the free viewpoint is content of the free viewpoint generated on the basis of sensing performed by a sensor unit 520); a non-transitory computer-readable storage medium (Paragraph [0579], the server 10 includes a read only memory (ROM) 902, a RAM 904); and one or more processors coupled to the non-transitory computer-readable storage medium (Paragraphs [0579]-[0580], the server 10 includes a CPU 900 ... the CPU 900 includes a processor such as a microprocessor; paragraph [0583], the bus 906 interconnects the CPU 900, the ROM 902 and the RAM 904), wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors (Paragraphs [0581]-[0582], the ROM 902 stores programs, control data such as an operation parameter, or the like, to be used by the CPU 900 ... The RAM 904 temporarily stores, for example, programs to be executed by the CPU 900), cause the one or more processors to perform operations comprising: determining that a second device (Paragraph [0089], client 20; paragraph [0170], the client 20 may be configured as a single device. As shown in FIG. 1, client 20b is arranged in real space B; FIG. 2 shows client 20) is currently at an event at a physical environment based on one or more event criterion (Paragraph [0090], a client 20 is arranged in each of the plurality of real spaces 2. Here, the real spaces 2 may be rooms ... As shown in FIG. 1, client 20b is arranged in real space B. FIG. 7; paragraph [0141], the event recognizing unit 106 generates event information on the basis of chronological information transmitted from the recognizing unit 104. For example, in a case in which the user is participating in the generated shared space, and the user points at a desk located in the shared space, the event recognizing unit 106 generates information indicating that the desk is pointed at as the event information); providing a notification to the second device based on determining that the second device is currently at the event (FIG. 50; paragraph [0394], ... to start space sharing together by setting the real space 2a as a base space ... the server 10 first provides a notification of the input invitation message to the user 4b), the notification providing an option to authorize use of sensor data obtained by the second device during the event (Paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220. Here, the 2D image correlation information is information indicating a position in the captured 2D image corresponding to each object) to generate a temporal-based three-dimensional (3D) representation of the event (Paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1; paragraph [0123], FIG. 6 is an explanatory diagram illustrating a generation example of the shared space. As illustrated in FIG. 6, for example, a shared space generating unit 102 generates a shared space 40 by arranging ... the 3D data of the object 42c and the object 42d included in the stream received from the input unit 22 of the input unit 22 in the shared space 40 as shared objects); receiving authorization to use the sensor data to generate the temporal-based 3D representation of the event (Paragraph [0396], the shared space synthesizing unit 152 of the server 10 decides the real space 2a as a base space (S2557); paragraph [0373], as illustrated in FIG. 47, the shared space synthesizing unit 152 may process the content of the free viewpoint so that an animation in which all the objects displayed together before and after the base space is changed (for example, the user remaining in the shared space after the base space is changed or the like) fly and move from the base space before the change to the base space after the change is displayed), wherein the authorization is based on user input received at the second device in response to the notification (Paragraph [0396], in a case in which the user 4b accepts the invitation message (S2555: Yes)); obtaining the sensor data from the second device in accordance with the authorization (Paragraph [0120], the transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1 ...; paragraph [0126], the client connecting unit 120 transmits information received from the client 20 of the connection destination to the shared space managing unit 100-1); and generating the temporal-based 3D representation of the event (Paragraph [0123], a shared space generating unit 102 generates a shared space 40 by arranging the 3D data of the object 42a and the object 42b included in the stream received from the input unit 22 of the real space 2a and the real space 2b and the 3D data of the object 42c and the object 42d included in the stream received from the input unit 22 of the input unit 22 in the shared space 40 as shared objects) based on the sensor data obtained from the second device (As shown in FIG. 1, client 20b; paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1) and additional sensor data obtained from one or more other devices at the event (As shown in FIG. 1, client 20a; paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1). However, ICHIKAWA does not specifically disclose generating a temporal-based three-dimensional (3D) representation of the event.
In addition, Filip discloses (Abstract, a method for integrating media content with a three-dimensional (3D) scene to provide an immersive view of a location includes obtaining a 3D scene of the location which is generated based on a plurality of images, receiving media content temporally associated with the location, integrating at least a portion of the media content with the 3D scene of the location, and providing the integrated 3D scene of the location having the at least the portion of the media content integrated with the 3D scene of the location to represent a state of the location based on the temporal association of the media content with the location) generating a temporal-based three-dimensional (3D) representation of the event (FIGS. 1 and 2; paragraph [0060], the media content may be captured by a camera (e.g., image capturer 182) of a computing device ... For example, an image may include information including a date the image was captured, a time of day the image was captured ...; paragraph [0070], the 3D scene integrator 338 may be configured to integrate the user-generated content and/or machine-generated content with the initial 3D scene generated by 3D scene generator 336, for example, according to temporal information associated with the media content. For example, a first integrated 3D scene of a location may be associated with a first time (e.g., a first time of day, first time of year, etc.) based on media content captured at the first time or relating to the first time and a second integrated 3D scene of the location may be associated with a second time (e.g., a second time of day, second time of year, etc.) based on media content captured at the second time or relating to the second time). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for generating a shared space taught by ICHIKAWA to incorporate the teachings of Filip, applying the method for integrating media content with a three-dimensional (3D) scene taught by Filip to provide the temporal information associated with the captured images and generate the temporal-based three-dimensional (3D) representation of the event. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify ICHIKAWA according to the relied-upon teachings of Filip to obtain the invention as specified in the claim.

Regarding claim 19, the combination of ICHIKAWA in view of Filip discloses everything claimed as applied above (see claim 18), and the combination of ICHIKAWA in view of Filip discloses generating a temporal-based three-dimensional (3D) representation of the event. ICHIKAWA further discloses wherein access to use the temporal-based 3D representation of the event is based on the authorization (Paragraph [0125], FIG. 7 is a functional block diagram illustrating a configuration example of the server 10-1. As illustrated in FIG. 7, the server 10-1 has a shared space managing unit 100-1 and a plurality of client connecting units 120; paragraph [0138], FIG. 8 is a functional block diagram illustrating a further detailed configuration example of the shared space managing unit 100-1. As illustrated in FIG. 8, the shared space managing unit 100-1 includes a shared space generating unit 102 ...; paragraph [0144], the shared space generating unit 102 has a synchronizing unit 150, a shared space synthesizing unit 152 ...; paragraph [0238], the shared space synthesizing unit 152 can grant authority to access the object in the base space to the user 4b located in the real space other than the base space among one or more users 4 participating in the shared space. For example, the shared space synthesizing unit 152 grants (unconditionally) authority to access a device in the base space to the user 4b).

Regarding claim 20, ICHIKAWA discloses a non-transitory computer-readable storage medium, storing program instructions executable on a device (FIG. 1; paragraph [0089], a server 10-1; paragraph [0088], servers 10-1 to 10-7 according to the first to seventh embodiments may collectively be referred to as a server 10 in the specification and the drawings ...; paragraph [0579], a hardware configuration of the server 10 common in each of the present embodiments will be described with reference to FIG. 71. As illustrated in FIG. 71, the server 10 includes a CPU 900; FIG. 7 shows server 10-1; paragraphs [0581]-[0582], the ROM 902 stores programs, control data such as an operation parameter, or the like, to be used by the CPU 900 ... The RAM 904 temporarily stores, for example, programs to be executed by the CPU 900) to perform operations comprising: determining that a second device (Paragraph [0089], client 20; paragraph [0170], the client 20 may be configured as a single device. As shown in FIG. 1, client 20b is arranged in real space B; FIG. 2 shows client 20) is currently at an event at a physical environment based on one or more event criterion (Paragraph [0090], a client 20 is arranged in each of the plurality of real spaces 2. Here, the real spaces 2 may be rooms ... As shown in FIG. 1, client 20b is arranged in real space B. FIG. 7; paragraph [0141], the event recognizing unit 106 generates event information on the basis of chronological information transmitted from the recognizing unit 104. For example, in a case in which the user is participating in the generated shared space, and the user points at a desk located in the shared space, the event recognizing unit 106 generates information indicating that the desk is pointed at as the event information); providing a notification to the second device based on determining that the second device is currently at the event (FIG. 50; paragraph [0394], ... to start space sharing together by setting the real space 2a as a base space ... the server 10 first provides a notification of the input invitation message to the user 4b), the notification providing an option to authorize use of sensor data obtained by the second device during the event (Paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220. Here, the 2D image correlation information is information indicating a position in the captured 2D image corresponding to each object) to generate a temporal-based three-dimensional (3D) representation of the event (Paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1; paragraph [0123], FIG. 6 is an explanatory diagram illustrating a generation example of the shared space. As illustrated in FIG. 6, for example, a shared space generating unit 102 generates a shared space 40 by arranging ... the 3D data of the object 42c and the object 42d included in the stream received from the input unit 22 of the input unit 22 in the shared space 40 as shared objects); receiving authorization to use the sensor data to generate the temporal-based 3D representation of the event (Paragraph [0396], the shared space synthesizing unit 152 of the server 10 decides the real space 2a as a base space (S2557); paragraph [0373], as illustrated in FIG. 47, the shared space synthesizing unit 152 may process the content of the free viewpoint so that an animation in which all the objects displayed together before and after the base space is changed (for example, the user remaining in the shared space after the base space is changed or the like) fly and move from the base space before the change to the base space after the change is displayed), wherein the authorization is based on user input received at the second device in response to the notification (Paragraph [0396], in a case in which the user 4b accepts the invitation message (S2555: Yes)); obtaining the sensor data from the second device in accordance with the authorization (Paragraph [0120], the transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1 ...; paragraph [0126], the client connecting unit 120 transmits information received from the client 20 of the connection destination to the shared space managing unit 100-1); and generating the temporal-based 3D representation of the event (Paragraph [0123], a shared space generating unit 102 generates a shared space 40 by arranging the 3D data of the object 42a and the object 42b included in the stream received from the input unit 22 of the real space 2a and the real space 2b and the 3D data of the object 42c and the object 42d included in the stream received from the input unit 22 of the input unit 22 in the shared space 40 as shared objects) based on the sensor data obtained from the second device (As shown in FIG. 1, client 20b; paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1) and additional sensor data obtained from one or more other devices at the event (As shown in FIG. 1, client 20a; paragraph [0102], the sensor unit 220 further generates 2D image correlation information indicating a correspondence relation between each object and a 2D image captured by the sensor unit 220; paragraphs [0118]-[0120], the stream generating unit 226 transmits the generated stream to the transmitting unit 228 ... The transmitting unit 228 transmits the stream transmitted from the stream generating unit 226 to the server 10-1). However, ICHIKAWA does not specifically disclose generating a temporal-based three-dimensional (3D) representation of the event.
In addition, Filip discloses (Abstract, a method for integrating media content with a three-dimensional (3D) scene to provide an immersive view of a location includes obtaining a 3D scene of the location which is generated based on a plurality of images, receiving media content temporally associated with the location, integrating at least a portion of the media content with the 3D scene of the location, and providing the integrated 3D scene of the location having the at least the portion of the media content integrated with the 3D scene of the location to represent a state of the location based on the temporal association of the media content with the location) generating a temporal-based three-dimensional (3D) representation of the event (FIGS. 1 and 2; paragraph [0060], the media content may be captured by a camera (e.g., image capturer 182) of a computing device ... For example, an image may include information including a date the image was captured, a time of day the image was captured ...; paragraph [0070], the 3D scene integrator 338 may be configured to integrate the user-generated content and/or machine-generated content with the initial 3D scene generated by 3D scene generator 336, for example, according to temporal information associated with the media content. For example, a first integrated 3D scene of a location may be associated with a first time (e.g., a first time of day, first time of year, etc.) based on media content captured at the first time or relating to the first time and a second integrated 3D scene of the location may be associated with a second time (e.g., a second time of day, second time of year, etc.) based on media content captured at the second time or relating to the second time). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for generating a shared space taught by ICHIKAWA to incorporate the teachings of Filip, applying the method for integrating media content with a three-dimensional (3D) scene taught by Filip to provide the temporal information associated with the captured images and generate the temporal-based three-dimensional (3D) representation of the event. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify ICHIKAWA according to the relied-upon teachings of Filip to obtain the invention as specified in the claim.

Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over ICHIKAWA et al. (U.S. Patent Application Publication 2020/0051336 A1) in view of Filip et al. (U.S. Patent Application Publication 2025/0086915 A1), and further in view of SU et al. (U.S. Patent Application Publication 2025/0308142 A1).

Regarding claim 5, the combination of ICHIKAWA in view of Filip discloses everything claimed as applied above (see claim 1). However, ICHIKAWA does not specifically disclose wherein the temporal-based 3D representation is a neural rendering model. In addition, SU discloses wherein the temporal-based 3D representation (FIG. 1; paragraphs [0014]-[0022], example embodiments described herein relate to scalable 3D-scene representation ...; paragraphs [0035]-[0040], scalability allows one to apply a variety of diverse quality criteria to generate the enhancement layer, including: ... Temporal frame rate: one can apply a frame rate interpolation on the base layer, then add the neural-field residual to generate an output at a higher frame rate) is a neural rendering model (Paragraph [0029], there are multiple 3D scene representation models, including neural radiance field (NeRF) ...). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for generating a shared space taught by ICHIKAWA and Filip to incorporate the teachings of SU, applying the 3D-scene representation using neural field modeling taught by SU to provide the neural radiance field model for rendering the temporal-based 3D representation. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify ICHIKAWA in view of Filip according to the relied-upon teachings of SU to obtain the invention as specified in the claim.

Regarding claim 6, the combination of ICHIKAWA, Filip, and SU discloses everything claimed as applied above (see claim 5), and the combination of ICHIKAWA, Filip, and SU discloses wherein the neural rendering model is a neural radiance field (NeRF) model (SU: paragraph [0029], there are multiple 3D scene representation models, including neural radiance field (NeRF) ...).

Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over ICHIKAWA et al. (U.S. Patent Application Publication 2020/0051336 A1) in view of Filip et al. (U.S. Patent Application Publication 2025/0086915 A1), further in view of SU et al. (U.S. Patent Application Publication 2025/0308142 A1), and further in view of Furukawa et al. (U.S. Patent Application Publication 2025/0036135 A1).

Regarding claim 7, the combination of ICHIKAWA, Filip, and SU discloses everything claimed as applied above (see claim 5). However, ICHIKAWA does not specifically disclose wherein the NeRF model is configured to receive input corresponding to a 3D viewpoint and a timepoint during the event. In addition, Furukawa discloses wherein the NeRF model (Paragraph [0038], Neural radiance fields (NeRF) is an example of an offline approach for generating high-resolution 3D viewpoints for an object, given a set of images and camera poses) is configured to receive input corresponding to a 3D viewpoint (Paragraph [0038], the images and poses in FIG. 2 are used as inputs for NeRF to generate a 3D rendering of the captured object ... The differences in results are not too noticeable from the far viewpoint, but the differences can be seen from the close viewpoint) and a timepoint during the event (Paragraph [0043], let knowledge on the initial pose of the robot be X̂_0^r. Information acquired by the depth sensor at time step k and the robot motion from k−1 to k measured by the motion sensor). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for generating a shared space taught by ICHIKAWA, Filip, and SU to incorporate the teachings of Furukawa, applying the method for generating a 3D rendering of a captured object using neural field modeling taught by Furukawa to provide the 3D viewpoints and a desired time for capturing information to the neural radiance field model for rendering the temporal-based 3D representation. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify ICHIKAWA in view of Filip and SU according to the relied-upon teachings of Furukawa to obtain the invention as specified in the claim.

Regarding claim 8, the combination of ICHIKAWA, Filip, and SU discloses everything claimed as applied above (see claim 6). However, ICHIKAWA does not specifically disclose wherein the NeRF model is configured to output a plurality of ray values corresponding to a view of the event from the 3D viewpoint at the timepoint during the event. In addition, Furukawa discloses wherein the NeRF model is configured to output a plurality of ray values (Paragraph [0038], Neural radiance fields (NeRF) is an example of an offline approach for generating high-resolution 3D viewpoints for an object, given a set of images and camera poses. The key focus of NeRF is to efficiently represent the scene with implicit functions along viewing rays by outputting a density function relating to the lengths of the rays with colors associated with the position along a given ray) corresponding to a view of the event from the 3D viewpoint at the timepoint during the event (Paragraph [0038], the images and poses in FIG. 2 are used as inputs for NeRF to generate a 3D rendering of the captured object ... The differences in results are not too noticeable from the far viewpoint, but the differences can be seen from the close viewpoint ... paragraph [0043], let knowledge on the initial pose of the robot be X̂_0^r. Information acquired by the depth sensor at time step k and the robot motion from k−1 to k measured by the motion sensor). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for generating a shared space taught by ICHIKAWA, Filip, and SU to incorporate the teachings of Furukawa, applying the method for generating a 3D rendering of a captured object using neural field modeling taught by Furukawa to provide the 3D viewpoints and a desired time for capturing information to the neural radiance field model for rendering the temporal-based 3D representation. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify ICHIKAWA in view of Filip and SU according to the relied-upon teachings of Furukawa to obtain the invention as specified in the claim.

Claims 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over ICHIKAWA et al. (U.S. Patent Application Publication 2020/0051336 A1) in view of Filip et al. (U.S. Patent Application Publication 2025/0086915 A1), further in view of SU et al. (U.S. Patent Application Publication 2025/0308142 A1), further in view of Furukawa et al. (U.S. Patent Application Publication 2025/0036135 A1), and further in view of GHAZVINIAN ZANJANI et al. (U.S. Patent Application Publication 2024/0386650 A1).

Regarding claim 9, the combination of ICHIKAWA, Filip, SU, and Furukawa discloses everything claimed as applied above (see claim 8). However, ICHIKAWA does not specifically disclose wherein the NeRF model is configured to output an expiration corresponding to one or more of the ray values. In addition, GHAZVINIAN ZANJANI discloses wherein the NeRF model (Paragraph [0038], 3D implicit geometric representation by a neural network is used, and may be referred to as Neural Radiance Fields (NeRFs)) is configured to output an expiration corresponding to one or more of the ray values (Paragraph [0038], a NeRF-based approach, a volumetric representation of the scene is learned, and radiance can be calculated based on performing ray marching through an encoded light field ...; paragraph [0069], the machine learning architecture 400 of FIG. 4 (e.g., also referred to as a Neural Mesh Fusion (NMF) network ... In some aspects, training of a machine learning network (e.g., such as the machine learning NMF network 400, among various other machine learning networks of FIGS. 3-12, etc.) can be performed using online training, offline training, and/or various combinations of online and offline training ... Additionally, offline may be based on one or more time conditions (e.g., after a particular amount of time has expired, such as a day, a week, a month, etc.) ...). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for generating a shared space taught by ICHIKAWA, Filip, SU, and Furukawa to incorporate the teachings of GHAZVINIAN ZANJANI, applying the 3D reconstruction techniques taught by GHAZVINIAN ZANJANI to provide the Neural Radiance Fields (NeRFs) to output an expiration corresponding to one or more of the ray values. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify ICHIKAWA in view of Filip, SU, and Furukawa according to the relied-upon teachings of GHAZVINIAN ZANJANI to obtain the invention as specified in the claim.

Regarding claim 10, the combination of ICHIKAWA, Filip, SU, Furukawa, and GHAZVINIAN ZANJANI discloses everything claimed as applied above (see claim 9), and the combination discloses wherein the expiration is determined based on a dissimilarity threshold corresponding to ray color or ray density (GHAZVINIAN ZANJANI: FIG. 4; paragraphs [0100]-[0101], ... by rendering training input images 410, using the rendering engine 470 ... The intersection of rays and the scene mesh ... If the quantity of rays that do not hit (e.g., intersect) with the scene mesh is greater than a threshold (e.g., a threshold ratio), then the current 2D input image 410 (Ij) can be used to construct a fragment ...).

Regarding claim 11, the combination of ICHIKAWA, Filip, SU, Furukawa, and GHAZVINIAN ZANJANI discloses everything claimed as applied above (see claim 9), and the combination discloses wherein the expiration is determined based on a dissimilarity threshold corresponding to ray position or orientation differences (GHAZVINIAN ZANJANI: FIG. 4; paragraphs [0100]-[0101], ... by rendering training input images 410, using the rendering engine 470 ... The intersection of rays and the scene mesh ... If the quantity of rays that do not hit (e.g., intersect) with the scene mesh is greater than a threshold (e.g., a threshold ratio), then the current 2D input image 410 (Ij) can be used to construct a fragment ...).

Regarding claim 12, the combination of ICHIKAWA, Filip, SU, Furukawa, and GHAZVINIAN ZANJANI discloses everything claimed as applied above (see claim 9), and the combination discloses wherein the expiration is determined based on a per-timepoint expiration budget (GHAZVINIAN ZANJANI: paragraph [0069], the machine learning architecture 400 of FIG. 4 (e.g., also referred to as a Neural Mesh Fusion (NMF) network ... In some aspects, training of a machine learning network (e.g., such as the machine learning NMF network 400, among various other machine learning networks of FIGS. 3-12, etc.) can be performed using online training, offline training, and/or various combinations of online and offline training ... Additionally, offline may be based on one or more time conditions (e.g., after a particular amount of time has expired, such as a day, a week, a month, etc.) ...).

Regarding claim 13, the combination of ICHIKAWA, Filip, SU, Furukawa, and GHAZVINIAN ZANJANI discloses everything claimed as applied above (see claim 9), and the combination discloses wherein the expiration is determined based on a prioritization determined based on determining visible scene changes (GHAZVINIAN ZANJANI: paragraph [0069], the machine learning architecture 400 of FIG. 4 (e.g., also referred to as a Neural Mesh Fusion (NMF) network ... In some aspects, training of a machine learning network (e.g., such as the machine learning NMF network 400, among various other machine learning networks of FIGS. 3-12, etc.) can be performed using online training, offline training, and/or various combinations of online and offline training ... Additionally, offline may be based on one or more time conditions (e.g., after a particular amount of time has expired, such as a day, a week, a month, etc.) ...).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Xilin Guo whose telephone number is (571) 272-5786. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:30 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XILIN GUO/
Primary Examiner, Art Unit 2616
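For orientation, claims 7-9 as discussed above describe a time-conditioned NeRF: a model queried with a 3D viewpoint and a timepoint that outputs per-ray color and density values, with an expiration attached to ray values. The toy sketch below illustrates only that query shape; the field function and the dissimilarity threshold are hypothetical placeholders, not the applicant's or any cited reference's implementation.

```python
# Illustrative only: a toy, NumPy-only stand-in for the time-conditioned NeRF
# discussed for claims 7-9 above (input: 3D viewpoint and a timepoint; output:
# per-ray color/density values, plus an "expiration" per claim 9).
import numpy as np

def radiance_field(points, t):
    """Placeholder f(x, t) -> (rgb, sigma); a real model would be a trained MLP."""
    rgb = 0.5 + 0.5 * np.sin(points + t)                   # fake color per sample
    sigma = np.clip(np.cos(points[..., :1] + t), 0, None)  # fake density >= 0
    return rgb, sigma

def render_ray(origin, direction, t, near=0.1, far=4.0, n_samples=64):
    """Standard NeRF volume-rendering quadrature for one ray at timepoint t."""
    z = np.linspace(near, far, n_samples)          # depths sampled along the ray
    pts = origin + z[:, None] * direction          # (n_samples, 3) 3D positions
    rgb, sigma = radiance_field(pts, t)
    deltas = np.diff(z, append=far)                # spacing between samples
    alpha = 1.0 - np.exp(-sigma[:, 0] * deltas)    # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)    # composited ray color (RGB)

origin, direction = np.zeros(3), np.array([0.0, 0.0, 1.0])
color_t0 = render_ray(origin, direction, t=0.0)
color_t1 = render_ray(origin, direction, t=1.0)
# Claim-9-style expiration (hypothetical rule): a cached ray value "expires"
# when the re-rendered color drifts past a dissimilarity threshold (claim 10).
expired = np.linalg.norm(color_t1 - color_t0) > 0.25
print(color_t1, "expired:", expired)
```

Claims 10-13 then vary only the expiration rule (color or density dissimilarity, ray position or orientation differences, a per-timepoint expiration budget, or prioritization by visible scene changes), which in a sketch like this would amount to swapping out the final comparison.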

Prosecution Timeline

Aug 16, 2024: Application Filed
Mar 11, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602855
LIVE MODEL PROMPTING AND REAL-TIME OUTPUT OF PHOTOREAL SYNTHETIC CONTENT
2y 5m to grant • Granted Apr 14, 2026
Patent 12597403
DISPLAY DEVICE FOR A VEHICLE
2y 5m to grant • Granted Apr 07, 2026
Patent 12579712
ASSET CREATION USING GENERATIVE ARTIFICIAL INTELLIGENCE
2y 5m to grant • Granted Mar 17, 2026
Patent 12579766
SYSTEM AND METHOD FOR RAPID OUTFIT VISUALIZATION
2y 5m to grant • Granted Mar 17, 2026
Patent 12573121
Automated Generation and Presentation of Sign Language Avatars for Video Content
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+17.4%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 456 resolved cases by this examiner. Grant probability derived from career allow rate.
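For reference, the headline projections above can be reproduced from the career statistics with simple arithmetic. This is a minimal sketch that assumes the base probability is just the career allow rate and the interview figure adds the interview lift before rounding; the tool's actual model is not disclosed here.

```python
# Reproducing the Prosecution Projections panel from the career stats above.
granted, resolved = 374, 456
base = granted / resolved            # 0.8202... -> the "82%" grant probability
interview_lift = 0.174               # the "+17.4%" interview lift
with_interview = base + interview_lift  # 0.9942 -> displays as "99%"
print(f"base: {base:.0%}, with interview: {with_interview:.0%}")
```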
