Prosecution Insights
Last updated: April 19, 2026
Application No. 18/506,024

Communicating Pre-rendered Media

Non-Final OA §103
Filed
Nov 09, 2023
Examiner
VU, KHOA
Art Unit
2611
Tech Center
2600 — Communications
Assignee
Qualcomm Incorporated
OA Round
3 (Non-Final)
68%
Grant Probability
Favorable
3-4
OA Rounds
3y 1m
To Grant
84%
With Interview

Examiner Intelligence

Grants 68% — above average
68%
Career Allow Rate
234 granted / 345 resolved
+5.8% vs TC avg
Strong +16% interview lift
+15.8%
Interview Lift
grant rate with vs. without interview, across resolved cases
Typical timeline
3y 1m
Avg Prosecution
27 currently pending
Career history
372
Total Applications
across all art units

Statute-Specific Performance

§101
8.2%
-31.8% vs TC avg
§103
73.3%
+33.3% vs TC avg
§102
8.1%
-31.9% vs TC avg
§112
5.9%
-34.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 345 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/12/2026 has been entered. Claims 1, 3-5, 7-12, 14-16, and 18-28, filed 03/12/2026, are presented for examination.

Response to Arguments

Applicant's arguments with respect to amended claims 1, 12, 23, and 26 and canceled claims 2 and 13, filed on 03/12/2026, have been considered but are not persuasive; the examiner finds that the amended limitations are taught by the previously introduced references.

In the Remarks, page 10, third paragraph, Applicant argued that the cited references fail to teach or suggest, at least, "generating, based on the pre-rendered content, description information that is configured to enable the UE to perform rendering operations using the pre-rendered content, and to indicate buffer information for one or more buffers by which the network computing device will stream the pre-rendered content" as recited in amended claim 1.

The examiner respectfully disagrees. Potetsianakis discloses in paragraph [0033] "the processing of the content data may be performed by a server application, preferably by an adaptive streaming server application"; in [0056] "An adaptation may be linked to a part of the content data of a content file or content stream using for example presentation time information, e.g. presentation time stamps, associated with the content…remote rendering of the 3D objects at a server system (e.g. a cloud system) so that pre-rendered content data is transmitted to the rendering device for playout by the rendering device"; in [0063] that, in addition, (part of) the content data may be processed and rendered remotely by a rendering engine 216 of the server system and transmitted as pre-rendered content data to the rendering device using suitable streaming protocols such as RTP and RTSP over UDP; in [0107] "Content data for such rendering device, that may include video data and/or synthetic image data, can be linked with adaptation information that identifies (at least) a target affect parameter value and an adaptation process for adapting the content (e.g. changing colors, changing or adding a 3D model, etc.) or for adapting the processing of the content data (e.g. increase or decrease playout speed, skip playout of certain content data, etc.)"; and in [0108] "The render device may include a network interface 408 for establishing communication with the server system". Potetsianakis thus teaches generating description information (adaptation information identifying, at least, target affect parameter values, presentation time stamps associated with the content, and adaptations such as changing colors, changing or adding a 3D model, or increasing or decreasing playout speed) that indicates buffer information for one or more buffers storing pre-rendered content data, the pre-rendered content data being transmitted to the rendering device from the server via an adaptive streaming protocol such as RTP or RTSP over UDP.

Independent claims 12, 23, and 26 have been amended to recite features similar to the distinguishing features of independent claim 1 and are rejected for the reasons explained above. Dependent claims 3-5, 7-11, 14-16, 18-22, 24-25, and 27-28 depend from claims 1, 12, 23, and 26, and the rejections of those claims are maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 7-8, 11-12, 14, 18-19, and 22-28 are rejected under 35 U.S.C. 103 as being unpatentable over Yip et al. (U.S. 2023/0316583 A1) in view of Peri et al. (U.S. 2023/0215075 A1), and further in view of Potetsianakis et al. (U.S. 2023/0205313 A1).

Regarding Claim 1 (Currently amended): Yip discloses a method for communicating rendered media to a user equipment (UE) performed by a processor of a network computing device (Yip, [0008]: "a method for performing rendering by a first device receiving 3D media data from a media server in a communication system, 2D rendering is to be performed by the AR glasses"; [0029]: "a processor of a computer"; [0036]: "FIG. 1 the device 120 may be a user equipment (UE)"; Yip teaches a method of communicating in which a UE (AR glasses 120, Fig. 1) renders media data received from a media server), comprising:

receiving pose information from the UE (Yip, [0090]: "The pose information parser 541 of the MEC 540 parses at least one of the pose information, the pose…information received from the vision engine 521 of the AR glasses 520"; i.e., receiving pose information from the UE (the AR glasses 520));

generating rendered content for processing by the UE based on the pose information received from the UE (Yip, [0092]: "The 3D media decoder 543 of the MEC 540 depacketizes and decodes the 3D media data received from the media server 560, and then, the 3D renderer 544 of the MEC 540 renders a plurality of 2D view video frames based on the pose information predicted in operation 503"; Yip teaches generating rendered content (decoding the 3D media data) for processing by the UE based on the pose information (2D view video frames based on the pose information predicted in operation 503) received from the UE (Fig. 5));

generating, based on the rendered content, description information that is configured to enable the UE to perform rendering operations using the rendered content (Yip, [0093]: "The 2D encoder and packetizer 545 of the MEC 540 encodes and packetizes the view rendered in operation 504 using a 2D codec"; [0094]: "The MEC 540 transmits the compressed media packet and view selection metadata to the AR glasses 520"; Yip teaches generating description information (e.g., the compressed media packet and view selection metadata) based on the rendered content (the encoded and packetized rendered view));

transmitting the description information to the UE (Yip, [0094]: "The MEC 540 transmits the compressed media packet and view selection metadata to the AR glasses 520"; Yip teaches transmitting the description information (the compressed media packet and view selection metadata) to the UE (AR glasses 520)); and

transmitting the rendered content to the UE (Yip, [0087]: "remote-renders the gathered 3D media data and provides it to the AR glasses 520. Remote rendering is performed between the AR glasses 520 and the MEC 540"; Yip, [0092]: "the 3D renderer 544 of the MEC 540 renders a plurality of 2D view video frames based on the pose information predicted in operation 503"; Yip teaches transmitting the rendered content (rendered 2D view video frames) to the UE (AR glasses 520)).

However, Yip does not explicitly teach generating pre-rendered content for processing by the UE based on the pose information; generating the description information based on the pre-rendered content; or indicating buffer information for one or more buffers by which the network computing device will stream the pre-rendered content.

Peri teaches generating pre-rendered content for processing by the UE based on the pose information (Peri, [0069]: "the media client 310 can send the latest pose information to the network AS 316 in operation 434. The network AS 316 can perform pre-rendering of the media based on the latest received pose information and any original scene updates in operation 436. The pre-rendering may include decoding and rendering of immersive media and encoding the rendered media…The pose information can be sent from the UE 202 to the server"; Peri teaches generating pre-rendered content (e.g., decoding and rendering of immersive media and encoding the rendered media) based on the pose information received from the UE (202)).

Yip and Peri are combinable because they are from the same field of endeavor (systems and methods for image processing) and address similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Yip to incorporate pre-rendered content (as taught by Peri), because Peri provides generating pre-rendered content based on pose information received from the UE (Peri, [0069]). Doing so accounts for cases in which the pose information is redundant, such as when the motion of the UE (AR glasses) is minimal and a new rendered frame is unnecessary, or when properties of the immersive content allow for re-projection by the UE (Peri, [0038]).
Potetsianakis teaches indicating buffer information for one or more buffers by which the network computing device will stream the pre-rendered content (Potetsianakis, [0033]: "the processing of the content data may be performed by a server application, preferably by an adaptive streaming server application"; [0056]: "An adaptation may be linked to a part of the content data of a content file or content stream using for example presentation time information, e.g. presentation time stamps, associated with the content…remote rendering of the 3D objects at a server system (e.g. a cloud system) so that pre-rendered content data is transmitted to the rendering device for playout by the rendering device"; [0063]: in addition, (part of) the content data may be processed and rendered remotely by a rendering engine 216 of the server system and transmitted as pre-rendered content data to the rendering device using suitable streaming protocols such as RTP and RTSP over UDP; [0107]: "Content data for such rendering device, that may include video data and/or synthetic image data, can be linked with adaptation information that identifies (at least) a target affect parameter value and an adaptation process for adapting the content (e.g. changing colors, changing or adding a 3D model, etc.) or for adapting the processing of the content data (e.g. increase or decrease playout speed, skip playout of certain content data, etc.)"; and [0108]: "The render device may include a network interface 408 for establishing communication with the server system". Potetsianakis teaches description information (adaptation information identifying target affect parameter values and presentation time stamps associated with the content) indicating buffer information for one or more buffers storing pre-rendered content data, which is transmitted to the rendering device from the server via an adaptive streaming protocol such as RTP or RTSP over UDP).

Yip, Peri, and Potetsianakis are combinable because they are from the same field of endeavor (systems and methods for image processing) and address similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Yip to include a buffer containing pre-rendered content (as taught by Potetsianakis) in order to transmit the pre-rendered content from a buffer to the rendering device as a stream, because Potetsianakis provides a buffer storing pre-rendered content data, including a description information extension, that is transmitted to the rendering device from the server via an adaptive streaming application (Potetsianakis, [0033], [0056], [0108]). Doing so provides for adapting the content data, the adaptation process being configured to add, remove, replace, or adjust at least part of the content data (Potetsianakis, [0023]).

Regarding Claim 2: Canceled.
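[Editor's note] To make the disputed claim-1 limitation concrete, here is a minimal, hypothetical Python sketch of the server-side split-rendering flow the claim recites: receive pose information, pre-render content, generate description information including buffer information for the streaming buffers, then transmit both. Every name and structure (SplitRenderingServer, BufferInfo, the transport interface, field names) is an illustrative assumption, not drawn from the application or the cited references.

```python
# Hypothetical sketch of the server-side flow recited in claim 1.
# All names are invented for illustration; none come from the cited art.
from dataclasses import dataclass, field


@dataclass
class PoseInfo:
    """Pose reported by the UE (position + orientation quaternion)."""
    position: tuple[float, float, float]
    orientation: tuple[float, float, float, float]
    timestamp_ms: int


@dataclass
class BufferInfo:
    """Describes one buffer by which the server will stream content."""
    buffer_id: int
    media_type: str          # e.g. "video/2d-eye-view" or "audio"
    codec: str               # e.g. "h264"
    max_frames: int          # buffer depth available to the UE


@dataclass
class DescriptionInfo:
    """Tells the UE how to render the pre-rendered content."""
    view_configuration: str                      # e.g. "stereo"
    buffers: list[BufferInfo] = field(default_factory=list)


class SplitRenderingServer:
    def __init__(self, transport):
        self.transport = transport               # abstract send() interface

    def handle_pose(self, pose: PoseInfo) -> None:
        # 1. Pre-render content for the UE based on its reported pose.
        frame = self.pre_render(pose)
        # 2. Generate description information, including buffer
        #    information for the buffers used to stream the content.
        desc = DescriptionInfo(
            view_configuration="stereo",
            buffers=[BufferInfo(0, "video/2d-eye-view", "h264", 4)],
        )
        # 3. Transmit the description information, then the content.
        self.transport.send("description", desc)
        self.transport.send("media", frame, buffer_id=0)

    def pre_render(self, pose: PoseInfo) -> bytes:
        # Placeholder: decode 3D assets, rasterize 2D eye views for the
        # given pose, and encode them (cf. Peri [0069]).
        return b"<encoded 2D eye views>"
```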
Regarding Claim 3: The combination of Yip, Peri, and Potetsianakis discloses the method of claim 1, wherein the description information is configured to indicate view configuration information for the pre-rendered content (Yip, [0094]: "The MEC 540 transmits the compressed media packet and view selection metadata to the AR glasses 520"; Yip teaches description information indicating view configuration information, e.g., view selection metadata, for the rendered content). However, Yip does not explicitly teach the pre-rendered content. Peri teaches the pre-rendered content (Peri, [0069]: "The network AS 316 can perform pre-rendering of the media based on the latest received pose information and any original scene updates in operation 436. The pre-rendering may include decoding and rendering of immersive media and encoding the rendered media"; Peri teaches pre-rendered content that includes decoded and rendered media). Yip, Peri, and Potetsianakis are combinable; see the rationale in claim 1.

Regarding Claim 6: Canceled.

Regarding Claim 7: As to the method of claim 1, Yip does not explicitly teach wherein the description information is configured to indicate composition layer type information for the pre-rendered content. However, Peri teaches this limitation (Peri, [0069]: "The pre-rendering may include decoding and rendering of immersive media and encoding the rendered media"; [0081]: "The media client 638 can include scene description delivery functions 64"; [0079]: "The compositor 634 can represent functions for compositing layers of images at different levels of depth for presentation"; Peri teaches composition layer type information (layers of images at different levels of depth for presentation) for the rendered content). Yip and Peri are combinable; see the rationale in claim 1.

Regarding Claim 8: As to the method of claim 1, Yip does not explicitly teach wherein the description information is configured to indicate audio configuration properties for the pre-rendered content. However, Peri teaches this limitation (Peri, [0069]: "The pre-rendering may include decoding and rendering of immersive media and encoding the rendered media"; [0079]: "The rendering operations may include 2D or 3D visual/audio rendering, as well as pose correction functionalities. The rendering operations may also include audio rendering"; [0082]: "The speakers 616 can allow rendering of audio content to enhance the immersive experience"; Peri teaches indicating audio configuration properties, e.g., the speakers 616 allowing rendering of audio content to enhance the immersive experience, for the rendered content). Yip, Peri, and Potetsianakis are combinable; see the rationale in claim 1.
Regarding Claim 11: The combination of Yip, Peri, and Potetsianakis discloses the method of claim 1, wherein transmitting the description information to the UE comprises transmitting to the UE a data channel message including information that is configured to enable the UE to process the pre-rendered content (Yip, [0042]: "the 3D rendered 2D view output from the 3D media decoder and renderer 142 should be compressed before transmitted to the AR glasses through the data channel"; [0094]: "The MEC 540 transmits the compressed media packet and view selection metadata to the AR glasses 520"; [0095]: "The pose predicted view selector 524 of the AR glasses 520 processes the view selection metadata to select a pose predicted view"). However, Yip does not explicitly teach the pre-rendered content. Peri teaches the pre-rendered content (Peri, [0069]: "The network AS 316 can perform pre-rendering of the media based on the latest received pose information and any original scene updates in operation 436. The pre-rendering may include decoding and rendering of immersive media and encoding the rendered media"). Yip, Peri, and Potetsianakis are combinable; see the rationale in claim 1.

Regarding Claim 12 (Currently amended): The combination of Yip, Peri, and Potetsianakis discloses a network computing device (Yip, [0002]: "device for rendering 3D media data in a communication system"), comprising: a memory (Yip, [0029]: "computer-readable memory"); and a processing system coupled to the memory and including one or more processors (Yip, [0029]: "the instructions executed through a processor of a computer") configured to: receive pose information from a user equipment (UE); generate pre-rendered content for processing by the UE based on the pose information received from the UE; generate, based on the pre-rendered content, description information that is configured to enable the UE to perform rendering operations using the pre-rendered content and to indicate buffer information for one or more buffers by which the network computing device will stream the pre-rendered content; transmit the description information to the UE; and transmit the pre-rendered content to the UE. Claim 12 is substantially similar to claim 1 and is rejected based on similar analysis.

Regarding Claim 13: Canceled.

Regarding Claim 14: The combination of Yip and Peri discloses the network computing device of claim 12, wherein the one or more processors are configured such that the description information is configured to indicate view configuration information for the pre-rendered content. Claim 14 is substantially similar to claim 3 and is rejected based on similar analysis.

Regarding Claim 17: Canceled.

Regarding Claim 18: The combination of Yip and Peri discloses the network computing device of claim 12, wherein the one or more processors are configured such that the description information is configured to indicate composition layer type information for the pre-rendered content. Claim 18 is substantially similar to claim 7 and is rejected based on similar analysis.

Regarding Claim 19: The combination of Yip and Peri discloses the network computing device of claim 12, wherein the one or more processors are configured such that the description information is configured to indicate audio configuration properties for the pre-rendered content. Claim 19 is substantially similar to claim 8 and is rejected based on similar analysis.
Regarding Claim 22: The combination of Yip and Peri discloses the network computing device of claim 12, wherein the one or more processors are further configured to include, in the description information transmitted to the UE in a data channel message, information that is configured to enable the UE to process the pre-rendered content. Claim 22 is substantially similar to claim 11 and is rejected based on similar analysis.

Regarding Claim 23 (Currently amended): The combination of Yip, Peri, and Potetsianakis discloses a method performed by a processor of a user equipment (UE) (Yip, [0008]: "a method for performing rendering by a first device receiving 3D media data from a media server in a communication system, 2D rendering is to be performed by the AR glasses"; [0036]: "FIG. 1 the device 120 may be a user equipment (UE)"; Yip teaches a method performed by a processor of a UE), comprising:

sending pose information to a network computing device (Yip, [0036]: "FIG. 1, the device 120 may be a user equipment (UE), such as AR glasses, and the device 140 may be a cloud network-based MEC"; Fig. 5, [0090]: "The AR glasses 520 transmits, to the MEC 540, at least one of the user's pose information P(t1) (pose information at time t1)"; Yip teaches sending (transmitting) pose information to a network computing device (MEC 540), Fig. 5);

receiving from a network computing device description information that is configured to enable the UE to perform rendering operations using pre-rendered content (Yip, [0092]: "the 3D renderer 544 of the MEC 540 renders a plurality of 2D view video frames based on the pose information predicted in operation 503"; [0094]: "The MEC 540 transmits the compressed media packet and view selection metadata to the AR glasses 520"; Yip teaches receiving from a network computing device (MEC 540) description information (the compressed media packet and view selection metadata) that enables the UE (AR glasses 520) to use rendered content), wherein the description information comprises a description information extension.

However, Yip does not explicitly teach the pre-rendered content; receiving pre-rendered content via buffers described in the description information extension; or sending rendered frames to an extended reality (XR) runtime for composition and display.

Peri teaches the pre-rendered content (Peri, [0069]: "The network AS 316 can perform pre-rendering of the media based on the latest received pose information and any original scene updates in operation 436. The pre-rendering may include decoding and rendering of immersive media and encoding the rendered media") and sending rendered frames to an extended reality (XR) runtime for composition and display (Peri, [0005]: "processing and displaying the pre-rendered content on the XR device"; [0038]: "The rendered frame is sent to the AR glasses and corrected using the latest pose information to compensate for the latency between the rendering and the presentation of the frame"; Fig. 3A, [0070]: "The immersive runtime 304 can perform further processing, such as composition, pose correction"; Peri teaches sending rendered frames to an XR runtime (runtime 304 of the AR glasses, an XR device) for composition and display). Yip, Peri, and Potetsianakis are combinable; see the rationale in claim 1.
Potetsianakis teaches receiving pre-rendered content via buffers described in the description information extension (Potetsianakis, [0033]: "the processing of the content data may be performed by a server application, preferably by an adaptive streaming server application"; [0056]: "An adaptation may be linked to a part of the content data of a content file or content stream using for example presentation time information, e.g. presentation time stamps, associated with the content…remote rendering of the 3D objects at a server system (e.g. a cloud system) so that pre-rendered content data is transmitted to the rendering device for playout by the rendering device"; [0107]: "Content data for such rendering device, that may include video data and/or synthetic image data, can be linked with adaptation information that identifies (at least) a target affect parameter value and an adaptation process for adapting the content (e.g. changing colors, changing or adding a 3D model, etc.) or for adapting the processing of the content data (e.g. increase or decrease playout speed, skip playout of certain content data, etc.). A measured affect value can be used to determine if adaptation is needed or not"; and [0108]: "The render device may include a client processor 404 connected to data storage 406, e.g. a buffer, and a network interface 408 for establishing communication with the server system"; Potetsianakis teaches a buffer storing pre-rendered content data, described in (linked with) a description information extension (e.g., presentation time stamps associated with the content, changing colors, changing or adding a 3D model, increasing or decreasing playout speed, etc.), which is transmitted to the rendering device from the server via an adaptive streaming application). Yip, Peri, and Potetsianakis are combinable; see the rationale in claim 1.

Regarding Claim 24: The combination of Yip, Peri, and Potetsianakis discloses the method of claim 23, further comprising transmitting information about UE capabilities and configuration to the network computing device (Yip, [0086]: "FIG. 5 illustrates an example of a configuration for remote rendering in which rendering for 3D media data requiring a relatively high processing capability is performed by the MEC 540"; [0090]: "The AR glasses 520 transmits, to the MEC 540, at least one of the user's pose information P(t1) (pose information at time t1)"; Yip teaches the UE (the AR glasses 520) transmitting pose information to the network computing device (MEC 540), which has high processing capability and is configured to render media data). However, Yip and Potetsianakis do not explicitly teach receiving from the network computing device a scene description for a split rendering session. Peri teaches this limitation (Peri, [0035]: "Multimedia contents can include scene description. The multimedia contents can include support for split rendering between AR glasses and a cloud/edge server"; [0061], FIG. 3A: "The media client 310 can establish a transport session for receiving the entry point or scene description in operation 332"; Peri teaches the media client (AR glasses) receiving from the network computing device (a cloud server) a scene description for a split rendering session). Yip, Peri, and Potetsianakis are combinable; see the rationale in claim 1.
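[Editor's note] For the UE side recited in claim 23, here is the mirror-image hypothetical sketch: send pose, receive the description information (whose assumed extension lists the streaming buffers), pull pre-rendered frames from those buffers, and hand them to the XR runtime for composition and display. The transport and runtime interfaces and all names are invented for illustration.

```python
# Hypothetical sketch of the UE-side flow recited in claim 23.
# All names are invented for illustration; none come from the cited art.
from dataclasses import dataclass


@dataclass
class Frame:
    buffer_id: int
    data: bytes
    timestamp_ms: int


class SplitRenderingClient:
    """UE-side counterpart: sends pose, consumes streamed buffers."""

    def __init__(self, transport, xr_runtime):
        self.transport = transport       # abstract network interface
        self.xr_runtime = xr_runtime     # abstract XR compositor
        self.buffer_ids: list[int] = []

    def run_once(self, pose) -> None:
        # 1. Send current pose information to the network computing device.
        self.transport.send("pose", pose)
        # 2. Receive description information; its (assumed) extension
        #    lists the buffers the server will stream on.
        desc = self.transport.receive("description")
        self.buffer_ids = [b.buffer_id for b in desc.buffers]
        # 3. Receive pre-rendered content via the described buffers.
        frames = [self.transport.receive_frame(bid) for bid in self.buffer_ids]
        # 4. Hand rendered frames to the XR runtime for composition and
        #    display, allowing local pose correction (cf. Peri [0038]).
        for frame in frames:
            self.xr_runtime.submit(frame, latest_pose=pose)
```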
Regarding Claim 25: The combination of Yip, Peri, and Potetsianakis discloses the method of claim 24, further comprising receiving information for rendering one or more 3D scene images and rendering the one or more 3D scene images in response to determining to select the 3D rendering configuration (Yip, [0033]: "3D media are based on 3D representations of actual objects and scenes"; [0092]: "The 3D media decoder 543 of the MEC 540 depacketizes and decodes the 3D media data received from the media server 560, and then, the 3D renderer 544 of the MEC 540 renders a plurality of 2D view video frames based on the pose information predicted in operation 503"; Yip teaches receiving information for rendering 3D scene images (3D media) and rendering the 3D scene images with a 3D renderer of the MEC).

However, Yip does not explicitly teach determining whether to select a three-dimensional (3D) rendering configuration or a two-dimensional (2D) rendering configuration based at least in part on the received scene description, or receiving pre-rendered content via buffers described in the description information extension of the scene description in response to determining to select the 2D rendering configuration.

Peri teaches determining whether to select a 3D rendering configuration or a 2D rendering configuration based at least in part on the received scene description (Peri, [0057]: "the server 204 can use the latest pose information 206 to render immersive 3D media as 2D frames before encoding and sending the 2D rendered frames to the AR glasses 210"; [0061]: "The media client 310 can establish a transport session for receiving the scene description in operation 332"; Peri teaches selecting a 2D rendering configuration based on the received scene description). Yip, Peri, and Potetsianakis are combinable; see the rationale in claim 1.

Potetsianakis teaches receiving pre-rendered content via buffers described in the description information extension of the scene description in response to determining to select the 2D rendering configuration (Potetsianakis, [0033]: "the processing of the content data may be performed by a server application, preferably by an adaptive streaming server application"; [0056]: "remote rendering of the 3D objects at a server system (e.g. a cloud system) so that pre-rendered content data is transmitted to the rendering device for playout by the rendering device"; [0108]: "The render device may include a client processor 404 connected to data storage 406, e.g. a buffer, for establishing communication with the server system"; [0116]: "the XR rendering which may be configured to render spatially aligned 3D and 2D assets over an external environment"; Potetsianakis teaches a buffer storing pre-rendered content data that is transmitted to the rendering device from the server via an adaptive streaming application when the 2D rendering configuration is selected). Yip, Peri, and Potetsianakis are combinable; see the rationale in claim 1.
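[Editor's note] The 2D-vs-3D decision in claims 25 and 28 can be pictured with a small hypothetical sketch: the UE inspects the received scene description and either renders the 3D scene locally or falls back to server-streamed pre-rendered 2D frames. The scene-description fields and thresholds are invented for illustration, not taken from the application or the references.

```python
# Hypothetical decision logic for claims 25/28; all fields are invented.
def select_rendering_configuration(scene_description: dict,
                                   local_gpu_budget: float) -> str:
    """Pick '3d' (render locally) or '2d' (consume pre-rendered frames).

    scene_description: parsed scene description from the server, assumed
    to carry an estimated rendering cost and, optionally, a description
    information extension listing the streaming buffers.
    """
    estimated_cost = scene_description.get("estimated_render_cost", 1.0)
    has_prerendered = "description_info_extension" in scene_description
    if estimated_cost <= local_gpu_budget:
        return "3d"                  # UE renders the 3D scene itself
    if has_prerendered:
        return "2d"                  # use server buffers of 2D frames
    return "3d"                      # no pre-rendered path available


# Example: a heavy scene with a pre-rendered fallback selects "2d".
scene = {"estimated_render_cost": 2.5,
         "description_info_extension": {"buffers": [0, 1]}}
assert select_rendering_configuration(scene, local_gpu_budget=1.0) == "2d"
```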
Regarding Claim 26 (Currently amended): The combination of Yip, Peri, and Potetsianakis discloses a user equipment (UE) (Yip, [0036]: "a user equipment (UE), such as a smartphone, or AR glasses"), comprising: a memory (Yip, [0029]: "computer-readable memory"); a transceiver (Yip, [0186]: "The transceiver 1110 may transmit/receive XR/AR data to/from, e.g., a media server"); and a processing system coupled to the memory and the transceiver and including one or more processors (Yip, [0011]: "a communication system comprises a transceiver and a processor configured to transmit") configured to: send pose information to a network computing device; receive from the network computing device description information that is configured to enable the UE to perform rendering operations using pre-rendered content; receive pre-rendered content via buffers described in the description information extension; and send rendered frames to an extended reality (XR) runtime for composition and display. Claim 26 is substantially similar to claim 23 and is rejected based on similar analysis.

Regarding Claim 27: The combination of Yip, Peri, and Potetsianakis discloses the UE of claim 26, wherein the one or more processors are further configured to: transmit information about UE capabilities and configuration to the network computing device; and receive from the network computing device a scene description for a split rendering session. Claim 27 is substantially similar to claim 24 and is rejected based on similar analysis.

Regarding Claim 28: The combination of Yip, Peri, and Potetsianakis discloses the UE of claim 27, wherein the one or more processors are further configured to: determine whether to select a 3D rendering configuration or a 2D rendering configuration based at least in part on the received scene description; receive pre-rendered content via buffers described in the description information extension of the scene description in response to determining to select the 2D rendering configuration; and receive information for rendering 3D scene images and render the one or more 3D scene images in response to determining to select the 3D rendering configuration. Claim 28 is substantially similar to claim 25 and is rejected based on similar analysis.

Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Yip et al. (U.S. 2023/0316583 A1) in view of Peri et al. (U.S. 2023/0215075 A1), further in view of Potetsianakis et al. (U.S. 2023/0205313 A1), and further in view of Nourai et al. (U.S. 2022/0319094 A1).

Regarding Claim 4: As to the method of claim 1, the combination of Yip, Peri, and Potetsianakis does not explicitly teach wherein the description information is configured to indicate an array of layer view objects. However, Nourai teaches description information configured to indicate an array of layer view objects (Nourai, [0013]: "the server receiving the current camera pose (e.g., current view matrix) from the client device, the visible geometry primitives placed on the atlas may be scaled down in size to create a layer of buffer between the visible geometry primitives included in the atlas. The shading phase involves rendering the texture of all of the visible geometry primitives on the atlas"; Fig. 3A, [0027]: "triangles that are placed within an atlas are reduced in size (e.g., scaled down) to create a layer of buffer between each of the adjacent triangles in the atlas…FIG. 3B, a sub-block 370 with the same triangles that have scaled down to create a layer of buffer around the triangles"; [0035], FIG. 4: "At step 406, sending, to the client device, the texture atlas being configured for rendering images of the visible geometric primitives from different viewpoints"; Nourai teaches description information indicating an array of layer view objects, e.g., Fig. 3B creating an array of layer (sub-block 370) view objects (the visible triangles on the atlas) that is rendered and sent to the client).

Yip, Peri, Potetsianakis, and Nourai are combinable because they are from the same field of endeavor (systems and methods for image processing) and address similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Yip to indicate an array of layer view objects (as taught by Nourai), because Nourai provides description information indicating an array of layer view objects (Nourai, [0013], Figs. 3A-3B, [0027]). Doing so provides the corresponding mapping information to allow the client device to identify the texture data of the corresponding triangle in the texture atlas (Nourai, [0030]).

Regarding Claim 15: The combination of Yip, Peri, Potetsianakis, and Nourai discloses the network computing device of claim 12, wherein the one or more processors are configured such that the description information is configured to indicate an array of layer view objects. Claim 15 is substantially similar to claim 4 and is rejected based on similar analysis.

Claims 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yip et al. (U.S. 2023/0316583 A1) in view of Peri et al. (U.S. 2023/0215075 A1), further in view of Potetsianakis et al. (U.S. 2023/0205313 A1), and further in view of Mooney et al. (U.S. 2024/0164636 A1).

Regarding Claim 5: As to the method of claim 1, the combination of Yip, Peri, and Potetsianakis does not explicitly teach wherein the description information is configured to indicate eye visibility information for the pre-rendered content. However, Mooney teaches description information configured to indicate eye visibility information for the pre-rendered content (Mooney, [0003]: "Visual stimuli are signals presented on a display monitor, such as a pre-rendered video, a pre-rendered video can be considered a single visual stimulus, as can a single character... Visibility can also be defined in a relative sense as the ease with which an observer can consciously attend to it"; [0096], FIG. 2C: "Eye movements that match the speed and direction of each stimulus with high adherence may provide evidence of stimulus visibility"; Mooney teaches description information, e.g., eye movements that match the speed and direction of each stimulus with high adherence, indicating eye visibility evidence for the pre-rendered video content). Yip, Peri, Potetsianakis, and Mooney are combinable because they are from the same field of endeavor (systems and methods for image processing) and address similar problems.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Yip to include eye visibility information (as taught by Mooney) in order to indicate eye visibility information for the pre-rendered content, because Mooney provides description information, e.g., eye movements that match the speed and direction of each stimulus with high adherence, indicating eye visibility evidence for the pre-rendered video content (Mooney, [0003], [0096]). Doing so provides evidence about the visibility of each stimulus from the observer's eye movements and advances that stimulus by changing its spatial frequency and/or contrast as soon as visibility is inferred (Mooney, [0033]).

Regarding Claim 16: The combination of Yip, Peri, Potetsianakis, and Mooney discloses the network computing device of claim 12, wherein the one or more processors are configured such that the description information is configured to indicate eye visibility information for the pre-rendered content. Claim 16 is substantially similar to claim 5 and is rejected based on similar analysis.

Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yip et al. (U.S. 2023/0316583 A1) in view of Peri et al. (U.S. 2023/0215075 A1), further in view of Potetsianakis et al. (U.S. 2023/0205313 A1), and further in view of Lee et al. (U.S. 2019/0384383 A1).

Regarding Claim 9: As to the method of claim 1, the combination of Yip, Peri, and Potetsianakis does not explicitly teach receiving from the UE an uplink data description that is configured to indicate information about the content to be pre-rendered for processing by the UE, wherein generating the pre-rendered content for processing by the UE based on pose information received from the UE comprises generating the pre-rendered content based on the uplink data description.

However, Lee teaches receiving from the UE an uplink data description that is configured to indicate information about the content to be pre-rendered for processing by the UE (Lee, [0062]: "uplink (UL) refers to communication from the UE to the BS"; Fig. 5, [0170]: "the UE may transmit information about UL data to be transmitted to the BS, and the BS may allocate UL resources to the UE based on the information. The information about the UL data to be transmitted is referred to as a buffer status report (BSR), and the BSR is related to the amount of UL data stored in a buffer of the UE"; [0341]: "The XR device 1230 may acquire information about a surrounding space or a real object and may render an XR object to be output"; Lee teaches receiving from the UE an uplink data description (e.g., a UL grant or buffer status report (BSR)) indicating the content to be rendered for processing by the UE (a rendered XR object)), and generating the pre-rendered content based on the uplink data description (Lee, [0062]: "uplink (UL) refers to communication from the UE to the BS"; [0341]: "The XR device 1230 may acquire information about a surrounding space or a real object by analyzing 3D point cloud data or image data and thus generating position data and attribute data for the 3D points, and may render an XR object to be output"; Lee teaches generating the pre-rendered content for processing by the UE based on pose information (position data and attribute data for the 3D points) received from the UE, where the pre-rendered content (a rendered XR object) is generated based on the uplink data description (e.g., a UL grant or buffer status report (BSR))).

Yip, Peri, Potetsianakis, and Lee are combinable because they are from the same field of endeavor (systems and methods for image processing) and address similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Yip to include an uplink transmission (as taught by Lee) in order to receive from the UE an uplink data description indicating information about the content to be pre-rendered for processing by the UE, because Lee provides receiving from the UE an uplink data description (e.g., a UL grant or buffer status report (BSR)) indicating the content to be rendered for processing by the UE (Lee, [0062], [0170]). Doing so may provide the user with many more XR environments using user movement estimation information, thereby improving user experience (Lee, [0004]).

Regarding Claim 20: The combination of Yip, Peri, Potetsianakis, and Lee discloses the network computing device of claim 12, wherein the one or more processors are further configured to: receive from the UE an uplink data description that is configured to indicate information about the content to be pre-rendered for processing by the UE; and generate the pre-rendered content for processing by the UE based on the uplink data description. Claim 20 is substantially similar to claim 9 and is rejected based on similar analysis.

Claims 10 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Yip et al. (U.S. 2023/0316583 A1) in view of Peri et al. (U.S. 2023/0215075 A1), further in view of Potetsianakis et al. (U.S. 2023/0205313 A1), and further in view of Ahsan et al. (U.S. 2023/0214009 A1).

Regarding Claim 10: As to the method of claim 1, the combination of Yip, Peri, and Potetsianakis does not explicitly teach wherein transmitting the description information to the UE comprises transmitting to the UE a packet header extension including information that is configured to enable the UE to process the pre-rendered content. However, Ahsan teaches this limitation (Ahsan, [0028]: "The stream may be processed for different functions e.g. pre-rendering, activation of specific commands/requests etc."; [0049]: "A device that is streaming pose and/or an interactivity stream as an RTP stream may receive the validity range updates from the server"; [0061]: "When RTP is used, the stop signal may be part of the payload in the RTP stream or the stop signal can be an RTP header extension"; [0159]: "UE user equipment (e.g., typically mobile device)"; Ahsan teaches transmitting to the UE a packet header extension (an RTP header extension, e.g., carrying a stop signal) that enables the UE to process the pre-rendered content in the stream).

Yip, Peri, Potetsianakis, and Ahsan are combinable because they are from the same field of endeavor (systems and methods for image processing) and address similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Yip to include a packet header extension (as taught by Ahsan) in order to transmit to the UE a packet header extension enabling the UE to process the content, because Ahsan provides transmitting to the UE a packet header extension (an RTP header extension, e.g., carrying a stop signal) to enable the UE to process the pre-rendered content (Ahsan, [0028], [0049], [0061]). Doing so allows pose streams to be paused due to, e.g., maximum players, server load, or a device being outside the validity range (Ahsan, [0062]).

Regarding Claim 21: The combination of Yip, Peri, Potetsianakis, and Ahsan discloses the network computing device of claim 12, wherein the one or more processors are further configured to transmit to the UE the description information and the pre-rendered content as a packet header extension including information that is configured to enable the UE to process the pre-rendered content. Claim 21 is substantially similar to claim 10 and is rejected based on similar analysis.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU, whose telephone number is (571) 272-5994. The examiner can normally be reached 8:00-4:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/KEE M TUNG/ Supervisory Patent Examiner, Art Unit 2611
/KHOA VU/ Examiner, Art Unit 2611
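[Editor's note] The claims 10/21 rejection leans on Ahsan's RTP header extension. A minimal Python sketch follows, assuming such signaling would use a standard RFC 8285 one-byte-header RTP extension; the extension ID and the one-byte "processing hint" payload are invented for illustration and are not from Ahsan or the application.

```python
# Sketch: packing RFC 8285 one-byte-header RTP extension elements.
import struct


def build_one_byte_header_extension(elements: dict[int, bytes]) -> bytes:
    """Pack RFC 8285 one-byte-header RTP extension elements.

    elements: maps extension ID (1-14) to 1-16 bytes of data. A real
    sender would also set the X bit in the RTP fixed header.
    """
    body = b""
    for ext_id, data in elements.items():
        assert 1 <= ext_id <= 14 and 1 <= len(data) <= 16
        # Each element: 4-bit ID, 4-bit (length - 1), then the data.
        body += bytes([(ext_id << 4) | (len(data) - 1)]) + data
    # Pad the element list to a 32-bit boundary with zero bytes.
    if len(body) % 4:
        body += b"\x00" * (4 - len(body) % 4)
    # 0xBEDE marks the one-byte-header form; length is in 32-bit words.
    return struct.pack("!HH", 0xBEDE, len(body) // 4) + body


# Invented example: ID 5 carries a one-byte hint telling the UE how to
# process the pre-rendered stream (cf. Ahsan's stop signal, [0061]).
ext = build_one_byte_header_extension({5: b"\x01"})
assert ext[:2] == b"\xbe\xde"
```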

Prosecution Timeline

Nov 09, 2023
Application Filed
Aug 18, 2025
Non-Final Rejection — §103
Nov 21, 2025
Response Filed
Dec 11, 2025
Final Rejection — §103
Feb 16, 2026
Response after Non-Final Action
Mar 12, 2026
Request for Continued Examination
Mar 13, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598266
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant · Granted Apr 07, 2026
Patent 12597087
HIGH-PERFORMANCE AND LOW-LATENCY IMPLEMENTATION OF A WAVELET-BASED IMAGE COMPRESSION SCHEME
2y 5m to grant · Granted Apr 07, 2026
Patent 12578941
TECHNIQUE FOR INTER-PROCEDURAL MEMORY ADDRESS SPACE OPTIMIZATION IN GPU COMPUTING COMPILER
2y 5m to grant · Granted Mar 17, 2026
Patent 12567181
SYSTEMS AND METHODS FOR REAL-TIME PROCESSING OF MEDICAL IMAGING DATA UTILIZING AN EXTERNAL PROCESSING DEVICE
2y 5m to grant · Granted Mar 03, 2026
Patent 12548431
CONTEXTUALIZED AUGMENTED REALITY DISPLAY SYSTEM
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
68%
Grant Probability
84%
With Interview (+15.8%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 345 resolved cases by this examiner. Grant probability derived from career allow rate.
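The with-interview figure appears to combine the base grant probability with the interview lift additively in percentage points (an assumption, but consistent with the numbers shown):
68% + 15.8% = 83.8% ≈ 84%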
