Prosecution Insights
Last updated: April 19, 2026
Application No. 18/280,432

Distributed Content Rendering

Status: Final Rejection (§103)
Filed: Sep 05, 2023
Examiner: PUNTIER, CHRIS ALEJANDRO
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 3 (Final)
Grant Probability: 94% (Favorable)
OA Rounds: 4-5
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 94% (29 granted / 31 resolved), +31.5% vs TC avg; above average
Interview Lift: +10.0% (moderate), based on resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 12 applications currently pending
Career History: 43 total applications across all art units
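The tool does not publish its model, but the headline figures follow from simple counts over the examiner's resolved cases. A minimal sketch, assuming a hypothetical Case record and a simple additive interview-lift model capped at 99% (both are assumptions, not documented behavior):

```python
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool        # resolved as granted (vs. abandoned/rejected)
    had_interview: bool  # at least one examiner interview on record

def examiner_stats(cases: list[Case], cap: float = 0.99) -> dict[str, float]:
    """Career allow rate, interview lift, and a capped with-interview estimate.

    Assumes both the interviewed and non-interviewed groups are non-empty.
    """
    allow_rate = sum(c.granted for c in cases) / len(cases)

    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    # Lift = grant rate among interviewed cases minus the rate without.
    lift = (sum(c.granted for c in with_iv) / len(with_iv)
            - sum(c.granted for c in without_iv) / len(without_iv))

    return {
        "allow_rate": allow_rate,
        "interview_lift": lift,
        "with_interview": min(allow_rate + lift, cap),  # assumed additive model
    }
```

With 29 grants across 31 resolved cases, allow_rate works out to about 93.5%, which the page rounds to 94%; adding the +10.0% lift and capping reproduces the 99% "With Interview" figure.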

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      6.6%     -33.4%
§103      70.9%    +30.9%
§102      15.4%    -24.6%
§112      6.6%     -33.4%

Tech Center average is an estimate; the published deltas imply a flat baseline of roughly 40% per statute. Based on career data from 31 resolved cases.
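The per-statute deltas are straightforward to reproduce. A quick check (the flat 40% Tech Center baseline is inferred from the published deltas, not documented by the tool):

```python
# Shares from the table above; each published delta is consistent with a
# flat Tech Center baseline estimate of 40% per statute (e.g. 70.9% - 40%
# = +30.9%).
examiner_share = {"101": 0.066, "102": 0.154, "103": 0.709, "112": 0.066}
TC_BASELINE = 0.40  # estimate implied by the published deltas

deltas = {s: share - TC_BASELINE for s, share in examiner_share.items()}
for statute, d in sorted(deltas.items()):
    print(f"§{statute}: {d:+.1%} vs TC avg")
# §101: -33.4%, §102: -24.6%, §103: +30.9%, §112: -33.4%
```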

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Claims 39 and 46 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments, filed 1/30/2026, with respect to the rejection of claims 28, 38, 40, 43, 45, and 47 under 35 U.S.C. 103 have been fully considered and are persuasive. The rejection has therefore been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Carion (Pub. No. US 2009/0037724 A1).

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 28, 35, 38, 40, 43, 45, and 47-52 are rejected under 35 U.S.C. 103 as being unpatentable over Sylvan et al. (Pub. No. US 2011/0210982 A1) in view of Carion et al. (Pub. No. US 2009/0037724 A1) and Murakami et al. (Pub. No. US 2015/0301586 A1).

As per claim 28, Sylvan teaches the claimed "A method comprising: at a first device including a display, one or more processors, and a memory" (the first device corresponds to the collection of the user's equipment used to play the game; figure 3B shows that the first device includes a display 242, a processor 259, and a memory 222);

"determining a pose of a virtual object in a volumetric environment" (Sylvan in [0006]: "A further embodiment of the present technology relates to a system for rendering three-dimensional objects on a display ... The system includes a capture device for capturing image data relating to an object within a field of view of the capture device, and a computing environment. The computing environment includes a first processor for generating: i) pose information for rendering objects in a given position"; and in [0020]: "... These objects may be representations of real world objects or they may be virtual objects existing solely in the game or application space." Here, pose information for a virtual 3D object corresponds to determining a pose in a volumetric (3D) environment);

"generating a request for content rendering instructions based on the pose of the virtual object" (Sylvan in [0020]: "In embodiments, a first processor (also referred to herein as a central processor or CPU) generates pose information and render commands for rendering objects on a display. These objects may be representations of real world objects or they may be virtual objects existing solely in the game or application space." The "first processor" or "CPU" is part of the "first device"; see, e.g., figure 3A, element 101. The GPU or "second processor" corresponds to the claimed "second device").

Sylvan alone does not explicitly teach the remaining claim limitations.
However, Sylvan in combination with Carion teaches the claimed "wirelessly sending, to a second device, separate from the first device, the request for the content rendering instructions" (Carion in [0141]: "Referring now to FIGS. 2A and 2B, an exemplary communication sequence 200A and 200B between a wireless device 210 and a remote server 230 in accordance with one embodiment of the present invention is shown. At step 212 the client 210 sends a message to the server 230 identifying the wireless device type and its capabilities along with a request to access an application on the server." Here Carion distinctly discloses two separate devices, the wireless device and the remote server, communicating wirelessly. This idea can be applied to Sylvan, which teaches in [0020]: "In embodiments, a first processor (also referred to herein as a central processor or CPU) generates pose information and render commands for rendering objects on a display. These objects may be representations of real world objects or they may be virtual objects existing solely in the game or application space. The pose information and render commands are passed to a second processor (also referred to as a graphics processor or GPU).");

"wirelessly receiving, from the second device, the content rendering instructions" (Carion in [0178]: "The wireless device 300 further includes a transceiver 330 for facilitating wireless communication with a remote server. The transceiver 330 may receive a series of basic commands from a remote server that may be used to render application and/or content on the display 350." This passage explicitly discloses receiving, from a second device (here, the remote server), commands used to render).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have two separate devices send and receive instructions, as taught by Carion, with the system of Sylvan. This may ensure that only relevant content is sent, avoiding transmission of unsupported or unnecessary data; it also allows for lower device requirements and easier maintenance and updates.

Sylvan and Carion do not explicitly teach the remaining claim limitations. However, Sylvan in combination with Murakami teaches the claimed "and displaying, based on the content rendering instructions, a content rendering of the virtual object" (Murakami in figure 5, where the output of the "Drawing End Determination Unit" is fed into "Display Driver 23b" for displaying the content rendering; see also Murakami in [0060]. When the display of Murakami is used with Sylvan, the display also includes a content rendering of the virtual object from Sylvan).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to receive the content rendering instructions and display based on those instructions, as taught by Murakami, with the system of Sylvan. This may help the display driver better understand when each GPU frame has been completed and how long each frame took to render, and may also help with comparing power consumption for various rendering commands (Murakami, toward the end of [0004]).
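For orientation, the claim 28 method being mapped reduces to a pose-driven request/response split between a display device and a remote renderer. A minimal sketch of that flow follows; every type, name, and message shape here is hypothetical, invented for illustration rather than drawn from Sylvan, Carion, or Murakami:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple[float, float, float]
    orientation: tuple[float, float, float, float]  # quaternion (w, x, y, z)

@dataclass
class RenderRequest:
    pose: Pose  # request generated from the determined pose of the virtual object

@dataclass
class RenderInstructions:
    commands: list[str]  # e.g. graphics formatting commands (cf. claim 52)

class RemoteRenderer:
    """Stands in for the second device (Carion's remote server / Sylvan's GPU side)."""
    def handle(self, req: RenderRequest) -> RenderInstructions:
        # Build pose-dependent drawing commands; a real renderer would compute
        # geometry, transforms, and assets here.
        x, y, z = req.pose.position
        return RenderInstructions(commands=[f"draw_object at=({x},{y},{z})"])

def display_frame(renderer: RemoteRenderer, pose: Pose) -> None:
    """First device: send a pose-based request, receive instructions, display."""
    request = RenderRequest(pose=pose)
    instructions = renderer.handle(request)  # stands in for the wireless round trip
    for cmd in instructions.commands:        # "display based on the instructions"
        print("render:", cmd)

display_frame(RemoteRenderer(), Pose((0.0, 1.0, -2.0), (1.0, 0.0, 0.0, 0.0)))
```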
As per claim 35, Sylvan teaches the claimed "The method of claim 28, wherein the request for the content rendering instructions indicates a size and/or shape of the content rendering" (see Sylvan in [0033]: "The skeletal model may then be provided to the computing environment 12 such that the computing environment may track the skeletal model and render an avatar associated with the skeletal model. The computing environment may further determine which controls to perform in an application executing on the computer environment based on, for example, gestures of the user that have been recognized from the skeletal model"; and Sylvan in claim 9: "The method of claim 1, wherein said step b) comprises the step of rendering an avatar mimicking the movements of a user." Here, in order for the avatar to be rendered to match the tracked skeletal model, the content rendering instructions would have to indicate the size and/or shape of various portions of the avatar's body; the user moves their own body and the virtual 3D avatar is updated to match these movements).

Claim 43 is similar in scope to claim 35 and is thus rejected under the same rationale. Claim 45 is similar in scope to claim 38 and is thus rejected under the same rationale.

As per claim 38, Sylvan teaches the claimed "The method of claim 28, further comprising: detecting a user input" (figure 1 shows that the 3D avatar object moves in response to a trigger, e.g., user 18 performing hand gestures; see also [0022]: "... The system 10 further includes a capture device 20 for detecting movements and gestures of a user captured by the device 20, which the computing environment receives and uses to control the gaming or other application").

Sylvan alone does not explicitly teach the remaining claim limitations. However, Sylvan in combination with Carion and Murakami teaches the claimed "wirelessly sending, to the second device, data indicative of the user input; and wirelessly receiving, from the second device, updated content rendering instructions based on the data indicative of the user input" (Sylvan teaches altering the movements of the 3D avatar object in response to the user input; see Sylvan in [0033], quoted above. Where Murakami's second device (GPU) performs the rendering, the rendering command sent to the GPU includes data indicative of the user input when Sylvan is combined with Murakami. The updated content rendering instructions are the updated renderings of the 3D avatar object moving in response to user inputs. As discussed in the rejection of claim 28, Carion discloses wirelessly communicating rendering instructions between two separate devices, which can be used in this system).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Carion and Murakami with the system of Sylvan. The motivation given for claim 28 is incorporated herein.

As per claim 47, claim 47 recites claim language similar to claim 28.
However, claim 47 additionally recites "A non-transitory memory storing one or more programs, which, when executed by one or more processors of a first device including a display, cause the first device to". Sylvan also teaches this (para. [0030]: "The capture device 20 may further include a memory component 34 that may store the instructions that may be executed by the processor 32, images or frames of images captured by the 3-D camera or RGB camera, or any other suitable information, images, or the like.").

Claim 40 is similar in scope to claims 28 and 47 and is thus rejected under the same rationale.

As per claim 49, Sylvan teaches "The method of claim 28, further comprising generating the content rendering based on the content rendering instructions" (para. [0056]: "In general, the GPU receives information from the CPU that allows the GPU to generate three-dimensional (3-D) objects on the display, including avatar 19, and objects 21 through 27 of FIG. 1. These 3-D objects are refreshed and updated for example once a frame. In embodiments, there may be 20-60 frames per second, though there may be more or less frames per second in further embodiments." Sylvan teaches the GPU generating content based on instructions received from the CPU).

As per claim 50, Sylvan alone does not teach "The method of claim 49, wherein generating the content rendering includes transforming an image based on the pose." Sylvan in combination with Carion and Murakami does disclose this limitation (Sylvan in [0060]: "In step 402, the CPU receives raw pose data from the capture device 20 regarding the current position of user 18. In step 404, the CPU takes this raw data and processes this data into pose information defining the position of the user. ... The pose information thread generates pose information that is used by the GPU to render the position of the user avatar 19 and other objects as explained below." Here Sylvan discloses the use of pose information taken from an image. Carion discloses in [0182]: "For example, basic command 410 may include descriptions for rendering an image by specifying the Cartesian coordinates 412 and 414 of a screen region. Moreover, basic command 410 may further include the width 416 and the height 418 of the screen region to include image." Carion thus teaches rendering instructions that transform images). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Carion and Murakami with the system of Sylvan. The motivation given for claim 28 is incorporated herein.

As per claim 51, Sylvan does not teach "The method of claim 50, wherein the content rendering instructions include the image." Sylvan in combination with Carion and Murakami does disclose this limitation (Carion in [0019]:
"Each command is typically associated with an operation to be performed by a rendering block of the wireless device and carries parameters, content, etc., for operation of that rendering block"; and further in [0165]: "In one example, a page description contains basic commands that may include a description of the scrolling area (e.g., starting and ending vertical positions), the horizontal and vertical coordinates, the width, the height, the type of component to be displayed (e.g., text, image, video, audio and the like), the unique identification of the rendering block to be used to render the component, related parameters for the rendering block and for display components (e.g., version number of the image) and the like." Here Carion describes the content rendering instructions as basic commands and states that those commands can include image content). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Carion and Murakami with the system of Sylvan. The motivation given for claim 28 is incorporated herein.

As per claim 52, Sylvan teaches "The method of claim 50, wherein the content rendering instructions include graphic commands and generating the content includes generating the image based on the graphic commands" ([0067]: "The application running on the computing environment 12 may include a rendering thread for adding graphics formatting commands to the data stream generated by the pose information thread in step 408. ... The rendering thread includes these graphics formatting commands for both non-LLR objects and LLR objects. The data and command stream created by the pose information thread and rendering thread is then passed to the GPU in step 410." This discloses graphics commands being generated and passed to the GPU for rendering).
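Carion-style "basic commands" amount to a small command stream interpreted by rendering blocks. A toy sketch in that spirit, where the command vocabulary and shapes are invented purely for illustration:

```python
import numpy as np

def execute(commands: list[dict], size=(120, 160)) -> np.ndarray:
    """Rasterize a hypothetical command stream into an RGB framebuffer."""
    frame = np.zeros((*size, 3), dtype=np.uint8)
    for cmd in commands:
        if cmd["op"] == "fill_rect":
            x, y, w, h = cmd["x"], cmd["y"], cmd["w"], cmd["h"]
            frame[y:y + h, x:x + w] = cmd["color"]
        elif cmd["op"] == "blit":  # instructions may carry image content (cf. claim 51)
            img, x, y = cmd["image"], cmd["x"], cmd["y"]
            frame[y:y + img.shape[0], x:x + img.shape[1]] = img
    return frame

frame = execute([
    {"op": "fill_rect", "x": 10, "y": 10, "w": 50, "h": 20, "color": (255, 0, 0)},
    {"op": "blit", "x": 70, "y": 40, "image": np.full((8, 8, 3), 200, np.uint8)},
])
```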
Claims 31 and 41 are rejected under 35 U.S.C. 103 as being unpatentable over Sylvan in view of Carion and Murakami, and further in view of Szilagyi et al. (US 2021/0225075 A1).

As per claim 31, Sylvan alone does not explicitly teach the remaining claim limitations. However, Sylvan, Carion, and Murakami in combination with Szilagyi teach the claimed "The method of claim 28, wherein the virtual object includes a flat two-dimensional surface, wherein displaying the content rendering on the virtual object includes displaying the content rendering on the flat two-dimensional surface" (see Szilagyi in figures 3A and 3B and in [0116]: "Indeed, FIG. 7B shows a section 710 of the head of avatar 700, which clearly shows that pairs of triangles, such as triangles 712, can be joined to form a rectangular quadrilateral. However, as shown in FIG. 7C, the clothing 720 of avatar 700 does not exhibit a pattern that warrants representing the mesh as a set of quadrilaterals. In this case, visualization data optimization process 248 may detect that the avatar body is built out of quadrilaterals (quads) and will optimize the corresponding mesh with an algorithm specific to quads." The claimed "displaying the content rendering on the virtual object includes displaying the content rendering on the flat two-dimensional surface" corresponds to texture mapping onto the 2D surfaces making up the primitives of the mesh in [0082] of Szilagyi).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the flat surfaces of rectangular quadrilaterals, as taught by Szilagyi, with the system of Sylvan as modified by Carion and Murakami, because it is common to use a set of quads to represent a 3D structure in 3D space, e.g., by using a mesh to represent the outer surface of a character object such as an avatar.

Claim 41 is similar in scope to claim 31 and is thus rejected under the same rationale.

Claims 33 and 42 are rejected under 35 U.S.C. 103 as being unpatentable over Sylvan in view of Carion and Murakami, and further in view of Holzer (US 2019/0080499 A1).

As per claim 33, Sylvan alone does not explicitly teach the remaining claim limitations. However, Sylvan, Carion, and Murakami in combination with Holzer teach the claimed "The method of claim 28, wherein the request for the content rendering instructions indicates a perspective transform" (para. [0034]: "In the present example, the transformation (T_AB) is estimated between the two frames, where T_AB maps a pixel from frame A to frame B. This transformation is performed using methods such as homography, affine, similarity, translation, rotation, or scale." This passage discloses a homography transformation, which is a type of perspective transformation matching the claim limitation). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Holzer into the combination of Sylvan, Carion, and Murakami in order to render content with enhanced depth perception in the user's environment.

Claim 42 is similar in scope to claim 33 and is thus rejected under the same rationale.

Claim 34 is rejected under 35 U.S.C. 103 as being unpatentable over Sylvan in view of Carion and Murakami, and further in view of Holzer and Stearns (US 5475803 B1).

As per claim 34, Sylvan alone does not explicitly teach the remaining claim limitations. However, Sylvan, Carion, and Murakami in combination with Stearns teach the claimed "wherein the request for the content rendering instructions indicates an affine transform" (col. 7, lines 49-61: "According to the invention, affine image transformations are performed in a interleaved fashion, whereby coordinate transformations and intensity calculations are alternately performed incrementally on small portions of an image. An input image in an input image space and input coordinate system comprising an array of unit input pixels is to be transformed into an output image in an output image space and output coordinate system comprising an array of unit output pixels. An order of processing of input pixels is chosen such that input pixels are processed in vertical or horizontal rows and such that after a first pixel, each subsequent pixel is adjacent to a previously processed input pixel." This passage discloses the affine transformation instructions used to compute the output pixels). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Stearns into the combination of Sylvan, Carion, and Murakami in order to render content more accurately and efficiently, with more consistent rendering.
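The perspective-versus-affine distinction in claims 33/42 and 34 comes down to the bottom row of a 3x3 homogeneous transform: affine maps keep it at [0, 0, 1], while a homography's nonzero perspective terms make the divisor vary with position. A small numeric illustration (matrix values are arbitrary, not taken from Holzer or Stearns):

```python
import numpy as np

def apply_h(H: np.ndarray, pt: tuple[float, float]) -> tuple[float, float]:
    """Map a pixel through a 3x3 transform in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)  # perspective divide; w == 1 for affine maps

# Affine: bottom row [0, 0, 1], so parallel lines stay parallel.
affine = np.array([[1.2, 0.1, 30.0],
                   [0.0, 0.9, 10.0],
                   [0.0, 0.0, 1.0]])

# Homography: nonzero perspective terms in the bottom row, so the divisor
# w varies with position -- the "perspective transform" of claims 33/42.
homography = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [1e-3, 2e-3, 1.0]])

print(apply_h(affine, (100.0, 50.0)))      # (155.0, 55.0)
print(apply_h(homography, (100.0, 50.0)))  # divides by w = 1.2
```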
Claims 36 and 44 are rejected under 35 U.S.C. 103 as being unpatentable over Sylvan in view of Carion and Murakami, and further in view of Yamat (US 2017/0024182 A1).

As per claim 36, Sylvan alone does not explicitly teach the remaining claim limitations. However, Sylvan, Carion, and Murakami in combination with Yamat teach the claimed "The method of claim 28, wherein the request for the content rendering instructions indicates a resolution of the content rendering" (para. [0032]: "The method 300 proceeds to OPERATION 320, where the web client application 110 sends a request for application content 222. According to embodiments, the desired size input parameter may be sent to the server 104 as part of a request." This passage discloses the request containing size input parameters (resolution)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Yamat into the combination of Sylvan, Carion, and Murakami in order to have a system that can accept resolution as an input, allowing the user more control when rendering content.

Claim 44 is similar in scope to claim 36 and is thus rejected under the same rationale.

Claim 37 is rejected under 35 U.S.C. 103 as being unpatentable over Sylvan in view of Carion and Murakami, and further in view of Yu (US 2013/0314520 A1).

As per claim 37, Sylvan alone does not explicitly teach the remaining claim limitations. However, Sylvan, Carion, and Murakami in combination with Yu teach the claimed "wherein the request for the content rendering instructions indicates a landscape mode or a portrait mode" (para. [0133]: "In some embodiments of the disclosure, when the currently played image is a vertical media resource and the screen 275 is currently in the landscape mode, the controller 250 sends an instruction for controlling the rotating component to rotate on the one hand, so that the rotating component 276 is made to drive the screen 275 to rotate clockwise 90 degrees so as to adjust the state of the screen 275 to the portrait mode." This passage describes content rendered on a mobile device depending on the device's orientation, displaying content in portrait or landscape mode. The claimed combination of Sylvan, Carion, and Murakami with Yu is achieved by using a tablet device as the display in Sylvan). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Yu into the combination of Sylvan, Carion, and Murakami in order to have a system that can render content properly in multiple orientations.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRIS ALEJANDRO PUNTIER, whose telephone number is (703) 756-1893. The examiner can normally be reached M-F 7:30-5:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRIS ALEJANDRO PUNTIER/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616

Prosecution Timeline

Sep 05, 2023: Application Filed
May 30, 2025: Non-Final Rejection — §103
Aug 15, 2025: Applicant Interview (Telephonic)
Aug 15, 2025: Examiner Interview Summary
Aug 22, 2025: Response Filed
Nov 21, 2025: Non-Final Rejection — §103
Jan 28, 2026: Applicant Interview (Telephonic)
Jan 28, 2026: Examiner Interview Summary
Jan 30, 2026: Response Filed
Apr 03, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586298: Controlled Illumination for Improved 3D Model Reconstruction (2y 5m to grant; granted Mar 24, 2026)
Patent 12586291: Fast Large-Scale Radiance Field Reconstruction (2y 5m to grant; granted Mar 24, 2026)
Patent 12573103: Environment Map Upscaling for Digital Image Generation (2y 5m to grant; granted Mar 10, 2026)
Patent 12548226: Systems and Methods for a Three-Dimensional Digital Pet Representation Platform (2y 5m to grant; granted Feb 10, 2026)
Patent 12536679: Application Matching Method and Application Matching Device (2y 5m to grant; granted Jan 27, 2026)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 94%
With Interview: 99% (+10.0%)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 31 resolved cases by this examiner. Grant probability is derived from the career allow rate.
