Prosecution Insights
Last updated: April 19, 2026
Application No. 18/601,287

Scene Rendering Method, Apparatus, Device, and System

Status: Final Rejection (§103)
Filed: Mar 11, 2024
Examiner: YANG, ANDREW GUS
Art Unit: 2614
Tech Center: 2600 (Communications)
Assignee: Huawei Cloud Computing Technologies Co. Ltd.
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 77%

Examiner Intelligence

Career Allow Rate: 69% (above average; 384 granted / 558 resolved; +6.8% vs Tech Center average)
Interview Lift: +8.3% (moderate; measured across resolved cases with an interview)
Typical Timeline: 2y 10m average prosecution
Career History: 583 total applications across all art units; 25 currently pending

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Tech Center average values are estimates. Based on career data from 558 resolved cases.

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lan et al. (U.S. PGPUB 20110227938) in view of Wang et al. (U.S. PGPUB 20190318546).

With respect to claim 1, Lan et al. disclose a method comprising: obtaining, from a first device, a first request requesting to render a first scene of an application (paragraph 82, The message processor is responsible for processing commands and data sent from the client, and different commands can be processed by different message processors. For a command of changing the 3D scene, for example, creating, modifying deleting an object, and creating, moving, and making an avatar exit, etc., the message processor will update the 3D scene data based on the program logic. When the avatar is moving, a corresponding message processor will notify the streaming media manager to select a video stream to send to the client); determining, from among a plurality of rendering hosts, a first rendering host to render the first scene and to provide a first rendering result of the first scene to the first device (paragraph 82, The streaming media manager dynamically selects a suitable rendering agent; paragraph 83, Additionally, when a rendering agent group is implemented on a plurality of servers, it will be advantageous for each rendering agent to includes a 3D scene description corresponding to its location and observation angle, because when a scene changes, it is possible to send a simple message to notify the corresponding server to update the scene data); obtaining a scene switching notification of the application, wherein the scene switching notification indicates to render a second scene of the application (paragraph 82, For a command of changing the 3D scene, for example, creating, modifying deleting an object, and creating, moving, and making an avatar exit, etc., the message processor will update the 3D scene data based on the program logic); and determining, from among the rendering hosts, a second rendering host to render the second scene and to provide a second rendering result of the second scene to the first device (paragraph 82, The streaming media manager dynamically selects a suitable rendering agent and a suitable TGRN node based on the observation angle and location of the avatar, and clips the corresponding rendering result; then a streaming media format supported by the client is generated through a streaming media generator, and finally the video stream is provided to the client).

However, Lan et al. do not expressly disclose a first computing resource of the first rendering host is based on a first complexity degree of the first scene; and a second computing resource of the second rendering host is based on a second complexity degree of the second scene. Wang et al., who also deal with distributed rendering, disclose a method wherein a first computing resource of the first rendering host is based on a first complexity degree of the first scene (paragraph 35, when the scene picture collected by the front end device D1 is relatively simple, and the augmented reality picture to be superposed with the scene picture is relatively simple, the method for processing display data provided by the embodiment of the present invention may be directly executed under the computing resources of the client D2, and rendered data are displayed by a display device of the client D2); and a second computing resource of the second rendering host is based on a second complexity degree of the second scene (paragraph 36, even if the scene picture collected by the front end device D1 is relatively complicated, and/or, the augmented reality picture to be superimposed with the scene picture is relatively complicated, the server S may provide the sufficient computing resources to implement the method for processing display data provided by the embodiment of the present invention, therefore, the front end device D1 sends the scene picture to the server S after collecting the same, the server S executes the method for processing display data provided by the embodiment of the present invention, and sends the rendered data to the display device of the client D2 for display). Lan et al. and Wang et al. are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply the method wherein a first computing resource of the first rendering host is based on a first complexity degree of the first scene, and a second computing resource of the second rendering host is based on a second complexity degree of the second scene, as taught by Wang et al., to the Lan et al. system, because only few computing resources may be configured to the client D2 to reduce the cost (paragraph 36 of Wang et al.), thus allowing stronger computing resources, i.e., a server, to render complicated scenes.

With respect to claim 2, Lan et al. as modified by Wang et al. disclose the method of claim 1, further comprising loading, while the first rendering host renders the first scene, a program and data for rendering the second scene on the second rendering host (Lan et al.: paragraph 29, With the embodiments of the present invention, real-time rendering can be performed at a server, while the client only needs to receive image streams generated by the at least part of rendering results for playing; Lan et al.: paragraph 83, Additionally, when a rendering agent group is implemented on a plurality of servers, it will be advantageous for each rendering agent to includes a 3D scene description corresponding to its location and observation angle, because when a scene changes, it is possible to send a simple message to notify the corresponding server to update the scene data. This can be accomplished without transferring a huge quantity of scene data in real time, which therefore, reduces the amount of data required to transfer). Notifying the corresponding server to update the scene data corresponds to loading a program and data for rendering the different scene.

With respect to claim 3, Lan et al. as modified by Wang et al. disclose the method of claim 1, further comprising configuring a gateway to receive, in a local area network, the first rendering result from the first rendering host and to send the first rendering result to the first device in an external network (Lan et al.: paragraph 81, Sending occurs between the message processor and the server through the network, including processing of a message from the server to display on a 2d UI; Lan et al.: paragraph 82, The server further includes a data structure for storing relevant data, for example a 3D scene description and the TGRN of the 3D scene (set in a corresponding rendering agent). A communication module is responsible for processing communication at the network layer, receiving commands and data from the client, and sending necessary commands and data to the client).

With respect to claim 4, Lan et al. as modified by Wang et al. disclose the method of claim 1, further comprising connecting the first device to the first rendering host to enable the first rendering host to send the first rendering result to the first device (Lan et al.: paragraph 80, This system architecture includes clients and a server, which communicate with one another through a network, such as an Internet or telecommunication network).

With respect to claim 5, Lan et al. as modified by Wang et al. disclose the method of claim 1, wherein the scene switching notification is based on the first rendering host receiving a scene switching operation instruction from the first device, and wherein the scene switching operation instruction instructs to render the second scene (Lan et al.: paragraph 82, For a command of changing the 3D scene, for example, creating, modifying deleting an object, and creating, moving, and making an avatar exit, etc., the message processor will update the 3D scene data based on the program logic. When the avatar is moving, a corresponding message processor will notify the streaming media manager to select a video stream to send to the client).

With respect to claim 6, Lan et al. as modified by Wang et al. disclose the method of claim 1, further comprising: obtaining, from a second device, a second request requesting to render the first scene; and instructing the first rendering host to provide the first rendering result to the second device (Lan et al.: paragraph 32, As illustrated in FIG. 1, the system includes a server 100, client 110.1, client 110.2 . . . , and client 110.n, where information is transferred, and interaction occurs, between respective clients and the server, and where the virtual world server 100, responsive to the status of an avatar corresponding to each client, transfers corresponding 3D scene data to the client or clients).

With respect to claim 8, Lan et al. as modified by Wang et al. disclose a management device (Lan et al.: paragraph 96, FIG. 19 illustrates a structural block diagram of a computer device that can implement embodiments according to the present invention) comprising: a memory configured to store instructions; and a processor coupled to the memory and configured to execute the instructions (Lan et al.: paragraph 96, Among these components, connected to the system bus 1904 are the CPU 1901, the RAM 1902, the ROM 1903, the hard disk controller 1905, the keyboard controller 1906, the serial interface controller 1907, the parallel controller 1908 and the display controller 1909. The hard disk 1910 is connected to the hard disk controller 1905) to cause the management device to execute the method of claim 1; see rationale for rejection of claim 1.

With respect to claim 9, Lan et al. as modified by Wang et al. disclose the management device of claim 8, for executing the method of claim 2; see rationale for rejection of claim 2. With respect to claim 10, Lan et al. as modified by Wang et al. disclose the management device of claim 8, for executing the method of claim 3; see rationale for rejection of claim 3. With respect to claim 11, Lan et al. as modified by Wang et al. disclose the management device of claim 8, for executing the method of claim 4; see rationale for rejection of claim 4. With respect to claim 12, Lan et al. as modified by Wang et al. disclose the management device of claim 8, for executing the method of claim 5; see rationale for rejection of claim 5. With respect to claim 13, Lan et al. as modified by Wang et al. disclose the management device of claim 8, for executing the method of claim 6; see rationale for rejection of claim 6.

With respect to claim 15, Lan et al. as modified by Wang et al. disclose a system (Lan et al.: paragraph 80, FIG. 16 schematically shows a block diagram of system architecture of implementing a virtual world according to an exemplary embodiment of the present invention) comprising: rendering hosts comprising: a first rendering host; and a second rendering host (Lan et al.: paragraph 82, The rendering agent group includes a k number of rendering agents (RA.1 to RA.k); paragraph 83, a rendering agent group can be implemented on a server or on a plurality of servers); and a management device coupled to the rendering hosts (Lan et al.: paragraph 82, The server further includes a data structure for storing relevant data, for example a 3D scene description and the TGRN of the 3D scene (set in a corresponding rendering agent). A communication module is responsible for processing communication at the network layer, receiving commands and data from the client, and sending necessary commands and data to the client) and configured to execute the method of claim 1; see rationale for rejection of claim 1.

With respect to claim 16, Lan et al. as modified by Wang et al. disclose the system of claim 15, for executing the method of claim 2; see rationale for rejection of claim 2. With respect to claim 17, Lan et al. as modified by Wang et al. disclose the system of claim 15, for executing the method of claim 3; see rationale for rejection of claim 3. With respect to claim 18, Lan et al. as modified by Wang et al. disclose the system of claim 15, for executing the method of claim 4; see rationale for rejection of claim 4. With respect to claim 19, Lan et al. as modified by Wang et al. disclose the system of claim 15, for executing the method of claim 5; see rationale for rejection of claim 5. With respect to claim 20, Lan et al. as modified by Wang et al. disclose the system of claim 15, for executing the method of claim 6; see rationale for rejection of claim 6.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lan et al. (U.S. PGPUB 20110227938) in view of Wang et al. (U.S. PGPUB 20190318546) and further in view of Madruga et al. (U.S. PGPUB 20100141665).

With respect to claim 7, Lan et al. as modified by Wang et al. disclose the method of claim 1. However, Lan et al. as modified by Wang et al. do not expressly disclose adjusting, based on a load for rendering the first scene, a resource for the first rendering host to render the first scene. Madruga et al., who also deal with distributed computing, disclose a method for adjusting, based on a load for rendering the first scene, a resource for the first rendering host to render the first scene (paragraph 39, Both the compute servers 150 and graphics client 110 use rendering times to adjust load balancing factors dynamically). Lan et al., Wang et al., and Madruga et al. are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply the method of adjusting, based on a load for rendering the first scene, a resource for the first rendering host to render the first scene, as taught by Madruga et al., to the Lan et al. as modified by Wang et al. system, because roughly equivalent response times indicate a balanced load and help to reduce idle time for the PEs/servers (paragraph 40 of Madruga et al.).

With respect to claim 14, Lan et al. as modified by Wang et al. and Madruga et al. disclose the management device of claim 8, for executing the method of claim 7; see rationale for rejection of claim 7.

Response to Arguments

Applicant's arguments with respect to claims 1, 8, and 15 have been considered but are moot in view of the new ground(s) of rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW GUS YANG whose telephone number is (571)272-5514. The examiner can normally be reached M-F 9 AM - 5:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW G YANG/
Primary Examiner, Art Unit 2614
2/7/26
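The limitation at the heart of the rejection is selecting each rendering host so that its computing resource matches the scene's "complexity degree" (the feature the examiner maps to Wang paragraphs 35-36). As an illustration only, the selection step can be sketched as follows; all class names, fields, thresholds, and data here are hypothetical and are not drawn from the application or the cited references.

```python
from dataclasses import dataclass

@dataclass
class RenderingHost:
    """Hypothetical model of a rendering host with a fixed compute capacity."""
    name: str
    compute_units: int  # available computing resource (illustrative unit)
    load: float         # current utilization, 0.0 to 1.0

def select_host(hosts: list[RenderingHost], scene_complexity: int) -> RenderingHost:
    """Pick the least-loaded host whose capacity covers the scene's complexity.

    We assume one compute unit per unit of complexity degree; a real system
    would use whatever complexity metric the application defines.
    """
    capable = [h for h in hosts if h.compute_units >= scene_complexity]
    if not capable:
        raise RuntimeError("no rendering host can handle this scene")
    return min(capable, key=lambda h: h.load)

hosts = [
    RenderingHost("edge-gpu", compute_units=4, load=0.2),   # weak, lightly loaded
    RenderingHost("cloud-a", compute_units=32, load=0.6),
    RenderingHost("cloud-b", compute_units=32, load=0.3),
]

# A simple first scene fits the small host; a complex second scene
# forces selection of a stronger (cloud) host, per Wang's rationale.
first = select_host(hosts, scene_complexity=3)
second = select_host(hosts, scene_complexity=20)
print(first.name, second.name)  # -> edge-gpu cloud-b
```

On a scene switch, rerunning the same selection with the new scene's complexity degree yields the "second rendering host" of claim 1; claim 7's load-based adjustment would then vary `load` (or capacity) dynamically, as in Madruga.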

Prosecution Timeline

Mar 11, 2024: Application Filed
Mar 26, 2024: Response after Non-Final Action
Oct 18, 2025: Non-Final Rejection (§103)
Jan 22, 2026: Response Filed
Feb 07, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology (5 most recent grants):

Patent 12602856: DICING ORACLE FOR TEXTURE SPACE SHADING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602872: DRIVABLE IMPLICIT THREE-DIMENSIONAL HUMAN BODY REPRESENTATION METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592023: INTERSECTION TESTING FOR RAY TRACING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579728: MEMORY ALLOCATION FOR RECURSIVE PROCESSING IN A RAY TRACING SYSTEM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567207: THREE-DIMENSIONAL MODELING AND RECONSTRUCTION OF CLOTHING (granted Mar 03, 2026; 2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 77% (+8.3%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 558 resolved cases by this examiner. Grant probability derived from career allow rate.
