Prosecution Insights
Last updated: April 19, 2026
Application No. 18/422,466

LIGHTWEIGHT RENDERING SYSTEM WITH ON-DEVICE RESOLUTION IMPROVEMENT

Non-Final OA — §102, §103
Filed: Jan 25, 2024
Examiner: TSUI, WILSON W
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (365 granted / 593 resolved; +6.6% vs TC avg)
Interview Lift: +58.1% (strong; allow rate for resolved cases with an interview vs. without)
Typical Timeline: 4y 0m average prosecution; 44 applications currently pending
Career History: 637 total applications, across all art units
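The headline allow rate is a straightforward ratio of the figures above; a quick sketch, using only the numbers this report gives:

```python
# Reproduce the career allow rate from the figures reported above.
granted, resolved = 365, 593
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 61.6%, displayed as 62% in the report
```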

Statute-Specific Performance

§101: 15.5% (-24.5% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 593 resolved cases.
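The per-statute deltas can be sanity-checked against a single Tech Center baseline: subtracting each delta from the examiner's rate should recover the same estimate. A quick check using only the figures above (the ~40% baseline is implied by this report's numbers, not an official USPTO statistic):

```python
# Back out the implied Tech Center average for each statute from the
# examiner's rate and its "vs TC avg" delta (figures from this report).
rates = {
    "§101": (15.5, -24.5),
    "§103": (52.5, +12.5),
    "§102": (12.0, -28.0),
    "§112": (14.2, -25.8),
}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # each statute implies the same 40.0% baseline
```

That every statute backs out to the same value suggests the report compares against one overall TC estimate rather than per-statute averages.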

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/25/2024, 09/12/2024, and 07/01/2025 are being considered by the examiner.

Drawings

The drawings filed on 01/25/2024 are accepted.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 4, 8, 10, 11, 13, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cuervo et al. ("Kahawai: High-Quality Mobile Gaming Using GPU Offload", publisher: ACM, published: May 2015, pages 121-135).
With regards to claim 1, Cuervo et al. teaches a computer-implemented method (Section 3.2, pages 123 and 132: a computing system having a computing processor and memory is implemented) of content distribution, comprising: generating first content locally within a first data processing system (Figure 1, pages 122 and 123, right column (Section 3.1), and Section 3.2: a client system is a first data processing system that generates first low-detail content); monitoring, by the first data processing system, for second content conveyed from a second data processing system (page 123, left column, Figure 1: second content from a server (second system) is monitored, arriving as compressed delta video encoded in H.264); playing a version of the first content by the first data processing system (page 126, left column: a version of the low-detail first content is rendered when monitored data is missing due to server disconnect); and dynamically switching, by the first data processing system, between playing the version of the first content and playing a version of the second content based on receipt of the second content by the first data processing system (page 126, left column: when monitored content is available from the server, the version of the first content is switched to an enhanced higher-resolution version by applying the second content).

With regards to claim 2 (the computer-implemented method of claim 1), Cuervo et al. teaches wherein the first content includes a first level of motion and the second content includes a second level of motion that exceeds the first level of motion (Figure 1: the first content is interpreted as a frame of content in a video (motion) data set of frames, and the second content is also interpreted as additional frame-specific content in a video (motion) data set of frames. Since the second content is motion data that yields higher-detail motion data with respect to the first content, the examiner interprets the second content as exceeding the detail/motion quality of the first level of motion data).

With regards to claim 4 (the computer-implemented method of claim 1), Cuervo et al. teaches wherein the version of the first content is played continuously in absence of the second content and until the second content is received, as similarly explained in the rejection of claim 1 (page 126, left column: the first low-detail content is played until second content from the server is available/received), and the claim is rejected under similar rationale.

With regards to claim 8 (the computer-implemented method of claim 1), Cuervo et al. teaches wherein the first content includes one or more first latent space image representations and the second content includes one or more second latent space image representations (Figure 1: the first content represents low-detail latent data prior to a patch step and the second content represents additional detail prior to a patch step; thus both first and second content can be intermediate data prior to actual rendering).

With regards to claim 10, Cuervo et al. teaches a data processing system, comprising: a processor configured to execute operations including: generating first content locally within the data processing system; monitoring, by the data processing system, for second content conveyed from a remote system; playing a version of the first content by the data processing system; and dynamically switching, by the data processing system, between playing the version of the first content and playing a version of the second content based on receipt of the second content by the data processing system, as similarly explained in the rejection of claim 1, and the claim is rejected under similar rationale.

With regards to claim 11 (the data processing system of claim 10), Cuervo et al. teaches wherein the first content includes a first level of motion and the second content includes a second level of motion that exceeds the first level of motion, as similarly explained in the rejection of claim 2, and the claim is rejected under similar rationale.

With regards to claim 13 (the data processing system of claim 10), Cuervo et al. teaches wherein the version of the first content is played continuously in absence of the second content and until the second content is received, as similarly explained in the rejection of claim 4, and the claim is rejected under similar rationale.

With regards to claim 19, Cuervo et al. teaches a computer program product, comprising: one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, wherein the program instructions are executable by a data processing system to perform operations including: generating first content locally within the data processing system; monitoring, by the data processing system, for second content conveyed from a remote system; playing a version of the first content by the data processing system; and dynamically switching, by the data processing system, between playing the version of the first content and playing a version of the second content based on receipt of the second content by the data processing system, as similarly explained in the rejection of claim 1, and the claim is rejected under similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Cuervo et al. ("Kahawai: High-Quality Mobile Gaming Using GPU Offload", publisher: ACM, published: May 2015, pages 121-135) in view of Castelli et al. (US Application US 20100013828, published: Jan. 21, 2010, filed: Jul. 17, 2008).

With regards to claim 3 (the computer-implemented method of claim 1), Cuervo et al. teaches wherein the first content …, and wherein the second content …, as similarly explained in the rejection of claim 1, and the claim is rejected under similar rationale. However, Cuervo et al. does not expressly teach wherein the first content includes media of a non-speaking digital human, and wherein the second content includes media of a speaking digital human. Yet Castelli et al. teaches wherein the first content includes media of a non-speaking digital human, and wherein the second content includes media of a speaking digital human (paragraphs 0007, 0010, and 0022: an avatar for a human is depicted, and what is depicted can be either a static-image state (non-speaking/non-animated) or a video state (live video of a human speaking)).

It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified Cuervo et al.'s ability to selectively apply first and/or second content to render/depict on screen, such that different first and second content (such as a non-animated human avatar and an animated speaking human avatar) is depicted, respectively, to adapt to dynamic bandwidth/availability conditions, as taught by Castelli et al. The combination would have allowed Cuervo et al. to have dynamically adapted what is depicted based upon network bandwidth conditions in view of manageable rendering cost(s) of the avatar.

With regards to claim 12 (the data processing system of claim 10), Cuervo et al. and Castelli et al. teach wherein the first content includes media of a non-speaking digital human, and wherein the second content includes media of a speaking digital human, as similarly explained in the rejection of claim 3, and the claim is rejected under similar rationale.

Claims 5, 6, 14, 15, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cuervo et al. ("Kahawai: High-Quality Mobile Gaming Using GPU Offload", publisher: ACM, published: May 2015, pages 121-135) in view of Delattre et al. (US Application US 20210304355, published: Sep. 30, 2021, filed: Mar. 25, 2020).

With regards to claim 5 (the computer-implemented method of claim 1), Cuervo et al. teaches wherein the first content and the second content are generated at … a resolution, … the version of the first content … the version of the second content …, as similarly explained in the rejection of claim 1, and the claim is rejected under similar rationale.
However, Cuervo et al. does not expressly teach … are generated at a first resolution, the method further comprising: generating the version of the first content by increasing the first resolution of the first content to a second resolution, wherein the second resolution is higher than the first resolution; and generating the version of the second content by increasing the first resolution of the second content to the second resolution. Yet Delattre et al. teaches generating the version of the first content by increasing the first resolution of the first content to a second resolution, wherein the second resolution is higher than the first resolution; and generating the version of the second content by increasing the first resolution of the second content to the second resolution (paragraph 0049: each content can undergo client-side upscaling to increase the resolution of image content).

It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified Cuervo et al.'s ability to process and generate the first and second versions of content, such that each version of content could have undergone additional upscaling to increase resolution. The combination would have allowed Cuervo to have implemented, in real time, a way to increase/convert images of one resolution into another, higher resolution (Delattre et al., paragraph 0002).

With regards to claim 6 (the computer-implemented method of claim 5), the combination of Cuervo and Delattre et al. teaches wherein an identity-specific, generative machine learning model increases the first resolution of the first content and increases the first resolution of the second content, as explained in the rejection of claim 5, and the claim is rejected under similar rationale.

With regards to claim 14 (the data processing system of claim 10), Cuervo et al. and Delattre et al. teach wherein the first content and the second content are generated at a first resolution, and wherein the processor is configured to execute operations comprising: generating the version of the first content by increasing the first resolution of the first content to a second resolution, wherein the second resolution is higher than the first resolution; and generating the version of the second content by increasing the first resolution of the second content to the second resolution, as similarly explained in the rejection of claim 5, and the claim is rejected under similar rationale.

With regards to claim 15 (the data processing system of claim 14), Cuervo et al. and Delattre et al. teach wherein an identity-specific, generative machine learning model increases the first resolution of the first content and increases the first resolution of the second content, as similarly explained in the rejection of claim 6, and the claim is rejected under similar rationale.

With regards to claim 17 (the data processing system of claim 14), Cuervo et al. and Delattre et al. teach wherein the first content includes one or more first latent space image representations and the second content includes one or more second latent space image representations (Cuervo et al., Figure 1: the first content represents low-detail latent data prior to a patch step and the second content represents additional detail prior to a patch step; thus both first and second content can be intermediate data prior to actual rendering).

With regards to claim 20.
The computer program product of claim 19: Cuervo et al. and Delattre et al. teach wherein the first content and the second content are generated at a first resolution, wherein the program instructions are executable by the data processing system to perform operations comprising: generating the version of the first content by increasing the first resolution of the first content to a second resolution, wherein the second resolution is higher than the first resolution; and generating the version of the second content by increasing the first resolution of the second content to the second resolution, as similarly explained in the rejection of claim 5, and the claim is rejected under similar rationale.

Claims 7, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cuervo et al. ("Kahawai: High-Quality Mobile Gaming Using GPU Offload", publisher: ACM, published: May 2015, pages 121-135) in view of Delattre et al. (US Application US 20210304355, published: Sep. 30, 2021, filed: Mar. 25, 2020), further in view of Kato et al. ("Split Rendering of the Transparent Channel for Clout AR", publisher: IEEE, published: 2021, pages 1-6).

With regards to claim 7 (the computer-implemented method of claim 5), Cuervo teaches wherein the first content … and the second content … (see Figure 1, Section 3.2: first-detail content and second-detail content are latent data that are subsequently merged/patched through H.264 decoding in a collaborative rendering environment), as similarly explained in the rejection of claim 5, and the claim is rejected under similar rationale. However, Cuervo does not expressly teach wherein the first content includes one or more first red, green, blue (RGB) images and the second content includes one or more second RGB images. Yet Kato et al. teaches wherein the first content includes one or more first red, green, blue (RGB) images and the second content includes one or more second RGB images (page 3, left column, Figs. 2 and 3: client first content and server second content are both rendered with RGB image data; the data can be any variation of image data, such as depicting a human(oid)/robot or vehicle).

It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified Cuervo et al.'s and Delattre's ability to render first and second content, such that the first and second content contain RGB image data when rendering, to depict images of a plurality of content (such as a human(oid)/robot), as taught by Kato et al. The combination would have allowed a more efficient way to render image content by reducing the bitrate from the server and would also have prevented errors … caused by coding noise (Kato et al., page 3, left column).

With regards to claim 16 (the data processing system of claim 14), Cuervo et al., Delattre, and Kato et al. teach wherein the first content includes one or more first red, green, blue (RGB) images and the second content includes one or more second RGB images, as similarly explained in the rejection of claim 7, and the claim is rejected under similar rationale.

With regards to claim 18.
The data processing system of claim 17: Cuervo et al., Delattre, and Kato et al. teach wherein the processor is configured to execute operations comprising: for the one or more first latent space image representations, generating the version of the first content as one or more red, green, blue (RGB) images of a digital human that correspond to the one or more first latent space image representations; and for the one or more second latent space image representations, generating the version of the second content as one or more RGB images of the digital human that correspond to the one or more second latent space image representations, as similarly explained in the rejection of claim 7 (see Figure 1, Section 3.2 of Kato et al.: first-detail content and second-detail content are latent data that are subsequently merged/patched through H.264 decoding in a collaborative rendering environment; it is noted that the humanoid/robot visualized via RGB data is interpreted as the claimed 'digital human', and the claim does not functionally distinguish a 'digital human' from a digital humanoid/robot; rather, the 'digital human' in the claim is merely a label for RGB image(s) and is also considered non-functional descriptive material), and the claim is rejected under similar rationale.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Cuervo et al. ("Kahawai: High-Quality Mobile Gaming Using GPU Offload", publisher: ACM, published: May 2015, pages 121-135) in view of Kato et al. ("Split Rendering of the Transparent Channel for Clout AR", publisher: IEEE, published: 2021, pages 1-6).

With regards to claim 9 (the computer-implemented method of claim 8), Cuervo et al. teaches …, further comprising: for the one or more first latent space representations, generating the version of the first content … that correspond to the one or more first latent space image representations; and for the one or more second latent space image representations, generating the version of the second content … that correspond to the one or more second latent space representations (see Figure 1, Section 3.2: first-detail content and second-detail content are latent data that are subsequently merged/patched through H.264 decoding in a collaborative rendering environment). However, Cuervo does not expressly teach further comprising … generating the version of the first content as one or more red, green, blue (RGB) images of a digital human …; … generating the version of the second content as one or more RGB images of the digital human. Yet Kato et al. teaches … generating the version of the first content as one or more red, green, blue (RGB) images of a digital human …; … generating the version of the second content as one or more RGB images of the digital human (page 3, left column, Figs. 2 and 3: client first content and server second content are both rendered with RGB image data; the data can be any variation of image data, such as depicting a human(oid)/robot or vehicle; it is noted that the humanoid/robot visualized via RGB data is interpreted as the claimed 'digital human', and the claim does not functionally distinguish a 'digital human' from a digital humanoid/robot; rather, the 'digital human' in the claim is merely a label for RGB image(s) and is also considered non-functional descriptive material).

It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to have modified Cuervo et al.'s ability to render first and second content, such that the first and second content contain RGB image data when rendering, to depict images of a plurality of content (such as a human(oid)/robot), as taught by Kato et al. The combination would have allowed a more efficient way to render image content by reducing the bitrate from the server and would also have prevented errors … caused by coding noise (Kato et al., page 3, left column).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Mihaly et al. (US Application US 2023/0217034): teaches split rendering to improve tolerance to delay variation.
Ramaswamy et al. (US Application US 20200014961): teaches rendering a lower resolution first and then rendering a higher resolution.
Miller et al. (US Application US 2013/0227158): teaches allowing a client to switch from one resolution version of a video to another quality/resolution of the video.
Kimpe (US Application US 2007/0183493): teaches prioritizing areas of an image/video for compression in an environment with varying bandwidth.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI, whose telephone number is (571) 272-7596. The examiner can normally be reached Monday - Friday, 9 am - 6 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/WILSON W TSUI/
Primary Examiner, Art Unit 2172
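Independent claim 1, as the examiner maps it to Kahawai, describes a simple control pattern: generate low-detail content locally, monitor for enhancement data from a server, and switch to the enhanced version only once that data has been received. A minimal sketch of that pattern (illustrative only; all names are hypothetical and nothing here is taken from the application or from Cuervo et al.):

```python
# Illustrative sketch of the claim-1 switching pattern described above.
# All names are hypothetical; this is not code from the application or
# from the Kahawai paper.

def render_local(frame_id: int) -> str:
    """First content: a low-detail frame generated on the client."""
    return f"low-detail frame {frame_id}"

def apply_enhancement(local_frame: str, delta: str) -> str:
    """Patch the local frame with server-supplied detail (second content)."""
    return f"{local_frame} + {delta}"

def next_frame(frame_id: int, server_deltas: dict) -> str:
    """Dynamically switch: play the enhanced version when the server's
    delta for this frame has been received, else fall back to local."""
    local = render_local(frame_id)          # generated locally
    delta = server_deltas.get(frame_id)     # monitored from the server
    if delta is None:
        return local                        # version of the first content
    return apply_enhancement(local, delta)  # version of the second content

# Only frame 1's delta has arrived, so only frame 1 is enhanced.
frames = [next_frame(i, {1: "hi-res delta"}) for i in range(3)]
```

Kahawai's actual delta patching is more involved (H.264-encoded deltas merged per frame), but this receipt-gated switch is the behavior the examiner reads onto claim 1.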

Prosecution Timeline

Jan 25, 2024 — Application Filed
Jan 06, 2026 — Non-Final Rejection, §102/§103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602535: COMMENT DISPLAY METHOD AND APPARATUS OF A DOCUMENT, AND DEVICE AND MEDIUM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12589766: AUTONOMOUS DRIVING SYSTEM AND METHOD OF CONTROLLING SAME (granted Mar 31, 2026; 2y 5m to grant)
Patent 12570284: AUTONOMOUS DRIVING METHOD AND DEVICE FOR A MOTORIZED LAND VEHICLE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12552376: VEHICLE CONTROL APPARATUS (granted Feb 17, 2026; 2y 5m to grant)
Patent 12511993: SYSTEMS AND METHODS FOR CONFIGURING A HIERARCHICAL TRAFFIC MANAGEMENT SYSTEM (granted Dec 30, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 99% (+58.1%)
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 593 resolved cases by this examiner. Grant probability derived from career allow rate.
