Prosecution Insights
Last updated: April 19, 2026
Application No. 18/841,345

RENDERING METHOD AND APPARATUS FOR 3D MATERIAL, AND DEVICE AND STORAGE MEDIUM

Non-Final OA (§102, §103)
Filed: Aug 28, 2024
Examiner: TAHA, AHMED
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (5 granted / 8 resolved; +0.5% vs TC avg)
Interview Lift: +75.0% (strong; resolved cases with interview vs. without)
Avg Prosecution: 2y 5m (typical timeline)
Currently Pending: 35
Total Applications: 43 (career history, across all art units)

Statute-Specific Performance

§101: 6.5% (-33.5% vs TC avg)
§103: 59.8% (+19.8% vs TC avg)
§102: 29.9% (-10.1% vs TC avg)
§112: 3.8% (-36.2% vs TC avg)

Tech Center averages are estimates • Based on career data from 8 resolved cases

Office Action

Rejections under §102 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 3, 13, 14, 15, 16, and 23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Vogels et al. (U.S. Patent Publication No. 2018/0293496).

Regarding claim 1, Vogels discloses a rendering method for a 3D material [Vogels: 0006 “Monte Carlo (MC) path tracing is a technique for rendering images of three-dimensional scenes”], comprising: acquiring first original 3D information of a 3D material to be rendered [Vogels: 0064 “The inputs may include, for example, pixel color and its variance, as well as a set of auxiliary buffers (and their corresponding variances) that encode scene information (e.g., surface normal, albedo, depth, and the like).”] [Vogels: 0073 “The raw image data may also include other auxiliary data produced by the renderer 302. For example, the renderer 302 may also produce object identifiers, visibility data, and bidirectional reflectance distribution function (BRDF) parameters (e.g., other than albedo data)”] (teaches obtaining auxiliary buffers that encode scene information, including surface normal, albedo, depth, and other renderer-produced data such as BRDF parameters; these items are 3D scene material descriptors used for rendering); generating an intermediate rendered graph according to the first original 3D information [Vogels: 0186 “The input to the generator may include a noisy image rendered by MC path tracing, and possibly also auxiliary rendering features such as surface normals, depth, and albedo”]; and inputting the intermediate rendered graph into a generator of a set generative adversarial neural network, so as to obtain a 3D rendered graph [Vogels: 0182 “Embodiments of the present invention use generative adversarial networks (GANs) for training a machine learning based denoiser as an alternative to using a predefined loss function”].

Regarding claim 2, Vogels discloses the method according to claim 1, wherein the first original 3D information comprises at least one selected from the group consisting of vertex coordinates, normal information, camera parameters, surface tiling map and illumination parameters (interpreted as including at least one of the listed types of data) [Vogels: 0189 “The input may also include a set of auxiliary buffers (also referred herein as “feature buffers”) that encode scene information”] (teaches scene information, which corresponds to normal information).
Regarding claim 3, Vogels discloses the method according to claim 2, wherein the generating an intermediate rendered graph according to the first original 3D information comprises: generating the intermediate rendered graph according to at least one item of the first original 3D information [Vogels: 0145 “a new denoised image corresponding to the new input image may be generated by passing the new input image through the neural network using the final set of parameters.”], wherein the intermediate rendered graph comprises at least one selected from the group consisting of a white film map, a normal map, a depth map, and a coarse hair map [Vogels: 0189 “The auxiliary buffers may include information about surface normal, albedo, depth, and the like”] (teaches depth).

Claims 13, 14, and 23 are device, apparatus, and computer-readable-medium claims corresponding to method claim 1 without any additional limitations. Thus, claims 13, 14, and 23 are rejected for the same reasons as claim 1 above. Claim 15 is an apparatus claim corresponding to method claim 2 without any additional limitations. Thus, claim 15 is rejected for the same reasons as claim 2 above. Claim 16 is an apparatus claim corresponding to method claim 3 without any additional limitations. Thus, claim 16 is rejected for the same reasons as claim 3 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4, 5, 6, 11, 12, 17, 18, 19, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Vogels et al. (U.S. Patent Publication No. 2018/0293496), in view of Plawinski et al. (U.S. Patent Publication No. 2021/0312634).

Regarding claim 4, Vogels discloses the method according to claim 1, and the set adversarial neural network is trained by: acquiring second original 3D information of a 3D material sample to be rendered (interpreted as: for training, the method obtains 3D scene material information for a training sample that will be rendered) [Vogels: 0220-0221 “Object library 1620 can include elements configured for storing and accessing information related to objects used by the one or more design computers 1610 during the various stages of a production process to produce CGI and animation. Some examples of object library 1620 can include a file, a database, or other storage devices and mechanisms. Object library 1620 may be locally accessible to the one or more design computers 1610 or hosted by one or more external computer systems. Some examples of information stored in object library 1620 can include an object itself, metadata, object geometry, object topology, rigging, control data, animation data, animation cues, simulation data, texture data, lighting data, shader code, or the like”] (teaches accessing stored object/scene information, and describes rendering as generating an image from a model based on geometry, viewpoint, texture, etc.); generating an intermediate rendered graph sample and a rendered graph sample corresponding to the intermediate rendered graph sample based on the second original 3D information (interpreted as: using that same 3D sample information, the method generates an intermediate rendered graph sample and a rendered graph sample that corresponds to that intermediate, i.e., a paired target/ground truth for training) [Vogels: 0189 “A noisy input image rendered by a renderer 1310 may be input into a generator 1320 and a discriminator 1330”] [Vogels: 0190 “The discriminator 1330 also receives a corresponding reference image (i.e., the ground truth) as input. The reference image may be a high-quality image that has been rendered with many rays.”] [Vogels: 0201 “At 1502, an input image rendered by MC path tracing and a corresponding reference image are received”] (teaches creating and using paired training examples: a noisy rendered input image and a corresponding reference (ground truth) image rendered at higher quality with many rays); and performing alternating iterative training on the generator and the discriminator based on the intermediate rendered graph sample and the rendered graph sample corresponding to the intermediate rendered graph sample (interpreted as: training is done by alternating updates between the generator and the discriminator, iteratively, using the paired samples (intermediate input plus corresponding target/reference)) [Vogels: 0194 “FIGS. 14A and 14B illustrate exemplary procedures of training a GAN. The generator 1320 and the discriminator 1330 may be alternatingly trained”] (teaches alternating training of generator and discriminator), but fails to explicitly disclose wherein the set generative adversarial neural network is a pix2pix generative adversarial neural network comprising a generator and a discriminator.

However, Plawinski discloses wherein the set generative adversarial neural network is a pix2pix generative adversarial neural network comprising a generator and a discriminator [Plawinski: 0019 “The embodiment of FIG. 1 may relate to Pix2Pix. Pix2Pix utilizes a Generative Adversarial Network (GAN), in which a generator and a discriminator are adversarially trained.”]. Vogels and Plawinski are considered to be analogous to the claimed invention because they are in the same field of supervised image-to-image translation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Vogels to incorporate Plawinski’s teachings of utilizing a Pix2Pix generative adversarial neural network. The motivation for such a combination would be the benefit of improving reconstruction/denoising quality.
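The alternating scheme cited from Vogels [0194] (generator and discriminator "alternatingly trained") can be sketched with a deliberately tiny toy: a logistic discriminator on scalars and a one-parameter generator, each taking one gradient step per iteration. Everything here, the scalar setup, learning rate, and parameter names, is illustrative and assumed, not taken from either reference.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup (illustrative only): the "real" datum is the constant 3.0,
# the generator is g(z) = z + b with z fixed at 0, and the discriminator
# is D(x) = sigmoid(w * x + c).
x_real, z = 3.0, 0.0
w, c, b = 0.0, 0.0, 0.0
lr = 0.05

d_losses = []
for _ in range(2000):
    x_fake = z + b

    # Discriminator step: minimize -(log D(real) + log(1 - D(fake))).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    d_losses.append(-(np.log(d_real) + np.log(1.0 - d_fake)))
    dw = -(1.0 - d_real) * x_real + d_fake * x_fake
    dc = -(1.0 - d_real) + d_fake
    w, c = w - lr * dw, c - lr * dc

    # Generator step (alternating with the discriminator step above):
    # minimize -log D(fake).
    d_fake = sigmoid(w * (z + b) + c)
    db = -(1.0 - d_fake) * w
    b = b - lr * db

# With D untrained (w = c = 0), the first discriminator loss is exactly
# -2*log(0.5) = 2*log(2); under alternation the generator output drifts
# toward the real value 3.0.
print(d_losses[0], b)
```

The structure, one discriminator update followed by one generator update inside a single loop, is the whole point; real pix2pix training replaces the scalars with image batches and the hand-derived gradients with backpropagation.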
Regarding claim 5, Vogels and Plawinski disclose the method according to claim 4, wherein the performing alternating iterative training on the generator and the discriminator based on the intermediate rendered graph sample and the rendered graph sample corresponding to the intermediate rendered graph sample comprises (preamble): inputting the intermediate rendered graph sample into the generator and outputting a generated graph [Vogels: 0186 “The output of the generator is an image that would, after the generator is trained, look like the ground truth corresponding to the input image”]; combining the generated graph and the intermediate rendered graph sample into a negative sample pair, and combining the rendered graph sample and the intermediate rendered graph sample into a positive sample pair [Vogels: 0188 “the discriminator may receive two pairs of data as input: (a noisy input image, a denoised image output by the generator) and (a noisy input image, a ground truth reference image”] (teaches the same two paired inputs; the noisy input paired with the denoised output and with the ground truth corresponds to the negative and positive sample pairs); inputting the positive sample pair into the discriminator to obtain a first discrimination result, and inputting the negative sample pair into the discriminator to obtain a second discrimination result [Vogels: 0191 “The discriminator 1330 may be configured to output a quality metric, which is input to the generator 1320. In some embodiments, the quality metric may be a number between 0 and 1, indicating the probability that the input image the discriminator 1330 receives belongs to the class of denoised images or the class of ground truth images”] (teaches that the discriminator outputs a quality metric; since Vogels feeds both pairs to the discriminator, Vogels necessarily obtains a discriminator output for each pair, i.e., the first and second discrimination results); determining a first loss function based on the first discrimination result and the second discrimination result; and performing alternating iterative training on the generator and the discriminator based on the first loss function [Vogels: 0198 “the generator 1320 or the discriminator 1330 may reach a local minimum of their loss function, where error gradients vanish causing the optimization is stuck”].

Regarding claim 6, Vogels discloses the method according to claim 5, but fails to explicitly disclose wherein, after the determining a first loss function based on the first discrimination result and the second discrimination result, the method further comprises: determining a second loss function according to the generated graph and the rendered graph sample; and linearly superposing the first loss function and the second loss function to obtain a target loss function; and the performing alternating iterative training on the generator and the discriminator based on the first loss function comprises: performing alternating iterative training on the generator and the discriminator based on the target loss function.
However, Plawinski discloses wherein, after the determining a first loss function based on the first discrimination result and the second discrimination result, the method further comprises: determining a second loss function according to the generated graph and the rendered graph sample (interpreted as: after forming the adversarial-based loss, the method additionally computes another loss based on similarity between the generator's output (generated graph) and the corresponding ground truth (rendered graph sample)) [Plawinski: 0064 “In embodiments, the training section may train the generator 21 with a reconstruction loss between the reconstructed image and the original image, in addition to the adversarial loss and the edge loss. In one non-limiting embodiment, the training section may use a weighted sum of the adversarial loss, the edge loss, and the reconstruction loss”]; and linearly superposing the first loss function and the second loss function to obtain a target loss function [Plawinski: 0050 “generator such that a sum or a weighted sum of the edge loss and the adversarial loss is minimized”] (this is a linear combination of the adversarial loss and another loss); and the performing alternating iterative training on the generator and the discriminator based on the first loss function comprises: performing alternating iterative training on the generator and the discriminator based on the target loss function [Plawinski: 0049 “train the discriminator by using the adversarial loss”] [Plawinski: 0050 “At S700, the training section 150 may train the generator by using the edge loss and the adversarial loss”]. Vogels and Plawinski are considered to be analogous to the claimed invention because they are in the same field of supervised image-to-image translation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Vogels to incorporate Plawinski’s teachings of combining the adversarial loss with another loss and performing alternating iterative training. The motivation for such a combination would be the benefit of improving reconstruction quality.

Regarding claim 11, Vogels and Plawinski disclose the method according to claim 5, wherein values of the first discrimination result and the second discrimination result range from 0 to 1 [Vogels: 0191 “the quality metric may be a number between 0 and 1, indicating the probability that the input image the discriminator 1330 receives belongs to the class of denoised images or the class of ground truth images”], and are configured to characterize a matching degree between sample pairs [Vogels: 0188 “the discriminator may receive two pairs of data as input: (a noisy input image, a denoised image output by the generator) and (a noisy input image, a ground truth reference image). The discriminator's task is to compare a datum to the noisy input image, and determine whether the datum is the denoised image output by the generator or the ground-truth reference image”] (teaches a conditional discriminator receiving paired inputs and outputting a probability-like metric indicating whether the datum is ground truth vs. generator output).
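The loss bookkeeping running through claims 5, 6, 11, and 12 can be illustrated numerically: discriminator outputs in [0, 1] for a positive and a negative sample pair, log terms accumulated into the first (adversarial) loss, and a linear superposition with a second (reconstruction) loss. This is an assumed, hedged reading of the claim language, not code from the application or either reference; the weight λ = 100 is the conventional pix2pix choice, assumed here.

```python
import numpy as np

def adversarial_loss(d_pos, d_neg):
    """Log terms over the two pairs, then accumulated:
    -(log D(positive pair) + log(1 - D(negative pair)))."""
    return -(np.log(d_pos) + np.log(1.0 - d_neg))

def reconstruction_loss(generated, target):
    """Second loss: mean absolute (L1) difference between the
    generated graph and the rendered graph sample."""
    return float(np.mean(np.abs(generated - target)))

# Illustrative discriminator outputs in [0, 1] (claim 11) and tiny images.
d_pos, d_neg = 0.8, 0.2
generated = np.array([[0.1, 0.4], [0.6, 0.9]])
target    = np.array([[0.0, 0.5], [0.5, 1.0]])

first_loss  = adversarial_loss(d_pos, d_neg)          # -2*log(0.8)
second_loss = reconstruction_loss(generated, target)  # 0.1
lam = 100.0                                           # assumed pix2pix weight
target_loss = first_loss + lam * second_loss          # linear superposition
print(first_loss, second_loss, target_loss)
```

Minimizing `target_loss` for the generator while the discriminator minimizes its own adversarial term is what the claimed "alternating iterative training based on the target loss function" amounts to under this reading.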
Regarding claim 12, Vogels and Plawinski disclose the method according to claim 5, wherein the determining a first loss function based on the first discrimination result and the second discrimination result comprises (preamble): calculating a first difference value between the first discrimination result and a true discrimination result corresponding to the positive sample pair [Vogels: 0188 “The discriminator may be optimized to predict to which of these two classes a datum belongs”] [Vogels: 0191 “a value of “0” may mean that it is highly probable that the input image belongs to the class of ground truth images, and a value of “1” may mean that it is highly probable that the input image belongs to the class of denoised images”] (teaches two classes and that the discriminator is optimized to predict class membership, with an explicit convention that 0 corresponds to the ground truth class; this necessarily involves comparing the predicted output to the true class target); and calculating a second difference value between the second discrimination result and a true discrimination result corresponding to the negative sample pair [Vogels: 0191 “a value of “1” may mean that it is highly probable that the input image belongs to the class of denoised images”] [Vogels: 0182 “a discriminative model D that estimates the probability that a sample comes from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake”], but fails to explicitly disclose solving logarithms of the first difference and the second difference respectively and then accumulating to obtain the first loss function.

However, Plawinski discloses solving logarithms of the first difference and the second difference respectively and then accumulating to obtain the first loss function [Plawinski: 0022 “the adversarial loss may include a value related to the realism (e.g., log(realism)). It is contemplated that for the original image, the adversarial loss may include a value related to 1 - the realism (e.g., log(1 - realism)) and the realism may be calculated from distance between the distribution of the original images and distribution of reconstructed images”] (teaches computing an adversarial loss using log terms). Vogels and Plawinski are considered to be analogous to the claimed invention because they are in the same field of supervised image-to-image translation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Vogels to incorporate Plawinski’s teachings of computing the adversarial loss. The motivation for such a combination would be the benefit of improving reconstruction quality.

Claim 17 is an apparatus claim corresponding to method claim 4 without any additional limitations. Thus, claim 17 is rejected for the same reasons as claim 4 above. Claim 18 is an apparatus claim corresponding to method claim 5 without any additional limitations. Thus, claim 18 is rejected for the same reasons as claim 5 above. Claim 19 is an apparatus claim corresponding to method claim 11 without any additional limitations. Thus, claim 19 is rejected for the same reasons as claim 11 above. Claim 20 is an apparatus claim corresponding to method claim 12 without any additional limitations. Thus, claim 20 is rejected for the same reasons as claim 12 above. Claim 21 is an apparatus claim corresponding to method claim 6 without any additional limitations. Thus, claim 21 is rejected for the same reasons as claim 6 above.

Claims 7 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Vogels et al. (U.S. Patent Publication No. 2018/0293496), in view of Plawinski et al. (U.S. Patent Publication No. 2021/0312634), and in further view of Larsen et al. (WO 2021/113846).
Regarding claim 7, Vogels and Plawinski disclose the method according to claim 4, but fail to explicitly disclose wherein network layers in the generator are connected in a U-shaped skipping structure; and the discriminator adopts a patch discriminator PatchGAN. However, Larsen discloses wherein network layers in the generator are connected in a U-shaped skipping structure [Larsen: 00236 “the generator 408 can include a U-Net convolutional neural network”]; and the discriminator adopts a patch discriminator PatchGAN [Larsen: 00237 “the discriminator 416 can be a PatchGAN discriminator”]. Vogels, Plawinski, and Larsen are considered to be analogous to the claimed invention because they are in the same field of supervised image-to-image translation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Vogels and Plawinski to incorporate Larsen’s teachings of utilizing a U-Net CNN and a PatchGAN. The motivation for such a combination would be the benefit of improving the ground truth target.

Claim 22 is an apparatus claim corresponding to method claim 7 without any additional limitations. Thus, claim 22 is rejected for the same reasons as claim 7 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED TAHA, whose telephone number is (571) 272-6805. The examiner can normally be reached 8:30 am - 5 pm, Mon - Fri. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, XIAO WU, can be reached at (571) 272-7761.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AHMED TAHA/
Examiner, Art Unit 2613

/XIAO M WU/
Supervisory Patent Examiner, Art Unit 2613

Prosecution Timeline

Aug 28, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12565101: WINDSHIELD AND VISIBILITY IMPROVEMENTS FOR DRIVERS IN ADVERSE WEATHER AND LIGHTING CONDITIONS (2y 5m to grant; granted Mar 03, 2026)
Patent 12561880: AUGMENTED REALITY TATTOO (2y 5m to grant; granted Feb 24, 2026)

Study what changed to get past this examiner. Based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 99% (+75.0%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
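For what it's worth, the headline figures are mutually consistent under a simple reading: 5 grants out of 8 resolved cases gives the 62% career allow rate, and applying the +75.0% interview lift multiplicatively, capped at 99%, reproduces the with-interview number. The multiplicative reading and the 99% cap are assumptions about how the dashboard computes these, not documented behavior.

```python
# Assumed reconstruction of the dashboard's projection math; the
# multiplicative lift and the 99% cap are guesses, not documented.
granted, resolved = 5, 8
allow_rate = granted / resolved                        # 0.625, shown as 62%
interview_lift = 0.75                                  # +75.0%
with_interview = min(allow_rate * (1 + interview_lift), 0.99)

print(f"{allow_rate:.0%}, {with_interview:.0%}")  # prints: 62%, 99%
```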
