Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Election/Restrictions

Due to discovered art, the Examiner has withdrawn the restriction/election requirement mailed on 10/14/2025.

Claim Objections

Claims 1-2, 10-11, and 19 are objected to because of the following informalities: Claim 1 recites the limitation "the triangle primitives" in line 7. Claim 2 recites the limitation "the triangle primitives" in line 5. Claim 10 recites the limitation "the triangle primitives" in line 8. Claim 11 recites the limitation "the triangle primitives" in line 5. Claim 19 recites the limitation "the triangle primitives" in line 8. In each instance, the limitation should read "the plurality of triangle primitives." Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

1. Claims 1, 8-10, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hutchins, U.S. Patent No. 8432394 ("Hutchins"), in view of Genova et al., WO2022046113A1 ("Genova"), further in view of Hickman et al., U.S. Patent No. 825846 ("Hickman"), and further in view of BAO et al., CN109961498A (English translation; cited in an IDS) ("BAO").

Regarding independent claim 1, Hutchins teaches a three-dimensional model rendering method performed by a computer device (Fig. 1; col. 1, lines 36-51: “The rendering of three-dimensional (3D) graphical images is of interest in a variety of electronic games and other applications. Rendering is the general term that describes the overall multi-step process of transitioning from a database representation of a 3D object to a two-dimensional projection of the object onto a viewing surface. The rendering process involves a number of steps, such as, for example, setting up a polygon model that contains the information which is subsequently required by shading/texturing processes, applying linear transformations to the polygon mesh model, culling back facing polygons, clipping the polygons against a view volume, scan converting/rasterizing the polygons to a pixel coordinate set, and shading/lighting the individual pixels using interpolated or incremental shading techniques.
”) and the method comprising: acquiring material information of a plurality of triangle primitives in a three-dimensional model, at least two triangle primitives in the plurality of triangle primitives having different material information (col. 6, lines 17-33: “A setup stage 405 receives instructions and graphics primitives from a host, such as a software application running on the CPU 201. In one embodiment, setup stage 405 performs the functions of geometrical transformation of coordinates (X-form), clipping, and setup. The setup unit takes vertex information (e.g., x, y, z, color and/or texture attributes, etc.) from primitives and applies a user defined view transform to calculate screen space coordinates for each geometric primitive (often referred to as triangles because primitives are typically implemented as triangles), which is then sent to the raster stage 410 to draw the given triangle. A vertex buffer 408 may be included to provide a buffer for vertex data used by setup stage 405. In one embodiment, setup stage 405 sets up barycentric coordinate transforms. In one implementation, setup stage 405 is a floating point Very Large Instruction Word (VLIW) machine that supports 32-bit IEEE fl, S15.16 fixed point and packed 0.8 formats.” where the vertex information (e.g., x, y, z, color and/or texture attributes, etc.) is considered to include material information as desired by the user); generating model parameters of the three-dimensional model based on the material information of the triangle primitives, the model parameters comprising rendering parameters of rasters in the three-dimensional model (see at least col. 6, lines 34-51: “Raster stage 410 receives data from setup stage 405 regarding triangles that are to be rendered (e.g., converted into pixels).
Raster stage 410 processes parameters for each pixel of a given triangle by interpolation and determines shader attributes that need to be interpolated for a pixel as part of rendering, such as calculating color, texture, and fog blend factors. In one embodiment, raster stage 410 calculates barycentric coordinates for pixel packets. In a barycentric coordinate system, distances in a triangle are measured with respect to its vertices. The use of barycentric coordinates reduces the required dynamic range, which permits using fixed point calculations that require less power than floating point calculations. Raster stage 410 generates at least one pixel packet for each pixel of a triangle that is to be processed. Each pixel packet includes fields for a payload of pixel attributes required for processing (e.g., color, texture, depth, fog, (x, y) location) along with sideband information, and an instruction sequence of operations to be performed on the pixel packet. An instruction area in raster stage 410 (not shown) assigns instruction sequence numbers to pixel packets. The sideband information may also include a valid field, and a kill field. The pixel packet may include one or more rows of pixel information.”; col. 9, lines 1-35: “As described above, the raster stage 410 receives data from setup stage 405 regarding triangles that are to be rendered (e.g., converted into pixels). For each received triangle, the raster stage 410 rasterizes the triangle into each of its constituent pixels with a number of parameters interpolated for each pixel. The rasterizer computes rendering parameters for each of the pixels of the triangle by systematically evaluating each of the pixels in a deterministic, sequential manner (e.g., "walking" the triangle). The parameters are computed through an interpolation process from the data associated with the triangle's vertices.
The raster stage 410 advantageously utilizes an array of programmable interpolators 501-508 to compute the parameters in parallel. As the raster stage 410 walks each pixel, the parameters for that pixel are iterated, and the resulting data is passed down to subsequent stages of the pipeline (e.g., as a pixel packet). The interpolated results can be placed in programmably selectable positions in the pixel packet. As is generally known, complex 3D scenes can typically have a large number of polygons, and additionally, a large number of rendering parameters for each polygon. Such parameters include, for example, color, texture coordinates, transparency, depth, level of detail (LOD), and the like. A real-time 3D rendering pipeline needs to perform many millions of calculations per second to maintain the pixel throughput (e.g., fill rate) required to draw a realistic 60-70 frames per second. The raster stage 410 utilizes the parallel array of interpolators 501-508 to maintain the required pixel fill rate while conserving power consumption and silicon area.”); and rendering a two-dimensional image of the three-dimensional model based on the rendering parameters of the rasters (see at least col. 9, lines 1-35: “As described above, the raster stage 410 receives data from setup stage 405 regarding triangles (e.g., polygons) that are to be rendered (e.g., converted into pixels). This is illustrated in FIG. 6 as the triangle 630 propagating down to the raster stage 410 from the setup stage 405. The triangle 630 comprises a geometric primitive having associated therewith instructions (e.g., instructions 631) indicating the manner in which the triangle is to be rasterized and rendered, and primitive data (e.g., parameter data such as color, texture coordinates, transparency, x, y, depth, etc.) The raster stage 410 advantageously utilizes an array of programmable interpolators 501-508 to compute the parameters in parallel.
As the raster stage 410 walks each pixel, the parameters for that pixel are iterated, and the resulting data is passed down to subsequent stages of the pipeline (e.g., as a pixel packet). The interpolated results can be placed in programmably selectable positions in the pixel packet. As is generally known, complex 3D scenes can typically have a large number of polygons, and additionally, a large number of rendering parameters for each polygon. Such parameters include, for example, color, texture coordinates, transparency, depth, level of detail (LOD), and the like. A real-time 3D rendering pipeline needs to perform many millions of calculations per second to maintain the pixel throughput (e.g., fill rate) required to draw a realistic 60-70 frames per second. The raster stage 410 utilizes the parallel array of interpolators 501-508 to maintain the required pixel fill rate while conserving power consumption and silicon area.”; col. 14, lines 21-54: “Thus, for example, even though a z stepper of the raster stage 410 may begin at -2.0 z value, by the time the raster stage 410 steps into the view volume (e.g., z values between 0.0 and 1.0) the fractional portion will behave correctly and consistently. Similarly, in a case where the z stepping process begins at positive 6.0 z value, the fractional portion of z will consistently and deterministically roll over as the integer value steps from 6.0 to 0.0. It is possible to take advantage of this behavior because other separately iterated parameters (the barycentric coefficients) determine which pixels are within the two-dimensional x, y projection of the primitive. Rasterizing correct z values is only important within this two-dimensional projection of the primitive in the x, y plane; outside of this region the z stepper need only act as an error term such that the correct z values are generated once the rasterizer steps into the triangle.”). Hutchins is understood to be silent on the remaining limitations of claim 1.
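For illustration only (not part of the Hutchins reference or the claimed invention), the barycentric interpolation of per-pixel rendering parameters from triangle vertex attributes described above can be sketched as follows; the function names and example values are hypothetical:

```python
# Illustrative sketch: interpolating a per-pixel rendering parameter
# (e.g., depth or color) from triangle vertex attributes using
# barycentric coordinates, in the manner described for raster stage 410.

def barycentric(p, a, b, c):
    """Return barycentric coordinates (u, v, w) of point p in triangle abc."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)  # twice the signed area
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return u, v, 1.0 - u - v

def interpolate(p, verts, attrs):
    """Interpolate a vertex attribute at pixel p; None outside the triangle."""
    u, v, w = barycentric(p, *verts)
    if min(u, v, w) < 0:
        return None  # pixel lies outside the 2D projection of the primitive
    return u * attrs[0] + v * attrs[1] + w * attrs[2]

verts = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
depths = [0.0, 1.0, 1.0]  # hypothetical per-vertex depth values
print(interpolate((1.0, 1.0), verts, depths))  # prints 0.5
```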
In the same field of endeavor, Genova teaches acquiring material information of a plurality of triangle primitives in a three-dimensional model, at least two triangle primitives in the plurality of triangle primitives having different material information (see at least [0020]: “More particularly, a computing system can obtain a three-dimensional mesh. The three-dimensional mesh can be or otherwise include a plurality of polygons and associated texture data and/or associated shading data. As an example, the three-dimensional mesh can be a mesh representation of an object, and the associated shading and/or texture data can indicate one or more colors for the polygons of the three-dimensional mesh (e.g., texture data for a character model mesh in a video game, etc.)”; [0025]: “The computing system can determine an initial color value for each pixel of the subset of pixels. The initial color value can be based on the coordinates of the pixel and the associated shading data and/or the associated texture data. As an example, the polygon coordinates for a first pixel can indicate that the first pixel is located in a first polygon. Based on the shading and/or texture data, the polygon can be determined to have an orange color where the pixel is located (e.g., with respect to the polygon’s vertices, etc.). The first pixel can be colored orange based on the pixel coordinates and the shading and/or texture data.” where shading data for polygons is considered material information, and a polygon is considered a triangle primitive); and the model parameters comprising rendering parameters of rasters in the three-dimensional model (see at least [0022]: “The computing system can rasterize the three-dimensional mesh to obtain a two-dimensional raster of the three-dimensional mesh.
The two-dimensional raster can be or otherwise include a plurality of pixels and a plurality of coordinates respectively associated with a subset of the plurality of pixels (e.g., by sampling the surface of the three-dimensional mesh, etc.). These coordinates can describe the locations of pixels relative to the vertices of the polygons in which the pixels are located. As an example, a first coordinate can describe a location of a first pixel relative to the vertices of a first polygon in which the first pixel is located. A second coordinate can describe a location of a second pixel relative to the vertices of a second polygon in which the second pixel is located. In some implementations, each of the plurality of coordinates can be or otherwise include barycentric coordinates.”; [0025]: “The computing system can determine an initial color value for each pixel of the subset of pixels. The initial color value can be based on the coordinates of the pixel and the associated shading data and/or the associated texture data. As an example, the polygon coordinates for a first pixel can indicate that the first pixel is located in a first polygon. Based on the shading and/or texture data, the polygon can be determined to have an orange color where the pixel is located (e.g., with respect to the polygon’s vertices, etc.). The first pixel can be colored orange based on the pixel coordinates and the shading and/or texture data.”; [0026]: “More particularly, in some implementations, the polygon identifier and coordinates for each pixel can allow for perspective-correct interpolation of vertex attributes (e.g., of the polygons of the three-dimensional mesh, etc.) to determine a color for the pixel. In some implementations, the determination of the color for each pixel can include application of the shading data and/or the texture data to the two-dimensional raster using a shading and/or texturing scheme. For either application, any sort of application scheme can be used.
As an example, a deferred-shading, image-space application scheme can be used to apply the shading data and/or the texture data to determine a color for each of the subset of pixels.” where color values are considered as rendering parameters); rendering a two-dimensional image of the three-dimensional model based on the rendering parameters of the rasters (see at least [0022]; [0027]: “The computing system can construct a splat for each pixel of the subset of pixels at the coordinates of each of the respective pixels. More particularly, each rasterized surface point of the two-dimensional raster can be converted into a splat, centered at the coordinates of the respective pixel, and colored by the corresponding shaded color C that was determined for the pixel. In some implementations, a splat can be constructed as a small area of color with a smooth falloff of color that spreads out from the origin of the splat. As an example, the color of a splat with a bright red center location may smoothly fall off as the distance from the center of the splat grows further (e.g., a gradient from the edge of the splat to the center of the splat, etc.).”; [0029]: “The computing system can determine an updated color value for each of the subset of pixels to generate a two-dimensional differentiable rendering of the three-dimensional mesh. The updated color value can be based on a weighting of a subset of the constructed splats. The weighting of the respective splats in the subset of splats for a pixel can be based at least in part on the coordinates of the respective pixel and the coordinates of the respective splat. As an example, a first pixel can have a determined color of yellow. Coordinates of a second pixel can be located a certain distance away from the first pixel, and a third pixel can be located an even further distance from the first pixel. The application of the first pixel splat (e.g., constructed at the coordinates of the first pixel, etc.)
to the second pixel can be weighted more heavily than the application of the first pixel splat to the third pixel, as the second pixel is located closer to the first pixel than the third pixel is. As such, the weighting of the subset of splats in the determination of an updated color value for a pixel can be based on the proximity of each of the splats to the pixel. By determining the updated color values for each of the subset of splats, a differentiable two-dimensional rendering of the three-dimensional mesh can be generated. As such, the differentiable two-dimensional rendering (e.g., generated through application of the splats, etc.) can be utilized to find smooth derivatives at occlusion boundaries of the rendering.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of computing rendering parameters for each of the pixels of the triangle by systematically evaluating each of the pixels of Hutchins with generating a two-dimensional differentiable rendering of the three-dimensional mesh of Genova, because this modification would find smooth derivatives at occlusion boundaries of the rendering ([0029] of Genova). Both Hutchins and Genova are understood to be silent on the remaining limitations of claim 1. In the same field of endeavor, Hickman teaches acquiring material information of a plurality of triangle primitives in a three-dimensional model (col. 8, lines 43-58: “The 3D object data model may include material properties that are associated with geometry of the 3D object data model. In one example, the 3D object data model may include a list of geometry coordinates of vertices of a polygonal mesh having pointers to material attributes associated with the geometry coordinates. The material properties may identify appearance attributes or material shaders for the geometry coordinates.
For example, the appearance attributes may be lighting values, texture coordinates, and/or colors that are provided as inputs for a material shader. The material shader may be configured to determine a pixel color or other rendering effect based on the appearance attributes. In another instance, the material properties may be texture maps, such as diffuse maps, bump maps, opacity maps, glow maps, or specular maps.”); generating model parameters of the three-dimensional model based on the material information of the triangle primitives (col. 7, lines 21-46: “In some examples, components of the client device 124 may be configured to refine material properties associated with a 3D object data model based on a comparison between a rendered view of a portion of the 3D object data model and a two-dimensional image of a product that is represented by the 3D object data model. For instance, the object data model render/viewer 126 may, in some examples, include a rendering component that is configured to render a view of a portion of a 3D object data model of an object. The view of the portion of the 3D object data model may be rendered by the rendering component based on material properties that are associated with geometry (e.g., coordinates of vertices of a polygonal mesh) of the 3D object data model. Additionally, a refinement component of the client device 124 may be configured to determine an appearance metric between an appearance of the portion in a rendered view and an appearance of the portion in a 2D image. Based on the appearance metric, the refinement component may also be configured to determine a modification to the material properties associated with the object, and provide the modified material properties to the rendering component. Also, a material component of the client device 124 may be configured to store the 3D object data model of the object having modified material properties for the portion.
The modified material properties may be material properties which yield the minimum appearance metric of one or more determined appearance metrics, for example.”; col. 9, lines 43-56: “At block 306, the method 300 includes based on the first appearance metric, determining a modification to one or more of the material properties. In one example, the material properties may include a given shader that is configured to determine the pixel color of the portion in the rendered view. A different shader for the portion may be selected from a database, for example. The database may include multiple shaders for different types of materials, such as leather, wood, metal, plastic, rubber, etc. In some instances, a given type of material may include multiple shaders, and a different shader may be selected from within the shaders for the type of material. For example, a different shader may provide a different type of shading or may include bump mapping or translucency effects. ...”; col. 10, lines 4-11: “At block 308, the method 300 includes rendering another view of the portion of the 3D object data model based on the modification to the material properties. For instance after modified appearance attributes or a new shader has been selected, another view of the portion of the 3D object data model may be rendered.
In some instances, the portion is rendered having the same orientation and viewpoint as the first rendered view.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of computing rendering parameters for each of the pixels of the triangle by systematically evaluating each of the pixels of Hutchins and Genova with a rendering component that renders based on material properties associated with the geometry of the 3D object data model, as seen in Hickman, because this modification would render the view of the portion of the 3D object data model (col. 8, lines 43-58 of Hickman). Hutchins, Genova, and Hickman are understood to be silent on the remaining limitations of claim 1. In the same field of endeavor, BAO teaches acquiring material information of a plurality of triangle primitives in a three-dimensional model, at least two triangle primitives in the plurality of triangle primitives having different material information ([0005]: “In a scene where a building is being rendered, the building can first be modeled to obtain a 3D model of the building, which has multiple faces. To make different faces of a building present different visual effects, different materials can be set for different faces. By applying different textures to different materials, faces with different textures can present different visual effects.”; [0022]: “By generating a single material identifier for the initial 3D model of the target image according to the material merging instruction, multiple material identifiers of the initial 3D model are merged into a single material identifier. Multiple vertices of the initial 3D model are then shaded to obtain a first 3D model. The colors of these multiple vertices can then be used to identify multiple rendering parts of the initial 3D model. Texture offsetting of these vertex colors results in a second 3D model with multiple texture maps.
These texture maps provide multiple display textures for the multiple rendering parts. Based on this material identifier, the drawing interface is called to render the second 3D model, obtaining the target image. This transforms the process from distinguishing different rendering parts by different material identifiers to distinguishing them by multiple vertex colors. Since the entire second 3D model uses a single material identifier, the drawing interface can be called only once for rendering, still resulting in a target image with rich visual effects. This reduces the number of times the drawing interface is called, avoids CPU overload, improves CPU processing efficiency, and ensures the normal operation of the terminal.”; [0047]: “In step 201 above, the terminal can model the target image based on 3D modeling software. During the modeling process, a material identifier (material ID) is assigned to each of the multiple rendering parts of the initial 3D model, so that different rendering parts can be distinguished by different material identifiers.”; [0057]: “In the above process, for each of the original multiple material identifiers, the terminal can determine the multiple vertices included in the rendering part based on the rendering part corresponding to that material identifier. The multiple vertices can be points used to define the contour in the rendering part. For example, when a rendering part is a triangle, the rendering part includes 3 vertices, which are the 3 endpoints of the triangle.”); and generating model parameters of the three-dimensional model based on the material information of the triangle primitives, the model parameters comprising rendering parameters of rasters in the three-dimensional model ([0022]: “By generating a single material identifier for the initial 3D model of the target image according to the material merging instruction, multiple material identifiers of the initial 3D model are merged into a single material identifier.
Multiple vertices of the initial 3D model are then shaded to obtain a first 3D model. The colors of these multiple vertices can then be used to identify multiple rendering parts of the initial 3D model. Texture offsetting of these vertex colors results in a second 3D model with multiple texture maps. These texture maps provide multiple display textures for the multiple rendering parts. Based on this material identifier, the drawing interface is called to render the second 3D model, obtaining the target image. This transforms the process from distinguishing different rendering parts by different material identifiers to distinguishing them by multiple vertex colors. Since the entire second 3D model uses a single material identifier, the drawing interface can be called only once for rendering, still resulting in a target image with rich visual effects. This reduces the number of times the drawing interface is called, avoids CPU overload, improves CPU processing efficiency, and ensures the normal operation of the terminal.”; [0047]: “In step 201 above, the terminal can model the target image based on 3D modeling software. During the modeling process, a material identifier (material ID) is assigned to each of the multiple rendering parts of the initial 3D model, so that different rendering parts can be distinguished by different material identifiers.”; [0057]: “In the above process, for each of the original multiple material identifiers, the terminal can determine the multiple vertices included in the rendering part based on the rendering part corresponding to that material identifier. The multiple vertices can be points used to define the contour in the rendering part. For example, when a rendering part is a triangle, the rendering part includes 3 vertices, which are the 3 endpoints of the triangle.”; [0049]: “Figure 3 is a schematic diagram of an interface for setting multiple material identifiers provided in an embodiment of the present invention.
Referring to Figure 3, the model of a three-story building in 3ds Max is used as an example for illustration. The model of the building can be divided into three rendering parts: the first floor exterior wall, the second floor exterior wall, and the third floor exterior wall. After selecting "first floor exterior wall" in the model, the user can set the material identifier for the first floor exterior wall through the "Set ID" option shown in Figure 3. For example, the material identifier can be selected as "Material ID (1)".”; [0058]: “For example, in the initial 3D model of a building, if the material identifier of the first-floor exterior wall is ID1, then the N1 vertices included in the first-floor exterior wall corresponding to ID1 can be determined; if the material identifier of the second-floor exterior wall is ID2, then the N2 vertices included in the second-floor exterior wall corresponding to ID2 can be determined; if the material identifier of the third-floor exterior wall is ID3, then the N3 vertices included in the third-floor exterior wall corresponding to ID3 can be determined, and so on for other rendering parts. Wherein, N1, N2 and N3 are all positive integers greater than or equal to 1. N1, N2 and N3 can be the same or different. This embodiment of the invention does not specifically limit the range of values for the number of vertices.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of computing rendering parameters for each of the pixels of the triangle by systematically evaluating each of the pixels of Hutchins, Genova, and Hickman to include different materials as seen in BAO, because this modification would present different visual effects ([0005] of BAO).
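For illustration only (hypothetical sketch, not taken from the BAO reference itself), the material-merging scheme described above, in which multiple material identifiers are replaced by a single identifier while each original rendering part is distinguished by a per-vertex color, can be outlined as follows; all names and color values are assumptions:

```python
# Illustrative sketch: merge multiple material identifiers into a single
# identifier and encode each original rendering part as a vertex color,
# so that one draw call can still distinguish the parts.

def merge_materials(parts):
    """parts maps each original material id to its list of vertex names.
    Returns (merged_id, palette, vertex_colors), where palette maps each
    original material id to the color standing in for it."""
    merged_id = 1  # single material identifier for the whole model
    palette = {}
    vertex_colors = {}
    for i, (mat_id, verts) in enumerate(sorted(parts.items())):
        color = (i * 40 % 256, 0, 0)  # a distinct color per rendering part
        palette[mat_id] = color
        for v in verts:
            vertex_colors[v] = color
    return merged_id, palette, vertex_colors

# Example: a three-story building with one material id per floor's wall.
parts = {1: ["v0", "v1", "v2"], 2: ["v3", "v4"], 3: ["v5"]}
merged_id, palette, colors = merge_materials(parts)
print(merged_id, len(palette), colors["v3"])  # prints 1 3 (40, 0, 0)
```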
Thus, the combination of Hutchins, Genova, Hickman, and BAO teaches a three-dimensional model rendering method performed by a computer device, the method comprising: acquiring material information of a plurality of triangle primitives in a three-dimensional model, at least two triangle primitives in the plurality of triangle primitives having different material information; generating model parameters of the three-dimensional model based on the material information of the triangle primitives, the model parameters comprising rendering parameters of rasters in the three-dimensional model; and rendering a two-dimensional image of the three-dimensional model based on the rendering parameters of the rasters. Regarding claim 8, Hutchins, Genova, Hickman, and BAO teach the method according to claim 1, wherein the rendering a two-dimensional image of the three-dimensional model based on the rendering parameters of the rasters comprises: rendering an initial image of the three-dimensional model based on the rendering parameters of the rasters (see at least Genova: [0020]: “More particularly, a computing system can obtain a three-dimensional mesh. The three-dimensional mesh can be or otherwise include a plurality of polygons and associated texture data and/or associated shading data. As an example, the three-dimensional mesh can be a mesh representation of an object, and the associated shading and/or texture data can indicate one or more colors for the polygons of the three-dimensional mesh (e.g., texture data for a character model mesh in a video game, etc.). In some implementations, the three-dimensional mesh can be generated using one or more machine-learning techniques”; [0021]: “As another example, the three-dimensional mesh can represent pose and/or orientation adjustments to a first three-dimensional mesh.
For example, a machine-learned model can process a first three-dimensional mesh at a first pose and/or orientation and obtain a second three-dimensional mesh at a second pose and/or orientation different from the first. Thus, the three-dimensional mesh and the associated texture/shading data can, in some implementations, be an output of a machine-learned model, and as such, the capacity to generate smooth derivatives for a rendering of the three-dimensional mesh (e.g., at occlusion boundaries, etc.) can be necessary to optimize the machine-learned model used to generate the three-dimensional mesh.”; Hickman: at least col. 3, lines 26-42: “This disclosure may disclose, inter alia, methods and systems for shader and material layers for rendering three-dimensional (3D) object data models. In some examples, a client device may receive a 3D object data model from a server and render a representation of the 3D object data model by executing a shader layer and a materials layer for various components of the 3D object data model. For instance, the 3D object data model may include components made of different types of materials, and the components may be separated based on type of material. Additionally, individual types of materials may be assigned a given shader to facilitate rendering the material. When a client device receives a 3D object data model, the client device may also receive shader and material information for the various components of the 3D object data model, and render the various components of the 3D object data model using the respective shaders for each type of material.”; col. 10, lines 4-11: “At block 308, the method 300 includes rendering another view of the portion of the 3D object data model based on the modification to the material properties. For instance after modified appearance attributes or a new shader has been selected, another view of the portion of the 3D object data model may be rendered.
In some instances, the portion is rendered having the same orientation and viewpoint as the first rendered view.”; see BAO: at least [0112] “The apparatus provided in this invention generates a single material identifier for an initial 3D model of a target image according to a material merging instruction, thereby merging multiple material identifiers of the initial 3D model into a single material identifier. It then performs coloring processing on multiple vertices of the initial 3D model to obtain a first 3D model. This allows the multiple rendering parts of the initial 3D model to be identified by the colors of its multiple vertices. Finally, it performs texture offset on the colors of these multiple vertices of the first 3D model to obtain a second 3D model with multiple texture maps. These texture maps provide multiple display textures for the multiple rendering parts. Based on the material identifier, it calls a drawing interface to render the second 3D model, obtaining the target image. This transforms the process from distinguishing different rendering parts by different material identifiers to distinguishing them by multiple vertex colors. Since the entire second 3D model uses a single material identifier, rendering can be performed by calling the drawing interface only once, still obtaining a target image with rich visual effects. This reduces the number of times the drawing interface is called, avoids CPU overload, improves CPU processing efficiency, and ensures the normal operation of the terminal.”); and updating the initial image of the three-dimensional model to obtain the two-dimensional image of the three-dimensional model (see at least col. 9, lines 1-35, col. 14, lines 21-54 of Hutchins; see Genova: [0022] “The computing system can rasterize the three-dimensional mesh to obtain a two-dimensional raster of the three-dimensional mesh.
The two-dimensional raster can be or otherwise include a plurality of pixels and a plurality of coordinates respectively associated with a subset of the plurality of pixels (e.g., by sampling the surface of the three-dimensional mesh, etc.). These coordinates can describe the locations of pixels relative to the vertices of the polygons in which the pixels are located. As an example, a first coordinate can describe a location of a first pixel relative to the vertices of a first polygon in which the first pixel is located. A second coordinate can describe a location of a second pixel relative to the vertices of a second polygon in which the second pixel is located. In some implementations, each of the plurality of coordinates can be or otherwise include barycentric coordinates.”; [0029] “The computing system can determine an updated color value for each of the subset of pixels to generate a two-dimensional differentiable rendering of the three-dimensional mesh. The updated color value can be based on a weighting of a subset of the constructed splats. The weighting of the respective splats in the subset of splats for a pixel can be based at least in part on the coordinates of the respective pixel and the coordinates of the respective splat. As an example, a first pixel can have a determined color of yellow. Coordinates of a second pixel can be located a certain distance away from the first pixel, and a third pixel can be located an even further distance from the first pixel. The application of the first pixel splat (e.g., constructed at the coordinates of the first pixel, etc.) to the second pixel can be weighted more heavily than the application of the first pixel splat to the third pixel, as the second pixel is located closer to the first pixel than the third pixel is. As such, the weighting of the subset of splats in the determination of an updated color value for a pixel can be based on the proximity of each of the splats to the pixel.
By determining the updated color values for each of the subset of splats, a differentiable two-dimensional rendering of the three-dimensional mesh can be generated. As such, the differentiable two-dimensional rendering (e.g., generated through application of the splats, etc.) can be utilized to find smooth derivatives at occlusion boundaries of the rendering.”) In addition, the same motivation as set forth in the rejection of claim 1 applies. Regarding claim 9, Hutchins, Genova, Hickman, and BAO teach the method according to claim 8, wherein the rendering an initial image of the three-dimensional model based on the rendering parameters of the rasters comprises: performing material rendering on a raster based on a material parameter in rendering parameters of the raster (see at least Hutchins, col. 9, lines 1-10 “As described above, the raster stage 410 receives data from setup stage 405 regarding triangles (e.g., polygons) that are to be rendered (e.g., converted into pixels). This is illustrated in FIG. 6 as the triangle 630 propagating down to the raster stage 410 from the set up stage 405. The triangle 630 comprises a geometric primitive having associated therewith instructions (e.g., instructions 631) indicating the manner in which the triangle is to be rasterized and rendered, and primitive data (e.g., parameter data such as color, texture coordinates, transparency, xy, depth, etc.).”; Genova: see at least [0020] “More particularly, a computing system can obtain a three-dimensional mesh. The three-dimensional mesh can be or otherwise include a plurality of polygons and associated texture data and/or associated shading data. As an example, the three-dimensional mesh can be a mesh representation of an object, and the associated shading and/or texture data can indicate a one or more colors for the polygons of the three-dimensional mesh (e.g., a texture data for a character model mesh in a video game, etc.).
In some implementations, the three-dimensional mesh can be generated using one or more machine-learning techniques”; [0021] “As another example, the three-dimensional mesh can represent pose and/or orientation adjustments to a first three-dimensional mesh. For example, a machine-learned model can process a first three-dimensional mesh at a first pose and/or orientation and obtain a second three-dimensional mesh at a second pose and/or orientation different from the first. Thus, the three-dimensional mesh and the associated texture/shading data can, in some implementations, be an output of a machine-learned model, and as such, the capacity to generate smooth derivatives for a rendering of the three-dimensional mesh (e.g., at occlusion boundaries, etc.) can be necessary to optimize the machine-learned model used to generate the three-dimensional mesh.”; Hickman: at least col. 3, lines 26-42 “This disclosure may disclose, inter alia, methods and systems for shader and material layers for rendering three-dimensional (3D) object data models. In some examples, a client device may receive a 3D object data model from a server and render a representation of the 3D object data model by executing a shader layer and a materials layer for various components of the 3D object data model. For instance, the 3D object data model may include components made of different types of materials, and the components may be separated based on type of material. Additionally, individual types of materials may be assigned a given shader to facilitate rendering the material. When a client device receives a 3D object data model, the client device may also receive shader and material information for the various components of the 3D object data model, and render the various components of the 3D object data model using the respective shaders for each type of material.
”; col. 10, lines 4-11 “At block 308, the method 300 includes rendering another view of the portion of the 3D object data model based on the modification to the material properties. For instance after modified appearance attributes or a new shader has been selected, another view of the portion of the 3D object data model may be rendered. In some instances, the portion is rendered having the same orientation and viewpoint as the first rendered view.”; see BAO: at least [0058] “For example, in the initial 3D model of a building, if the material identifier of the first-floor exterior wall is ID1, then the N1 vertices included in the first-floor exterior wall corresponding to ID1 can be determined; if the material identifier of the second-floor exterior wall is ID2, then the N2 vertices included in the second-floor exterior wall corresponding to ID2 can be determined; if the material identifier of the third-floor exterior wall is ID3, then the N3 vertices included in the third-floor exterior wall corresponding to ID3 can be determined, and so on for other rendering parts. Wherein, N1, N2 and N3 are all positive integers greater than or equal to 1. N1, N2 and N3 can be the same or different. This embodiment of the invention does not specifically limit the range of values for the number of vertices. [0059] 204. The terminal sets the same vertex color for the multiple vertices of the rendering part, and the vertex color is used to uniquely identify the rendering part.”; performing color rendering on the raster based on a color parameter in the rendering parameters of the raster (see at least Hutchins, col. 7, lines 14-19 “The output of ALU pipeline 440 goes to data write stage 455. The data write stage 455 converts pixel packets into pixel data and stores the result (e.g., color, z depths, etc.) in a write buffer 452 or directly to a frame buffer in memory.
Examples of functions that data write stage 455 may perform include color and depth write back, and format conversion.”; col. 7, lines 56-67 through col. 8, lines 1-9 “The outputs of the interpolators 501-508 are used to construct a plurality of pixel packet rows (e.g., a data structure in a memory array). In the present embodiment, a programmable packing logic module 510 (e.g., including a crossbar switch) functions by arranging the outputs of the interpolators 501-508 into a pixel packet row and formatting the fields of the row for the pixel parameters required for subsequent processing (e.g., color, texture, depth, fog, etc.). The placement of the outputs (e.g., of the interpolators 501-508) into the rows is programmable. In addition to these parameters, the packing logic module 510 arranges processing instructions (e.g., for the subsequent operations to be performed on the pixel packet) into the pixel packet row. For example, as a pixel is iterated, the computed parameters produced by the interpolators 501-508 enable subsequent stages of the graphics pipeline to fetch the required surface attributes (e.g., color, texture, etc.) needed to complete the pixel's rendering. For a simple 3D scene, a given pixel can be described using a single row (e.g., a one row pixel packet). In comparison, for a more complex 3D scene, a given pixel description may require a plurality of rows (e.g., a four row pixel packet)”; col. 9, lines 1-10 “As described above, the raster stage 410 receives data from setup stage 405 regarding triangles (e.g., polygons) that are to be rendered (e.g., converted into pixels). This is illustrated in FIG. 6 as the triangle 630 propagating down to the raster stage 410 from the set up stage 405.
The triangle 630 comprises a geometric primitive having associated therewith instructions (e.g., instructions 631) indicating the manner in which the triangle is to be rasterized and rendered, and primitive data (e.g., parameter data such as color, texture coordinates, transparency, xy, depth, etc.).”; Genova: see at least [0020] “More particularly, a computing system can obtain a three-dimensional mesh. The three-dimensional mesh can be or otherwise include a plurality of polygons and associated texture data and/or associated shading data. As an example, the three-dimensional mesh can be a mesh representation of an object, and the associated shading and/or texture data can indicate a one or more colors for the polygons of the three-dimensional mesh (e.g., a texture data for a character model mesh in a video game, etc.). In some implementations, the three-dimensional mesh can be generated using one or more machine-learning techniques”; [0021] “As another example, the three-dimensional mesh can represent pose and/or orientation adjustments to a first three-dimensional mesh. For example, a machine-learned model can process a first three-dimensional mesh at a first pose and/or orientation and obtain a second three-dimensional mesh at a second pose and/or orientation different from the first. Thus, the three-dimensional mesh and the associated texture/shading data can, in some implementations, be an output of a machine-learned model, and as such, the capacity to generate smooth derivatives for a rendering of the three-dimensional mesh (e.g., at occlusion boundaries, etc.) can be necessary to optimize the machine-learned model used to generate the three-dimensional mesh.”; Hickman: see at least col. 9, lines 63-67 through col. 10, lines 1-11 “…In another example, appearance attributes associated with the portion of the 3D object data model may be modified.
For example, the portion of the 3D object data model may include a base color, texture coordinates, and lighting information, among other parameters, which are used by a shader to determine a final pixel color that is rendered or displayed on a screen. Any of the appearance attributes may be modified based on the appearance metric…”; BAO: see at least [0059] “204. The terminal sets the same vertex color for the multiple vertices of the rendering part, and the vertex color is used to uniquely identify the rendering part. [0060] The vertex color can be any color. This embodiment of the invention does not specifically limit the vertex color of each rendering part. It should be noted that for the same rendering part, the vertices of the rendering part have the same vertex color. However, since the vertex color is used to uniquely identify a rendering part, the vertices of different rendering parts have different vertex … [0064] Based on the above example, assuming RGB three-channel recording of vertex colors, in 3ds Max, vertex colors can be set using the built-in Max Script language: For the N1 vertices included in the first-floor outer wall, the R, G, and B channels in the vertex properties of these N1 vertices can all be set to 0 (R: 0, G: 0, B: 0), thus setting the vertex color of these N1 vertices to black; For the N2 vertices included in the second-floor outer wall, the R and G channels in the vertex properties of these N2 vertices can all be set to 0, and the B channel can be set to 1 (R: 0, G: 0, B: 1), thus setting the vertex color of these N2 vertices to blue; For the N3 vertices included in the third-floor outer wall, the R and B channels in the vertex properties of these N3 vertices can all be set to 0, and the G channel can all be set to 1 (R: 0, G: 1, B: 0), thus setting the vertex color of these N3 vertices to green.
”); and performing depth rendering on the raster based on a depth parameter in the rendering parameters of the raster (see at least Hutchins, col. 7, lines 14-19 “The output of ALU pipeline 440 goes to data write stage 455. The data write stage 455 converts pixel packets into pixel data and stores the result (e.g., color, z depths, etc.) in a write buffer 452 or directly to a frame buffer in memory. Examples of functions that data write stage 455 may perform include color and depth write back, and format conversion.”; col. 7, lines 56-67 through col. 8, lines 1-9 “The outputs of the interpolators 501-508 are used to construct a plurality of pixel packet rows (e.g., a data structure in a memory array). In the present embodiment, a programmable packing logic module 510 (e.g., including a crossbar switch) functions by arranging the outputs of the interpolators 501-508 into a pixel packet row and formatting the fields of the row for the pixel parameters required for subsequent processing (e.g., color, texture, depth, fog, etc.). The placement of the outputs (e.g., of the interpolators 501-508) into the rows is programmable. In addition to these parameters, the packing logic mo