DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 13 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over LE MEUR (US 20140152664 A1), referred to herein as MEUR, in view of Chalfin et al. (US 20050088450 A1), referred to herein as Chalfin.
Regarding Claim 1, MEUR in view of Chalfin teaches a method comprising (MEUR Abst: A method of rendering a terrain stored in a massive database):
determining, by a computing device and in response to a determination that a refresh timing of a terrain image in a virtual scene is reached, a target level for a refreshing of the terrain image and surface range information for the refreshing of the terrain image (MEUR [0025] the strategy chosen for refreshing the rendered image, and notably to the technique for transition between the levels of detail; [0027] the extracted terrain data forming an extraction pyramid composed of an extraction window for each level of detail, placed in cache memory; [0030] a step (83, 9, 121) of selecting the patches of the extraction pyramid which contribute to the image; [0031] a step (122) of plotting the rendering on the basis of the selected patches);
obtaining, by the computing device and from a pre-generated image set, a level area image corresponding to the target level, the pre-generated image set comprising at least two map images, and each map image comprising a plurality of level area images that are generated using a clipmap rendering process and that are of different levels of detail (MEUR [0028] a step (81) of generating several regular grids (1, 2, 3) of different resolution level terrain patches (LOD0, LOD1, LOD2) so as to represent the terrain data of the massive database; [0029] a step (82, 120) of extracting terrain data from the massive database for several resolution levels, the extracted terrain data forming an extraction pyramid composed of an extraction window for each level of detail, placed in cache memory; [0055] paving the database into tiles for several levels of different resolution. For example the paving can be analogous to the pavings used in clipmapping. The paving of the database can produce for example a first decomposition grid 1 of a terrain, for a first level of detail LOD0. The first grid 1 can be a square grid comprising eight rows and eight columns such as represented in FIG. 1 in respect of the example. Each patch of this grid contains the description of the data of the corresponding portion of terrain, for example in the form of irregular triangular meshes. A second grid 2 represents the same terrain as the first grid 1, for a level of detail LOD1 less than the level of detail LOD0; [0074] These data consist for example of image data, of geometric data describing the surface of the terrain, or else of three-dimensional type objects, modelled by external tools, and positioned in the database. The geometric description data can notably be arbitrary irregular triangular meshes. A step prior to the rendering computation is a generation 81 of several representations of the database for different levels of details);
updating, by the computing device and based on the surface range information, the level area image corresponding to the target level to obtain an updated level area image corresponding to the target level (MEUR [0076] A third step 83 consists in selecting, from among the patches present in cache memory, those which will contribute to the rendering of the image. This selection consists notably in determining the patches visible from the current viewpoint, and in selecting the appropriate level of detail, so as to display the content most suited to the viewpoint);
updating, by the computing device and based on the updated level area image, the pre-generated image set to obtain an updated image set (MEUR [0093] FIG. 11 represents various states that can be taken by a cell of a sliding window. As described hereinabove, for each patch of the sliding window, the passage from a resolution level L to a more precise resolution level L-1 is done by mixing the patch with its four child patches, that is to say with the four patches corresponding of the level of detail L-1);
determining, by the computing device and based on the updated image set, rendering information of (MEUR [0074] FIG. 8 represents a flowchart which decomposes the activities implemented in the method into three tasks. The source data 80 describing the terrain constitute the input data of the method. These data consist for example of image data, of geometric data describing the surface of the terrain, or else of three-dimensional type objects, modelled by external tools, and positioned in the database; [0077] FIG. 9 represents a flowchart for selecting the patches contributing to the rendering of the final image. The patches of the clipwindows corresponding to the least resolved level of detail are considered first. For each of these patches, the processings described below are carried out. Hereinafter, the current patch is denoted P, the current clipwindow is denoted W, and the current level of detail is denoted);
MEUR teaches rendering data for the point of interest but does not explicitly teach rendering information of each pixel point in a visual range;
MEUR in view of Chalfin teaches
rendering information of each pixel point in a visual range (Chalfin [0053] Texture is applied to an object being rendered by mapping texel data from the ECM stored in texture memory to corresponding pixel data. For each pixel in the primitive (i.e., triangle) of a frame being rendered (loop 510), steps 512 and 514 are carried out); and
rendering, by the computing device, the terrain image in the visual range based on the rendering information of each pixel point in the visual range (Chalfin [0077] The embodiment of the invention shown in FIG. 7 has a multipass graphics pipeline. It is capable of operating on each pixel of an image (object) during each pass that the image makes through the graphics pipeline. For each pixel of the image, during each pass that the image makes through the graphics pipeline, texture unit 734 can obtain at least one texture sample from the textures and/or images stored in texture memory 740; MEUR [0029] a step (82, 120) of extracting terrain data from the massive database for several resolution levels, the extracted terrain data forming an extraction pyramid composed of an extraction window for each level of detail, placed in cache memory; [0030] a step (83, 9, 121) of selecting the patches of the extraction pyramid which contribute to the image; [0031] a step (122) of plotting the rendering on the basis of the selected patches).
Chalfin discloses a technique for texture roaming via dimension elevation, which is analogous to the present patent application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified MEUR to incorporate the teachings of Chalfin, and to apply Chalfin's individual-pixel rendering method in the method of rendering a terrain stored in a massive database.
Doing so would eliminate the computation required for deciding the optimum range level of detail used to render a primitive.
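For illustration only, and not as part of the grounds of rejection, the clipmap-style extraction pyramid described by MEUR ([0027]-[0029]: a fixed-size extraction window per level of detail, centred on the viewpoint, with coarser levels covering progressively more terrain) can be sketched as follows. The function name, window size, and the doubling of texel size per level are assumptions of this sketch, not taken from the reference.

```python
# Illustrative sketch: a clipmap-style extraction pyramid.
# Each level of detail keeps a fixed-size window of the full terrain,
# centred on the viewpoint; coarser levels cover more terrain per texel.

def extraction_pyramid(center, window_size, num_levels):
    """Return, per level of detail, the (min, max) terrain extent covered
    by the fixed-size extraction window centred on `center`."""
    pyramid = []
    for level in range(num_levels):
        texel_size = 2 ** level          # LOD0 finest; each level doubles
        half = (window_size * texel_size) / 2
        pyramid.append((center - half, center + half))
    return pyramid

# A 64-texel window at 3 levels around viewpoint x = 100:
levels = extraction_pyramid(center=100, window_size=64, num_levels=3)
# LOD0 spans 64 units, LOD1 spans 128 units, LOD2 spans 256 units.
```

The sketch shows only the one-dimensional extents; MEUR's actual windows are two-dimensional grids of patches.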
Regarding Claim 13, MEUR in view of Chalfin teaches an apparatus comprising (MEUR Abst: A method of rendering a terrain stored in a massive database):
memory storing computer-readable instructions which, when executed by the one or more processors, cause the apparatus to (Chalfin [0074] Host system 710 comprises an application program 712, a hardware interface or graphics API 714, a processor 716, and a memory 718. Application program 712 can be any program requiring the rendering of a computer image):
The metes and bounds of the claim substantially correspond to the claimed limitations set forth in claim 1; thus they are rejected on similar grounds and rationale as their corresponding limitations.
Regarding Claim 17, MEUR in view of Chalfin teaches a non-transitory computer-readable media storing instructions which, when executed by a computing device, cause the computing device to (MEUR Abst: A method of rendering a terrain stored in a massive database; Chalfin [0074] Host system 710 comprises an application program 712, a hardware interface or graphics API 714, a processor 716, and a memory 718. Application program 712 can be any program requiring the rendering of a computer image):
The metes and bounds of the claim substantially correspond to the claimed limitations set forth in claim 1; thus they are rejected on similar grounds and rationale as their corresponding limitations.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over LE MEUR (US 20140152664 A1), referred to herein as MEUR, in view of Chalfin et al. (US 20050088450 A1), referred to herein as Chalfin, and Laine et al. (US 20220051481 A1), referred to herein as Laine.
Regarding Claim 8, MEUR in view of Chalfin teaches the method according to claim 1, but does not teach all the limitations herein. However, in view of Laine, the prior art teaches wherein the determining rendering information of each pixel point in a visual range based on the updated image set comprises:
obtaining world coordinates of each pixel point in the visual range (Laine [0053] The rasterizer 220 performs perspective division and implements dynamic mapping between world coordinates and discrete pixel coordinates. Per-pixel auxiliary data may be stored in the form of barycentric coordinates and triangle IDs in the forward pass through the rendering pipeline 205);
determining texture mapping coordinates of each pixel point based on the world coordinates of each pixel point (Chalfin [0016] Once the levels of a clip-map are placed in an extra dimension coordinate space, the extra dimension texture coordinate value can be computed based on clip-mapping rules. In one embodiment, a degree elevated texture having three dimensions addressed by texture coordinates (s,t,r) is used to store a 2D clip-map. The 2D clip-map is a clip map in (s,t) texture coordinates of a 2D texture; [0053] Steps 510-514 are carried out to apply texture in a scene for an initial frame. Texture is applied to an object being rendered by mapping texel data from the ECM stored in texture memory to corresponding pixel data); and
determining, based on the texture mapping coordinates of each pixel point, the rendering information of each pixel point from the updated image set (Chalfin [0053] For each pixel in the primitive (i.e., triangle) of a frame being rendered (loop 510), steps 512 and 514 are carried out. First, an extra-dimension coordinate is determined based on the highest level of detail texture available in the loaded ECM which offers coverage for the (s,t) location being rasterized (step 512)).
Laine discloses recovering a three-dimensional (3D) model of an object from two-dimensional (2D) images of the object, which is analogous to the present patent application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified MEUR to incorporate the teachings of Laine, and to apply the dynamic mapping between world coordinates and discrete pixel coordinates in the method of rendering a terrain stored in a massive database.
Doing so would provide a recovered 3D model that is accurate and may be generated by rendering analytically antialiased images of the 3D model and propagating differences between the rendered images and reference images backwards through the rendering pipeline to iteratively adjust the 3D model.
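For illustration only, the mapping recited in claim 8 (world coordinates of a pixel determining its texture mapping coordinates, which in turn index the updated image set; compare Laine [0053] and Chalfin [0016], [0053]) can be sketched as follows. The function name, the normalisation scheme, and all numeric values are assumptions of this sketch, not taken from the claim or the cited art.

```python
def world_to_texcoord(world_xy, window_origin, window_extent):
    """Map a world-space point into normalised (s, t) texture coordinates
    of the extraction window that covers it."""
    wx, wy = world_xy
    ox, oy = window_origin
    s = (wx - ox) / window_extent
    t = (wy - oy) / window_extent
    return s, t

# A pixel whose world position is (150, 75), inside a 256-unit window
# anchored at (100, 50), maps to fractional (s, t) coordinates:
s, t = world_to_texcoord((150, 75), (100, 50), 256)
```

The resulting (s, t) pair would then address a texel in the level area image for the pixel's level of detail, per the step recited in the claim.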
Allowable Subject Matter
Claims 2-7, 9-12, 14-16 and 18-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding Claim 2, MEUR in view of Chalfin teaches the method according to claim 1, but does not teach wherein the method further comprises:
obtaining, based on detecting that a virtual object has moved in the virtual scene, a movement distance of the virtual object and size information corresponding to each different level of detail;
determining a ratio of the movement distance to the size information corresponding to each different level of detail; and
determining, when it is determined that a level of detail of the different levels of detail has a ratio greater than a preset ratio threshold, that the refresh timing of the terrain image is reached.
Therefore, claim 2 in the context of claim 1 as a whole would be allowable if rewritten in independent form.
Claims 14 and 18 are allowable for the same reason as above.
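For illustration only, the refresh-timing test recited in claim 2 (the movement distance of the virtual object, taken as a ratio of each level of detail's size and compared against a preset threshold) can be sketched as follows. The function name and all numeric values are illustrative assumptions, not drawn from the claim or the cited art.

```python
def refresh_reached(movement_distance, lod_sizes, ratio_threshold):
    """Return True when the virtual object's movement, expressed as a
    ratio of any level of detail's size, exceeds the preset threshold."""
    return any(movement_distance / size > ratio_threshold
               for size in lod_sizes)

# Object moved 40 units; LOD sizes 64, 128, 256; threshold 0.5.
# 40/64 = 0.625 > 0.5, so the finest level triggers a refresh:
triggered = refresh_reached(40, [64, 128, 256], 0.5)
```

Finer levels, having smaller sizes, reach the threshold ratio first, so they refresh more often as the object moves, consistent with the claimed per-level timing.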
Regarding Claim 9, MEUR in view of Chalfin teaches the method according to claim 8, and further teaches wherein the determining, based on the texture mapping coordinates of each pixel point, the rendering information of each pixel point from the updated image set comprises:
obtaining a level of detail at which each pixel point is located and center coordinates of a center position of the level of detail (MEUR [0062] FIG. 3 represents a clipwindow 30 according to the invention. A clipwindow 30 is associated with each level of detail L, and comprises a subset of the patches defined for this level in the terrain database. A clipwindow 30 according to the invention comprises three distinct zones 31, 32, 33 centred on a point 34 named the clipcenter 34. Clipcenter is an expression that may be regarded as equivalent to slice centre. The clipwindow 30 therefore comprises: a central zone 31, a transition zone or margin 32, a preloading zone or margin 33; [0064] The clipcenter 34 can be defined by integer coordinates in two dimensions, which correspond to the indices of a patch in the matrix formed by the terrain database regular paving grid for the associated level of detail.);
determining a distance between each pixel point and the center position based on the texture mapping coordinates of each pixel point and the center coordinates (Chalfin [0053] For each pixel in the primitive (i.e., triangle) of a frame being rendered (loop 510), steps 512 and 514 are carried out. First, an extra-dimension coordinate is determined based on the highest level of detail texture available in the loaded ECM which offers coverage for the (s,t) location being rasterized (step 512). In one example, a distance from a clip center (e.g., a distance obtained by subtracting an original fragment texture coordinate from clipcenter) along with an invalid border for a clip map tile is used to determine the highest detail level texture available in texture memory which offers coverage of the location being rasterized (ClipLOD).); and
However, the prior art does not teach
determining, when a distance corresponding to an ith pixel point is less than a preset distance threshold, pixel values corresponding to texture mapping coordinates of the ith pixel point in the updated image set as rendering information of the ith pixel point, i = 1, 2, ..., N, N being a total quantity of pixel points, and N being a positive integer greater than 2.
Therefore, claim 9 in the context of claims 1 and 8 as a whole would be allowable if rewritten in independent form.
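For illustration only, the distance test recited in claim 9 (a pixel takes its value from the updated image set only when its distance from the clipcenter is below a preset threshold; compare Chalfin [0053]) can be sketched as follows. The Chebyshev distance metric, the dictionary-based image set, and all names and values are assumptions of this sketch.

```python
def pixel_rendering_info(texcoords, clipcenter, distance_threshold, level_image):
    """For each pixel, take its texel from the level area image when the
    pixel lies within the preset distance of the clipcenter; otherwise
    fall back (here None, standing in for a coarser level)."""
    info = []
    for s, t in texcoords:
        # Chebyshev distance, assuming square clip windows
        dist = max(abs(s - clipcenter[0]), abs(t - clipcenter[1]))
        info.append(level_image.get((s, t)) if dist < distance_threshold else None)
    return info

# Two pixels near the clipcenter (0.5, 0.5) and one far outside it:
texels = {(0.4, 0.5): "near_texel", (0.6, 0.4): "mid_texel",
          (0.9, 0.9): "far_texel"}
result = pixel_rendering_info([(0.4, 0.5), (0.6, 0.4), (0.9, 0.9)],
                              clipcenter=(0.5, 0.5),
                              distance_threshold=0.25,
                              level_image=texels)
```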
Regarding Claim 11, MEUR in view of Chalfin teaches the method according to claim 8, but does not teach all the limitations herein. However, in view of Laine and Boissé, the prior art teaches
wherein the method further comprises:
obtaining a model identifier of a to-be-rendered model (Laine [0003] to enable recovery of a 3D model of an object from a set of 2D images of the object);
obtaining a map image corresponding to another model when it is determined, based on the model identifier, that the to-be-rendered model is the another model above a surface;
obtaining normal information of each pixel point on the another model (Laine [0025] texture coordinates of the initial 3D model 132 are associated with each vertex defining the initial 3D model 132 and the association between the vertices and texture coordinates is unchanged even when locations of the vertices are modified to produce the constructed 3D model 134);
performing reverse processing on the normal information of each pixel point, to obtain reverse normal information (Laine [0026] The 3D model recovery system 100 also receives the set of 2D images of an object 110, that may include the reference image 112. As previously described, the goal of the 3D model recovery system 100 is to find a global texture and a constructed 3D model 134; [0039] Here, P(x, y) denotes the world point visible at (continuous) image coordinates (x, y) after projection from 3D to 2D, and M(P) denotes all the spatially-varying factors (texture maps, normal vectors, etc.) that live on the surfaces of the scene);
determining a height difference between each pixel point and the surface (Boissé [0030] Hence in an embodiment of the present invention a render node made of 32×32 quads will cover 32×32 texels of the height-map if the loaded data corresponds to the finest mip level (level 0), and will cover 64×64 texels if the loaded data corresponds to the next mip level (level 1), and so on. There may be any number of levels, although 8 levels may be typical. It will be appreciated that 32×32 etc., are nevertheless non-limiting example values);
However, the prior art does not teach
determining products of the reverse normal information and the height difference corresponding to each pixel point as adjustment coordinates;
adjusting the world coordinates of each pixel point by using the adjustment coordinates, to obtain adjusted world coordinates of each pixel point; and
determining the rendering information of each pixel point based on the adjusted world coordinates of each pixel point.
Therefore, claim 11 in the context of claims 1 and 8 as a whole would be allowable if rewritten in independent form.
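For illustration only, the coordinate adjustment recited in claim 11 (offsetting each pixel's world coordinates by the product of its reversed normal and its height difference above the surface) can be sketched as follows. The function name and example values are illustrative assumptions, not drawn from the claim or the cited art.

```python
def adjust_world_coords(world, normal, height_diff):
    """Offset a pixel's world coordinates by the product of its
    reversed normal and its height above the surface."""
    reverse_normal = tuple(-n for n in normal)
    return tuple(w + rn * height_diff
                 for w, rn in zip(world, reverse_normal))

# A point 2 units above the surface with an up-facing normal is pulled
# straight down onto the surface:
adjusted = adjust_world_coords((10.0, 20.0, 2.0), (0.0, 0.0, 1.0), 2.0)
```

The adjusted coordinates would then be used to look up rendering information, per the final recited step.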
The corresponding dependent claims would be allowable by virtue of their dependencies.
Conclusion
The following prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure: Boissé et al. (US 20170200301 A1), referred to herein as Boissé.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang whose telephone number is (571)270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Samantha (YUEHAN) WANG/
Primary Examiner
Art Unit 2617