DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Response to Arguments
Applicant’s arguments, see pp. 7-10, filed 16 December 2025, with respect to the rejection(s) of claims 1, 22, and 28 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Surti et al. (U.S. PG-PUB 2018/0300951). The Examiner notes that the HENRIKSSON and DOGGETT references are not being relied upon in this Office action. Please see the Office action below for further rationale regarding the rejection(s) of the newly-amended or newly-added claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8, 22-24, and 27-29 are rejected under 35 U.S.C. 103 as being unpatentable over Hemmer et al. (US PG-PUB 2020/0265611, 'HEMMER*5611') in view of Surti et al. (US PG-PUB 2018/0300951, 'SURTI').
Regarding claim 1, HEMMER*5611 discloses a method of processing a decoded 3D textured mesh (HEMMER*5611; FIG. 3; ¶ 117; “… an output of the … cost-driven progressive encoding is a progressive bitstream 380 [which] includes a header 382 generated from the single-rate geometry encoding operation 370 on the lowest LOD 362. The header 382 records the encoded mesh of the lowest LOD 362, the lowest resolution texture encoded via progressive JPEG, the number of bits used for XYZ [‘3D’] and UV coordinates, and the bounding box required to reverse the quantization operations. … progressive bitstream 380 also includes a heterogeneous series of geometry and/or texture refinements 383-385 … Recovery bitstream 388 is placed in front of older recovery bitstreams 383-385, i.e., closer to the header 382. [This] placement is used in a decoding operation that reverses the encoding operations.”), the method comprising:
receiving from a mesh decoder (HEMMER*5611; FIG. 3; ¶ 0065; “The recovery bitstream data 158 represents a recording of the LOD reduction operation—in this case, the decimation operation—by which the previous LOD may be recovered. The recovery bitstream data 158 is used by a decoder to generate the next higher LOD using the inverse of the LOD reduction operation, in this case the inverse of the mesh decimation operation performed by the mesh decimation manager 142.”)
a decoded base mesh m’(i) (HEMMER*5611; ¶ 0162; “At 1302, the LOD manager 1230 obtains data (e.g., initial LOD data 1232) representing an initial level of detail (LOD), the initial LOD including an initial mesh LOD (e.g., initial mesh LOD data 1234) and an initial texture image LOD (texture image LOD data 1239), the initial mesh LOD [‘base mesh’] including an initial triangular mesh (e.g., position data 1235, connectivity data 1236, edge data 1237) and an initial texture atlas (texture atlas data 1238).”) and
displacements d’(i) associated with vertices of the decoded base mesh m’(i) (HEMMER*5611; ¶ 0093; “… the measure of distortion ΔD is an absolute value of a sum over displacements of one of the XYZ [‘vertex coordinates’] and UV coordinates. … the measure of distortion ΔD is an absolute value of a sum over displacements of the XYZ and UV coordinates.”); and
receiving control parameters (HEMMER*5611; ¶ 0030; “A technical solution to the above-described technical problem involves defining a cost metric that predicts how much computing resources are necessary to decode and render a mesh at a given LOD. The cost metric may be optimized by a selection of a LOD reduction process of a plurality of processes at each LOD reduction step. For each process of the plurality of processes, the LOD is reduced according to that process and the resulting reduced LOD is evaluated according to the cost metric.” ¶ 0031; “A technical advantage of the … technical solution is that a progressive mesh encoder resulting from such an optimization of the agony at each LOD reduction phase will improve the rate-distortion tradeoff and, ultimately, the user's experience in decoding and rendering the LODs representing [3-D] objects in an application.”) for consuming the decoded 3D textured mesh (HEMMER*5611; ¶ 0032; “… a LOD as defined herein is a representation of a surface that includes a mesh (e.g., a triangular mesh including connectivity data), … texture atlas(es) (i.e., a section of a plane that include texture patches, each of which is mapped onto clusters of faces of the mesh), as well as potentially other attributes (e.g., vertex coordinates, normal, colors). LODs rather than meshes are operated on herein so that cost metrics may be dependent on all data used to describe the object represented by the LOD rather than the mesh only, for instance, the LOD may also be reduced by reducing the quality of the used texture image.”);
([SURTI discloses this limitation.]); and
processing the subdivided decoded base mesh m’’(i) (HEMMER*5611; ¶ 51-55; “A subdivision is a decomposition of a first set into a second set of subsets such that the intersection of each pair of subsets is empty and such that the union of all subsets equals the first set. Thus, each element of the first set belongs exactly to one subset of the second set. … the first set may be a set of all triangles of an input mesh; the subdivision is a set of patches (not necessarily triangular) embedded in a texture image.”) with the displacements d’(i) to generate the decoded 3D textured mesh (HEMMER*5611; ¶ 0063; “The UV quantization manager 146 … performs a UV quantization operation on the texture data 138, specifically a UV quantization operation on UV coordinate pairs represented by the texture data 138, to produce the candidate data 150″. Similar to the XYZ quantization operation, the UV quantization operation is a reduction in which the length of a bit string representing an approximation to the real numbers of the UV coordinates is decremented by a bit. The UV coordinate of the texture atlas is at the center of a cell of a [2-D] lattice of UV coordinates. The UV quantization manager then moves the coordinate to the center of a cell of a new lattice that has a larger spacing than the previous lattice.” ¶ 0093; “… the measure of distortion ΔD is an absolute value of a sum over displacements of one of the XYZ and UV coordinates. … the measure of distortion ΔD is an absolute value of a sum over displacements of the XYZ and UV coordinates.”);
([SURTI discloses this limitation.]).
[media_image1.png (252 × 585, greyscale)]
HEMMER*5611 does not explicitly disclose adaptively tessellating the decoded base mesh m’(i) by performing … subdivision(s) of the decoded base mesh m’(i) based on the received control parameters to produce a subdivided decoded base mesh m’’(i) or that the control parameters include available processing and rendering capabilities, which SURTI discloses (SURTI; FIG. 6; ¶ 0126-127; “… efficient power management [‘control parameters’: “power consumption constraints”, see ¶ [0004] of the instant specification] can have a direct impact on efficiency, longevity, as well as usage models for electronic devices. … graphics systems … generally consume power to perform tessellation at the same level for different parts of a display or scene. Power is wasted when tessellation is performed at the same level for all parts of a display. FIG. 6 illustrates … adaptive tessellation for foveated rendering … Finer tessellation is used for select object(s) and coarse tessellation for others. … User eye tracking … [is] used to determine a foveated region [‘control parameters’: “region of interest information”, see ¶ [0004] of the instant specification] and/or where to use finer/coarser tessellation. … Sensors [are] located proximate to … display device(s) to detect user eye … movements.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of processing a decoded 3D textured mesh of HEMMER*5611 to include adaptively tessellating the decoded base mesh m’(i) by performing subdivision(s) of the decoded base mesh m’(i) based on the received control parameters to produce a subdivided decoded base mesh m’’(i), and to include the disclosure of SURTI that the control parameters include available processing and rendering capabilities. The motivation for this modification is to save power by reducing GPU work when coarser tessellation is used; more efficient power consumption is provided by using different tessellation levels depending on foveation (region of focus) (SURTI; ¶ 0127).
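For illustration only, the tessellate-then-displace operation recited in claim 1 can be sketched as follows. This is a minimal example drafted by the Examiner and is not drawn from HEMMER*5611, SURTI, or the instant specification; all function names, the midpoint-subdivision choice, and the depth-as-control-parameter convention are hypothetical.

```python
# Illustrative sketch only: one way a decoder might adaptively tessellate a
# decoded base mesh m'(i) and then apply displacements d'(i). All names and
# the subdivision scheme are hypothetical, not taken from any cited reference.

def midpoint_subdivide(vertices, triangles):
    """One global midpoint subdivision: each triangle becomes four."""
    verts = list(vertices)
    midpoint_cache = {}
    new_tris = []

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint_cache:
            va, vb = verts[a], verts[b]
            verts.append(tuple((x + y) / 2.0 for x, y in zip(va, vb)))
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]

    for (a, b, c) in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_tris

def tessellate_and_displace(vertices, triangles, displacements, depth):
    """Subdivide `depth` times (depth standing in for a received control
    parameter), then offset each vertex by its decoded displacement."""
    for _ in range(depth):
        vertices, triangles = midpoint_subdivide(vertices, triangles)
    out = [tuple(v + d for v, d in zip(vert, displacements.get(i, (0.0, 0.0, 0.0))))
           for i, vert in enumerate(vertices)]
    return out, triangles

# One base triangle, one subdivision level: 3 vertices -> 6, 1 triangle -> 4.
base_v = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
base_t = [(0, 1, 2)]
verts, tris = tessellate_and_displace(base_v, base_t, {0: (0.0, 0.0, 0.1)}, depth=1)
```

The depth argument is where a received control parameter (e.g., a power budget or rendering capability) would enter in the claimed arrangement.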
Regarding claim 8, HEMMER*5611-SURTI disclose the method of claim 1 wherein performing the … subdivision(s) of the decoded base mesh m’(i) comprises locally subdividing regions of the decoded base mesh m’(i) (SURTI; FIG. 5; ¶ 0122; “… the geometry processing unit 516 … subdivides the graphics primitives [‘local … regions’] into … new graphics primitive(s) and calculates parameters used to rasterize the new graphics primitive(s).”).
Independent claim 22 is similar in scope to independent claim 1; therefore, the same motivation to combine the references is maintained.
Regarding claim 22, HEMMER*5611-SURTI disclose an electronic device comprising:
decoding circuitry (HEMMER*5611; ¶ 0216; “… the method further comprises transmitting the hybrid-encoded recovery bitstream to a client computer, the client computer [‘circuitry’] … configured to perform a decompress operation [‘decoding’] on the hybrid-encoded recovery bitstream to render the surface on a display connected to the client computer at a specified LOD.”) [and] processing circuitry (HEMMER*5611; FIG. 1, ‘processing units 124’; ¶ 0035-36) configured to:
decode a compressed bitstream to generate a decoded base mesh and … displacements, wherein at least a portion of the … displacements are associated with vertices of the decoded base mesh (HEMMER*5611; FIG. 3; ¶ 0065; “The recovery bitstream data 158 represents a recording of the LOD [level-of-detail] reduction operation—in this case, the decimation operation—by which the previous LOD may be recovered. The recovery bitstream data 158 is used by a decoder to generate the next higher LOD using the inverse of the LOD reduction operation, in this case the inverse of the mesh decimation operation performed by the mesh decimation manager 142.” ¶ 0093; “Each of the geometry-based LOD reduction operations, the mesh decimation operation 324, the XYZ quantization step 326, and the UV quantization operations 328, introduces an increase in the measure of distortion ΔD while reversing a LOD reduction operation during a decoding uses a bitrate ΔR. … the measure of distortion ΔD is a perceptual distortion metric (i.e., distortions from multiple points of view). … the measure of distortion ΔD is an absolute value of a sum over displacements of one of the XYZ [‘vertex coordinates’] and UV coordinates [‘texture coordinates’]. … the measure of distortion ΔD is an absolute value of a sum over displacements of the XYZ and UV coordinates.”); and
process the decoded base mesh and the … displacements to generate a decoded mesh (HEMMER*5611; ¶ 0093, ¶ 0216; “… transmitting the hybrid-encoded recovery bitstream to a client computer [which] … performs a decompress operation [‘process the decoded base mesh’] on the hybrid-encoded recovery bitstream to render the surface on a display [‘generate a decoded mesh’] connected to the client computer at a specified LOD.”), wherein processing the decoded base mesh comprises
adaptively tessellating the decoded base mesh based on … control parameter(s) for consuming the decoded mesh (HEMMER*5611; ¶ 0112; “It is noted that the increase in distortion introduced by a coarse quantization is smaller for a coarse mesh than a dense mesh. … an adaptive quantization [‘adaptively tessellating’] operation includes decreasing the precision of vertex or texture (XYZ or UV) coordinates as the LOD [‘control parameter(s)’] reductions progress. … a quantization operation decrements by one the number of quantization bits for a coordinate type (e.g., XYZ or UV). Such a quantization operation is performed when its agony is less than that produced by other types of operations (e.g., mesh decimation). Reversing this operation in decoding involves relocating each vertex to the center of a smaller cell in a denser lattice. Predicting the new locations [is] accomplished by generating a distance to a centroid of neighboring vertices of a cell of a parent lattice. The adaptive quantization operation helps improve the compression rate by using single-rate compression with lower quantization bits, shifting the R-D curve to the left.”), adaptively tessellating the decoded base mesh comprises
performing … subdivision(s) of the decoded base mesh based on the … control parameter(s) (HEMMER*5611; ¶ 0065; “The recovery bitstream data 158 represents a recording of the LOD reduction operation—in this case, the decimation operation—by which the previous LOD may be recovered. The recovery bitstream data 158 is used by a decoder to generate the next higher LOD using the inverse of the LOD reduction operation, in this case the inverse of the mesh decimation operation performed by the mesh decimation manager 142.”), and
the control parameters include power consumption constraints of the electronic device (SURTI; FIG. 6; ¶ 0126-127; “… efficient power management [‘control parameters’: “power consumption constraints”, see ¶ [0004] of the instant specification] can have a direct impact on efficiency, longevity, as well as usage models for electronic devices. … graphics systems … generally consume power to perform tessellation at the same level for different parts of a display or scene. Power is wasted when tessellation is performed at the same level for all parts of a display. … [This] results in power saving since the GPU work is reduced when coarser tessellation is used. … More efficient power consumption can be provided by using different tessellation levels depending on foveation.”).
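For illustration only, the cell-center relocation described in HEMMER*5611's adaptive quantization (¶ 0063, ¶ 0112) can be sketched as uniform scalar quantization whose bit depth is the adaptive knob. This sketch is the Examiner's own simplification; the helper names and the fixed [0, 1) range are hypothetical and do not reproduce the reference's actual lattice or centroid-prediction machinery.

```python
# Illustrative sketch only (hypothetical names): uniform quantization of a
# coordinate to a lattice cell, with bit depth as the adaptive control --
# fewer bits means a coarser lattice and larger reconstruction error.

def quantize(value, bits, lo=0.0, hi=1.0):
    """Map value in [lo, hi) to an integer cell index on a 2**bits lattice."""
    cells = 1 << bits
    idx = int((value - lo) / (hi - lo) * cells)
    return min(idx, cells - 1)

def dequantize(idx, bits, lo=0.0, hi=1.0):
    """Reverse the operation by relocating the coordinate to the center of
    its cell (cf. the cell-center placement in HEMMER*5611 ¶ 0063)."""
    cells = 1 << bits
    return lo + (idx + 0.5) * (hi - lo) / cells

# Dropping bits widens the cells, so the round-trip error grows.
u = 0.3
fine = dequantize(quantize(u, 10), 10)    # 1024 cells
coarse = dequantize(quantize(u, 4), 4)    # 16 cells
```

This illustrates why, as HEMMER*5611 notes in ¶ 0112, the distortion introduced by coarse quantization depends on how dense the mesh and lattice are.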
Regarding claim 23, HEMMER*5611-SURTI disclose the electronic device of claim 22 wherein the decoding/processing circuitry:
decodes the compressed bitstream to generate … attribute map(s) associated with the base mesh (HEMMER*5611; FIG. 13; ¶ 0164; “At 1306, … single-rate encoding manager 1270 performs single-rate encoding operations [‘compressed bitstream’] on the respective mesh LODs [‘base mesh’] … to produce a sequence of single-rate encoded mesh LODs, [that] when combined with an encoding of corresponding texture image LODs [‘attribute map(s)’], form [a] compression of the data (e.g., encoded LOD sequence 1274) that, upon decoding, enables a user to render the surface at any LOD.”), and …
processes the decoded base mesh, the … displacements (HEMMER*5611; ¶ 0093; “… the measure of distortion ΔD is an absolute value of a sum over displacements of one of the XYZ and UV coordinates. … the measure of distortion ΔD is an absolute value of a sum over displacements of the XYZ and UV coordinates.”), and the … attribute map(s) to generate the decoded mesh and … processed attribute map(s) (HEMMER*5611; FIGS. 1, 12; ¶ 0151; “The initial LOD mesh data 1234 [‘decoded base mesh’] represents a triangular mesh and a texture atlas [‘attribute map(s)’]. The mesh includes vertices, edges connecting the vertices, and triangular faces defined by the edges. … the initial LOD mesh data 1234 includes position data 1235, connectivity data 1236, edge data 1237, and texture atlas data 1238.”; ¶ 0216; “… the method … comprises transmitting the hybrid-encoded recovery bitstream to a client computer [which] performs a decompress operation on the hybrid-encoded recovery bitstream to render the surface on a display connected to the client computer at a specified LOD.”).
Regarding claim 24, HEMMER*5611-SURTI disclose the electronic device of claim 22 wherein performing the … subdivision(s) of the decoded base mesh comprises locally subdividing regions of the decoded base mesh (HEMMER*5611; ¶ 0052-55; “A subdivision is a decomposition of a first set into a second set of subsets such that the intersection of each pair of subsets is empty and such that the union of all subsets equals the first set. … each element of the first set belongs exactly to one subset of the second set. … The first set [is] a set of all triangles of an input-mesh [‘decoded base mesh’]; the subdivision is a set of patches (not necessarily triangular) embedded in a texture image.”).
Regarding claim 27, HEMMER*5611-SURTI disclose the electronic device of claim 22 wherein the … control parameter(s) further include available processing and rendering capabilities of the electronic device (SURTI; FIG. 6; ¶ 0126-127; [See the treatment of similar language recited in claim 1.]).
Independent claim 28 is similar in scope to independent claim 1; therefore, the same motivation to combine the references is maintained.
Regarding claim 28, HEMMER*5611-SURTI disclose a method of processing a decoded 3D textured mesh, the method comprising:
receiving from a mesh decoder (HEMMER*5611; FIG. 3; ¶ 0065) a decoded base mesh m'(i) (HEMMER*5611; ¶ 0162) and displacements d'(i) associated with vertices of the decoded base mesh m'(i) (HEMMER*5611; ¶ 0093); and
receiving control parameters for consuming the decoded 3D textured mesh (HEMMER*5611; ¶ 0030-0032);
adaptively tessellating the decoded base mesh m'(i) by performing … subdivision(s) of the decoded base mesh m'(i) based on the received control parameters (SURTI; FIG. 6; ¶ 0126-127); and
processing the subdivided decoded base mesh m"(i) (HEMMER*5611; ¶ 0051-55) with the displacements d'(i) to generate the decoded 3D textured mesh (HEMMER*5611; ¶ 0063, ¶ 0093);
wherein the control parameters include power consumption constraints of an electronic device performing the method (SURTI; FIG. 6; ¶ 0126-127; “… efficient power management [‘control parameters’: “power consumption constraints”, see ¶ [0004] of the instant specification] [has] a direct impact on efficiency, … as well as usage models for electronic devices. … graphics systems … … consume power to perform tessellation at the same level for different parts of a display or scene. Power is wasted when tessellation is performed at the same level for all parts of a display. … [This] results in power saving since the GPU work is reduced when coarser tessellation is used. … More efficient power consumption [is] provided by using different tessellation levels depending on foveation.”).
Regarding claim 29, HEMMER*5611-SURTI disclose the method of claim 28 wherein the control parameters further include at least one of … (SURTI; ¶ 0127; “FIG. 6 illustrates … adaptive tessellation for foveated rendering … Finer tessellation is used for select object(s) and coarse tessellation for others. … User eye tracking … [is] used to determine a foveated region [‘region of interest information’] and/or where to use finer/coarser tessellation.”).
Claims 2-3, 6, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over HEMMER*5611 in view of SURTI as applied to claim 1 above, and further in view of Migdal et al. (US PGPUB 20010013866, 'MIGDAL').
Regarding claim 2, HEMMER*5611-SURTI disclose the method of claim 1; however, HEMMER*5611-SURTI do not explicitly disclose that performing the … subdivision(s) of the decoded base mesh m'(i) comprises globally subdividing the decoded base mesh m’(i), which MIGDAL discloses (MIGDAL; ¶ 0052; “[Using] … a symmetrical subdivision technique, … the triangles can be subdivided sequentially [‘globally subdividing the decoded base mesh m’(i)’], rather than all at once and that can provide computational and processing efficiencies such as a saving of memory space and processor resources. A subdividing system … may apply recursive techniques to analyze all or selected edges of a mesh and perform subdivisions following this a symmetrical subdivision technique. With such a technique, each original triangle of the original base mesh can be subdivided, extruded, rendered and recursively subdivided, if needed, into thousands of smaller triangles. When the first triangle is refined to the desired level of smoothness, the memory space used to store the variables needed for these smaller triangles can be released. The next triangle of the base mesh can then be subdivided, extruded, rendered and recursively subdivided, if needed, using the same memory space allocated for the smoothing of the first triangle since that memory space was released.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of HEMMER*5611-SURTI to include the disclosure of MIGDAL that performing the … subdivision(s) of the decoded base mesh m'(i) comprises globally subdividing the decoded base mesh m’(i). The motivation for this modification is to implement a symmetrical subdivision technique in which triangle primitives can be subdivided sequentially, rather than all at once, which can provide computational and processing efficiencies such as a saving of memory space and processor resources (MIGDAL; ¶ [0052]).
Regarding claim 3, HEMMER*5611-SURTI-MIGDAL disclose the method of claim 2, wherein globally subdividing the decoded base mesh m'(i) comprises iteratively performing … subdivisions (MIGDAL; ¶ 0052; “[Using] … a symmetrical subdivision technique, … the triangles can be subdivided sequentially, rather than all at once … A subdividing system … may apply recursive techniques to analyze all or selected edges of a mesh and perform subdivisions following this a symmetrical subdivision technique. With such a technique, each original triangle of the original base mesh can be subdivided, extruded, rendered and recursively subdivided, if needed, into thousands of smaller triangles. When the first triangle is refined to the desired level of smoothness, the memory space used to store the variables needed for these smaller triangles can be released. The next triangle of the base mesh can then be subdivided, extruded, rendered and recursively subdivided, if needed, using the same memory space allocated for the smoothing of the first triangle since that memory space was released.”).
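For illustration only, the sequential, recursive refinement described in MIGDAL ¶ 0052 can be sketched as processing one base triangle at a time so that working storage for a finished triangle is released before the next begins. The sketch below is the Examiner's own simplification with hypothetical names; it counts emitted triangles rather than rendering them.

```python
# Illustrative sketch only (hypothetical names): sequential per-triangle
# recursive subdivision in the spirit of MIGDAL par. 0052 -- each base
# triangle is refined and consumed on its own, so per-triangle working
# memory can be released before the next base triangle is processed.

def refine(tri, depth, emit):
    """Recursively split one triangle into four until `depth` is exhausted."""
    if depth == 0:
        emit(tri)
        return
    a, b, c = tri
    ab = tuple((x + y) / 2 for x, y in zip(a, b))
    bc = tuple((x + y) / 2 for x, y in zip(b, c))
    ca = tuple((x + y) / 2 for x, y in zip(c, a))
    for sub in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        refine(sub, depth - 1, emit)

def render_sequentially(base_triangles, depth):
    """Process base triangles one at a time; only a running count survives
    between triangles, mimicking the memory release MIGDAL describes."""
    emitted = 0
    for tri in base_triangles:
        count = []
        refine(tri, depth, lambda t: count.append(1))
        emitted += len(count)  # working list for this triangle is now dropped
    return emitted

base = [((0, 0, 0), (1, 0, 0), (0, 1, 0)), ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
total = render_sequentially(base, depth=3)  # each base triangle -> 4**3 triangles
```

The peak working set here is one base triangle's refinement, not the whole mesh's, which is the efficiency MIGDAL attributes to sequential subdivision.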
Regarding claim 6, HEMMER*5611-SURTI-MIGDAL disclose the method of claim 2, wherein the control parameters further include power consumption constraints (SURTI; FIG. 6; ¶ 0126-127; “… efficient power management [‘control parameters’: “power consumption constraints”, see ¶ [0004] of the instant specification] can have a direct impact on efficiency, longevity, as well as usage models for electronic devices. … graphics systems … generally consume power to perform tessellation at the same level for different parts of a display or scene. Power is wasted when tessellation is performed at the same level for all parts of a display. … [This] results in power saving since the GPU work is reduced when coarser tessellation is used. … More efficient power consumption can be provided by using different tessellation levels depending on foveation.”).
Regarding claim 9, HEMMER*5611-SURTI disclose the method of claim 8; however, HEMMER*5611-SURTI do not explicitly disclose that locally subdividing regions of the decoded base mesh m’(i) further comprises:
analyzing local properties of the decoded base mesh m’(i);
based on the analyzed local properties, determining for each edge of each polygon of the decoded base mesh m’(i) whether that edge should be subdivided, which MIGDAL discloses (MIGDAL; ¶ 49; “… the decision whether to subdivide an edge is based on the information located in the two vertices of the edge being checked for subdivision a data point generally provides the XYZ 3D spatial location of the point's location within the mesh model. However, a data point may also have associated with it other information such as normal vector data, (e.g. vertex normals reflecting the normal of the object surface at the data point [‘analyzing local properties of the decoded base mesh’], and corner normals which are normal vectors for the triangles [‘polygons’] to which the data point is connected from the data points "corner" of the triangle). … This coordinate … can be examined each edge to determine whether an edge subdivision is to be made.”);
for each polygon of the decoded base mesh m’(i), subdividing the polygon (MIGDAL; ¶ 0052; “A … consequence of a symmetrical subdivision technique is that the triangles [‘polygons’] can be subdivided sequentially, rather than all at once … A subdividing system … may apply recursive techniques to analyze all or selected edges of a mesh and perform subdivisions following this symmetrical subdivision technique. … each original triangle of the original base mesh can be subdivided, extruded, rendered and recursively subdivided, if needed, into thousands of smaller triangles.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 8 of HEMMER*5611-SURTI to include the various teachings of MIGDAL. The motivation for this modification is to implement a symmetrical subdivision technique in which triangle primitives can be subdivided sequentially, rather than all at once and that can provide computational and processing efficiencies such as a saving of memory space and processor resources (MIGDAL; ¶ [0052]).
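For illustration only, the per-edge subdivision decision of MIGDAL ¶ 0049 (based on information stored at the edge's two vertices, such as vertex normals) can be sketched as a threshold on the angle between endpoint normals. The function name and the 15-degree threshold are the Examiner's hypothetical choices, not values from the reference.

```python
# Illustrative sketch only (hypothetical names and threshold): deciding per
# edge whether to subdivide, using data located at the edge's two vertices
# -- here the angle between unit vertex normals (one reading of MIGDAL
# par. 0049's normal-vector data).

import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def should_subdivide(normal_a, normal_b, max_angle_deg=15.0):
    """Subdivide when the surface bends more than the threshold angle
    between the edge's endpoints (normals assumed unit length)."""
    cos_angle = max(-1.0, min(1.0, dot(normal_a, normal_b)))
    return math.degrees(math.acos(cos_angle)) > max_angle_deg

flat_edge = should_subdivide((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))         # no bend
curved_edge = should_subdivide((0.0, 0.0, 1.0), (0.0, 0.7071, 0.7071))  # ~45 degrees
```

Flat regions thus escape refinement while curved regions are subdivided, which is the local, property-driven behavior recited in claim 9.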
Regarding claim 16, HEMMER*5611-SURTI disclose the method of claim 8; however, HEMMER*5611-SURTI do not explicitly disclose that locally subdividing the regions of the decoded base mesh m’(i) comprises iteratively subdividing the regions of the decoded base mesh m’(i), which MIGDAL discloses (MIGDAL; ¶ 52; “[Using] … a symmetrical subdivision technique, … the triangles [are] subdivided sequentially, rather than all at once … A subdividing system … may apply recursive techniques to analyze … edges of a mesh and perform subdivisions following … a symmetrical subdivision technique [wherein] each original triangle of the original base mesh [is] subdivided, extruded, rendered and recursively subdivided, if needed, into [many] smaller triangles. When the first triangle is refined to the desired level of smoothness, the memory space … can be released. The next triangle of the base mesh can then be subdivided, extruded, rendered and recursively subdivided, if needed, using the same memory space allocated for the smoothing of the first triangle since that memory space was released.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 8 of HEMMER*5611-SURTI to include iteratively subdividing the regions of the decoded base mesh m’(i) of MIGDAL. The motivation for this modification is to implement a symmetrical subdivision technique in which triangle primitives can be subdivided sequentially, rather than all at once and that can provide computational and processing efficiencies such as a saving of memory space and processor resources (MIGDAL; ¶ [0052]).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over HEMMER*5611 in view of SURTI and MIGDAL as applied to claim 2 above, and further in view of Bois et al. (U.S. Patent 11,100,721; 'BOIS').
Regarding claim 4, HEMMER*5611-SURTI-MIGDAL disclose the method of claim 2; however, HEMMER*5611-SURTI-MIGDAL do not explicitly disclose that the control parameters further include at least one of current or future camera position and viewing frustum, which BOIS discloses (BOIS; Col. 4, Lines 36-44; “… the term “view near plane” refers to a plane defined by parameters of a virtual camera which indicates objects that are too close to a virtual camera in 3D space of a 3D model be displayed. Objects between the virtual camera's position and the view near plane in 3D space of the 3D model are clipped out … Objects beyond the view near plane (e.g., but closer than a view far plane) in 3D space of 3D model are potentially displayed (e.g., provided that they are in the view frustum.)”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 2 of HEMMER*5611-SURTI-MIGDAL to include the disclosure that the control parameters include current or future camera position and viewing frustum of BOIS. The motivation for this modification is to efficiently use computational resources such that only visible meshes (not too close/far or peripheral) are rendered and displayed.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over HEMMER*5611 in view of SURTI and MIGDAL as applied to claim 9 above, and further in view of Gueziec et al. (U.S. Patent 6,307,551; 'GUEZIEC').
Regarding claim 10, HEMMER*5611-SURTI-MIGDAL disclose the method of claim 9; however, HEMMER*5611-SURTI-MIGDAL do not explicitly disclose that analyzing the local properties of the decoded base mesh m’(i) includes analyzing the displacements d’(i) associated with the vertices of the decoded base mesh m’(i), which GUEZIEC discloses (GUEZIEC; FIG. 13; Col. 12, Lines 45-53; “In Step 700 the vertex displacements 22 are computed. In a first stage of Step 700, the lower LOD surface is cut through the edges that were marked (edge marks 21) in Step 600. … After the lower LOD surface has been cut, as visualized in FIG. 17C, the vertices that are introduced are assigned new vertex IDs.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 9 of HEMMER*5611-SURTI-MIGDAL to include the analyzing the displacements associated with the vertices of the decoded base mesh of GUEZIEC. The motivation for this modification is to enable a person having ordinary skill in the art to efficiently represent and encode changes of surface levels of detail of a mesh. There is thus a long felt need to overcome problems of the prior art and to provide an automatic generation and representation of surface level of detail changes in a mesh (GUEZIEC; Col. 3, Lines 15-20).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over HEMMER*5611 in view of SURTI and MIGDAL as applied to claim 9 above, and further in view of Stafford et al. (U.S. PG-PUB 2017/0287112, 'STAFFORD').
[media_image2.png (337 × 913, greyscale)]
Regarding claim 11, HEMMER*5611-SURTI-MIGDAL disclose the method of claim 9; however, HEMMER*5611-SURTI-MIGDAL do not explicitly disclose that analyzing the local properties of the decoded base mesh m’(i) includes analyzing explicitly encoded attributes associated with the decoded base mesh m’(i) describing saliency, importance, priority, OR region of interest information, which STAFFORD discloses (STAFFORD; FIGS. 4A-4B; ¶ 0079-80; “A foveal region may be determined at 506 by an application to be of interest to a viewer because (a) it is a region the viewer is likely look at, (b) it is a region the viewer is actually looking at, or (c) it is a region it is desired to attract the user to look at. With respect to (a), the foveal region may be determined to be likely to be looked at in a context sensitive manner. … the application may determine that certain portions of the screen space or certain objects in a corresponding [3-D] virtual space are “of interest” and such objects may be consistently drawn using a greater number of vertices than other objects in the virtual space. Foveal regions may be contextually defined to be of interest in a static or dynamic fashion. As a non-limiting example of static definition, a foveal region may be a fixed part of the screen space, e.g., a region near the center of the screen, if it is determined that this region is the part of the screen space that a viewer is most likely to look at. … if the application is a driving simulator that displays an image of a vehicle dashboard and a windshield, the viewer is likely to be looking at these portions of the image. … the foveal region may be statically defined in the sense that the region of interest is a fixed portion of the screen space. As a non-limiting example of dynamic definition, in a video game a user's avatar, fellow gamer's avatars, enemy artificial intelligence (AI) characters, certain objects of interest (e.g., the ball in a sports game) may be of interest to the user. Such objects of interest may move relative to the screen space and therefore the foveal region may be defined to move with the object of interest.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 9 of HEMMER*5611-SURTI-MIGDAL to include the disclosure that analyzing local properties of the mesh includes analyzing explicitly encoded attributes associated with the mesh describing saliency, importance, priority, OR region of interest information of STAFFORD. The motivation for this modification is to implement tessellation based on human vision that can reduce computational load and/or rendering time for an image by reducing the level of detail (LOD, or number of triangles per unit area) in areas of lesser interest, as opposed to more important regions representing people, moving objects, central vision, etc. (STAFFORD; ¶ 0059).
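For illustration only, the foveated tessellation scheme described by STAFFORD — assigning more subdivision to triangles near a region of interest and fewer to peripheral triangles — can be sketched as follows. This hypothetical Python sketch is the Examiner's own illustration of the concept and is not taken from any cited reference; the function name, parameters, and falloff model are assumptions.

```python
import math

def assign_lod(triangle_centroids, foveal_center, max_lod=4, falloff=1.0):
    """Assign a tessellation level (LOD) to each triangle.

    Triangles near the foveal/region-of-interest center receive the
    maximum subdivision level; the level decreases linearly with
    distance, reducing triangle count in areas of lesser interest.
    """
    fx, fy = foveal_center
    lods = []
    for cx, cy in triangle_centroids:
        dist = math.hypot(cx - fx, cy - fy)
        # LOD decays with distance from the foveal region, floored at 0
        lod = max(0, max_lod - int(dist / falloff))
        lods.append(lod)
    return lods
```

Under this sketch, a triangle centered on the foveal point receives `max_lod` subdivisions while one several units away receives few or none, consistent with STAFFORD's static or dynamic definition of a foveal region.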
Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over HEMMER*5611 in view of SURTI and MIGDAL as applied to claim 9 above, and further in view of Alliez et al. (U.S. PG-PUB 2004/0090438, 'ALLIEZ').
Regarding claim 12, HEMMER*5611-SURTI-MIGDAL disclose the method of claim 9; however, HEMMER*5611-SURTI-MIGDAL do not explicitly disclose that analyzing the local properties of the decoded base mesh m’(i) includes analyzing implicitly derived saliency, importance, priority, OR region of interest [ROI] information, which ALLIEZ discloses (ALLIEZ; ¶ 0154-155; “FIGS. 18a-18c show the results obtained using the refinement process … on a … mesh representing a face [The Examiner asserts that optimal detail, refinement, and attention should be paid to the human face, as this is the countenance that portrays human emotion and conveys much of the visual component of interpersonal human communication. The Examiner asserts that a human face has both ‘saliency’ and ‘importance’ and should therefore be given ‘priority’.]. FIG. 18a shows the original mesh 181 before … FIG. 18b shows image 182 obtained after four iterations of the refinement process in the field of vision of an observer and eight iterations on the silhouette of the face 181. Note in FIG. 18c that the polygonal aspect of the silhouette has been eliminated, and the geometry of the mesh has only been refined on visually relevant regions of the image 183 [‘ROI information’].”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 9 of HEMMER*5611-SURTI-MIGDAL to include the disclosure that analyzing local properties of the mesh includes analyzing implicitly derived saliency, importance, priority, OR region of interest information of ALLIEZ. The motivation for this modification is to adapt an image to the viewpoint of a virtual observer. These techniques use silhouettes (i.e., all edges of the mesh shared by two faces, one oriented toward a virtual camera and the other facing away), the pyramid of vision, and the orientation of the faces of objects toward a camera or the eye of an observer. The main result is to avoid "over coding" of areas that are only slightly visible, are not visible at all, or are less relevant, to the detriment of visually important areas [‘regions of interest’] (ALLIEZ; ¶ [0013]).
Regarding claim 13, HEMMER*5611-SURTI-MIGDAL-ALLIEZ disclose the method of claim 12, wherein the implicitly derived saliency, importance, priority, OR region of interest information includes surface curvature (ALLIEZ; ¶ 0042; “… selected regions of interest are regions considered to be visually relevant for a person observing the mesh. It may seem pointless to refine areas of the mesh that are not visible or only slightly visible to the user, from the current view point. Selected regions of interest may also be silhouettes of observed objects, which play a preferred role in cognitive processes. It is particularly important to refine curved surfaces of the silhouette of the object which appear in polygonal form when they are described too briefly by a mesh network.”).
Regarding claim 14, HEMMER*5611-SURTI-MIGDAL-ALLIEZ disclose the method of claim 12, wherein the implicitly derived saliency, importance, priority, OR region of interest information includes gradient of vertex attributes or attribute maps (ALLIEZ; ¶ 0036; “Regions of interests [are] explicitly defined by an operator … and/or deduced from a phase that detects regions considered to be visually relevant, [e.g.,] such as regions with high illumination gradient, or silhouettes.”) of the decoded base mesh m’(i).
Regarding claim 15, HEMMER*5611-SURTI-MIGDAL-ALLIEZ disclose the method of claim 12, wherein the implicitly derived saliency, importance, priority, OR region of interest information includes edge length (ALLIEZ; ¶ 0142; “It may also be useful to refine regions adjacent to silhouette until an edge length less than the resolution of the output peripheral is obtained (in the event, one pixel).”).
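For illustration only, the implicitly derived saliency criteria recited in claims 13-15 — surface curvature and edge length — can be sketched as a simple edge-selection rule in the manner ALLIEZ describes: refine edges whose local curvature (approximated here by the dihedral angle between adjacent faces) is high, and continue until edge length falls below the output resolution (one pixel). This hypothetical Python sketch is the Examiner's own illustration; the function name, thresholds, and curvature proxy are assumptions, not the cited reference's implementation.

```python
def edges_to_refine(edges, lengths, dihedral_angles, pixel_size, angle_thresh):
    """Select mesh edges for refinement.

    An edge is implicitly salient when its dihedral angle (a curvature
    proxy) exceeds angle_thresh; refinement stops once the edge length
    drops below the display resolution (pixel_size), per ALLIEZ ¶ 0142.
    """
    selected = []
    for edge, length, angle in zip(edges, lengths, dihedral_angles):
        if angle > angle_thresh and length > pixel_size:
            selected.append(edge)
    return selected
```

In this sketch, a strongly curved edge longer than one pixel is refined, while flat regions and sub-pixel edges are left alone, avoiding over-coding of visually irrelevant areas.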
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over HEMMER*5611 in view of SURTI as applied to claim 22 above, and further in view of Levanon et al. (U.S. PG-PUB 2002/0120753, 'LEVANON').
Regarding claim 25, HEMMER*5611-SURTI disclose the electronic device of claim 22; however, HEMMER*5611-SURTI do not explicitly disclose that the … control parameter(s) comprise a viewing frustum associated with a camera of the electronic device, which LEVANON discloses (LEVANON; FIG. 10; ¶ 0060; “… Algorithm (180) … [determines] the detail level L value for a given viewing frustum … The optimal detail level L is … the limit at which the resolution of image parcel data functionally exceeds the resolution of the client display. … to determine the optimal detail level L, the viewpoint or camera position of the viewing frustum is determined (182) relative to the displayed image. A nearest polygon P of depth D is then determined (184) from the effective altitude and attitude of the viewpoint.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the electronic device of claim 22 of HEMMER*5611-SURTI to include the disclosure that the … control parameter(s) comprise a viewing frustum associated with a camera of the electronic device of LEVANON. The motivation for this modification is to provide a system and method(s) for efficiently selecting and distributing image parcels through a narrowband or otherwise limited bandwidth communications channel [‘bitstream’] to support presentation of high-resolution images subject to dynamic viewing frustums (LEVANON; ¶ [0004]).
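For illustration only, the viewing-frustum-dependent detail selection LEVANON describes — choosing the detail level at which image-parcel resolution just meets the client display resolution for a given viewing depth — can be sketched as follows. This hypothetical Python sketch is the Examiner's own illustration; the pinhole-style projection model, function name, and parameters are assumptions, not LEVANON's algorithm (180).

```python
def optimal_detail_level(depth, display_resolution, base_texel_size):
    """Pick the finest detail level needed for a viewpoint at `depth`.

    Each successive level halves the texel size; levels are added until
    the projected texel size no longer exceeds the client display
    resolution, so distant views fetch coarser image parcels.
    """
    level = 0
    texel = base_texel_size
    # projected texel size ~ texel / depth under a simple pinhole model
    while texel / max(depth, 1e-9) > display_resolution:
        level += 1
        texel /= 2.0
    return level
```

In this sketch, a nearer viewpoint (smaller `depth`) yields a higher detail level, while a distant viewpoint resolves to level 0, which mirrors the bandwidth-limited parcel selection LEVANON motivates.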
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M COFINO whose telephone number is (303) 297-4268. The examiner can normally be reached Monday-Friday, 10 AM-4 PM MT.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN M COFINO/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614