DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation - 35 USC § 101
The limitation “by the device using a UV stabilization process on the first mesh sequence, wherein the UV stabilization process employs a first texture sequence” amounts to a practical application of stabilizing an unstabilized group of meshes; accordingly, no rejection under 35 U.S.C. § 101 is made.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 8-10, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Tykkala et al., “Photorealistic 3D Mapping of Indoors by RGB-D Scanning Process,” IEEE, 2013 (hereinafter “Tykkala”), in view of Dai et al., “A 3D Morphable Model of Craniofacial Shape and Texture Variation,” IEEE, 2017 (hereinafter “Dai”).
Regarding claim 1, Tykkala teaches A method (In this work, a RGB-D input stream is utilized for GPU-boosted 3D reconstruction of textured indoor environments. The goal is to develop a process which produces standard 3D models from indoors to explore them virtually. See abstract), comprising:
receiving, by a device comprising at least one processor, a first mesh sequence comprising a group of unstabilized meshes comprising volumetric data associated with images (1) Record RGB-D video (manual), 2) Generate 3D trajectory by RGB-D tracking (automatic), 3) Select keyframes (automatic), 4) Depth map fusion (automatic), 5) Optional: bundle adjustment (semi-automatic), 6) Watertight polygonization (automatic). See section III. The Process Model, right col., last paragraph) (Whether or not RGB-D tracking uses keyframes, only the trajectory is stored. The model keyframes are selected by looping the trajectory and storing a keyframe whenever user-specified angular or translational distance to the existing model is exceeded. The neighboring RGB-D measurements to the keyframes are efficiently localized (timestamp or frame index) and depth map fusion is executed. In depth map fusion, keyframe depth maps are filtered using all RGB-D measurements available. See section III. The Process Model, left col., first paragraph) (Polygon models are compact in their memory consumption and are better supported by standard 3D modeling programs than point clouds. A polygonization phase generates a polygon mesh from a point cloud. In our context, the method should take into account noise and missing data. A common approach is to fit the points to a surface using the zero set of an implicit function, such as a sum of radial bases or piecewise polynomial functions. We select the Poisson method, because it produces a watertight surface based on a photometrically refined, oriented point cloud [2]. See section VII. Watertight Polygonization); and
generating, by the device using a UV stabilization process on the first mesh sequence, a UV stabilized mesh sequence, wherein the UV stabilization process employs a first texture sequence (7) Texture map generation (automatic), 8) UV coordinate generation (automatic), 9) Store Wavefront mesh (automatic). See section III. The Process Model, right col., last paragraph) (Because frequent switches in UV-mapping directions can cause visually disturbing seams, mapping can be improved by enforcing the locally dominant keyframe. One method to do so is to recursively enumerate connected polygon neighbors in n passes, and then prefer the mapping direction which has the largest number of votes. Finally the selected UV-coordinates are converted into global texture coordinates and stored. The keyframe images are not undistorted to better maintain maximum texture quality. The resulting meshes can be observed in Figure 9. See section VIII. Mesh Texturing, right col., paragraph below equation 8) (See figure 9), but is silent as to the images being images of a human.
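As a purely illustrative sketch of the neighbor-voting scheme Tykkala describes in Section VIII (the code below is not from the reference; the per-face keyframe assignment, the adjacency-list representation, the tie-breaking rule, and the pass count are assumptions), enforcing the locally dominant keyframe might look like:

```python
from collections import Counter

def vote_uv_direction(face_keyframe, adjacency, n_passes=3):
    """Reassign each polygon's UV-mapping source keyframe to the locally
    dominant one by voting over connected neighbors, reducing the seam
    switches Tykkala describes (Sec. VIII). All names are hypothetical."""
    assign = list(face_keyframe)
    for _ in range(n_passes):
        updated = assign[:]
        for face, neighbors in enumerate(adjacency):
            votes = Counter(assign[n] for n in neighbors)
            votes[assign[face]] += 1  # the face also votes for itself
            # prefer the keyframe with the most votes; ties keep the
            # current assignment so region boundaries do not oscillate
            updated[face] = max(votes, key=lambda k: (votes[k], k == assign[face]))
        assign = updated
    return assign

# A strip of four faces where face 1 is an isolated outlier mapped from
# keyframe 5; after voting, all faces map from keyframe 2.
print(vote_uv_direction([2, 5, 2, 2], [[1], [0, 2], [1, 3], [2]]))
```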
Dai teaches creating a texture map from raw texture images captured from five views of faces (We present a fully automatic pipeline to train 3D Morphable Models (3DMMs), with contributions in pose normalisation, dense correspondence using both shape and texture information, and high quality, high resolution texture mapping. We propose a dense correspondence system, combining a hierarchical parts-based template morphing framework in the shape channel and a refining optical flow in the texture channel. The texture map is generated using raw texture images from five views. See abstract).
Tykkala and Dai both teach texture mapping of objects, and Dai teaches that the texture mapping can be performed on a face utilizing multiple views to create a 3D morphable model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Tykkala with the morphable-model texture mapping techniques of Dai such that the system could create morphable models of various captured objects in a scene.
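By way of illustration only, Dai's five-view texture map generation can be imitated with a per-texel weighted blend of per-view textures (the array shapes and the use of per-view confidence weights are assumptions made for this sketch, not details disclosed by the reference):

```python
import numpy as np

def fuse_view_textures(view_textures, view_weights):
    """Blend per-view UV-space textures into a single texture map.

    view_textures: list of (H, W, 3) uint8 arrays, one per camera view.
    view_weights:  list of (H, W) float arrays, e.g. visibility or
                   viewing-angle confidence per texel (zero where unseen).
    """
    views = np.stack(view_textures).astype(np.float64)   # (V, H, W, 3)
    weights = np.stack(view_weights)[..., np.newaxis]    # (V, H, W, 1)
    total = np.clip(weights.sum(axis=0), 1e-8, None)     # avoid divide-by-zero
    fused = (views * weights).sum(axis=0) / total
    return fused.round().astype(np.uint8)

# Five random 64x64 "views" with uniform weights blend to their mean.
views = [np.random.randint(0, 256, (64, 64, 3), np.uint8) for _ in range(5)]
weights = [np.ones((64, 64)) for _ in range(5)]
print(fuse_view_textures(views, weights).shape)  # (64, 64, 3)
```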
Regarding claim 2, Tykkala in view of Dai teaches the method of claim 1, wherein the using of the UV stabilization process comprises stabilizing, by the device, vertices of the first mesh sequence resulting in a second mesh sequence comprising stabilized vertices (Tykkala; Polygon models are compact in their memory consumption and are better supported by standard 3D modeling programs than point clouds. A polygonization phase generates a polygon mesh from a point cloud. In our context, the method should take into account noise and missing data. A common approach is to fit the points to a surface using the zero set of an implicit function, such as a sum of radial bases or piecewise polynomial functions. We select the Poisson method, because it produces a watertight surface based on a photometrically refined, oriented point cloud [2]. See section VII. Watertight Polygonization) (Tykkala; Because frequent switches in UV-mapping directions can cause visually disturbing seams, mapping can be improved by enforcing the locally dominant keyframe. One method to do so is to recursively enumerate connected polygon neighbors in n passes, and then prefer the mapping direction which has the largest number of votes. Finally the selected UV-coordinates are converted into global texture coordinates and stored. The keyframe images are not undistorted to better maintain maximum texture quality. The resulting meshes can be observed in Figure 9. See section VIII. Mesh Texturing, right col., paragraph below equation 8) (Tykkala; 7) Texture map generation (automatic), 8) UV coordinate generation (automatic), 9) Store Wavefront mesh (automatic). See section III. The Process Model, right col., last paragraph).
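For reference, the watertight Poisson polygonization Tykkala selects in Section VII can be sketched with the open-source Open3D library (the file names and octree depth below are placeholders, and this is a generic CPU sketch rather than Tykkala's GPU implementation):

```python
import open3d as o3d

# Load a photometrically refined, oriented point cloud; Poisson
# reconstruction requires per-point normals.
pcd = o3d.io.read_point_cloud("refined_points.ply")  # placeholder file name
if not pcd.has_normals():
    pcd.estimate_normals()

# Fit a watertight surface as the zero set of an implicit function.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)  # depth sets the octree resolution; 9 is a typical choice
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("watertight_mesh.ply", mesh)
```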
Regarding claim 3, Tykkala in view of Dai teaches the method of claim 2, wherein the using of the UV stabilization process further comprises generating, by the device, a multi-view environment (MVE) comprising an MVE sequence based on the first mesh sequence and the first texture sequence (Tykkala; Fig. 9. Final textured Poisson meshes loaded into Meshlab for inspection: a) Room B, b) Kitchen. Poisson reconstruction produces watertight mesh, whose texturing is photorealistic as it is directly mapped from the keyframe images. The cost of reduced memory footprint is over-smoothing, which may occur at thin surfaces such as the shelf in 9a. Also lighting changes can be detected at seams where texture data source switches from one keyframe to another. Otherwise the models are photorealistic and in metric units. See figure 9 and caption).
Regarding claim 8, Tykkala teaches A system (In this work, a RGB-D input stream is utilized for GPU-boosted 3D reconstruction of textured indoor environments. The goal is to develop a process which produces standard 3D models from indoors to explore them virtually. See abstract) (In our workflow, photorealistic 3D models are produced by using a laptop with a low-end GPU and a RGB-D sensor. See I. Introduction, first paragraph), comprising: at least one memory that stores computer executable components (In our workflow, photorealistic 3D models are produced by using a laptop with a low-end GPU and a RGB-D sensor. See I. Introduction, first paragraph); and at least one processor that executes at least one of the computer executable components (In our workflow, photorealistic 3D models are produced by using a laptop with a low-end GPU and a RGB-D sensor. See I. Introduction, first paragraph) that at least:
receive a first mesh sequence comprising a group of unstabilized meshes comprising volumetric data associated with images (1) Record RGB-D video (manual), 2) Generate 3D trajectory by RGB-D tracking (automatic), 3) Select keyframes (automatic), 4) Depth map fusion (automatic), 5) Optional: bundle adjustment (semi-automatic), 6) Watertight polygonization (automatic). See section III. The Process Model, right col., last paragraph) (Whether or not RGB-D tracking uses keyframes, only the trajectory is stored. The model keyframes are selected by looping the trajectory and storing a keyframe whenever user-specified angular or translational distance to the existing model is exceeded. The neighboring RGB-D measurements to the keyframes are efficiently localized (timestamp or frame index) and depth map fusion is executed. In depth map fusion, keyframe depth maps are filtered using all RGB-D measurements available. See section III. The Process Model, left col., first paragraph) (Polygon models are compact in their memory consumption and are better supported by standard 3D modeling programs than point clouds. A polygonization phase generates a polygon mesh from a point cloud. In our context, the method should take into account noise and missing data. A common approach is to fit the points to a surface using the zero set of an implicit function, such as a sum of radial bases or piecewise polynomial functions. We select the Poisson method, because it produces a watertight surface based on a photometrically refined, oriented point cloud [2]. See section VII. Watertight Polygonization); and generate, at least by applying a UV stabilization process to the first mesh sequence, a UV stabilized mesh sequence, wherein the UV stabilization process employs a first texture sequence (7) Texture map generation (automatic), 8) UV coordinate generation (automatic), 9) Store Wavefront mesh (automatic). See section III. The Process Model, right col., last paragraph) (Because frequent switches in UV-mapping directions can cause visually disturbing seams, mapping can be improved by enforcing the locally dominant keyframe. One method to do so is to recursively enumerate connected polygon neighbors in n passes, and then prefer the mapping direction which has the largest number of votes. Finally the selected UV-coordinates are converted into global texture coordinates and stored. The keyframe images are not undistorted to better maintain maximum texture quality. The resulting meshes can be observed in Figure 9. See section VIII. Mesh Texturing, right col., paragraph below equation 8) (See figure 9), but is silent as to the images being images of a human.
Dai teaches creating a texture map from raw texture images captured from five views of faces (We present a fully automatic pipeline to train 3D Morphable Models (3DMMs), with contributions in pose normalisation, dense correspondence using both shape and texture information, and high quality, high resolution texture mapping. We propose a dense correspondence system, combining a hierarchical parts-based template morphing framework in the shape channel and a refining optical flow in the texture channel. The texture map is generated using raw texture images from five views. See abstract).
Tykkala and Dai both teach texture mapping of objects, and Dai teaches that the texture mapping can be performed on a face utilizing multiple views to create a 3D morphable model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Tykkala with the morphable-model texture mapping techniques of Dai such that the system could create morphable models of various captured objects in a scene.
Regarding claim 9, Tykkala in view of Dai teaches The system of claim 8, wherein the applying of the UV stabilization process comprises stabilizing vertices of the first mesh sequence resulting in a second mesh sequence comprising stabilized vertices (Tykkala; Polygon models are compact in their memory consumption and are better supported by standard 3D modeling programs than point clouds. A polygonization phase generates a polygon mesh from a point cloud. In our context, the method should take into account noise and missing data. A common approach is to fit the points to a surface using the zero set of an implicit function, such as a sum of radial bases or piecewise polynomial functions. We select the Poisson method, because it produces a watertight surface based on a photometrically refined, oriented point cloud [2]. See section VII. Watertight Polygonization) (Tykkala; Because frequent switches in UV-mapping directions can cause visually disturbing seams, mapping can be improved by enforcing the locally dominant keyframe. One method to do so is to recursively enumerate connected polygon neighbors in n passes, and then prefer the mapping direction which has the largest number of votes. Finally the selected UV-coordinates are converted into global texture coordinates and stored. The keyframe images are not undistorted to better maintain maximum texture quality. The resulting meshes can be observed in Figure 9. See section VIII. Mesh Texturing, right col., paragraph below equation 8) (Tykkala; 7) Texture map generation (automatic), 8) UV coordinate generation (automatic), 9) Store Wavefront mesh (automatic). See section III. The Process Model, right col., last paragraph).
Regarding claim 10, Tykkala in view of Dai teaches The system of claim 9, wherein the applying of the UV stabilization process further comprises generating a multi-view environment (MVE) comprising an MVE sequence based on the first mesh sequence and the first texture sequence (Tykkala; Fig. 9. Final textured Poisson meshes loaded into Meshlab for inspection: a) Room B, b) Kitchen. Poisson reconstruction produces watertight mesh, whose texturing is photorealistic as it is directly mapped from the keyframe images. The cost of reduced memory footprint is over-smoothing, which may occur at thin surfaces such as the shelf in 9a. Also lighting changes can be detected at seams where texture data source switches from one keyframe to another. Otherwise the models are photorealistic and in metric units. See figure 9 and caption).
Regarding claim 15, Tykkala teaches A non-transitory computer-readable medium having instructions stored thereon that, in response to execution, cause a system comprising at least one processor to perform operations (In this work, a RGB-D input stream is utilized for GPU-boosted 3D reconstruction of textured indoor environments. The goal is to develop a process which produces standard 3D models from indoors to explore them virtually. See abstract) (In our workflow, photorealistic 3D models are produced by using a laptop with a low-end GPU and a RGB-D sensor. See I. Introduction, first paragraph) comprising:
receiving a first mesh sequence comprising a group of unstabilized meshes comprising volumetric data associated with images (1) Record RGB-D video (manual), 2) Generate 3D trajectory by RGB-D tracking (automatic), 3) Select keyframes (automatic), 4) Depth map fusion (automatic), 5) Optional: bundle adjustment (semi-automatic), 6) Watertight polygonization (automatic). See section III. The Process Model, right col., last paragraph) (Whether or not RGB-D tracking uses keyframes, only the trajectory is stored. The model keyframes are selected by looping the trajectory and storing a keyframe whenever user-specified angular or translational distance to the existing model is exceeded. The neighboring RGB-D measurements to the keyframes are efficiently localized (timestamp or frame index) and depth map fusion is executed. In depth map fusion, keyframe depth maps are filtered using all RGB-D measurements available. See section III. The Process Model, left col., first paragraph) (Polygon models are compact in their memory consumption and are better supported by standard 3D modeling programs than point clouds. A polygonization phase generates a polygon mesh from a point cloud. In our context, the method should take into account noise and missing data. A common approach is to fit the points to a surface using the zero set of an implicit function, such as a sum of radial bases or piecewise polynomial functions. We select the Poisson method, because it produces a watertight surface based on a photometrically refined, oriented point cloud [2]. See section VII. Watertight Polygonization); and generating, based on an output from a UV stabilization process applied to the first mesh sequence, a UV stabilized mesh sequence, wherein the UV stabilization process employs a first texture sequence (7) Texture map generation (automatic), 8) UV coordinate generation (automatic), 9) Store Wavefront mesh (automatic). See section III. The Process Model, right col., last paragraph) (Because frequent switches in UV-mapping directions can cause visually disturbing seams, mapping can be improved by enforcing the locally dominant keyframe. One method to do so is to recursively enumerate connected polygon neighbors in n passes, and then prefer the mapping direction which has the largest number of votes. Finally the selected UV-coordinates are converted into global texture coordinates and stored. The keyframe images are not undistorted to better maintain maximum texture quality. The resulting meshes can be observed in Figure 9. See section VIII. Mesh Texturing, right col., paragraph below equation 8) (See figure 9), but is silent as to the images being images of a human.
Dai teaches creating a texture map from raw texture images captured from five views of faces (We present a fully automatic pipeline to train 3D Morphable Models (3DMMs), with contributions in pose normalisation, dense correspondence using both shape and texture information, and high quality, high resolution texture mapping. We propose a dense correspondence system, combining a hierarchical parts-based template morphing framework in the shape channel and a refining optical flow in the texture channel. The texture map is generated using raw texture images from five views. See abstract).
Tykkala and Dai both teach texture mapping of objects, and Dai teaches that the texture mapping can be performed on a face utilizing multiple views to create a 3D morphable model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Tykkala with the morphable-model texture mapping techniques of Dai such that the system could create morphable models of various captured objects in a scene.
Regarding claim 16, Tykkala in view of Dai teaches The non-transitory computer-readable medium of claim 15, wherein the UV stabilization process comprises stabilizing vertices of the first mesh sequence resulting in a second mesh sequence comprising stabilized vertices (Tykkala; Polygon models are compact in their memory consumption and are better supported by standard 3D modeling programs than point clouds. A polygonization phase generates a polygon mesh from a point cloud. In our context, the method should take into account noise and missing data. A common approach is to fit the points to a surface using the zero set of an implicit function, such as a sum of radial bases or piecewise polynomial functions. We select the Poisson method, because it produces a watertight surface based on a photometrically refined, oriented point cloud [2]. See section VII. Watertight Polygonization) (Tykkala; Because frequent switches in UV-mapping directions can cause visually disturbing seams, mapping can be improved by enforcing the locally dominant keyframe. One method to do so is to recursively enumerate connected polygon neighbors in n passes, and then prefer the mapping direction which has the largest number of votes. Finally the selected UV-coordinates are converted into global texture coordinates and stored. The keyframe images are not undistorted to better maintain maximum texture quality. The resulting meshes can be observed in Figure 9. See section VIII. Mesh Texturing, right col., paragraph below equation 8) (Tykkala; 7) Texture map generation (automatic), 8) UV coordinate generation (automatic), 9) Store Wavefront mesh (automatic). See section III. The Process Model, right col., last paragraph).
Regarding claim 17, Tykkala in view of Dai teaches The non-transitory computer-readable medium of claim 16, wherein the UV stabilization process further comprises generating a multi-view environment (MVE) comprising an MVE sequence based on the first mesh sequence and the first texture sequence (Tykkala; Fig. 9. Final textured Poisson meshes loaded into Meshlab for inspection: a) Room B, b) Kitchen. Poisson reconstruction produces watertight mesh, whose texturing is photorealistic as it is directly mapped from the keyframe images. The cost of reduced memory footprint is over-smoothing, which may occur at thin surfaces such as the shelf in 9a. Also lighting changes can be detected at seams where texture data source switches from one keyframe to another. Otherwise the models are photorealistic and in metric units. See figure 9 and caption).
Allowable Subject Matter
Claims 4-7, 11-14, and 18-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The prior art of record, alone or in combination, is silent as to the limitation “wherein the using of the UV stabilization process further comprises reconstructing, by the device, textures based on the MVE sequence and the second mesh sequence resulting in a third mesh sequence and a second texture sequence” of claim 4 when read in light of the remaining limitations of claim 4 and the claims from which claim 4 depends; thus, claim 4 contains allowable subject matter.
Claims 5-7 contain allowable subject matter because they depend from a claim that contains allowable subject matter.
The prior art of record, alone or in combination, is silent as to the limitation “wherein the applying of the UV stabilization process further comprises reconstructing textures based on the MVE sequence and the second mesh sequence resulting in a third mesh sequence and a second texture sequence” of claim 11 when read in light of the remaining limitations of claim 11 and the claims from which claim 11 depends; thus, claim 11 contains allowable subject matter.
Claims 12-14 contain allowable subject matter because they depend from a claim that contains allowable subject matter.
The prior art of record, alone or in combination, is silent as to the limitation “wherein the UV stabilization process further comprises reconstructing textures based on the MVE sequence and the second mesh sequence resulting in a third mesh sequence and a second texture sequence” of claim 18 when read in light of the remaining limitations of claim 18 and the claims from which claim 18 depends; thus, claim 18 contains allowable subject matter.
Claims 19-20 contain allowable subject matter because they depend from a claim that contains allowable subject matter.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS R WILSON whose telephone number is (571) 272-0936. The examiner can normally be reached M-F 7:30 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (572) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS R WILSON/Primary Examiner, Art Unit 2611