DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 5 and 14 are objected to because of the following informalities: each of claims 5 and 14 recites “according to the first iterative order according to the second iterative order” at the end of the claim. This appears to be a typographical error in which either the word “and” was omitted between “order” and “according,” or the phrase “according to the first iterative order” was meant to be deleted from the claim. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 9, 10, 18, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kulshreshtha (U.S. Publication 2024/0062345).
As to claim 1, Kulshreshtha discloses a processor (p. 15, section 0332-p. 16, section 0339) comprising:
one or more circuits to: obtain an input according to one or more views from a plurality of viewpoints of a two-dimensional (2D) texture model, the 2D texture model corresponding to a surface of a three-dimensional (3D) model (p. 3, section 0094-p. 4, section 0097; p. 10, section 0204-p. 11, section 0212; p. 13, sections 0269-0274; p. 13, sections 0281-0282; p. 14, section 0298; p. 15, sections 0313-0323; multiple views of a 2D texture model for inpainting a surface, such as a wall, floor, or ceiling in a 3D model of a room, are input);
and generate, using a generative machine learning model and according to the input, an output that includes a 2D texture for the 3D model, the output corresponding to an indication of the 3D model and the 2D texture (p. 3, section 0094-p. 4, section 0097; p. 10, section 0204-p. 11, section 0212; p. 13, sections 0269-0274; p. 13, sections 0281-0282; p. 14, section 0298; p. 15, sections 0313-0323; a neural network, which reads on a machine learning model, is used to refine a 2D texture and generate an output indicating the 2D texture together with the 3D model of the room).
As to claim 9, Kulshreshtha discloses wherein the processor is comprised in at least one of a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system for generating content for a virtual reality (VR), an augmented reality (AR), or a mixed reality (MR) system; a system for rendering content for a virtual reality (VR), an augmented reality (AR), or a mixed reality (MR) system; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (p. 1, section 0002; p. 15, section 0323; generating and rendering of the scene with the texture inpainting is for an AR system; further, synthetic data is generated).
As to claim 10, see the rejection to claim 1. Further, Kulshreshtha discloses one or more processors configured to: obtain one or more views of a three-dimensional (3D) model from a set of vantage points that at least partially envelopes a surface of the 3D model (p. 3-4, section 0094; multiple views of the 3D scene/model are used such that more surfaces can be at least partially included/enveloped in the process).
As to claim 18, see the rejection to claim 9.
As to claim 19, see the rejection to claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kulshreshtha in view of Smith (U.S. Publication 2024/0331322).
As to claim 2, Kulshreshtha discloses wherein the generative machine learning model comprises a model, and the one or more circuits are to: transform, using the model, the 2D texture model to reduce noise in the 2D texture model according to the indication corresponding to the 2D texture (p. 10, section 0204-p. 11, section 0212; p. 13, sections 0269-0274; p. 13, sections 0281-0282; using an intermediate output indication of the 2D texture together with the 3D model of the room, loss between the result and ground truth, which can read on noise, is reduced). Kulshreshtha does not disclose, but Smith discloses that the model is a diffusion model (p. 53-54, section 0545; p. 56, section 0563-p. 57, section 0576; p. 66, section 0654; a diffusion model acts in a scene-based image editing system to refine and denoise input texture applied to a scene, which can be a 3D model with surfaces such as those of a human body). The motivation for this is to generate progressively more accurate maps and completed digital images (p. 50, section 0521-p. 51, section 0524). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kulshreshtha to use a diffusion model on a texture model in order to generate progressively more accurate maps and completed digital images as taught by Smith.
As to claim 11, see the rejection to claim 2.
As to claim 20, see the rejection to claim 2.
Claims 3, 4, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kulshreshtha in view of Huang (CN 116228605 A, herein represented by a translation).
As to claim 3, Kulshreshtha does not disclose, but Huang discloses wherein the one or more circuits are further to transform, in a first iterative order according to a diffusion model corresponding to the generative machine learning model (see description of fig. 3 at p. 12-13; an iterative order is used to transform the image and progressively reduce noise), one or more of a plurality of portions of the 2D texture model to reduce noise in the 2D texture model (see p. 5, Specific implementation examples; also see p. 6-7, description of steps 103-106; also see p. 10, 4th paragraph and p. 11, 2nd-4th paragraphs; noise is added to each image, which corresponds to a portion of a texture model viewed from a particular angle, and iterations are done on each image in a particular order to then complete the image and reduce noise), wherein the 2D texture corresponds to the output subsequent to the first iterative order (p. 10, 4th paragraph; the output of the iterations is the noise-reduced primary image which, as noted above, corresponds to a texture image). The motivation for this is to more accurately produce missing content when views do not cover the entire image (see background at p. 2). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kulshreshtha to transform, in a first iterative order according to a diffusion model corresponding to the generative machine learning model, one or more of a plurality of portions of the 2D texture model to reduce noise in the 2D texture model, wherein the 2D texture corresponds to the output subsequent to the first iterative order in order to more accurately produce missing content when views do not cover the entire image as taught by Huang.
As to claim 4, Huang discloses wherein one or more of the plurality of the portions of the 2D texture model respectively correspond to one or more of the views of the 2D texture model (see p. 5, Specific implementation examples; also see p. 6-7, description of steps 103-106; also see p. 10, 4th paragraph and p. 11, 2nd-4th paragraphs; noise is added to each image, which corresponds to a portion of a texture model viewed from a particular angle, and iterations are done on each image in a particular order to then complete the image and reduce noise). Motivation for the combination is given in the rejection to claim 3.
As to claim 12, see the rejection to claim 3.
As to claim 13, see the rejection to claim 4.
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kulshreshtha in view of Huang and further in view of Xu (CN 116310046 A, herein represented by a translation).
As to claim 5, as best understood, Kulshreshtha does not disclose, but Xu discloses wherein the one or more circuits are further to: transform, according to the diffusion model in a second iterative order, the portions of the 2D texture model to reduce noise in the 2D texture model, the second iterative order restricting the diffusion model to one or more iterations according to the first iterative order, wherein the 2D texture corresponds to the output subsequent to a plurality of iterations according to the first iterative order according to the second iterative order (p. 9, last paragraph-p. 10, 2nd paragraph; p. 13-14; p. 16-17; for a 3D model with a number of respective viewpoints, an image is rendered with an associated texture using a diffusion model to reduce noise; different iterative orders can be specified to limit iterations; as an example, the reference suggests limiting to 40 or 50 iterations). The motivation for this is to improve image rendering effects and solve distortion problems. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kulshreshtha and Huang to transform, according to the diffusion model in a second iterative order, the portions of the 2D texture model to reduce noise in the 2D texture model, the second iterative order restricting the diffusion model to one or more iterations according to the first iterative order, wherein the 2D texture corresponds to the output subsequent to a plurality of iterations according to the first iterative order according to the second iterative order in order to improve image rendering effects and solve distortion problems as taught by Xu.
As to claim 14, see the rejection to claim 5.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kulshreshtha in view of Xu.
As to claim 6, Kulshreshtha does not disclose, but Xu discloses wherein the one or more circuits are further to: allocate, according to a distance between a viewpoint among the plurality of viewpoints and a portion of the surface of the 3D object, a metric to the portion of the surface of the 3D object; and generate the output according to the metric (p. 9, last paragraph-p. 10, 2nd paragraph; p. 13-14; for a 3D model with a number of respective viewpoints, distance of an object surface from the first viewpoint is used as a metric; a depth image representing these distances is used to generate a rendered image with an associated texture using a diffusion model). The motivation for this is to improve image rendering effects and solve distortion problems. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kulshreshtha to allocate, according to a distance between a viewpoint among the plurality of viewpoints and a portion of the surface of the 3D object, a metric to the portion of the surface of the 3D object and generate the output according to the metric in order to improve image rendering effects and solve distortion problems as taught by Xu.
As to claim 15, see the rejection to claim 6.
Claims 7, 8, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kulshreshtha in view of Xu and further in view of Kunnath (U.S. Publication 2023/0418084).
As to claim 7, Xu discloses wherein the one or more circuits are further to: allocate the metric corresponding to a distortion caused by a diffusion model (p. 9, last paragraph-p. 10, 2nd paragraph; p. 13-14; for a 3D model with a number of respective viewpoints, distance and angle of an object surface from the first viewpoint are used as a metric to indicate whether distortion is introduced in the consistency or accuracy of texture in the model; a depth image representing these distances is used to generate a rendered image with an associated texture using a diffusion model). Motivation for the combination of references is given in the rejection to claim 6. Xu does not disclose, but Kunnath discloses, allocating a metric according to a determination that the distance satisfies a threshold wherein the metric comprises a weight to the portion of the surface of the 3D object for the model that satisfies the threshold (p. 7, section 0113; p. 14, section 0167-p. 15, section 0169; for 3D objects in a 3D scene, a distance is determined from a camera to a surface of an object; if the distance is indeterminate/infinite, i.e., if the object is occluded, no weight is assigned to the surface image; if the distance satisfies the threshold of being less than infinite, a weight is assigned to the image with the portion of the surface based on distance to blend images in a stereoscopic model to reduce distortion in the model). The motivation for this is to increase accuracy and range of a depth map. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Kulshreshtha and Xu to allocate a metric according to a determination that the distance satisfies a threshold wherein the metric comprises a weight to the portion of the surface of the 3D object for the model that satisfies the threshold in order to increase accuracy and range of a depth map as taught by Kunnath.
As to claim 8, Xu discloses identifying, according to a first viewpoint among the plurality of viewpoints having a first view including a portion of the 2D texture model on the surface of the 3D model, a first metric indicating a first degree of distortion caused by a diffusion model; and identifying, according to a second viewpoint among the plurality of viewpoints having a second view including the portion of the 2D texture model on the surface of the 3D model, a second metric indicating a second degree of distortion caused by the diffusion model (p. 9, last paragraph-p. 10, 2nd paragraph; p. 13-14; for a 3D model with a number of respective viewpoints, distance and angle of an object surface from the viewpoint are used as a metric to indicate whether distortion is introduced in the consistency or accuracy of texture in the model; a depth image representing these distances is used to generate a rendered image with an associated texture using a diffusion model). Motivation for the combination of references is given in the rejection to claim 6. Xu does not disclose, but Kunnath discloses, wherein the one or more circuits are further to select, according to a determination that the first degree of distortion is less than or equal to the second degree of distortion, the input to include the first view (p. 7, section 0113; p. 14, section 0167-p. 15, section 0169; for 3D objects in a 3D scene, a distance is determined from a camera to a surface of an object; if the distance is indeterminate/infinite, i.e., if the object is occluded, no weight is assigned to the surface image and a second image is solely relied upon to be input to the stereoscopic model; the occlusion/visibility parameter can read on a degree of distortion parameter since blending an image with an occluded surface would lead to distortion of that surface in the model). Motivation for the combination is given in the rejection to claim 7.
As to claim 16, see the rejection to claim 7.
As to claim 17, see the rejection to claim 8.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON M RICHER whose telephone number is (571)272-7790. The examiner can normally be reached 9 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON M RICHER/Primary Examiner, Art Unit 2617