DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s preliminary amendment filed on July 2, 2024 has been entered and made of record.
Claim Interpretation
Claims 1-18 are not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because they are all method claims.
Claim 19 is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the recitations of “memory” and “processor” provide sufficient structure to perform all of the claimed limitations.
Claim 20 is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it is an article of manufacture claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gausebeck et al. (U.S. Pat. Appl. Pub. No. 2019/0026956 A1, referred to as Gausebeck hereinafter).
Regarding claim 1, a representative claim, Gausebeck teaches a method, comprising:
detecting features comprising an input image using a machine learning based framework trained at least in part on a set of training images constrained to a prescribed feature space (see paras. [0065] (3D data derivation component 110 uses a machine learning model to process the received image data 102 to determine derived 3D data 116 for the received 2D image data 102; determining depth information for visual features included in the received 2D image), [0066] (using standard models to determine 3D depth information for a received 2D image by using machine learning techniques)); and generating an output image at least in part by replacing features detected in the input image with corresponding features learned from the prescribed feature space (see paras. [0077] (changing the appearance of the visual features of the 3D model by removing objects from 3D model and integrating new 2D and 3D objects), [0079] (3D model generation component 118 generates a floor plan model by employing identified walls associated with the derived data 116 derived from 2D images)).
Regarding claim 2, Gausebeck further teaches wherein features defining the prescribed feature space of the set of training images are imparted to the input image to generate the output image, wherein the input image shares redundancies in feature space with the set of training images (see figure 1 and para. [0062] (3D models generated by component 118 are rendered and displayed on user display 132); para. [0128] (neural network results are not degraded by differences between the training images and real-world images; thus, the 2D input image and training images share redundancies (i.e., color, texture, features, background, foreground) so that results are not degraded)).
Regarding claim 3, Gausebeck further teaches wherein the machine learning based framework is trained to learn the prescribed feature space from the set of training images (see para. [0128] (3D data 116 for the image)).
Regarding claim 4, Gausebeck further teaches wherein the prescribed feature space is known and well defined with respect to the set of training images (see para. [0249] (known depth data)).
Regarding claim 5, Gausebeck further teaches wherein the set of training images share feature space redundancies (see para. [0249] (depth data for pixels, superpixels, objects, etc. included in 2D images)).
Regarding claim 6, Gausebeck further teaches wherein the set of training images share feature space correlations (see para. [0249] (depth data for pixels, superpixels, objects, etc. included in 2D images; pixels, superpixels, and objects are included in the 2D images and are thus correlated)).
Regarding claim 7, Gausebeck further teaches wherein the set of training images comprises priors for defining the prescribed feature space (see para. [0128] (2D images prior to input into models)).
Regarding claim 8, Gausebeck further teaches wherein the input image comprises a lower quality or resolution or size relative to the set of training images (see para. [0114] (smaller or cropped image is used to generate derived 3D data; thus, the input image has a lower size/resolution/quality relative to the set of training images)).
Regarding claim 9, Gausebeck further teaches wherein generating the output image comprises replacing detected features in the input image with closest or nearest matching features in the prescribed feature space (see paras. [0077] (changing the appearance of the visual features of the 3D model by removing objects from 3D model and integrating new 2D and 3D objects), [0079] (3D model generation component 118 generates a floor plan model by employing identified walls associated with the derived data 116 derived from 2D images)).
Regarding claim 10, Gausebeck further teaches wherein feature space manipulations of the input image to generate the output image result in corresponding pixel level transformations in an image space of the output image (see figure 12, item 1204).
Regarding claim 11, Gausebeck further teaches wherein detected and replaced features in the input image to generate the output image comprise texture features (see paras. [0077] (changing the appearance of the visual features of the 3D model by removing objects from 3D model and integrating new 2D and 3D objects), [0079] (3D model generation component 118 generates a floor plan model by employing identified walls associated with the derived data 116 derived from 2D images)).
Regarding claim 12, Gausebeck further teaches wherein detected and replaced features in the input image to generate the output image comprise pixel features (see paras. [0077] (changing the appearance of the visual features of the 3D model by removing objects from 3D model and integrating new 2D and 3D objects), [0079] (3D model generation component 118 generates a floor plan model by employing identified walls associated with the derived data 116 derived from 2D images)).
Regarding claim 13, Gausebeck further teaches wherein the generated output image comprises photorealistic quality (see para. [0079] (3D model generation component 118 generates a floor plan model by employing identified walls associated with the derived data 116 derived from 2D images); para. [0128] (neural network results are not degraded by difference between the training images and real-world images)).
Regarding claim 14, Gausebeck further teaches wherein the generated output image comprises a restored version of the input image (see para. [0128] (neural network results are not degraded by differences between the training images and real-world images)).
Regarding claim 15, Gausebeck further teaches wherein the generated output image comprises an upscaled version of the input image (see para. [0114] (smaller or cropped image is used to generate derived 3D data; thus, the input image has a lower size/resolution/quality relative to the set of training images); para. [0128] (neural network results are not degraded by differences between the training images and real-world images); thus, the neural network result is an upscaled version of the input image).
Regarding claim 16, Gausebeck further teaches wherein the generated output image comprises a better quality or resolution relative to the input image (see para. [0128] (neural network results are not degraded by differences between the training images and real-world images)).
Regarding claim 17, Gausebeck further teaches wherein the generated output image comprises a cleaned version of the input image (see para. [0128] (neural network results are not degraded by differences between the training images and real-world images)).
Regarding claim 18, Gausebeck further teaches wherein the generated output image comprises a denoised version of the input image (see para. [0111] (reducing noise from derived 3D data); para. [0128] (neural network results are not degraded by differences between the training images and real-world images); thus, the neural network result is a denoised version of the input image).
Regarding claim 19, it is noted that the claim recites limitations similar to those called for in counterpart claim 1. Thus, the statements advanced above with respect to claim 1 are incorporated herein. Gausebeck further teaches a processor and a memory coupled to the processor and configured to provide the processor with instructions (see paras. [0274] – [0276] (processors, program, software, and memory)).
Regarding claim 20, it is noted that the claim recites limitations similar to those called for in counterpart claim 1. Thus, the statements advanced above with respect to claim 1 are incorporated herein. Gausebeck further teaches a computer program product embodied in a non-transitory computer readable storage medium (see paras. [0274] – [0276] (processors, programs, software, and memory)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUY M DANG whose telephone number is (571) 272-7389. The examiner can normally be reached Monday to Friday from 7:00 AM to 3:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached at 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
DMD
3/2026
/DUY M DANG/Primary Examiner, Art Unit 2662