DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tong et al. (US Pub. 2022/0172429 A1).
Regarding Claim 1, Tong et al. teaches an apparatus wherein a three-dimensional model of a target object is created from point cloud data in which each point represents three-dimensional coordinates (see Paragraph [0046]: "Returning to FIG. 3, processing continues at operation 302, where, using 2D images 111, image segmentation, 3D reconstruction, and rendering are performed to create a 3D model. Such techniques may include point cloud reconstruction by binarization of 2D images 111, background modeling, foreground detection, image segmentation of 2D images 111, and 3D reconstruction to generate a 3D point cloud having, as discussed herein, 3D coordinates for each point such that each point is deemed to be located at a surface of an object in a scene. …"; see Figs. 1, 3),
the three-dimensional model is superimposed on an image in which a target object of the three-dimensional model is photographed (projecting the three-dimensional model to an image; see Paragraph [0037]: "Such reconstructed 2D images 114 may be generated using any suitable technique or techniques that project 3D model 112 to an image plane…"),
a point cloud to be added to a point cloud that constitutes the three-dimensional model is selected by comparing the target object in the image with the three-dimensional model (see Paragraph [0041], Figs. 3, 4: "… a 3D model is projected to each camera view. The detected bounding box is then used to crop an image region (e.g., a rectangular image region) and the image region of the captured image is compared to the image region of the reconstructed image (for the same camera view) over the bounding box area. The comparison may be applied to all camera views and, in response to any detected image region differences comparing unfavorably to a threshold, an inference is made that object of interest has poor quality in the 3D model, which is reported as 3D model error 115. For example, the 3D model error may have an underlying error in the 3D point cloud used to generate the 3D model (e.g., a missing object in the 3D point cloud). Any suitable response may be made in accordance with the reported error such as inserting the object into the 3D model (using a prior modeling of the object, pre-knowledge of the object, etc.), …"); and
the three-dimensional model of the target object is created again including the point cloud to be added (see Paragraph [0035], Fig. 1: "… a corresponding 3D model 112 is generated. 3D model module 102 may generate 3D model 112 using any suitable technique or techniques. In some embodiments, 3D model module 102 performs image segmentation and 3D reconstruction using the corresponding images for a particular time instance (e.g., 36 corresponding images captured from camera array 101) from 2D images 111 to generate a point cloud and subsequent rendering of the point cloud to generate 3D model 112 including texture information.").
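For illustration only (not part of the record, and not Tong's actual implementation), the error-detection loop described in the cited passages — project the 3D model to each camera view, compare the captured view to the reconstructed view against a threshold, and flag views whose difference indicates missing points to re-add — can be sketched roughly as follows. All function and parameter names here are hypothetical placeholders:

```python
import numpy as np

def region_difference(captured, reconstructed):
    # Mean absolute per-pixel difference over the compared region.
    return float(np.mean(np.abs(captured.astype(float) - reconstructed.astype(float))))

def find_error_views(point_cloud, captured_views, project_view, threshold=10.0):
    """Compare each camera view's captured image to the image rendered from the
    current 3D model; return the views whose difference exceeds the threshold,
    signalling a likely error (e.g., missing points) in the model.
    Illustrative sketch only; `project_view` is a hypothetical renderer."""
    flagged = []
    for view_id, captured in captured_views.items():
        reconstructed = project_view(point_cloud, view_id)
        if region_difference(captured, reconstructed) > threshold:
            # A large difference suggests the object is missing or poorly
            # modeled in this view; flag it so points can be re-added and
            # the model regenerated.
            flagged.append(view_id)
    return flagged
```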
Regarding Claim 2, Tong et al. teaches the apparatus wherein a point cloud within a threshold from an approximate line of the three-dimensional model is selected as the point cloud to be added (see Paragraph [0030]: "The difference metric for a particular image pair is compared to a threshold and, if it compares unfavorably to the threshold (e.g., it is greater than the threshold), a 3D model error indicator is generated and reported for the object, time instance of the images, image viewpoints, etc. such that the error may be resolved."; see also Paragraph [0035]: "… In some embodiments, 3D model module 102 performs image segmentation and 3D reconstruction using the corresponding images for a particular time instance (e.g., 36 corresponding images captured from camera array 101) from 2D images 111 to generate a point cloud and subsequent rendering of the point cloud to generate 3D model 112 including texture information.").
Regarding Claim 3, Tong et al. teaches the apparatus wherein the target object in the image is compared with the three-dimensional model using a color of the target object in the image (see Paragraph [0051]: "… As shown in FIG. 4, image region comparator 105 receives image content 403 (to image region 431) and image content 404 (corresponding to image region 432) …. Image content 403, 404 may include any suitable image content pertinent to the comparison being performed for image regions 431, 432 such as pixel data (e.g., pixel values in any color space or for only a luma channel …").
Regarding Claim 4, Tong et al. teaches the apparatus wherein color information of the target object in the image is assigned to each point superimposed on the target object, and the point cloud to be added is selected by comparing a range of point clouds having the same color information as the target object with the three-dimensional model (see Paragraph [0051]: "… As shown in FIG. 4, image region comparator 105 receives image content 403 (to image region 431) and image content 404 (corresponding to image region 432) …. Image content 403, 404 may include any suitable image content pertinent to the comparison being performed for image regions 431, 432 such as pixel data (e.g., pixel values in any color space or for only a luma channel …").
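For illustration only, selecting points by matching their assigned color to the target object's color (as the claim language describes) can be sketched as below. This is a hypothetical example for orientation, not Tong's disclosed implementation; the function name and tolerance parameter are placeholders:

```python
import numpy as np

def select_points_by_color(points, colors, target_color, tol=20.0):
    """Given points superimposed on the image, each assigned an RGB color
    sampled from the image, keep the points whose color lies within `tol`
    (Euclidean distance in RGB space) of the target object's color.
    Purely illustrative sketch."""
    colors = np.asarray(colors, dtype=float)
    dist = np.linalg.norm(colors - np.asarray(target_color, dtype=float), axis=1)
    mask = dist <= tol
    return [p for p, keep in zip(points, mask) if keep]
```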
Regarding Claim 5, Tong et al. teaches the apparatus wherein color information of the target object in the image is assigned to each point superimposed on the target object (projecting the three-dimensional model to an image; see Paragraph [0037]: "Such reconstructed 2D images 114 may be generated using any suitable technique or techniques that project 3D model 112 to an image plane…"), and the point cloud to be added is selected by extending the three-dimensional model until a point cloud having different color information from the target object appears (see Paragraph [0035]: "… 3D model module 102 performs image segmentation and 3D reconstruction using the corresponding images for a particular time instance (e.g., 36 corresponding images captured from camera array 101) from 2D images 111 to generate a point cloud and subsequent rendering of the point cloud to generate 3D model 112 including texture information."; see also Paragraph [0051]: "… As shown in FIG. 4, image region comparator 105 receives image content 403 (to image region 431) and image content 404 (corresponding to image region 432) …. Image content 403, 404 may include any suitable image content pertinent to the comparison being performed for image regions 431, 432 such as pixel data (e.g., pixel values in any color space or for only a luma channel …").
Regarding Claim 6, Tong et al. teaches the apparatus wherein a size of the target object in the image is acquired by referring to a database storing information on the size of a random target object, and the point cloud to be added is selected by comparing the size of the acquired target object with the three-dimensional model (see Paragraph [0041]: "… The resultant highly accurate bounding box (e.g., one camera view has one bounding box) for each object (optionally including only important objects such as the ball in a sporting scene), a 3D model is projected to each camera view. … The comparison may be applied to all camera views and, in response to any detected image region differences comparing unfavorably to a threshold, an inference is made that object of interest has poor quality in the 3D model, which is reported as 3D model error 115. For example, the 3D model error may have an underlying error in the 3D point cloud used to generate the 3D model (e.g., a missing object in the 3D point cloud) …").
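For illustration only, the bounding-box crop-and-compare step quoted above (crop the same box from the captured and reconstructed views, then test the difference against a threshold) can be sketched as follows. The helper names and box convention are hypothetical, not taken from the reference:

```python
import numpy as np

def crop(image, box):
    # box = (top, left, height, width) in pixel coordinates.
    t, l, h, w = box
    return image[t:t + h, l:l + w]

def boxes_differ(captured, reconstructed, box, threshold=10.0):
    """Crop the same bounding box from the captured image and from the image
    reconstructed by projecting the 3D model, and report whether their mean
    absolute difference exceeds the threshold, indicating a likely 3D model
    error (e.g., a missing object). Illustrative sketch only."""
    a = crop(captured, box).astype(float)
    b = crop(reconstructed, box).astype(float)
    return float(np.mean(np.abs(a - b))) > threshold
```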
Regarding Claim 7, method Claim 7 is rejected for the same reasons as apparatus Claim 1, since the claim limitations are the same in both claims.
Regarding Claim 8, CRM Claim 8 is rejected for the same reasons as apparatus Claim 1, since the claim limitations are the same in both claims (the CRM non-transitory computer-readable storage medium (machine-readable media) is shown in Paragraph [0098]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIJAY SHANKAR whose telephone number is (571) 272-7682. The examiner can normally be reached M-F, 9 am - 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VIJAY SHANKAR
Primary Examiner
Art Unit 2624
/VIJAY SHANKAR/Primary Examiner, Art Unit 2624