DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-20 were pending for examination in Application No. 18/364,699, filed August 3, 2023. In the remarks and amendments received on November 19, 2025, claims 1-2, 4, 6-7, 10-12, 14, and 16-17 were amended and claims 5, 15, and 20 were canceled. Accordingly, claims 1-4, 6-14, and 16-19 are currently pending for examination in the application.
Response to Amendment
Applicant’s amendments to the claims filed November 19, 2025, have overcome each and every 35 U.S.C. § 112(b) rejection previously set forth in the Non-Final Office Action mailed August 27, 2025. Accordingly, the 35 U.S.C. § 112(b) rejections are withdrawn in response to the remarks and amendments filed. The examiner thanks Applicant for considering the amendments suggested for the disclosure.
Response to Arguments
Applicant’s arguments filed November 19, 2025, regarding the rejection(s) of the independent claim(s) have been fully considered but are not persuasive.
The examiner respectfully disagrees with Applicant’s remarks that Pylvaenaeinen does not teach the newly amended claim 1 limitation “wherein an intersection between a plane formed by the given at least three of the 3D points and a ray creates the at least one of the 2.5D points, the ray extended from a sensor location of a camera toward a corresponding pixel of the 2D image captured by the camera” because: (1) the three scan data points of Pylvaenaeinen recited in paragraph [0051] (i.e., “at least three scan data points, for example, L1(t), L2(t), L2(t+1)”) are not “3D points” of a point cloud; and (2) the ray of Pylvaenaeinen depicted in Fig. 6 and recited in paragraph [0061] (i.e., “ray 604 (dashed line) using a scanning system 606”) is entirely blocked by the object block 1 or object block 2, and thus does not teach or suggest “the ray extending from a sensor location of a camera toward a corresponding pixel of the 2D image captured by the camera” (pgs. 8-9 of Applicant’s Remarks).
Regarding Applicant’s first remark concerning Pylvaenaeinen, the examiner respectfully disagrees that the three scan points “L1(t), L2(t), L2(t+1)” recited in paragraph [0051] are not 3D points of a point cloud. Paragraph [0062] recites that the scanned data points are LIDAR data points (e.g., “scanned data points (e.g., LIDAR)”), paragraph [0065] further recites that LIDAR data points are “Three-dimensional (3D) LIDAR points,” and paragraph [0058] recites the scanned data points as point clouds (e.g., “distance data points (point clouds)”). Therefore, the “three scan data points, for example, L1(t), L2(t), L2(t+1)” recited in paragraph [0051] are 3D points of a 3D point cloud because the scan data points are captured by LIDAR.
Regarding Applicant’s second remark concerning Pylvaenaeinen, the examiner respectfully disagrees that Pylvaenaeinen does not teach or suggest “the ray extended from a sensor location of a camera toward a corresponding pixel of the 2D image captured by the camera” because the ray is entirely blocked by the object Block 1 or the object Block 2. As depicted in Fig. 6 and recited in paragraph [0061], the ray, i.e., the laser “ray 604 (dashed line),” is not blocked by any of the blocks along its path from its starting point of extension: a sensor location of a camera. The sensor location of a camera is the location of the scanning system (e.g., “scanning system 606”). The laser extends toward a point (e.g., “point 602”), which corresponds to a pixel (e.g., the “blocked” pixels or the “missing pixels” recited in paragraph [0062]) of the 2D image (e.g., a “panorama image” or the “closest panorama image” recited in paragraph [0062]) captured by the camera (e.g., an “image capture system”). The claim language “toward a corresponding pixel of the 2D image captured by the camera” does not preclude pixels that are “missing” or “blocked” in the 2D image. Furthermore, the examiner notes that Fig. 6 and paragraph [0061] describe a particular scenario of assigning color to 3D scanned data points that correspond to “missing” or “blocked” pixels in 2D images. Pylvaenaeinen further teaches in paragraph [0060] performing the same color correspondence for each 3D scanned data point from corresponding pixels in panorama images, and the particular scenario depicted in Fig. 6 and paragraph [0061] (i.e., color assignment for 3D scanned points corresponding to pixels occluded or not captured in the corresponding panoramic images) is highlighted as a particular problem solved by the overall system of Pylvaenaeinen.
Priority (Previously Presented)
Acknowledgment is made of Applicant’s claim for benefit of a prior-filed provisional application under 35 U.S.C. 119(e). The present application claims benefit of provisional U.S. Application No. 63/428,131, filed November 28, 2022.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 6-8, 11-14, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Wohlfeld et al. (Wohlfeld; US 2021/0055420 A1) in view of Pylvaenaeinen et al. (Pylvaenaeinen; US 2016/0217611 A1).
Regarding claim 1, Wohlfeld discloses a computer-implemented method comprising:
retrieving a three-dimensional (3D) point cloud of an environment and a two-dimensional (2D) image of the environment, the 3D point cloud comprising 3D points, the 2D image comprising pixels having resolution information (para(s). [0009] and [0060], recite(s)
[0009] “According to an exemplary embodiment, a method for measuring three-dimensional (3D) data of an area, the method may include generating, with a spherical laser scanner (SLS), a 3D point cloud of an area; capturing, with a plurality of cameras, color photographic images of the area; and adding color data to the 3D point cloud based on the color photographic images. The plurality of cameras may be mounted on the base and spaced apart in a circumferential direction around a pan axis of the SLS.”
[0060] “…The captured high resolution image may be used by controller 50 as the color photographic image used to add color data to the 3D point cloud.”
, where a “captured high resolution image” among the “color photographic images” is a 2D image comprising pixels having resolution information (e.g., “high resolution”));
transforming the 3D point cloud to a coordinate system of the 2D image (para(s). [0050] and [0052], recite(s)
[0050] “ …Scan points may also be associated with identifying information such as timestamp or location of the SLS 20 within the coordinate system to facilitate integration with scans taken at other locations or with color photographic images captured by cameras 32.”
[0052] “…Hence, because the position and orientation of the cameras 32 relative to SLS 20 are fixed and known, and because the position and orientation of the SLS 20 at the time of triggering the cameras is known, controller 50 can assign coordinates and orientation to the captured images within the coordinate system of the 3D point cloud. Controller 50 can then project points of the 3D point cloud to pixels of the captured photographic image.…”
, where associating the “scan points” of the 3D point cloud within a “coordinate system to facilitate integration …with color photographic images captured by cameras” is transforming the 3D point cloud to a coordinate system of the 2D image (e.g., “project points of the 3D point cloud to pixels of the captured photographic image”)); and
generating a two and a half dimension (2.5D) point cloud by creating 2.5D points, wherein the generating comprises providing the resolution information to 3D points in a field-of-view captured by the 2D image thereby creating the 2.5D points, the generating further comprising creating at least one of the 2.5D points between (para(s). [0046], [0057], and [0059], recite(s)
[0046] “Base 30 may include one or more two-dimensional (2D) photographic cameras 32 capable of capturing a color photographic image. …”
[0057] “…Controller 50 may also use interpolation methods to add points to the 3D point cloud in post-processing. In an exemplary embodiment, camera 32 may have a higher resolution than the 3D point cloud acquired by SLS 20. In this case, the captured photographic images may be used to assist in the interpolation to add points to the 3D point cloud.”
[0059] “In at least an embodiment, controller 50 may calculate a virtual panoramic image based on the 3D point cloud with the associated color data from the captured photographic images. The panoramic image may be based on the mesh or the interpolated point cloud. …”
, where the “interpolated point cloud” is a 2.5D point cloud generated by providing resolution information to 3D points in a field-of-view captured by the 2D image (e.g., the “add[ed] points to the 3D point cloud” by “interpolation” based on the “higher resolution” captured “photographic images”) and creating one or more of the 2.5D points between the 3D points (e.g., “interpolation” which “add[s] points to the 3D point cloud”)).
Where Wohlfeld does not specifically disclose
creating one or more of the 2.5D points between a given at least three of the 3D points;
wherein an intersection between a plane formed by the given at least three of the 3D points and a ray creates the at least one of the 2.5D points, the ray extended from a sensor location of a camera toward a corresponding pixel of the 2D image captured by the camera;
Pylvaenaeinen teaches in the same field of endeavor of interpolation in a three-dimensional space using a 3D point cloud and 2D image
creating one or more of the 2.5D points between a given at least three of the 3D points (para(s). [0007], [0051], and [0062], recite(s)
[0007] “Three-dimensional (3D) triangles can be formed by joining the three points of the scan data thereby creating a 3D surface element. …”
[0051] “Given at least three scan data points, for example, L1(t), L2(t), L2(t+1), the surface element TS1 can be aligned (registered) to a panorama image or a part thereof, thereby enabling color assignments for the scan data points, and thus, colored data points for the missed panorama pixel data. These triangular surface elements (TS1 and TS2) can be rasterized to create map pixels and to interpolate the depth on this area defined by the quadrilateral surface element 312. This approach not only provides an improvement when dealing with occlusions, but also avoids some problems with noise in the laser depth measurements. Data from the panorama images are rendered into the map projection (no image data is being discarded). No data is altered in the panorama images-only the map tiles being created. …”
[0062] “Hence, the scanning system 606 facilitates the rendering of the missing pixels in the gap (between location A and Location B) of the panorama image capture system 608 using scan distance data, by rendering scanned data points (e.g., LIDAR) and assigning color based on the closest panorama image to the laser ray origin. …”
, where the “three scan data points” are at least three of the 3D points of a point cloud and “interpolat[ing] the depth on this area” is creating one or more of the 2.5D points between the given at least three of the 3D points);
wherein an intersection between a plane formed by the given at least three of the 3D points and a ray creates the at least one of the 2.5D points (para(s). [0051] and [0062]—see citations in the preceding limitation immediately above—, where a “triangular surface element” is a plane formed by the given at least three of the 3D data points (i.e., “at least three scan data points”—e.g., “LIDAR”), where the “interpolat[ing] the depth on this area” through “rasteriz[ation]” is creating one or more 2.5D points by at least a ray (e.g., a “laser ray”)), the ray extended from a sensor location of a camera toward a corresponding pixel of the 2D image captured by the camera (para(s). [0060-0062] and Fig. 6, recite(s)
[0060] “For continuous scanning systems that do not record color, to assign a color to a scanned point, the point is projected back to one of the panorama images that captured the scene. However, panorama images are not captured for every point in the scene. Rather, a panorama image may be captured at predetermined distances (e.g., approximately every four meters). Since every detail of the scene geometry is not recovered by the panorama images, it can be difficult to determine which, if any, of the panorama images has the correct color for a given point.”
[0061] “For example, FIG. 6 illustrates a system 600 that captures points that are not colored. Consider the point 602 recorded by the laser ray 604 (dashed line) using a scanning system 606. When continuously scanning three objects (e.g., Block 1, Block 2, and Block 3), the scanning system 606 can recover the surface point 602 on Block 2 (the object behind two closer blocks (Block 1 and Block 3)), but a panorama image capture system 608, when at location A, is blocked by the object Block 1 from capturing the point 602, and when at location B, the panorama image capture system capture system 608 is again blocked from capturing the point 602, this time by the object Block 3. …”
[0062] “Hence, the scanning system 606 facilitates the rendering of the missing pixels in the gap (between location A and Location B) of the panorama image capture system 608 using scan distance data, by rendering scanned data points (e.g., LIDAR) and assigning color based on the closest panorama image to the laser ray origin. Another technique can be to interpolate the missing pixels using points on consecutive panorama images from location A to location B to arrive at the colors of the points in the coverage gap. Yet another technique can be to combine multiple viewpoints from the panoramic imagery to create a less fractured representation of the distance data and map layers.”
[Reproduction of Pylvaenaeinen Fig. 6 (greyscale image)]
, where “ray 604 (dashed line)” is a ray extending from a sensor location of a camera (e.g., location of “scanning system 606”) towards a point (e.g., “point 602”), which corresponds to a pixel (e.g., “blocked” pixels or “missing pixels”) of the 2D image (e.g., a “panorama image” or “closest panorama image”) captured by the camera (e.g., an “image capture system”)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Wohlfeld to incorporate creating the one or more of the 2.5D points between a given at least three of the 3D points, wherein an intersection between a plane formed by the given at least three of the 3D points and a ray creates the at least one of the 2.5D points, the ray extended from a sensor location of a camera toward a corresponding pixel of the 2D image captured by the camera, in order to improve dealing with occlusions when interpolating between 3D points in a 3D point cloud, as taught by Pylvaenaeinen above (see para. [0051] above).
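For clarity of the record only, the following is a minimal illustrative sketch of the general geometry discussed for claim 1 above: a ray cast from a camera center toward a pixel is intersected with the plane of a triangle formed by three 3D point-cloud points, and the intersection point is given the pixel's attributes. This is the examiner's own non-limiting illustration, not code reproduced from Wohlfeld or Pylvaenaeinen; all function names, coordinates, and the example color value are hypothetical.

    import numpy as np

    def ray_triangle_plane_intersection(origin, direction, p0, p1, p2):
        """Intersect a ray with the plane spanned by the triangle (p0, p1, p2).

        Returns the intersection point, or None if the ray is parallel to the
        plane or the plane lies behind the ray origin. Illustrative only; no
        inside-triangle test is shown.
        """
        normal = np.cross(p1 - p0, p2 - p0)        # plane normal of the triangle
        denom = np.dot(normal, direction)
        if abs(denom) < 1e-9:                      # ray parallel to the plane
            return None
        t = np.dot(normal, p0 - origin) / denom    # distance along the ray
        if t < 0:                                  # plane behind the sensor location
            return None
        return origin + t * direction              # interpolated 3D location

    # Hypothetical data: camera (sensor) at the origin, one pixel's viewing ray,
    # and three point-cloud points forming a triangle in front of the camera.
    camera_center = np.array([0.0, 0.0, 0.0])
    pixel_ray = np.array([0.05, -0.02, 1.0])
    pixel_ray /= np.linalg.norm(pixel_ray)
    tri = [np.array([-1.0, -1.0, 4.0]),
           np.array([ 1.0, -1.0, 4.2]),
           np.array([ 0.0,  1.0, 4.1])]

    new_point = ray_triangle_plane_intersection(camera_center, pixel_ray, *tri)
    pixel_color = (128, 64, 200)                   # RGB of the corresponding pixel

    if new_point is not None:
        # A "2.5D point" in this sketch: interpolated location plus pixel attributes.
        point_2_5d = {"xyz": new_point, "rgb": pixel_color}
        print(point_2_5d)

In this sketch the new point inherits the attributes of the corresponding pixel, which is consistent with the interpolation and color-assignment operations cited above.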
Regarding claim 2, Wohlfeld in view of Pylvaenaeinen discloses the computer-implemented method of claim 1, wherein Pylvaenaeinen further teaches, for each of the at least one of the 2.5D points that are generated, the given at least three of the 3D points are closest to a position of a camera having captured the 2D image (para(s). [0062]—see citation in claim 1 above—, where “rendering scanned data points… and assigning color based on the closest panorama image to the laser ray origin” teaches that the given at least three of the 3D points are closest to a position of a camera having captured the 2D image (e.g., the “laser ray origin” is the same position as the camera having captured the 2D image, as recited in paras. [0047] and [0049]:
[0047] “…Thus, it can be a consideration to only render the points that were actually captured by laser rays originating from or near the center of the panorama image. …”
[0049] “The surface elements derived from the scan pattern of the scanning system can be exploited to fill the coverage gap in consecutive images of the panorama camera system using local surface geometry for projection of the imagery into different viewpoints. In other words, a scan data point, since it has no color, is projected back to a panorama image (e.g. closest) by inferring an approximate geometric surface element, and then use the GPU (graphical processing unit) and/or standard graphics techniques to render a texture onto that polygonal patch. This not only solves problems with occlusions, but also removes noise. …”
).
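As a non-limiting illustration of selecting the three point-cloud points nearest a camera position, the following examiner-provided sketch is offered; the point coordinates are hypothetical and are not taken from either reference.

    import numpy as np

    def three_points_closest_to_camera(points, camera_position):
        """Return the three point-cloud points nearest to the camera position."""
        dists = np.linalg.norm(points - camera_position, axis=1)
        nearest = np.argsort(dists)[:3]        # indices of the three closest points
        return points[nearest]

    # Hypothetical point cloud and camera location.
    cloud = np.array([[0.0,  0.0, 5.0],
                      [0.2,  0.1, 4.8],
                      [3.0,  2.0, 9.0],
                      [0.1, -0.1, 5.1]])
    camera = np.array([0.0, 0.0, 0.0])
    print(three_points_closest_to_camera(cloud, camera))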
Regarding claim 3, Wohlfeld in view of Pylvaenaeinen discloses the computer-implemented method of claim 1, wherein Wohlfeld further discloses each of the 2.5D points maintains a color from the 2D image (para(s). [0060]—see citation in claim 1 limitation “retrieving a…” above—, where “add[ing] color data” using the “color photographic image” is maintaining a color from the 2D image).
Regarding claim 4, Wohlfeld in view of Pylvaenaeinen discloses the computer-implemented method of claim 1, wherein Pylvaenaeinen further teaches linear interpolation is utilized for creating the at least one of the 2.5D points between the given at least three of the 3D points (para(s). [0049], recite(s)
[0049] “The surface elements derived from the scan pattern of the scanning system can be exploited to fill the coverage gap in consecutive images of the panorama camera system using local surface geometry for projection of the imagery into different viewpoints. In other words, a scan data point, since it has no color, is projected back to a panorama image (e.g. closest) by inferring an approximate geometric surface element, and then use the GPU (graphical processing unit) and/or standard graphics techniques to render a texture onto that polygonal patch. This not only solves problems with occlusions, but also removes noise. Millions of points are created from the four corners of the polygon actually measured. (It is assumed the polygon is a flat surface between the points in order to perform linear interpolation.)”
, where the “polygon” includes the three points of a triangle as recited in para. [0007]:
[0007] “Three-dimensional (3D) triangles can be formed by joining the three points of the scan data thereby creating a 3D surface element. …”
).
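The following minimal sketch illustrates linear interpolation over a flat triangular surface element in general, consistent with the flat-surface assumption quoted above. It is the examiner's own illustration and not code from Pylvaenaeinen; the coordinates and weights are hypothetical.

    import numpy as np

    def barycentric_interpolate(p0, p1, p2, w0, w1, w2):
        """Linearly interpolate a new point on the flat triangle (p0, p1, p2).

        The weights are barycentric coordinates that sum to one, so the new
        point lies on the plane of (and between) the three given points.
        """
        assert abs(w0 + w1 + w2 - 1.0) < 1e-9
        return w0 * p0 + w1 * p1 + w2 * p2

    # Hypothetical triangle of three 3D scan points; equal weights give the
    # centroid, one example of a point created between the three given points.
    p0 = np.array([0.0, 0.0, 4.0])
    p1 = np.array([1.0, 0.0, 4.2])
    p2 = np.array([0.0, 1.0, 4.4])
    print(barycentric_interpolate(p0, p1, p2, 1/3, 1/3, 1/3))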
Regarding claim 6, Wohlfeld in view of Pylvaenaeinen discloses the computer-implemented method of claim 1, wherein Wohlfeld further teaches the at least one of the 2.5D points comprises a resolution of the corresponding pixel in the 2D image (para(s). [0060]—see citation in claim 1 limitation “retrieving a…” above—, where the color data added from the “captured high resolution image” is the 2.5D point comprising a resolution of the corresponding pixel in the 2D image).
Regarding claim 7, Wohlfeld in view of Pylvaenaeinen discloses the computer-implemented method of claim 1, wherein Wohlfeld further discloses the at least one of the 2.5D points comprises a color of the corresponding pixel in the 2D image (para(s). [0060]—see citation in claim 1 limitation “retrieving a…” above—, where “add[ing] color data” using the “color photographic image” is one 2.5D point comprising a color of the corresponding pixel in the 2D image).
Regarding claim 8, Wohlfeld in view of Pylvaenaeinen discloses the computer-implemented method of claim 1, wherein Wohlfeld further discloses the 3D point cloud comprises a first point density of the 3D points and the 2.5D point cloud comprises a second point density of the 2.5D points greater than the first point density (para(s). [0057] and [0059]—see citation in claim 1 limitation “generating a…” above—, where the “interpolated point cloud” having more points (i.e., “add[ed] points”) than the 3D point cloud is the 2.5D point cloud (i.e., the “interpolated point cloud”) comprising a second point density of the 2.5D points greater than the first point density of the 3D points of the 3D point cloud).
Regarding claim 11, the claim differs from claim 1 in that the claim is in the form of a system comprising:
at least one memory having computer readable instructions; and
at least one processor for executing the computer readable instructions, the computer readable instructions controlling the at least one processor to perform operations comprising the method of claim 1. Wohlfeld further discloses said at least one memory and said at least one processor (para(s). [0007], recite(s)
[0007] “…The controller may include a processor and a memory…”
). Therefore, claim 11 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).
Regarding claim 12, the claim recites similar limitations to claim 2 and is rejected for similar rationale and reasoning (see the analysis for claim 2 above).
Regarding claim 13, the claim recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above).
Regarding claim 14, the claim recites similar limitations to claim 4 and is rejected for similar rationale and reasoning (see the analysis for claim 4 above).
Regarding claim 16, the claim recites similar limitations to claim 6 and is rejected for similar rationale and reasoning (see the analysis for claim 6 above).
Regarding claim 17, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).
Regarding claim 18, the claim recites similar limitations to claim 8 and is rejected for similar rationale and reasoning (see the analysis for claim 8 above).
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wohlfeld in view of Pylvaenaeinen as applied to claim(s) 8 and 18 above, and further in view of Wang et al. (Wang; US 2017/0103510 A1).
Regarding claim 9, Wohlfeld in view of Pylvaenaeinen discloses the computer-implemented method of claim 8, wherein Wohlfeld further discloses the 3D point cloud was captured by a scanner (para(s). [0009]—see citation in claim 1 limitation “retrieving a…” above—and para(s). [0057]—see citation in claim 1 limitation “generating a…” above—, where the “laser scanner” is a scanner and the resulting captured 3D point cloud has a first point density of a lower density (e.g., a lower resolution) than a “higher resolution” 2D image).
Where Wohlfeld in view of Pylvaenaeinen does not specifically disclose
the 3D point cloud was captured by a scanner using a fast mode having a scan performed resulting in the 3D point cloud at the first point density, the first point density being a lower point density than another scan performed in normal mode of the scanner;
Wang teaches in the same field of endeavor of capturing a 3D point cloud and 2D image for maintaining color
the 3D point cloud was captured by a scanner using a fast mode having a scan performed resulting in the 3D point cloud at the first point density, the first point density being a lower point density than another scan performed in normal mode of the scanner (para(s). [0052], recite(s)
[0052] “At 342, the method 340 may include generating a model of a 3D object. The model may be generated from fusing data collected from a scan of the 3D object. For example, the model may be generated from a 2D RGB image and a 3D point cloud captured from a pre-scan of the 3D object. As described above, a pre-scan may include a low-resolution rapid scan of the 3D object relative to a subsequent high-resolution slower full scan of the 3D object. In this manner, a pre-scan may assist and/or inform a user and/or scan manager in adjusting scan parameters to generate an accurate reconstruction of the 3D object prior to sitting through the entire full scan.”
, where the 3D point cloud scan being “a low-resolution rapid scan of the 3D object relative to a subsequent high-resolution slower full scan of the 3D object” is capturing the 3D point cloud by a scanner using a fast mode (e.g., “rapid scan”) resulting in the 3D point cloud at the first point density (e.g., “low-resolution”); wherein the first point density is a lower point density (e.g., “low-resolution”) than another scan (e.g., “high-resolution”) performed in normal mode (e.g., “slower full scan”)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Wohlfeld in view of Pylvaenaeinen to incorporate capturing the 3D point cloud using a fast mode, resulting in the first point density of the 3D point cloud being a lower point density than another scan performed in a normal mode of the scanner, to generate an accurate reconstruction of the environment without having to perform a full scan (e.g., a higher resolution scan) as taught by Wang above (see para. [0052] above).
Regarding claim 19, the claim recites similar limitations to claim 9 and is rejected for similar rationale and reasoning (see the analysis for claim 9 above).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wohlfeld in view of Pylvaenaeinen as applied to claim 1 above, further in view of Rowell et al. (Rowell; US 2019/0208181 A1), and further in view of Metzler et al. (Metzler; US 2022/0020131 A1, previously cited in the list of references cited by the examiner filed August 27, 2025).
Regarding claim 10, Wohlfeld in view of Pylvaenaeinen discloses the computer-implemented method of claim 1, wherein Rowell teaches, in the same field of endeavor of image capturing, at least one of:
the 2D image is a result of a remapping for removing distortion in order to result in the 2.5D point cloud with a homogeneous point density (para(s). [0083], [0116], and [0178], recite(s)
[0083] “ In some embodiments, one or more of the camera modules includes a fish eye lens. Without fish eye specific distortion factors, wide field of view images captured by the fish eye lens will appear heavily distorted when projected on a rectangular display screen. The camera calibration module 221 may generate one or more fish eye specific distortion factors using one or more fish eye distortion modules. The fish eye distortion factors are incorporated into the calibration metadata for undistortion of wide field of view images during projection. In one example, the fish eye distortion factors warp the edges of the wide field of view images to make the images appear captured by a standard field of view lens. In other examples, the fish eye distortion factors apply a scale factor to wide field of view images during projection to project the undistorted portion of the wide field of view images.”
[0116] “The k.sub.1, k.sub.2, . . . , k.sub.n parameters 316 are distortion coefficients that describe the levels of lens distortion, as a function of the radius from the center of the captured image frame to the edge of the frame. In some embodiments, n can be, for example, between 1 and 16, depending on how precise the calibration needs to be and the characteristics of the particular lens. The k.sub.1, k.sub.2, . . . , k.sub.n parameters essentially describe how much distortion an image pixel has as a location of the pixel moves from the center of the image to the edge of the image.”
[0178] “In other embodiments, the camera calibration module 221 corrects distortion and calibration errors using re-calibration data. The camera calibration module 221 may identify a distortion error by selecting pixels including an object known to have a rectangular shape but appears curved in the captured image. To undistort the image, the camera calibration module 221 may generate re-calibration data including a model for correcting distortion. The re-calibration data may then be combined with the distortion coefficients included in the calibration file to obtain updated distortion coefficients. The image rectification module 222 and/or the PPM 224 the project the captured image using the updated distortion coefficients to remove the distortion error. ”
, where “correcting [geometric] distortion” through “image rectification” based on “how much distortion an image pixel has as a location of the pixel moves from the center of the image to the edge of the image” is remapping for removing distortion (e.g., “undistort[ing] the image” using a “model for correcting distortion”);
where the claim limitation following “in order to result in…” is merely an intended use/result limitation and thus will not be interpreted as a functional or structural requirement of the claim (see MPEP § 2114, subsection II)); and
the 2D image is a result of color adjustment using at least one of a color contrast enhancement process (para(s). [0076], recite(s)
[0076] “Image signal processing module 225 embodiments processing image may also perform one or more pre-processing operations that are specific to image data. In one embodiment, the image signal processing module 225 performs pre-processing operations to correct and/or enhance image data. For example, the image signal processing module 225 corrects the field shading, enhances RGB color data, sharpens image resolution, adjusts color contrast levels, adjusts white balance, stabilizes a video sequence, corrects lens distortion, corrects occlusion zones, or performs other image or video sequence corrections or enhancements. The image signal processing module 225 may correct and/or enhance image data by applying one or more interpolation functions or extrapolation functions to image data included in an image or video sequence. In one example, the image signal processing module 225 corrects occlusion zones and color shading by interpolating the RGB color data in the surrounding the occlusion zone or mis-shaded area.”
, where “adjusts color contrast” is a color contrast enhancement process).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Wohlfeld in view of Pylvaenaeinen to incorporate removing distortion and using at least color contrast enhancement processing in the resulting 2D image to correct for distortions caused by different lens types of imaging devices, such as in fisheye cameras, and correct and/or enhance image data prior to image processing as taught by Rowell above.
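For illustration only, the following sketch shows a generic two-coefficient radial lens distortion model and its inversion by fixed-point iteration, of the kind commonly used when remapping an image to remove distortion. It is the examiner's own generic illustration, not code from Rowell; the coefficients k1 and k2 and the sample coordinate are hypothetical.

    import numpy as np

    def apply_radial_distortion(xy, k1, k2):
        """Map an undistorted normalized image coordinate to its distorted location."""
        r2 = xy[0] ** 2 + xy[1] ** 2
        return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

    def undistort(xy_distorted, k1, k2, iterations=10):
        """Invert the radial model by fixed-point iteration (the remapping step)."""
        xy = xy_distorted.copy()
        for _ in range(iterations):
            r2 = xy[0] ** 2 + xy[1] ** 2
            xy = xy_distorted / (1.0 + k1 * r2 + k2 * r2 ** 2)
        return xy

    # Hypothetical distortion coefficients and a normalized pixel coordinate.
    k1, k2 = -0.25, 0.08
    distorted = apply_radial_distortion(np.array([0.4, 0.3]), k1, k2)
    print(undistort(distorted, k1, k2))    # recovers approximately (0.4, 0.3)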
Where Wohlfeld, as modified by Pylvaenaeinen and Rowell, does not specifically disclose
the 2D image is a result of color adjustment using at least one of …a histogram equalization process from another 2D image;
Metzler teaches in the same field of endeavor of removing distortion in 2D images for colorizing 3D point clouds
the 2D image is a result of color adjustment using at least one of …a histogram equalization process from another 2D image (para(s). [0027] and [0037-0038], recite(s)
[0027] “The enhancement of the sensor image provided by a machine learning technique specifically trained for enhancing the sensor image results in distortions in the enhanced image. The enhanced image has a processed image geometric correctness which is lower than the sensor image geometric correctness of the sensor image. During brightness enhancement by the neural network, for example, edges may be shifted, implying that edge information may be less reliable in the enhanced image as compared to the sensor image. In case the enhanced image is to be used for a subsequent triangulation of some object in the captured scene of interest or for coloring a point cloud, for example, properties such as the position of edges or corners of the object in the sensor image should ideally be maintained after image enhancement, i.e. the position of edges or corners of the object in the sensor image should ideally be the same both before and after image enhancement. State-of-the-art machine learning techniques for image enhancement, however, are only trained for enhancing images, for example increasing the brightness, and not for maintaining metrological properties of enhanced images.”
[0037] “In another embodiment of the method, the geometric correction image is generated using the sensor image by linear combination of color channels of the sensor image, in particular by applying a gamma expansion and/or histogram equalization to the color channels before the linear combination.”
[0038] “…the sensor image may be gamma-expanded. Linearly combining the different color channels to obtain a relative luminance image may improve the image contrast of the linearly combined image as compared to the image contrast present in the individual color channels…”
, where enhancing a “sensor image” through “histogram equalization” to result in an “enhanced image” is generating a resulting 2D image from a color adjustment using at least histogram equalization processing from another 2D image (e.g., a “geometric correction image”)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Wohlfeld, as modified by Pylvaenaeinen and Rowell, to further incorporate using histogram equalization processing from another 2D image in addition to the color contrast enhancement to further improve color contrast in the resulting 2D image for subsequent use in assigning color to a 3D point cloud using the resulting 2D image, as taught by Metzler above (see para. [0027] above).
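For illustration only, the following is a generic sketch of histogram equalization applied to a single 8-bit image channel. It is the examiner's own illustration of the technique in general, not code from Metzler; the sample image values are hypothetical.

    import numpy as np

    def histogram_equalize(channel):
        """Histogram-equalize one 8-bit channel.

        The cumulative histogram is used as a monotone lookup table that
        spreads the pixel intensities over the full 0-255 range, which
        increases the contrast of the channel.
        """
        hist, _ = np.histogram(channel.flatten(), bins=256, range=(0, 256))
        cdf = hist.cumsum()
        cdf_masked = np.ma.masked_equal(cdf, 0)    # ignore empty intensity bins
        cdf_scaled = (cdf_masked - cdf_masked.min()) * 255 / (cdf_masked.max() - cdf_masked.min())
        lookup = np.ma.filled(cdf_scaled, 0).astype(np.uint8)
        return lookup[channel]

    # Hypothetical low-contrast 4x4 single-channel image.
    img = np.array([[100, 101, 102, 103],
                    [104, 105, 106, 107],
                    [108, 109, 110, 111],
                    [112, 113, 114, 115]], dtype=np.uint8)
    print(histogram_equalize(img))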
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z YAO whose telephone number is (571)272-2870. The examiner can normally be reached Monday - Friday (8:30AM - 5PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.Z.Y./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666