Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed on August 1, 2025 have been fully considered but they are not persuasive.
Applicant argues that the examiner’s BRI for the term “first image position data” based on para. [0071] of the present disclosure is incorrect. The examiner’s BRI for this term is “position data of each next pixel of sub-image data as it is projected onto the corresponding sub-mesh”. Applicant argues that “‘position data of each next pixel of sub-image data as it is projected onto the corresponding sub-mesh’ is nowhere in the application”.
The examiner disagrees. In determining the BRI for this term, the examiner also analyzed the context in which the term is used in the present disclosure. Para. [0071] reads, in full: “[a]s shown in FIG. 6, the mesh data includes mesh position data for a plurality of sub-meshes, the image data includes first image position data, and the first image position data includes, for example, position data of each pixel. Next, collected image data is processed based on an association between the mesh position data of the mesh data and the first image position data of the image data, so as to obtain processed image data 610.”
Paras. [0072] and [0073] further elaborate on what is meant by “first image position data” by providing an example in which the first image position data of each pixel and an association between the first image position data and the mesh position data are used to perform a process by which sub-images of the image data are “mapped and filled into” the sub-meshes of the mesh data. Para. [0072] discloses that there is a correspondence between the first image position data for each pixel of each sub-image and the mesh position data of each respective sub-mesh that is used to map and fill the sub-meshes with the corresponding sub-image data. Therefore, the examiner’s BRI is consistent with the present disclosure.
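For purposes of illustration only, the map-and-fill process described above can be sketched in a few lines of Python: per-pixel position data of a sub-image (first image position data) is mapped, via a fixed pairing of sub-image vertices with sub-mesh vertices (the association), into the corresponding sub-mesh region, which is then filled with the sampled pixels. The sketch is not taken from the present disclosure or the cited art; the name fill_submesh and all values are hypothetical.

import numpy as np

def fill_submesh(sub_image, src_tri, dst_tri, canvas):
    # Hypothetical illustration, not from the specification or cited art.
    # src_tri: three pixel positions in the sub-image (first image position data).
    # dst_tri: the three corresponding sub-mesh vertex positions (mesh position data).
    h, w = canvas.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts_h = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # 3 x N homogeneous pixels
    T = np.vstack([dst_tri.T, np.ones(3)])                        # 3 x 3 barycentric system
    bary = np.linalg.solve(T, pts_h)                              # barycentric coords of each canvas pixel
    inside = np.all(bary >= 0, axis=0)                            # canvas pixels inside the sub-mesh triangle
    src_xy = src_tri.T @ bary[:, inside]                          # matching sub-image positions (the association)
    sx = np.clip(np.round(src_xy[0]).astype(int), 0, sub_image.shape[1] - 1)
    sy = np.clip(np.round(src_xy[1]).astype(int), 0, sub_image.shape[0] - 1)
    canvas[ys.ravel()[inside], xs.ravel()[inside]] = sub_image[sy, sx]  # map and fill
    return canvas

# Toy usage: one sub-image triangle mapped and filled into one sub-mesh triangle.
sub_image = np.arange(100, dtype=np.uint8).reshape(10, 10)
src_tri = np.array([[0.0, 0.0], [9.0, 0.0], [0.0, 9.0]])
dst_tri = np.array([[5.0, 5.0], [25.0, 8.0], [8.0, 25.0]])
fill_submesh(sub_image, src_tri, dst_tri, np.zeros((32, 32), dtype=np.uint8))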
With regard to the examiner’s BRI of the term “second image position data”, Applicant argues that the examiner’s BRI is incorrect. The examiner’s BRI for this term, which was based on para. [0077], is “position data of the processed image data after it has been projected onto the mesh”. Applicant argues “paragraph [0077] of the present application states ‘the second image position data includes, for example, position data of four vertices of the processed image data.’ That is, the second image position data relates to the object on the 2D image plane. The text ‘position data of the processed image data after it has been projected onto the mesh’ is nowhere in the application.”
The examiner disagrees. Para. [0077] reads, in full: “[a]s shown in FIG. 7, the processed image data includes, for example, a plurality of processed image data, each processed image data includes second image position data, and the second image position data includes, for example, position data of four vertices of the processed image data.” In determining the BRI for this term, the examiner also analyzed the context in which the term is used in the present disclosure. The description of Fig. 7 follows the description of Fig. 6 and shows the result of performing the process, discussed above with reference to Fig. 6, of mapping and filling the sub-images onto the sub-meshes to obtain “a plurality of processed image data” and then using the mesh position data to concatenate the sub-images for the plurality of processed image data, as described in para. [0074].
Therefore, the examiner’s BRI that “second image position data” is position data of the processed image data after it has been projected, or mapped, onto the mesh is consistent with the present disclosure. Furthermore, nothing in the present disclosure indicates that the second image position data “refers to pixel coordinates of objects on the 2D image plane”, as contended by Applicant. It should also be noted that the claims do not recite that the second image position data refers to pixel coordinates of objects on the 2D image plane. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues that Ganjineh and Levinson fail to disclose the invention recited in amended independent claim 1. Claim 1 has been amended to incorporate the subject matter of claims 4 and 5, which were rejected in the nonfinal Office Action under 35 U.S.C. 103 as being unpatentable over Ganjineh in view of Levinson. Most of Applicant’s arguments are premised on Applicant’s position that the BRIs for the first and second position data are incorrect, which the examiner disagrees with for the reasons set forth above.
Applicant argues further that the examiner’s BRI for the term “first positional relationship” is also incorrect. Specifically, Applicant argues that “[t]he first positional relationship is a two-dimensional positional relationship between multiple processed image data, not the relationship between image data projected onto the mesh.” Nowhere in the present disclosure is the first positional relationship described explicitly or implicitly in this manner. Also, the claims do not recite such a two-dimensional relationship. As indicated above, while the claims are interpreted in light of the specification, limitations from the specification are not read into the claims.
As indicated in the nonfinal Office Action, the BRI for first positional relationship is based on the context in which the phrase is used in the present disclosure because the phrase is not explicitly defined in the specification. The context in which the term is used in the present disclosure indicates that it is a positional relationship between processed image data, i.e., the image data after meshing has been performed, but before integration has been performed. This is consistent with the present disclosure. Paras. [0070]-[0075] of the present specification describe the “processed image data” being obtained by mapping the sub-images onto the sub-meshes and concatenating the sub-images using the mesh position data. This same portion of the specification describes that the “plurality of processed image data” are obtained “in a similar way”.
After describing this process of obtaining the plurality of processed image data with reference to Fig. 6, the specification reads, “[n]ext, a first positional relationship between the plurality of processed image data is determined, as shown in FIG. 7.” However, the present specification never provides any description of what the first positional relationship is and its meaning cannot be discerned from Fig. 7. The present disclosure merely discloses that it is some positional relationship that exists as a result of mapping the sub-images onto the sub-meshes to produce the plurality of processed image data. Since this meshing process is described with reference to Fig. 6 and the integration process for removing overlapping, or duplicate, processed image data is described later in the present specification with reference to Figs. 7 and 8, the examiner’s BRI that the first positional relationship is a positional relationship between processed image data that exists after meshing has been performed, but before integration has been performed, is correct.
Applicant argues that although Ganjineh discloses the ground mesh being used to generate an orthorectified image of the ground-level features of the road network, Ganjineh does not disclose “a positional relationship of multiple images including the orthorectified road image is determined based on coordinates of the mesh”. The examiner disagrees. Since the ground mesh in Ganjineh is used to generate the orthorectified image of the road network, and since multiple images are mapped onto the mesh, the positional relationship among the images that are mapped onto the mesh is necessarily determined based on the positional coordinates of the mesh. Otherwise, the final orthorectified image generated from the mesh would not be a positionally accurate map.
Applicant argues that Ganjineh does not disclose integrating multiple images based on the first positional relationship of multiple images. The examiner disagrees. The BRI for the integrating limitation provided in the nonfinal Office Action was based on the context in which the limitation is used in the present disclosure because the term is not defined in the present disclosure. Based on the context in which the term is used, it means to combine or join. Para. [0096] of Ganjineh discloses that the plurality of processed image data corresponding to the orthorectified road image resulting from projection of the multiple images onto the mesh are integrated through a process of superposition and blending to produce an orthorectified road image map that can be viewed from any direction. This process of superpositioning and blending the images meets the definition of integrating.
Applicant argues that “the cited portions of Ganjineh only describe how to process data in the case of overlap, and do not describe anything about the case of non-overlap at all. Moreover, there is no causal relationship between the overlap of sampling points and the integration of images, and they cannot be deduced from each other”. The examiner disagrees. In Ganjineh et al., after the images have been projected onto the mesh and integrated through superposition such that the processed images are in the first positional relationship, blending may be performed by averaging pixel values of sample points in the images that overlap. However, if there is no overlapping of sample points and the blending operation does not need to be performed, the images are integrated via superpositioning without performing blending. Ganjineh contemplates this scenario because para. [0096] refers to pixel values being “appropriately” averaged to perform blending, which means that pixel values that do not or should not be averaged are not averaged, in which case integration of the images via superpositioning is performed without blending. This, in turn, means that integration of the processed images via superpositioning is performed without blending in response to determining that sample points in the processed images do not overlap.
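For purposes of illustration only, the superposition-with-conditional-blending logic described above can be sketched as follows, under the assumption that each projected image carries a mask marking its valid sample points. The sketch is not Ganjineh’s implementation; the name superpose and the toy data are hypothetical.

import numpy as np

def superpose(projected, masks):
    # Hypothetical illustration of para. [0096]-style integration, not Ganjineh's code.
    # projected: list of HxW arrays of projected sample values; masks: HxW booleans.
    stack = np.stack(projected).astype(float)
    valid = np.stack(masks)
    counts = valid.sum(axis=0)                       # how many images see each sample point
    total = np.where(valid, stack, 0.0).sum(axis=0)
    out = np.zeros_like(total)
    seen = counts > 0
    # Overlapping sample points (counts > 1) are blended by averaging;
    # non-overlapping points (counts == 1) are superimposed without blending.
    out[seen] = total[seen] / counts[seen]
    return out

# Toy usage: two projections overlapping in the middle column.
a = np.array([[1.0, 2.0, 0.0]]); ma = np.array([[True, True, False]])
b = np.array([[0.0, 4.0, 5.0]]); mb = np.array([[False, True, True]])
print(superpose([a, b], [ma, mb]))  # [[1. 3. 5.]]: only the overlap is averaged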
Lastly, Applicant argues that the teachings of Ganjineh and Levinson cannot be properly combined because Levinson is not relied on as teaching any of the subject matter of claims 4 and 5. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). It is not required that both of the prior art references used in a rejection teach all of the claim limitations. Levinson is relied upon for its explicit disclosure of the association between the mesh data and the image data. This association exists in Ganjineh because it is an attribute of meshing, but is not explicitly discussed in Ganjineh.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation (BRI) using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The BRI of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification.
The following terms in the claims have been given the following interpretations in light of the specification:
traffic object: para. [0029]: "a road, a ground, etc.", a "road ground";
processing sensor data: para. [0029]: processing the outputs of sensors;
map data: para. [0029]: image data used as a base map that can be further processed to produce other types of maps;
point cloud device: this element is not explicitly defined in the specification, but the context in which the term is used indicates that a point cloud device is an image capture device that collects 3D point cloud data, such as a laser scanner or LiDAR system that captures 3D data points of objects to provide precise locations in space;
sensor data: para. [0045]: the outputs of sensors;
mesh data: this element is not explicitly defined in the specification, but the context in which the term is used indicates that mesh data is data that is obtained by processing point cloud data with a meshing algorithm to produce a 3D mesh, which is a continuous surface based on the point cloud data;
processed image data: para. [0047]: data that is obtained by processing image data output from sensors based on an association between the image data and the mesh data;
segmentation processing: para. [0050]: includes mesh segmentation methods such as triangular mesh segmentation, polygon mesh segmentation and spline segmentation methods;
association: para. [0051]: an association between data collected by different acquisition devices that have been pre-calibrated, such that the data collected by the devices, and thus the mesh data and the image data, are associated in some way; association in the time dimension is an example of the association;
removing point cloud data: paras. [0062]-[0063]: filtering the point cloud data or applying noise reduction to remove objects, such as trees, buildings and obstacles;
additional object: para. [0063]: an object other than the traffic object, e.g., "a tree, a building, an obstacle, etc.";
mesh cutting: para. [0067]: performing a meshing algorithm that transforms the processed point cloud data by triangular, polygonal or spline meshing, for example;
first image position data: para. [0071]: position data of each next pixel of sub-image data as it is projected onto the corresponding sub-mesh;
mesh position data: para. [0071]: position data for each of a plurality of sub-meshes;
concatenating: para. [0074]: using the mesh position data as a reference to combine the sub-images together to obtain the processed image data;
first positional relationship: this limitation is not explicitly defined in the specification, but the context in which the term is used indicates that it is a positional relationship between processed image data, i.e., the image data after meshing has been performed, but before integration has been performed;
second positional relationship: para. [0080]: the positional relationship between image data that remains after meshing and integration have been performed and overlapping image data has been removed;
target image data: para. [0080]: image data that remains after it is determined based on the first positional relationship that there is overlapping image data and at least part of the overlapping image data has been removed;
overlapping: para. [0081]: when an area of one of the plurality of processed image data coincides with an area of one of the other plurality of processed image data;
second image position data: para. [0077]: position data of the processed image data after it has been projected onto the mesh;
integrate: this element is not explicitly defined in the specification, but the context in which the term is used indicates that it means to combine or join;
sub-image: paras. [0072]-[0073]: image portions that are subsets of the acquired image;
sub-mesh: paras. [0071]-[0073]: mesh portions that are interconnected to form the entire mesh. The sub-meshes can be, for example, triangles or polygons in cases where triangular and polygonal meshing algorithms are used to generate the mesh;
performing segmentation processing on the integrated image data according to a preset size, so as to obtain the map data for the traffic object: para. [0086]: it appears from the context that this means that after the processed image data has been integrated, it is transformed into a map of a particular scale.
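For purposes of illustration only, the interpretation in the last entry above can be sketched as cutting the integrated image data into map tiles of a preset size. The sketch is hypothetical and not drawn from the specification; the name segment_by_preset_size is an assumption.

import numpy as np

def segment_by_preset_size(integrated, tile_h, tile_w):
    # Hypothetical illustration: segment the integrated image into
    # preset-size tiles (partial border strips are discarded for simplicity).
    h, w = integrated.shape[:2]
    tiles = []
    for y in range(0, h - tile_h + 1, tile_h):
        for x in range(0, w - tile_w + 1, tile_w):
            tiles.append(integrated[y:y + tile_h, x:x + tile_w])
    return tiles

# Toy usage: a 6x9 integrated image cut into six 3x3 map tiles.
image = np.arange(54).reshape(6, 9)
print(len(segment_by_preset_size(image, 3, 3)))  # 6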
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6-11, 14-18, and 21-24 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publ. Appl. No. 2020/0098135 A1 to Ganjineh et al. (hereinafter “Ganjineh et al.”) in view of U.S. Publ. Appl. No. 2019/0295315 A1 to Levinson et al. (hereinafter “Levinson et al.”).
Regarding claim 1, Ganjineh et al. discloses a method of processing map data (paras. [0014]-[0016]: “the method comprising … generating, using at least some of the obtained images and the associated camera locations, a local map representation representing an area of the road network on which the vehicle is travelling”), the method comprising:
processing sensor data (para. [0180] and Fig. 3, images captured by camera 302) for a traffic object (para. [0186]: “[t]hus, the camera sensors obtain images of the road environment within which the vehicle is currently travelling”; the traffic object is the road on which the vehicle is traveling) to obtain point cloud data for the traffic object (para. [0057]: “[t]he output of the stereo DSO process also generally contains a “key point cloud” for each of the frames that are being processed (also referred to herein as a “sparse point cloud”)”), wherein the sensor data comprises image data (para. [0055], the images captured by the camera(s) 202, 302 comprise image data);
obtaining mesh data based on the point cloud data (para. [0094]: “[f]or instance, the ground mesh may be generated using a sparse point cloud as output from the DSO process described above. However, alternatively, or additionally, the ground mesh may be generated using a stereo point cloud obtained directly from the stereo images using the pixel depths”);
processing the image data based on an association between the mesh data and the image data, so as to obtain processed image data (para. [0095]: “[t]he ground mesh may in turn be used to generate an orthorectified image of the ground-level features of the road network. Generally speaking, an orthorectified image is a “scale corrected” image, depicting ground features as seen from above in their exact ground positions….”; the orthorectified image constitutes the processed image data and using the ground mesh to process the image data to generate the orthorectified image constitutes processing the image data based on an association between the mesh data and the image data, so as to obtain processed image data);
obtaining the map data for the traffic object based on the processed image data (para. [0096], map data in the form of a height map of the road is obtained based on the orthorectified road image),
wherein the processed image data comprises a plurality of processed image data, and each of the plurality of processed image data comprises image position data (para. [0096] discusses projecting a plurality of images onto the ground mesh to generate the orthorectified road image and states that the projection process can be performed for multiple images),
wherein the obtaining the map data for the traffic object based on the processed image data comprises:
integrating the plurality of processed image data based on image position data of the plurality of processed image data, so as to obtain integrated image data (in Ganjineh et al., para. [0096], the plurality of processed image data corresponding to the orthorectified road image resulting from projection of the multiple images onto the mesh is integrated through a process of superposition and blending to produce the orthorectified road image map; the integration is based on image position data because it is based on the position data of the processed images after the images have been projected onto the mesh), the integrating the plurality of processed image data comprising:
determining a first positional relationship between the plurality of processed image data based on the image position data of the plurality of processed image data (as indicated above, the BRI for the first positional relationship is the relationship between the image data after it has been projected onto the mesh; in Ganjineh et al., the images comprising the orthorectified road image have a first positional relationship between them that is determined based on the positional coordinates of the mesh); and
integrating the plurality of processed image data based on the first positional relationship so as to obtain the integrated image data, in response to determining that the first positional relationship indicates that the plurality of processed image data do not have overlapping data (in Ganjineh et al., para. [0096], after the images have been projected onto the mesh and integrated through superposition and are in the first positional relationship, blending may be performed by averaging pixel values of sample points that overlap in the camera images that see the same sample point; however, if it is determined that there is no overlap, the images are superimposed without performing blending); and

performing a segmentation processing on the integrated image data according to a preset size, so as to obtain the map data for the traffic object (as indicated above, the BRI for this limitation is transforming the processed image data after integration into a map of a particular scale; in Ganjineh et al., para. [0095], the process of generating the orthorectified road image map via image projection onto the mesh and integration transforms the integrated image data into a preset size map: “[t]he ground mesh may in turn be used to generate an orthorectified image of the ground-level features of the road network. Generally speaking, an orthorectified image is a “scale corrected” image, depicting ground features as seen from above in their exact ground positions, preferably in which distortion caused by camera and flight characteristics and relief displacement has been removed using photogrammetric techniques. An orthorectified image is a kind of aerial photograph that has been geometrically corrected (“orthorectified”) such that the scale of the photograph is uniform, meaning that the photograph can be considered equivalent to a map”).
Ganjineh et al. does not explicitly state that the image data is processed based on an “association” between the mesh data and the image data, but there must be an association between the mesh data and the image data on which the processing is based in Ganjineh et al. because Ganjineh et al. discloses in para. [0096] that part of the processing of the image data includes projecting image features from the image data onto the ground mesh. See also para. [0229] of Ganjineh et al. discussing projecting image data onto the mesh. If there were no association between the mesh data and the image data, the features extracted from the image data could not be projected onto the proper positions in the mesh.
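For purposes of illustration only, the role of the association can be sketched with a pinhole camera model: the calibration (K, R, t) ties mesh vertex positions to image pixel positions, and without it image features could not be placed at the proper mesh positions. The sketch is not drawn from Ganjineh et al. or Levinson et al.; all names and values are hypothetical.

import numpy as np

def project_to_image(mesh_vertices, K, R, t):
    # Hypothetical illustration: the calibration (K, R, t) is the association
    # between mesh position data and image position data.
    cam = R @ mesh_vertices.T + t.reshape(3, 1)   # world frame -> camera frame
    uvw = K @ cam                                 # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T                   # N x 2 pixel coordinates

# Toy usage: three mesh vertices, identity pose, simple intrinsics.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
verts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [0.0, 0.5, 2.0]])
print(project_to_image(verts, K, R, t))  # first vertex lands at the principal point (320, 240)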
Levinson et al., in the same field of endeavor, explicitly discloses, in para. [0080], the association between the mesh data and the image data. In particular, Levinson et al. discloses surfels corresponding to specific image data projected onto specific polygons of the mesh based on a spatial association between the mesh and the image data: “spatial information (e.g., image data projected onto a mesh, individual “surfels” (e.g., polygons associated with individual color and/or intensity))….” See also para. [0104] of Levinson et al. explicitly discussing the association between each pixel of image data and the polygons of the 3D mesh.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the map generation system and method of Ganjineh et al. based on the teachings of Levinson et al. to explicitly associate the mesh data with the image data as taught by Levinson et al. to ensure mapping accuracy in Ganjineh et al. when projecting extracted features of image data onto the mesh. One of ordinary skill in the art would have been motivated to make the modification to improve the accuracy of projecting image data onto the mesh to improve the accuracy of the map generated in Ganjineh et al. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results.
Regarding claim 2, Ganjineh et al. does not explicitly disclose that the mesh data comprises position data for a plurality of sub-meshes. As indicated above, the BRI for the term “sub-mesh” is that it is a portion of a mesh, where the portions of the mesh are interconnected to form the mesh, such as triangular or polygonal portions in cases where triangular and polygonal meshing algorithms are used to generate the mesh, respectively. As is well known in the art, generating a mesh from point cloud data involves performing a meshing algorithm, such as a triangular or polygonal meshing algorithm, that segments the point cloud data into geometric sub-meshes such as triangles or polygons. As is also known in the art, since point cloud data includes position data defining the positions of the points in 3D space, mesh data generated from point cloud data also includes position data defining positions of the vertices of the sub-meshes in 3D.
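For purposes of illustration only, the well-known meshing described above can be sketched with a Delaunay triangulation of the ground-plane coordinates of a point cloud, which is one of many possible meshing algorithms; the example is hypothetical and not drawn from either reference.

import numpy as np
from scipy.spatial import Delaunay

# A toy ground point cloud: (x, y) positions with heights z.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(50, 3))

# Triangulate in the ground plane; each simplex is one triangular sub-mesh,
# and the sub-meshes interconnect to form the entire mesh.
tri = Delaunay(points[:, :2])
print("sub-meshes:", len(tri.simplices))

# Each sub-mesh inherits 3D position data for its vertices from the point cloud.
print(points[tri.simplices[0]])  # 3 x 3 array of (x, y, z) vertex positions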
In Levinson et al., the 3D mesh component 124, Fig. 1, converts point cloud data collected by LiDAR into mesh data (para. [0031]). The mesh data comprises mesh position data (para. [0024], “mesh coordinates”) for a plurality of sub-meshes (para. [0024], e.g., vertices, polygons), and the image data comprises first image position data (para. [0024], the region identification component 116, Fig. 1, determines “image coordinates”), and wherein the processing the image data based on an association between the mesh data and the image data so as to obtain processed image data comprises:
determining, from the image data, a plurality of sub-image data corresponding to the plurality of sub-meshes one by one, based on an association between the mesh position data for the plurality of sub-meshes and further image position data (Levinson et al., para. [0030], the 3D mapping component 122 can “project image data onto the corresponding location on the 3D mesh. In some instances, the 3D mapping component 122 can map a plurality of images onto the 3D mesh, with individual images represented as a channel of the 3D mesh, such that individual images can be “stacked” on the 3D mesh for subsequent processing, such as blending or duplicating”). In Levinson et al., the further image position data is the position of the sub-image currently being projected onto the corresponding sub-mesh during the projection process.
Regarding the concatenating step recited in claim 2, as indicated above, the BRI for this term is that it means using the position data of the mesh to connect, or link, the projected sub-images together to obtain the processed image data. In Ganjineh et al., after the sub-image data has been projected onto the mesh, the images are “superimposed and blended together” (para. [0096]), which constitutes concatenating the plurality of sub-images by using the mesh position data for the plurality of the sub-meshes as a reference since the mesh position data is used during projection of the image data onto the mesh and therefore is also used during superimposition and blending of the images together.
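For purposes of illustration only, concatenation using the mesh position data as a reference can be sketched as placing each projected sub-image at an offset derived from its sub-mesh position in a common output frame. The sketch is not Ganjineh’s implementation; the name concatenate_subimages and the offsets are hypothetical.

import numpy as np

def concatenate_subimages(sub_images, offsets, out_shape):
    # Hypothetical illustration: offsets stand in for mesh position data
    # reduced to integer pixel positions in a common reference frame.
    out = np.zeros(out_shape)
    for img, (oy, ox) in zip(sub_images, offsets):
        h, w = img.shape
        out[oy:oy + h, ox:ox + w] = img  # place each sub-image at its mesh-derived position
    return out

# Toy usage: two 2x2 sub-images placed side by side per the mesh reference.
a = np.ones((2, 2)); b = 2.0 * np.ones((2, 2))
print(concatenate_subimages([a, b], [(0, 0), (0, 2)], (2, 4)))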
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the map generation system and method of Ganjineh et al. based on the teachings of Levinson et al. to combine the concatenating process of Ganjineh et al. with the sub-image-to-sub-mesh projection of Levinson et al. when projecting images onto the mesh to generate the orthorectified road map. One of ordinary skill in the art would have been motivated to make the modification to improve the accuracy of the orthorectified road map generated in Ganjineh et al. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results.
Regarding claim 3, Ganjineh et al. discloses wherein the point cloud data comprises the point cloud data for the traffic object and point cloud data for an additional object (para. [0228], the point cloud data includes point cloud data of the traffic object, i.e., the road, as well as point cloud data for additional objects, e.g., cars, trees and buildings), and
wherein the obtaining mesh data based on the point cloud data comprises:
removing the point cloud data for the additional object from the point cloud data to obtain the point cloud data for the traffic object (as indicated above, the BRI for removing point cloud data for the additional object is filtering the point cloud data or applying noise reduction to remove objects, such as trees, buildings and obstacles; Ganjineh et al., para. [0228]: “[i]n embodiments, the point cloud, e.g. either the stereo point cloud or the DSO point cloud, can be filtered, for example, by using one or more of: a normal filter (to remove points indicative of cars, trees and buildings that were incorrectly classified by the semantic segmentation”; see also para. [0096] of Ganjineh et al. discussing applying an image mask when generating the orthorectified road image map to remove “extraneous or unwanted features”); and

performing a mesh cutting based on the point cloud data for the traffic object, so as to obtain the mesh data (as indicated above, the BRI for mesh cutting is performing a meshing algorithm that transforms the processed point cloud data by triangular, polygonal or spline meshing, for example; Ganjineh et al. does not explicitly discuss the type of meshing algorithm that is performed to transform the point cloud into the mesh; Levinson et al. explicitly discloses performing a mesh cutting based on the point cloud data for the traffic object, so as to obtain the mesh data, para. [0049]: “[a]s can be understood, the 3D mesh can be represented by any number of polygons (e.g., triangles, squares, rectangles, etc.), and is not limited to any particular shape.”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the map generation system and method of Ganjineh et al. based on the teachings of Levinson et al. to perform one of the types of mesh cutting processes discussed in Levinson et al. on the point cloud data collected in Ganjineh et al. since Ganjineh et al. performs some type of mesh cutting process to generate the mesh and Levinson et al. teaches that any suitable mesh cutting process can be used for this purpose. One of ordinary skill in the art would have been motivated to make the modification in order to choose a mesh cutting process that is most suitable for the types of 3D objects being represented. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results.
Regarding claim 6, Ganjineh et al. discloses that the integrating the plurality of processed image data based on image position data of the plurality of processed image data so as to obtain integrated image data comprises:
removing at least part of the plurality of processed image data to obtain a plurality of target image data corresponding to the plurality of processed image data one by one, in response to determining that the first positional relationship indicates that the plurality of processed image data have the overlapping data (in Ganjineh et al., para. [0096], after the images have been projected onto the mesh and integrated through superposition and are in the first positional relationship, blending is performed by averaging pixel values of sample points in response to determining that they overlap in the camera images that see the same sample point);
determining a second positional relationship between the plurality of target image data based on image position data of the plurality of target image data (as indicated above, the BRI for second positional relationship is the relationship between image data after it has been projected onto the mesh and integrated while removing overlapping image data; the BRI for target image data is the image data remaining after integration and removal of overlapping image data; in Ganjineh et al., after image projection onto the mesh, superposition and blending, if necessary, Ganjineh et al. has a determined second positional relationship between the remaining projected image data comprising the resulting orthorectified road image map, para. [0096]); and
integrating the plurality of target image data based on the second positional relationship, so as to obtain the integrated image data (the BRI for this limitation is that the integrating process is performed again based on the second positional relationship on the remaining processed image data after the removal of the overlapping image data; Ganjineh et al., para. [0096], teaches that the process of image projection onto the mesh, integration by superposition and removal by averaging overlapping pixel values can be performed multiple times: “[t]his can be done for multiple different images, and the resulting projections from the various different perspectives can then be superposed and blended together”).
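For purposes of illustration only, the two-pass integration read onto claim 6 above can be sketched as follows: overlap is detected from the first positional relationship, the overlapping part is removed to obtain target image data, and the target image data is integrated at recomputed positions (the second positional relationship). The sketch is hypothetical and not drawn from Ganjineh et al.

import numpy as np

def integrate_with_overlap_removal(images, x_offsets, width):
    # Hypothetical illustration, not Ganjineh's code.
    out = np.zeros((images[0].shape[0], width))
    covered_to = 0                              # rightmost column already filled
    for img, x in zip(images, x_offsets):
        if x < covered_to:                      # first positional relationship shows overlap
            img = img[:, covered_to - x:]       # remove the overlapping data -> target image data
            x = covered_to                      # recomputed position (second positional relationship)
        out[:, x:x + img.shape[1]] = img        # integrate the target image data
        covered_to = x + img.shape[1]
    return out

# Toy usage: two 1x4 strips whose placements overlap by one column.
a = np.ones((1, 4)); b = 2.0 * np.ones((1, 4))
print(integrate_with_overlap_removal([a, b], [0, 3], 7))  # [[1. 1. 1. 1. 2. 2. 2.]]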
Regarding claim 7, Ganjineh et al. discloses that the sensor data further comprises initial point cloud data collected by a point cloud device (para. [0288], sparse point cloud obtained from, e.g., stereo image sensors), and wherein any two or three selected from: the pose data, the point cloud data, and/or the image data, are associated with each other based on a time information and a position information (Ganjineh et al., para. [0050]).
Regarding claim 8, Ganjineh et al. discloses wherein the point cloud data comprises the point cloud data for the traffic object and point cloud data for an additional object (para. [0228], the point cloud data includes point cloud data of the traffic object, i.e., the road, as well as point cloud data for additional objects, e.g., cars, trees and buildings), and
wherein the obtaining mesh data based on the point cloud data comprises:
removing the point cloud data for the additional object from the point cloud data to obtain the point cloud data for the traffic object (Ganjineh et al., para. [0228]: “[i]n embodiments, the point cloud, e.g. either the stereo point cloud or the DSO point cloud, can be filtered, for example, by using one or more of: a normal filter (to remove points indicative of cars, trees and buildings that were incorrectly classified by the semantic segmentation”; see also para. [0096] of Ganjineh et al. discussing applying an image mask when generating the orthorectified road image map to remove “extraneous or unwanted features”); and

performing a mesh cutting based on the point cloud data for the traffic object, so as to obtain the mesh data (as indicated above, the BRI for mesh cutting is performing a meshing algorithm that transforms the processed point cloud data by triangular, polygonal or spline meshing, for example; Ganjineh et al. does not explicitly discuss the type of meshing algorithm that is performed to transform the point cloud into the mesh; Levinson et al. explicitly discloses performing a mesh cutting based on the point cloud data for the traffic object, so as to obtain the mesh data, para. [0049]: “[a]s can be understood, the 3D mesh can be represented by any number of polygons (e.g., triangles, squares, rectangles, etc.), and is not limited to any particular shape.”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the map generation system and method of Ganjineh et al. based on the teachings of Levinson et al. to perform one of the types of mesh cutting processes discussed in Levinson et al. on the point cloud data collected in Ganjineh et al. since Ganjineh et al. performs some type of mesh cutting process to generate the mesh and Levinson et al. teaches that any suitable mesh cutting process can be used for this purpose. One of ordinary skill in the art would have been motivated to make the modification in order to choose a mesh cutting process that is most suitable for the types of 3D objects being represented. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results.
Regarding claim 9, the rejection of claim 1 applies mutatis mutandis to claim 9 to the extent that limitations recited in claim 9 are also recited in claim 1 and addressed above with reference to claim 1. The only elements that are recited in claim 9 that are not also recited in claim 1 are at least one processor and memory communicatively coupled to the processor storing instructions executable by the processor. Ganjineh et al. discloses at least one processor for carrying out the steps recited in claims 1 and 9 (para. [0124]) and memory for storing instructions that are executable by the processor (para. [0133] and claim 15).
Regarding claim 10, the rejection of claim 2 applies mutatis mutandis to claim 10.
Regarding claim 11, the rejection of claim 3 applies mutatis mutandis to claim 11.
Regarding claim 14, the rejection of claim 6 applies mutatis mutandis to claim 14.
Regarding claim 15, the rejection of claim 7 applies mutatis mutandis to claim 15.
Regarding claim 16, to the extent that limitations recited in claim 16 are also recited in claim 1 and addressed above with reference to claim 1, the rejection of claim 1 applies mutatis mutandis to claim 16. The only elements that are recited in claim 16 that are not also recited in claim 1 are a non-transitory computer-readable medium having computer instructions thereon for carrying out the steps recited in claims 1 and 16, which is disclosed in Ganjineh in para. [0133] and claim 15.
Regarding claim 17, the rejection of claim 2 applies mutatis mutandis to claim 17.
Regarding claim 18, the rejection of claim 3 applies mutatis mutandis to claim 18.
Regarding claim 21, the rejection of claim 6 applies mutatis mutandis to claim 21.
Regarding claim 22, the rejection of claim 7 applies mutatis mutandis to claim 22.
Regarding claim 23, the rejection of claim 8 applies mutatis mutandis to claim 23.
Regarding claim 24, the rejection of claim 8 applies mutatis mutandis to claim 24.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL J SANTOS whose telephone number is (571)272-2867. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Bella can be reached at (571)272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL J. SANTOS/Examiner, Art Unit 2667
/MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667