Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Applicant’s amendment filed on December 15, 2025, is acknowledged. Currently, claims 1-11 and 13-21 are pending. Claims 1-3, 6, 8, and 17 have been amended. Claim 12 is cancelled. Claims 18-21 are added.
Response to Amendments
Applicant’s remarks and amendments filed December 15, 2025, have been entered. Applicant’s arguments regarding the 35 U.S.C. 112(f) interpretation, 35 U.S.C. 112(b) rejection, and 35 U.S.C. 101 rejection previously set forth in the Non-Final Office Action mailed September 24, 2025, are persuasive. Accordingly, the 35 U.S.C. 112(f) interpretation, the 35 U.S.C. 112(b) rejection, and the 35 U.S.C. 101 rejection are withdrawn.
Response to Arguments
Applicant’s arguments, filed December 15, 2025, with respect to the 35 U.S.C. 102(a)(2) rejection of claims 1-3, 6, 9-11, 14, 17-18, and 21 have been fully considered and are not persuasive. On pages 8-10 of Applicant’s remarks, Applicant alleges the following:
The provision of aerial images is not equivalent to provision of orthophoto maps. In particular, orthophoto maps are an output of the method disclosed in Knopp and not an input.
The segmentation component of claim 1 segments the orthophoto maps using the generated polygons to approximate “parts” of the orthophoto maps, whereas Knopp teaches a footprint polygon that is merely a boundary element defining an outer extent of the map, which is functionally different from the segmentation component of claim 1.
Applicant highlights that claim 1 recites “at least two input orthophoto maps” relating to an area and Knopp does not disclose two orthophoto maps showing different objects in the same area.
The examiner respectfully disagrees with the above arguments and has provided a response below:
While the examiner agrees that the provision of aerial images is not equivalent to provision of orthophoto maps, the examiner asserts that Knopp teaches input orthophoto maps. The Office Action cites Knopp’s method for producing a digital orthophoto, ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein orthophoto maps are digital orthophotos produced from aerial images). Knopp teaches digital orthophotos that are produced from a series of aerial photographs which, under the broadest reasonable interpretation, are equivalent to orthophoto maps. Further, in response to Applicant’s argument that orthophoto maps are an output of the method disclosed in Knopp and not an input, the examiner has provided the following citation of Knopp, ([0122] “The resultant product is a new digital orthophoto that has been produced with all new digital imagery, or pixels. The new digital orthophoto fulfills all the mapping characteristics of an orthophoto image product. Additional post processing can include partitioning the image product into individual tiles and file formats according to customer specifications. Then, in step 29, the final orthophoto data are formatted for packaging in a media suitable for the intended application for the digital orthophoto.” wherein a new digital orthophoto requires an inputted digital orthophoto). See further analysis of claim 1 below.
Additionally, the examiner asserts that, under the broadest reasonable interpretation, Knopp teaches the segmentation component that is configured for generating at least one or a plurality of polygon(s) for the at least two orthophoto maps relating to the area, each polygon approximating a part of the corresponding input orthophoto map as disclosed in claim 1 ([0363] “Then, the boundary cone is intersected with the DEM (or Multi-DEM), block 306, to provide a world footprint polygon. In similar fashion, the world coordinate line for all edge pixels is projected onto the DEM to determine the point of intersection expressed in world coordinates… A three-dimension world coordinate polygon is generated consisting of the DEM intersection points from all the pixels on the edge of the image. This polygon is consistent with the DEM and forms the boundary, in world space, of the image footprint.” wherein a segmentation component is block 306 and a polygon relating to an area is a world footprint polygon) ([0335] “In block 226, for each image, the image pixels from around the exterior edge of the image are mathematically "projected" onto the DEM to generate a detailed, three-dimensional polygon footprint.” wherein each polygon approximating a part of the corresponding input orthophoto map is equivalent to a polygon footprint projected onto a DEM). Under the broadest reasonable interpretation, a plurality of polygons that relate to an area and approximate a part of corresponding maps are world footprint polygons that are projected onto a DEM. The examiner respectfully suggests Applicant amend the claims to specify that the disclosed “polygon(s)” are configured “based on the determined groups, such as the groups determined by the post-processing component” as taught in Applicant’s specification.
Lastly, in response to Applicant’s argument that Knopp does not disclose two orthophoto maps showing different objects in the same area, the examiner contends that Applicant has not specifically disclosed such in the claims. Therefore, at least two orthophoto maps, as stated in the claims, under the broadest reasonable interpretation are disclosed by Knopp ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein at least two orthophoto maps are digital orthophotos). The examiner respectfully suggests Applicant amend the claims to specify the “at least two orthophoto maps” are distinctly different and show different objects in the same area.
Applicant’s arguments with respect to the 35 U.S.C. 103 rejection of claims 4-5, 7-8, 12-13, 15-16, and 19 have been fully considered and are not persuasive. On pages 10-11 of Applicant’s remarks, Applicant alleges that a person having ordinary skill in the art would have no teaching, suggestion, or motivation to combine the disclosures of Knopp and Croxford because introducing the complex, computationally intensive machine learning process of Croxford to analyze the large, raw aerial image files would dramatically increase processing time and computational overhead, thereby destroying the efficiency and economic advantages of the photogrammetry workflow disclosed in Knopp.
The examiner respectfully disagrees. The examiner asserts that a person of ordinary skill in the art would have motivation to combine Knopp and Croxford because the object detection artificial neural network of Croxford can be used to process the series of aerial photographs of Knopp in a more efficient manner. The Office Action cites the following of Croxford regarding the object detection artificial neural network: ([0110] “The learned object motion data for the given object may be used to update the object class characteristic data, e.g. indicative of the probability of change characteristic, of the entire class of objects to which the recognized object belongs. This can also be scaled up, such that a plurality of objects belonging to the same class may be observed in the same or different environments to produce learned object motion data for that class of objects.”) ([0058] “Performing object detection and/or object recognition may involve the use of one or more trained artificial neural networks (ANNs) or support vector machines (SVMs).” wherein an optimization algorithm is an artificial neural network).
The learned object motion data of the object detection artificial neural network of Croxford can enhance the accuracy of classifying identified objects within the aerial images of Knopp, thereby aiding with the processing of the series of aerial photographs. Therefore, the examiner’s motivation for combining Knopp and Croxford remains.
Regarding added claims 18-21, see the rejections below.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 6, 9-11, 14, 17-18, and 21 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Knopp et al., US 20050031197 A1 (hereinafter “Knopp”).
Regarding claim 1, Knopp teaches a system configured for analysis of aerial images, comprising a data-processing system, the data-processing system comprising at least one processor and at least one memory, the processor and/or memory being configured to realize a data-processing storage component ([0129] “The computer system 30 includes a processor 32 and memory 34. The memory 34 can include read only memory (ROM) and random access memory (RAM).” wherein a data-processing system is the computer system and a data-processing storage component is the memory), a segmentation component ([0363] “Then, the boundary cone is intersected with the DEM (or Multi-DEM), block 306, to provide a world footprint polygon.” wherein a segmentation component is block 306), a projection component ([0335] “In block 226, for each image, the image pixels from around the exterior edge of the image are mathematically "projected" onto the DEM to generate a detailed, three-dimensional polygon footprint.” wherein a projection component is block 226) and an error minimizing component ([0118] “In block 24, the imagery data are orthorectified on a frame by frame basis to remove topographic relief induced image displacements. In one embodiment, the rectification process utilizes Multi-DEMs organized in a priority order, often in the order of accuracy. In contrast, conventional rectification processes rectify images onto a single DEM.” wherein an error minimizing component is the rectification process of block 24),
wherein the data-processing storage component is configured for providing at least two input orthophoto maps and at least two input digital elevation models relating to an area ([0129] “The computer system 30 includes a processor 32 and memory 34. The memory 34 can include read only memory (ROM) and random access memory (RAM).” wherein a data-processing storage component is the memory) ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein at least two orthophoto maps are digital orthophotos produced from aerial images) ([0122] “The resultant product is a new digital orthophoto that has been produced with all new digital imagery, or pixels. The new digital orthophoto fulfills all the mapping characteristics of an orthophoto image product. Additional post processing can include partitioning the image product into individual tiles and file formats according to customer specifications. Then, in step 29, the final orthophoto data are formatted for packaging in a media suitable for the intended application for the digital orthophoto.” wherein a new digital orthophoto requires an inputted digital orthophoto) ([0014] “The invention further provides a method for providing digital elevation model data for use in producing a digital orthophoto of a project area. The method comprises acquiring digital elevation model data from at least first and second sources; prioritizing the digital elevation model data acquired from the first and second sources;”),
wherein the segmentation component is configured for generating at least one or a plurality of polygon(s) for the at least two orthophoto maps relating to the area, each polygon approximating a part of the corresponding input orthophoto map ([0363] “Then, the boundary cone is intersected with the DEM (or Multi-DEM), block 306, to provide a world footprint polygon. In similar fashion, the world coordinate line for all edge pixels is projected onto the DEM to determine the point of intersection expressed in world coordinates… A three-dimension world coordinate polygon is generated consisting of the DEM intersection points from all the pixels on the edge of the image. This polygon is consistent with the DEM and forms the boundary, in world space, of the image footprint.” wherein a segmentation component is block 306 and a polygon relating to an area is a world footprint polygon) ([0335] “In block 226, for each image, the image pixels from around the exterior edge of the image are mathematically "projected" onto the DEM to generate a detailed, three-dimensional polygon footprint.” wherein each polygon approximating a part of the corresponding input orthophoto map is equivalent to a polygon footprint projected onto a DEM) ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein at least two orthophoto maps are digital orthophotos produced from aerial images) ([0122] “The resultant product is a new digital orthophoto that has been produced with all new digital imagery, or pixels. The new digital orthophoto fulfills all the mapping characteristics of an orthophoto image product. Additional post processing can include partitioning the image product into individual tiles and file formats according to customer specifications. Then, in step 29, the final orthophoto data are formatted for packaging in a media suitable for the intended application for the digital orthophoto.” wherein a new digital orthophoto requires an inputted digital orthophoto), and
wherein the error minimizing component is configured for minimizing positional errors on at least one or a plurality of parts of the at least two orthophoto maps ([0118] “In block 24, the imagery data are orthorectified on a frame by frame basis to remove topographic relief induced image displacements. In one embodiment, the rectification process utilizes Multi-DEMs organized in a priority order, often in the order of accuracy. In contrast, conventional rectification processes rectify images onto a single DEM.” wherein an error minimizing component is the rectification process of block 24 and positional errors are topographic relief) ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein at least two orthophoto maps are digital orthophotos produced from aerial images).
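To illustrate the mechanism cited in Knopp’s [0335] and [0363], a minimal sketch follows (the function name, grid layout, and sample values are assumptions for illustration and are not taken from Knopp): projecting the world coordinates of image-edge pixels onto a DEM amounts to sampling the elevation surface at each edge point, yielding the three-dimensional footprint polygon.

```python
import numpy as np

def footprint_polygon(edge_xy, dem, origin, cell_size):
    """Project 2-D edge points onto a DEM grid to obtain 3-D footprint vertices.

    edge_xy   : (N, 2) array of world (x, y) coordinates of image-edge pixels
    dem       : 2-D array of elevations indexed as dem[row, col]
    origin    : (x0, y0) world coordinate of dem[0, 0]
    cell_size : ground size of one DEM cell
    """
    x0, y0 = origin
    # Locate the DEM cell under each edge point (nearest-cell lookup).
    cols = ((edge_xy[:, 0] - x0) / cell_size).astype(int)
    rows = ((edge_xy[:, 1] - y0) / cell_size).astype(int)
    z = dem[rows, cols]                       # elevation at each vertex
    return np.column_stack([edge_xy, z])      # (N, 3) world-coordinate polygon

dem = np.array([[10.0, 11.0],
                [12.0, 13.0]])
edge = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
poly = footprint_polygon(edge, dem, origin=(0.0, 0.0), cell_size=1.0)
```

The resulting polygon is consistent with the DEM in the sense described in [0363]: each vertex carries the elevation of the DEM at its planimetric position.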
Regarding claim 2, Knopp teaches the system according to claim 1, wherein the projection component is configured for projecting the polygon(s) on the corresponding input digital elevation model of the area ([0129] “The computer system 30 includes a processor 32 and memory 34. The memory 34 can include read only memory (ROM) and random access memory (RAM).” wherein a data-processing system is the computer system) ([0335] “In block 226, for each image, the image pixels from around the exterior edge of the image are mathematically "projected" onto the DEM to generate a detailed, three-dimensional polygon footprint.” wherein a projection component is block 226) ([0014] “The invention further provides a method for providing digital elevation model data for use in producing a digital orthophoto of a project area. The method comprises acquiring digital elevation model data from at least first and second sources; prioritizing the digital elevation model data acquired from the first and second sources;”).
Regarding claim 3, Knopp teaches the system according to claim 1, wherein the projection component is configured for determining for each vertex of the corresponding polygon(s) at least one coordinate corresponding to the projection of vertices on the corresponding input digital elevation model and a reference value for each projection of the vertices of each polygon to the digital elevation models ([0335] “In block 226, for each image, the image pixels from around the exterior edge of the image are mathematically "projected" onto the DEM to generate a detailed, three-dimensional polygon footprint. These three-dimensional values are then transformed from the coordinate system of the DEM to obtain the associated planimetric coordinates of the polygon vertices expressed in the coordinate system associated with the map projection desired for the final orthophoto. The result is a two-dimensional bounding polygon expressed in the map coordinate system.” wherein a projection component is block 226 and a reference value is the three-dimensional values) ([0014] “The invention further provides a method for providing digital elevation model data for use in producing a digital orthophoto of a project area. The method comprises acquiring digital elevation model data from at least first and second sources; prioritizing the digital elevation model data acquired from the first and second sources;”).
Regarding claim 6, Knopp teaches the system according to claim 5, wherein the processor and/or the memory are configured to realize a transformation component, wherein the transformation component is configured for transforming at least one orthophoto map and at least one digital elevation model based on the transformation determined by the error minimizing component ([0235] “The 2D transformation is operator selectable and includes various Degrees Of Freedom (DOF). In one embodiment, the implementation provides a two DOF transform (shift only), a three DOF transform (shift and rotation), a four DOF transform (shift, rotation and scale), and an eight DOF (a standard photogrammetric rectification transformation). The operator individually controls which components are included in the transformation and adapts the transformation based on experience and/or to particular data configurations. In one embodiment, the user is allowed to pick what transformation parameters will be solved for, allowing, for instance, to remove scale, and/or rotation, and/or shift from the solution.” wherein a transformation component is the operator and the error minimizing component is the rectification transformation) ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein orthophoto maps are digital orthophotos produced from aerial images) ([0014] “The invention further provides a method for providing digital elevation model data for use in producing a digital orthophoto of a project area. The method comprises acquiring digital elevation model data from at least first and second sources; prioritizing the digital elevation model data acquired from the first and second sources;”).
Regarding claim 9, the claim recites limitations similar to those of claim 1 but in the form of a method. Therefore, claim 9 is rejected under a similar rationale (see the analysis of claim 1 above).
Regarding claim 10, the claim recites limitations similar to those of claim 2 but in the form of a method. Therefore, claim 10 is rejected under a similar rationale (see the analysis of claim 2 above).
Regarding claim 11, the claim recites limitations similar to those of claim 3 but in the form of a method. Therefore, claim 11 is rejected under a similar rationale (see the analysis of claim 3 above).
Regarding claim 14, the claim recites limitations similar to those of claim 6 but in the form of a method. Therefore, claim 14 is rejected under a similar rationale (see the analysis of claim 6 above).
Regarding claim 17, Knopp teaches a non-transitory computer readable storage medium storing instructions which, when executed by a processor of a computer, cause the computer to carry out the steps of the method according to claim 9 ([0130] “Memory 34 also can include ROM and RAM for read only or temporary data storage.” wherein a non-transitory computer readable storage medium is a ROM).
Regarding claim 18, Knopp teaches the system according to claim 1, wherein each part of the at least two orthophoto maps corresponds to an object in the area ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein at least two orthophoto maps are digital orthophotos produced from aerial images) ([0166] “The control points are measurements of the same feature in images for which coordinate reference data are available.” wherein an object is the feature).
Regarding claim 21, Knopp teaches the system according to claim 1, wherein the at least two input orthophoto maps and/or the at least two input digital elevation models relate to the same area at different points in time ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein at least two orthophoto maps are digital orthophotos produced from aerial images) ([0122] “The resultant product is a new digital orthophoto that has been produced with all new digital imagery, or pixels. The new digital orthophoto fulfills all the mapping characteristics of an orthophoto image product. Additional post processing can include partitioning the image product into individual tiles and file formats according to customer specifications. Then, in step 29, the final orthophoto data are formatted for packaging in a media suitable for the intended application for the digital orthophoto.” wherein a new digital orthophoto requires an inputted digital orthophoto) ([0014] “The invention further provides a method for providing digital elevation model data for use in producing a digital orthophoto of a project area. The method comprises acquiring digital elevation model data from at least first and second sources; prioritizing the digital elevation model data acquired from the first and second sources;” wherein both the orthophoto maps and digital elevation models are directed to a project area; wherein it is obvious to one of ordinary skill in the art that the orthophoto map and digital elevation model would be acquired at different points in time, as Knopp teaches that the digital elevation model data is used to produce the digital orthophoto maps).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4-5, 7-8, 13, 15-16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Knopp et al., US 20050031197 A1 (hereinafter “Knopp”), in view of Croxford et al., US 20210304514 A1 (hereinafter “Croxford”).
Regarding claim 4, Knopp teaches the system according to claim 1, wherein the error minimizing component is configured for ([0118] “In block 24, the imagery data are orthorectified on a frame by frame basis to remove topographic relief induced image displacements. In one embodiment, the rectification process utilizes Multi-DEMs organized in a priority order, often in the order of accuracy. In contrast, conventional rectification processes rectify images onto a single DEM.” wherein an error minimizing component is the rectification process of block 24) ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein orthophoto maps are digital orthophotos produced from aerial images).
Knopp does not specifically disclose a machine learning algorithm, wherein the machine learning algorithm is configured for performing a nearest neighbour analysis step, wherein the nearest neighbour analysis step comprises assigning a corresponding object.
However, Croxford teaches a machine learning algorithm ([0058] “Performing object detection and/or object recognition may involve the use of one or more trained artificial neural networks (ANNs) or support vector machines (SVMs).” wherein a machine learning algorithm is an artificial neural network), wherein the machine learning algorithm is configured for performing a nearest neighbour analysis step, wherein the nearest neighbour analysis step comprises assigning a corresponding object ([0110] “The machine learning process may implement a model that includes, but is not limited to, one or more of: an artificial neural network (ANN; described above), a support vector machine (SVM), a relevance vector machine (RVM), k-nearest neighbors (k-NN), a decision tree, or a Bayesian network.”) ([0110] “The learned object motion data for the given object may be used to update the object class characteristic data, e.g. indicative of the probability of change characteristic, of the entire class of objects to which the recognized object belongs. This can also be scaled up, such that a plurality of objects belonging to the same class may be observed in the same or different environments to produce learned object motion data for that class of objects.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a machine learning algorithm of Croxford to execute the aerial image analysis method of Knopp to improve the accuracy and efficiency of aerial image analysis while maintaining high quality.
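A minimal sketch of the nearest-neighbour analysis step recited in the claim follows (the function name, class labels, and coordinates are hypothetical illustrations, not taken from Croxford): each detected object is assigned the class of its closest labelled reference object.

```python
import math

def nearest_neighbour_assign(objects, references):
    """Assign each object the class of its nearest labelled reference point.

    objects    : list of (x, y) centroids of detected objects
    references : list of ((x, y), class_label) labelled reference objects
    """
    assigned = []
    for ox, oy in objects:
        # Pick the reference with the smallest Euclidean distance.
        _, label = min(
            references,
            key=lambda r: math.hypot(r[0][0] - ox, r[0][1] - oy),
        )
        assigned.append(label)
    return assigned

refs = [((0.0, 0.0), "building"), ((10.0, 10.0), "road")]
labels = nearest_neighbour_assign([(1.0, 1.0), (9.0, 9.0)], refs)
# labels == ["building", "road"]
```

This corresponds to the k-NN model (with k = 1) listed among the machine learning models in Croxford’s [0110].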
Regarding claim 5, Knopp teaches the system according to claim 1, wherein the data-processing system ([0129] “The computer system 30 includes a processor 32 and memory 34. The memory 34 can include read only memory (ROM) and random access memory (RAM).” wherein a data-processing system is the computer system) is further configured for providing ([0235] “The 2D transformation is operator selectable and includes various Degrees Of Freedom (DOF). In one embodiment, the implementation provides a two DOF transform (shift only), a three DOF transform (shift and rotation), a four DOF transform (shift, rotation and scale), and an eight DOF (a standard photogrammetric rectification transformation). The operator individually controls which components are included in the transformation and adapts the transformation based on experience and/or to particular data configurations. In one embodiment, the user is allowed to pick what transformation parameters will be solved for, allowing, for instance, to remove scale, and/or rotation, and/or shift from the solution.” wherein a transformation component is the operator and the error minimizing component is the rectification transformation) ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein at least two orthophoto maps are digital orthophotos produced from aerial images).
Knopp does not specifically disclose object-class data indicating at least one or a plurality of object-class(es) and an optimization algorithm.
However, Croxford teaches object-class data indicating at least one or a plurality of object-class(es) and an optimization algorithm ([0110] “The machine learning process may implement a model that includes, but is not limited to, one or more of: an artificial neural network (ANN; described above), a support vector machine (SVM), a relevance vector machine (RVM), k-nearest neighbors (k-NN), a decision tree, or a Bayesian network.”) ([0110] “The learned object motion data for the given object may be used to update the object class characteristic data, e.g. indicative of the probability of change characteristic, of the entire class of objects to which the recognized object belongs. This can also be scaled up, such that a plurality of objects belonging to the same class may be observed in the same or different environments to produce learned object motion data for that class of objects.”) ([0058] “Performing object detection and/or object recognition may involve the use of one or more trained artificial neural networks (ANNs) or support vector machines (SVMs).” wherein an optimization algorithm is an artificial neural network).
The motivation for combining Knopp and Croxford is the same motivation as used for claim 4.
Regarding claim 7, Knopp teaches the system according to claim 1, wherein the segmentation component is configured for ([0363] “Then, the boundary cone is intersected with the DEM (or Multi-DEM), block 306, to provide a world footprint polygon. In similar fashion, the world coordinate line for all edge pixels is projected onto the DEM to determine the point of intersection expressed in world coordinates… A three-dimension world coordinate polygon is generated consisting of the DEM intersection points from all the pixels on the edge of the image. This polygon is consistent with the DEM and forms the boundary, in world space, of the image footprint.” wherein a segmentation component is block 306) ([0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein at least two orthophoto maps are digital orthophotos produced from aerial images).
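For illustration only (not part of the record; all names are hypothetical), the footprint-polygon construction quoted from Knopp's paragraph [0363], in which edge-pixel rays are intersected with the DEM to produce a world-space boundary polygon, can be sketched under the simplifying assumption of a flat DEM at a constant elevation:

```python
import numpy as np

def footprint_polygon(camera_pos, ray_dirs, dem_elevation=0.0):
    """Toy footprint computation: intersect each edge-pixel view ray
    with a flat DEM at constant elevation and collect the world-space
    intersection points into a polygon.

    camera_pos: (3,) camera position in world coordinates.
    ray_dirs: (N, 3) view directions for the image-edge pixels
        (z-component must be negative, i.e. pointing down).
    Returns an (N, 3) array of DEM intersection points.
    """
    camera_pos = np.asarray(camera_pos, dtype=float)
    ray_dirs = np.asarray(ray_dirs, dtype=float)
    # Solve camera_z + t * dir_z = dem_elevation for each ray.
    t = (dem_elevation - camera_pos[2]) / ray_dirs[:, 2]
    return camera_pos + t[:, None] * ray_dirs

# Four corner rays of a nadir-looking image taken from 100 m altitude.
corners = footprint_polygon(
    camera_pos=(0.0, 0.0, 100.0),
    ray_dirs=[(0.1, 0.1, -1.0), (-0.1, 0.1, -1.0),
              (-0.1, -0.1, -1.0), (0.1, -0.1, -1.0)],
)
```

Knopp's actual process intersects rays with a Multi-DEM of varying terrain, which requires an iterative ray-marching solution rather than the single closed-form plane intersection used here.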
Knopp does not specifically disclose classes and at least one convolution neural network.
However, Croxford teaches classes and at least one convolution neural network ([0110] “The learned object motion data for the given object may be used to update the object class characteristic data, e.g. indicative of the probability of change characteristic, of the entire class of objects to which the recognized object belongs.”) ([0058] “Performing object detection and/or object recognition may involve the use of one or more trained artificial neural networks (ANNs) or support vector machines (SVMs).” wherein a convolution neural network is an artificial neural network).
The motivation for combining Knopp and Croxford is the same motivation as used for claim 4.
Regarding claim 8, Knopp in view of Croxford teaches the system according to claim 7, wherein the processor and/or the memory are configured to realize a post-processing component (Knopp - [0129] “The computer system 30 includes a processor 32 and memory 34. The memory 34 can include read only memory (ROM) and random access memory (RAM).”), wherein the post-processing component is configured for assigning a first class to a connected plurality of portions to which no class is assigned, if the connected plurality is enclosed by connected portions to which the first class is assigned and for removing excessive vertices of the polygon(s) (Croxford - [0060] “The output of the neuron may then depend on the input, a weight, a bias and an activation function. The output of some neurons is connected to the input of other neurons, forming a directed, weighted graph in which vertices (corresponding to neurons) or edges (corresponding to connections) of the graph are associated with weights, respectively. The weights may be adjusted throughout training of the ANN for a particular purpose, altering the output of individual neurons and hence of the ANN as a whole.” wherein a post-processing component is the ANN and removing excessive vertices is adjusting the weights to affect the outputted vertices) (Croxford - [0110] “The learned object motion data for the given object may be used to update the object class characteristic data, e.g. indicative of the probability of change characteristic, of the entire class of objects to which the recognized object belongs. This can also be scaled up, such that a plurality of objects belonging to the same class may be observed in the same or different environments to produce learned object motion data for that class of objects.”).
The motivation for combining Knopp and Croxford is the same motivation as used for claim 4.
Regarding claim 13, the claim recites limitations similar to those of claim 5, but in the form of a method. Therefore, claim 13 is rejected under similar rationale and reasoning (see the analysis of claim 5 above).
Regarding claim 15, the claim recites limitations similar to those of claim 7, but in the form of a method. Therefore, claim 15 is rejected under similar rationale and reasoning (see the analysis of claim 7 above).
Regarding claim 16, the claim recites limitations similar to those of claim 8, but in the form of a method. Therefore, claim 16 is rejected under similar rationale and reasoning (see the analysis of claim 8 above).
Regarding claim 19, Knopp in view of Croxford teaches the system according to claim 4, wherein the error minimizing component is further configured for estimating the distance between a reference point, such as a centroid, of the at least one object of one of the orthophoto maps and reference points, such as centroids, of every object of the same class of the one of the other orthophoto maps (Knopp - [0118] “In block 24, the imagery data are orthorectified on a frame by frame basis to remove topographic relief induced image displacements. In one embodiment, the rectification process utilizes Multi-DEMs organized in a priority order, often in the order of accuracy. In contrast, conventional rectification processes rectify images onto a single DEM.” wherein an error minimizing component is the rectification process of block 24) (Knopp - [0166] “There are two kinds of image measurements, namely, absolute or control points and relative or tie points. The control points are measurements of the same feature in images for which coordinate reference data are available. The reference data can include known coordinates, derived from the location of another georeferenced image, or be derived by evaluation (often involving interpolation) of a surface elevation model. The tie points are common points on two or more adjacent images. The tie points are measurements of the same feature in different images without reference data.” wherein the disclosed reference points are the control and tie points respectively) (Knopp - [0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein orthophoto maps are digital orthophotos produced from aerial images).
The motivation for combining Knopp and Croxford is the same motivation as used for claim 4.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Knopp et al., US 20050031197 A1, (hereinafter “Knopp”) in view of Croxford et al., US 20210304514 A1, (hereinafter “Croxford”) and further in view of Jin et al., US 9245201 B1, (hereinafter “Jin”).
Regarding claim 20, Knopp in view of Croxford teaches the system according to claim 4, wherein the error minimizing component is configured for generating (Knopp - [0118] “In block 24, the imagery data are orthorectified on a frame by frame basis to remove topographic relief induced image displacements. In one embodiment, the rectification process utilizes Multi-DEMs organized in a priority order, often in the order of accuracy. In contrast, conventional rectification processes rectify images onto a single DEM.” wherein an error minimizing component is the rectification process of block 24) (Knopp - [0015] “A method for producing a digital orthophoto for a project area, the method comprising the steps of using an uncalibrated camera to obtain a series of aerial photographs of the project area for providing digital imagery representing a block of overlapping images of the project area;” wherein at least two orthophoto maps are digital orthophotos produced from aerial images) (Knopp - [0166] “There are two kinds of image measurements, namely, absolute or control points and relative or tie points. The control points are measurements of the same feature in images for which coordinate reference data are available. The reference data can include known coordinates, derived from the location of another georeferenced image, or be derived by evaluation (often involving interpolation) of a surface elevation model. The tie points are common points on two or more adjacent images. The tie points are measurements of the same feature in different images without reference data.” wherein reference points are tie points).
Knopp in view of Croxford does not specifically disclose generating candidate matching pairs of objects based on a similarity measure.
However, Jin teaches generating candidate matching pairs of objects based on a similarity measure ([Col. 8, lines 6-15] “As described above with reference to step 230 of FIG. 2, the present invention utilizes radiometric filtering of candidate tie points. To do so, the minimum matching score value is used to automatically filter tie points based on radiometric criteria. For automatic tie point generation, the windows centered at the tie point location are used as matching templates. Depending on the similarity measure chosen, the normalized cross-correlation or normalized mutual information between the template in the sensed image and the template in the reference image is computed as the matching score.” wherein candidate matching pairs of objects are candidate tie points).
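For illustration only (not part of the record; the function and parameter names are hypothetical), the normalized cross-correlation matching score that Jin computes between a template in the sensed image and a template in the reference image, and the minimum-score filtering of candidate tie points, can be sketched as:

```python
import numpy as np

def ncc(template_a, template_b):
    """Normalized cross-correlation between two equally sized windows,
    used as the matching score for a candidate tie point.
    Returns a value in [-1, 1]; 1 indicates a perfect match."""
    a = np.asarray(template_a, dtype=float).ravel()
    b = np.asarray(template_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def filter_tie_points(candidate_pairs, min_score=0.8):
    """Keep only candidate template pairs whose matching score meets
    the minimum, mirroring Jin's radiometric filtering step."""
    return [(a, b) for a, b in candidate_pairs
            if ncc(a, b) >= min_score]
```

Jin also mentions normalized mutual information as an alternative similarity measure; the sketch above shows only the cross-correlation variant.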
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Jin's generation of candidate matching pairs of objects based on a similarity measure to the orthophoto maps of Knopp in view of Croxford in order to improve accuracy by filtering out erroneous matches that do not align with the actual ground features of the images.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA PEARSON whose telephone number is (703) 756-5786. The examiner can normally be reached Monday through Friday, 8:00-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMANDA H PEARSON/Examiner, Art Unit 2666
/MING Y HON/Primary Examiner, Art Unit 2666