DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 3-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Re claim 1, the claim recites:
the limitation of generate an overhead feature map from an overhead image of a geographic area, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, generating a feature map in the context of this claim encompasses the user mentally picturing a feature map.
the limitation of generate an observed ground-view feature map from a ground-view image captured by a camera within the geographic area, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, generating a feature map in the context of this claim encompasses the user mentally picturing a feature map.
the limitation of for each of a plurality of candidate poses of the camera, project the overhead feature map to a ground view defined by the respective candidate pose, resulting in a projected ground-view feature map for each candidate pose, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, projecting in the context of this claim encompasses the user mentally imagining what features would look like from a ground view at a particular pose. Further, this limitation could also fall within the mathematical concepts grouping of abstract ideas, as projecting is a mathematical operation.
the limitation of for each projected ground-view feature map, determine a feature difference between the observed ground-view feature map and that projected ground-view feature map, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining a difference in the context of this claim encompasses the user mentally determining the difference between the feature maps.
the limitation of and determine an estimated pose of the camera based on the feature differences, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining in the context of this claim encompasses the user mentally determining the estimated pose based on the differences.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – using a processor and memory to perform the steps. The processor and memory are recited at a high level of generality (i.e., as a generic processor and memory performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor and memory to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
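For illustration only, the sequence of limitations analyzed above can be sketched as a minimal computational pipeline. This is a hypothetical sketch of the claimed steps; the function names, array shapes, norm-based difference, and minimum-difference selection are assumptions for illustration, not applicant's or any cited reference's actual implementation:

```python
import numpy as np

def estimate_pose(overhead_map, ground_map, candidate_poses, project):
    """Sketch of the claim 1 steps: for each candidate pose, project the
    overhead feature map to that pose's ground view, compute a feature
    difference against the observed ground-view feature map, then
    estimate the pose from the differences."""
    diffs = []
    for pose in candidate_poses:
        projected = project(overhead_map, pose)  # projected ground-view feature map
        diffs.append(np.linalg.norm(projected - ground_map))  # feature difference
    diffs = np.array(diffs)
    # One possible estimate: the candidate pose with the smallest difference.
    return candidate_poses[int(np.argmin(diffs))], diffs
```

With a toy projection such as `project = lambda m, p: m * p`, the candidate whose projection matches the observed map yields a zero feature difference and is selected.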
Re claim 3, the limitation of wherein each feature difference is based on a subtraction operation between the respective projected ground-view feature map and the observed ground-view feature map, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, the feature difference in the context of this claim encompasses the user mentally subtracting the features. Further, this limitation could also fall within the mathematical concepts grouping of abstract ideas, as subtracting is a mathematical operation.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
Re claim 4, the limitation of select the candidate poses from a location probability map indicating relative probabilities that the camera is located at a plurality of locations in the geographic area, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, selecting in the context of this claim encompasses the user mentally selecting the candidate poses.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
Re claim 5, the limitation of select a preset number of locations having the greatest relative probabilities from the location probability map as the candidate poses, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, selecting in the context of this claim encompasses the user mentally selecting the preset number of candidate poses.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
Re claim 6 the limitation of determine the estimated pose as a weighted average of the candidate poses, with weights for the candidate poses based on the feature differences, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining in the context of this claim encompasses the user mentally calculating the weighted average.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
Re claim 7, the limitation of determine the weight for each candidate pose based on the feature differences, with the weight for each candidate pose being greater as the feature difference for the respective candidate pose is smaller, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining in the context of this claim encompasses the user mentally assigning a greater weight to a candidate pose having a smaller feature difference.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
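The weighted-average estimation recited in claims 6-7 can be sketched as follows. Inverse-difference weighting is an assumed example of "greater weight for smaller feature difference"; the claims do not specify a particular weighting function, and the helper name is hypothetical:

```python
import numpy as np

def weighted_pose(candidate_poses, feature_diffs, eps=1e-8):
    """Estimated pose as a weighted average of candidate poses, with
    weights growing as the feature difference shrinks (claims 6-7).
    Inverse-difference weights are one illustrative choice; eps guards
    against division by zero."""
    w = 1.0 / (np.asarray(feature_diffs, dtype=float) + eps)
    w /= w.sum()  # normalize weights to sum to 1
    return float((np.asarray(candidate_poses, dtype=float) * w).sum())
```

When all feature differences are equal, the result reduces to the plain average of the candidate poses; as one difference approaches zero, the estimate approaches that candidate's pose.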
Re claim 8, the limitation of wherein the instructions further include instructions to determine the weights for the candidate poses taking the feature differences as inputs, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining in the context of this claim encompasses the user mentally determining the weights.
This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – using a processor and memory to perform the steps; and using a machine-learning algorithm. The processor and memory are recited at a high level of generality (i.e., as a generic processor and memory performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using generic computer components. The machine-learning algorithm is recited at a high level of generality (i.e., as a generic machine-learning algorithm taking an input and creating an output) such that it amounts to no more than mere instructions to apply the exception using a generic machine-learning algorithm. This does little more than generally link the abstract idea to the field of machine learning. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor and memory to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. The machine-learning algorithm is recited at a high level of generality (i.e., as a generic machine-learning algorithm taking an input and creating an output) such that it amounts to no more than mere instructions to apply the exception using a generic machine-learning algorithm. Mere instructions to apply an exception using a generic computer component and a generic machine-learning algorithm cannot provide an inventive concept. The claim is not patent eligible.
Re claim 9, the claim contains the same abstract idea as claim 8.
This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – using a processor and memory to perform the steps; and using a machine-learning algorithm with inputs including the feature difference for the respective candidate pose, a maximum of the feature differences, and a minimum of the feature differences. The processor and memory are recited at a high level of generality (i.e., as a generic processor and memory performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using generic computer components. The machine-learning algorithm is recited at a high level of generality (i.e., as a generic machine-learning algorithm taking an input and creating an output) such that it amounts to no more than mere instructions to apply the exception using a generic machine-learning algorithm. This does little more than generally link the abstract idea to the field of machine learning. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor and memory to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. The machine-learning algorithm is recited at a high level of generality (i.e., as a generic machine-learning algorithm taking an input and creating an output) such that it amounts to no more than mere instructions to apply the exception using a generic machine-learning algorithm. Mere instructions to apply an exception using a generic computer component and a generic machine-learning algorithm cannot provide an inventive concept. The claim is not patent eligible.
Re claim 10, the limitation of wherein the machine-learning algorithm outputs a score for each candidate pose, the weights being a SoftMax of the scores falls into the mathematical concepts grouping of abstract ideas. Softmax is a mathematical function, and the claim merely recites performing the mathematical function on the scores to produce the weights.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
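For reference, the softmax function named in claim 10 maps per-candidate scores to positive weights that sum to one. A standard numerically stable form (subtracting the maximum score before exponentiation) might look like:

```python
import numpy as np

def softmax(scores):
    """Convert candidate-pose scores into weights (claim 10): each
    weight is positive, larger scores receive larger weights, and the
    weights sum to 1. Subtracting the max avoids overflow in exp."""
    z = np.asarray(scores, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()
```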
Re claim 11, the limitation of before determining the feature differences, normalize the observed ground-view feature map by a measure of total illumination in the observed ground-view feature map, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, normalizing in the context of this claim encompasses the user mentally performing a normalization of the features.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
Re claim 12, the limitation of before determining the feature difference for each candidate pose, normalize the projected ground-view feature map for the respective candidate pose by a measure of total illumination in that projected ground-view feature map, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, normalizing in the context of this claim encompasses the user mentally performing a normalization of the features.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
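The normalization discussed for claims 11-12 can be sketched as dividing a feature map by its L2 norm. The helper name is hypothetical, and using the L2 norm as a stand-in for the claimed "measure of total illumination" reflects the mapping to applicant's specification (paragraph 46) discussed in the art rejections:

```python
import numpy as np

def l2_normalize(feature_map, eps=1e-12):
    """Divide the feature map by its L2 norm so that comparisons are
    insensitive to overall magnitude (claims 11-12); eps guards against
    division by zero for an all-zero map."""
    f = np.asarray(feature_map, dtype=float)
    return f / (np.linalg.norm(f) + eps)
```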
Re claim 13, the claim contains the same abstract idea as claim 1. This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – using a processor and memory to perform the steps; and determining a pose using the SLAM algorithm. The processor and memory are recited at a high level of generality (i.e., as a generic processor and memory performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using generic computer components. Furthermore, the SLAM algorithm is a known algorithm (see paragraph 38 of applicant's specification). The combination of a computer processor with a known algorithm for determining a pose does not place any meaningful limits on the claim, as this is just using a known algorithm to generate an intermediate value. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor and memory to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Furthermore, the SLAM algorithm is a known algorithm (see paragraph 38 of applicant's specification). Mere instructions to apply an exception using a generic computer component, combined with executing a known algorithm to generate an intermediate result, cannot provide an inventive concept. The claim is not patent eligible.
Re claim 14 the limitation of wherein the candidate poses consist of the first candidate pose and a plurality of second candidate poses, and the instructions further include instructions to select the second candidate poses from a location probability map indicating relative probabilities that the camera is located at a plurality of locations in the geographic area, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, selecting poses in the context of this claim encompasses the user mentally selecting the poses.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
Re claim 15, the claim recites:
the limitation of generating an overhead feature map from an overhead image of a geographic area, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, generating a feature map in the context of this claim encompasses the user mentally picturing a feature map.
the limitation of generating an observed ground-view feature map from a ground-view image captured by a camera within the geographic area, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, generating a feature map in the context of this claim encompasses the user mentally picturing a feature map.
the limitation of for each of a plurality of candidate poses of the camera, projecting the overhead feature map to a ground view defined by the respective candidate pose, resulting in a projected ground-view feature map for each candidate pose, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, projecting in the context of this claim encompasses the user mentally imagining what features would look like from a ground view at a particular pose. Further, this limitation could also fall within the mathematical concepts grouping of abstract ideas, as projecting is a mathematical operation.
the limitation of for each projected ground-view feature map, determining a feature difference between the observed ground-view feature map and that projected ground-view feature map, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining a difference in the context of this claim encompasses the user mentally determining the difference between the feature maps.
the limitation of and determining an estimated pose of the camera based on the feature differences, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining in the context of this claim encompasses the user mentally determining the estimated pose based on the differences.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
The claim does not contain additional elements that integrate the abstract idea into a practical application or that constitute significantly more than the abstract idea because the claim does not recite any additional elements.
Re claim 16 the limitation of determining the estimated pose as a weighted average of the candidate poses, with weights for the candidate poses based on the feature differences, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining in the context of this claim encompasses the user mentally calculating the weighted average.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
Re claim 17, the limitation of determining the weight for each candidate pose based on the feature differences, with the weight for each candidate pose being greater as the feature difference for the respective candidate pose is smaller, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining in the context of this claim encompasses the user mentally assigning a greater weight to a candidate pose having a smaller feature difference.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
Re claim 18, the limitation of wherein the instructions further include instructions to determine the weights for the candidate poses taking the feature differences as inputs, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, determining in the context of this claim encompasses the user mentally determining the weights.
This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – using a machine-learning algorithm. The machine-learning algorithm is recited at a high level of generality (i.e., as a generic machine-learning algorithm taking an input and creating an output) such that it amounts to no more than mere instructions to apply the exception using a generic machine-learning algorithm. This does little more than generally link the abstract idea to the field of machine learning. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The machine-learning algorithm is recited at a high level of generality (i.e., as a generic machine-learning algorithm taking an input and creating an output) such that it amounts to no more than mere instructions to apply the exception using a generic machine-learning algorithm. Mere instructions to apply an exception using a generic machine-learning algorithm cannot provide an inventive concept. The claim is not patent eligible.
Re claim 19, the claim contains the same abstract idea as claim 18.
This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – using a machine-learning algorithm with inputs including the feature difference for the respective candidate pose, a maximum of the feature differences, and a minimum of the feature differences. The machine-learning algorithm is recited at a high level of generality (i.e., as a generic machine-learning algorithm taking an input and creating an output) such that it amounts to no more than mere instructions to apply the exception using a generic machine-learning algorithm. This does little more than generally link the abstract idea to the field of machine learning. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the machine-learning algorithm is recited at a high level of generality (i.e., as a generic machine-learning algorithm taking an input and creating an output) such that it amounts to no more than mere instructions to apply the exception using a generic machine-learning algorithm. Mere instructions to apply an exception using a generic machine-learning algorithm cannot provide an inventive concept. The claim is not patent eligible.
Re claim 20, the limitation of wherein the machine-learning algorithm outputs a score for each candidate pose, the weights being a SoftMax of the scores falls into the mathematical concepts grouping of abstract ideas. Softmax is a known mathematical function, and the claim merely recites performing the mathematical function on the scores to produce the weights.
The analysis with respect to significantly more and integration into a practical application is not significantly changed from the claim from which this claim depends.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim 15 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shi et al., "Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 17010-17020.
Re claim 15, Shi et al. discloses a method comprising:
generating an overhead feature map from an overhead image of a geographic area (see figure 2; note that features are extracted from the satellite image);
generating an observed ground-view feature map from a ground-view image captured by a camera within the geographic area, the camera oriented at least partially horizontally while capturing the ground-view image (see figure 2 caption; note that features are extracted from a ground-view image; see also the abstract and figure 1; note the images are from a ground-view camera and are intended to be matched to a satellite image of the same area);
for each of a plurality of candidate poses of the camera, projecting the overhead feature map to a ground view defined by the respective candidate pose, resulting in a projected ground-view feature map for each candidate pose (see figure 2 caption; note that the satellite features are mapped to the ground-view domain, starting with an initial camera pose and then iterating through additional camera poses to determine the correct camera pose; see section 3 and section 4.1; each iteration could be considered a candidate pose);
for each projected ground-view feature map, determining a feature difference between the observed ground-view feature map and that projected ground-view feature map (see figure 2 and section 4.3, especially equation 5; note that over multiple iterations the difference between the projected satellite features and the ground features is minimized to determine the camera pose of the ground camera);
and determining an estimated pose of the camera based on the feature differences (see figure 2 and section 4.3, especially equation 5; note that over multiple iterations the difference between the projected satellite features and the ground features is minimized to determine the camera pose of the ground camera).
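The iterative minimization cited above (figure 2 and equation 5 of Shi) can be illustrated, in highly simplified form, as repeatedly projecting at the current pose estimate and stepping the pose to reduce the feature difference. This sketch uses a scalar pose, a numeric gradient, and plain gradient descent purely for illustration; Shi's actual optimizer over a full camera pose is more elaborate:

```python
import numpy as np

def refine_pose(pose0, overhead_map, ground_map, project, lr=0.1, iters=200):
    """Toy version of iterative pose refinement: minimize the squared
    feature difference between the projected overhead features and the
    observed ground features via gradient descent on a scalar pose."""
    def cost(p):
        return float(np.sum((project(overhead_map, p) - ground_map) ** 2))
    pose, h = float(pose0), 1e-4
    for _ in range(iters):
        grad = (cost(pose + h) - cost(pose - h)) / (2 * h)  # numeric gradient
        pose -= lr * grad
    return pose
```

Each pass through the loop plays the role of one iteration in the citation, i.e., one candidate pose whose projected features are compared against the observed features.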
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Shi et al., "Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 17010-17020, in view of Pavone, US 2024/0199068.
Re claim 1, Shi et al. discloses:
generate an overhead feature map from an overhead image of a geographic area (see figure 2; note that features are extracted from the satellite image);
generate an observed ground-view feature map from a ground-view image captured by a camera within the geographic area, the camera oriented at least partially horizontally while capturing the ground-view image (see figure 2 caption; note that features are extracted from a ground-view image; see also the abstract and figure 1; note the images are from a ground-view camera and are intended to be matched to a satellite image of the same area);
for each of a plurality of candidate poses of the camera, project the overhead feature map to a ground view defined by the respective candidate pose, resulting in a projected ground-view feature map for each candidate pose (see figure 2 caption; note that the satellite features are mapped to the ground-view domain, starting with an initial camera pose and then iterating through additional camera poses to determine the correct camera pose; see section 3 and section 4.1; each iteration could be considered a candidate pose);
for each projected ground-view feature map, determine a feature difference between the observed ground-view feature map and that projected ground-view feature map (see figure 2 and section 4.3, especially equation 5; note that over multiple iterations the difference between the projected satellite features and the ground features is minimized to determine the camera pose of the ground camera);
and determine an estimated pose of the camera based on the feature differences (see figure 2 and section 4.3, especially equation 5; note that over multiple iterations the difference between the projected satellite features and the ground features is minimized to determine the camera pose of the ground camera).
While Shi is clearly intended to use a computer, Shi does not expressly disclose a computer comprising a processor and a memory, the memory storing instructions executable by the processor to perform the method. Pavone discloses a computer comprising a processor and a memory, the memory storing instructions executable by the processor to perform the method (see paragraph 58). The motivation to combine is to implement the method using a computer including a processor and memory (see paragraph 58). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Pavone and Shi to reach the aforementioned advantage.
Re claim 2, Shi discloses that the method is intended to be used in autonomous driving. However, Shi does not expressly disclose wherein the instructions further include instructions to actuate at least one of a propulsion system, a brake system, or a steering system of a vehicle based on the estimated pose, the vehicle including the camera. Pavone further discloses wherein the instructions further include instructions to actuate at least one of a propulsion system, a brake system, or a steering system of a vehicle based on the estimated pose (see the abstract: "For example the estimated object pose may be used to provide collision-free motion generation for a real-world or virtual device (e.g., a robot, an autonomous machine, or a semi-autonomous machine"; see also paragraphs 156-157; note that autonomous driving includes steering, braking, and propulsion), the vehicle including the camera (see figure 1a, element 109, and paragraph 57). The motivation to combine is "For example the estimated object pose may be used to provide collision-free motion generation for a real-world or virtual device (e.g., a robot, an autonomous machine, or a semi-autonomous machine)" (see the abstract). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shi and Pavone to reach the aforementioned advantage.
Re claim 3, Shi discloses wherein each feature difference is based on a subtraction operation between the respective projected ground-view feature map and the observed ground-view feature map (see figure 2 and section 4.3, especially equation 5; note that over multiple iterations the subtraction between the projected satellite features and the ground features is minimized to determine the camera pose of the ground camera).
Re claim 11, Shi discloses wherein the instructions further include instructions to, before determining the feature differences, normalize the observed ground-view feature map by a measure of total illumination in the observed ground-view feature map (see section 4, first paragraph: “The features at each level are L2 normalized to increase their robustness for cross-view matching”; note that the equation used for normalization in applicant’s specification (see paragraph 46 of the specification) corresponds to L2 normalization).
Re claim 12, Shi discloses wherein the instructions further include instructions to, before determining the feature difference for each candidate pose, normalize the projected ground-view feature map for the respective candidate pose by a measure of total illumination in that projected ground-view feature map (see section 4, first paragraph: “The features at each level are L2 normalized to increase their robustness for cross-view matching”; note that the equation used for normalization in applicant’s specification (see paragraph 46 of the specification) corresponds to L2 normalization).
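For context only, the L2 normalization step quoted above can be sketched as follows. This is a generic illustration of scaling a feature map to unit L2 norm, not the code of Shi or of applicant's specification; the function name and sample values are hypothetical:

```python
import numpy as np

def l2_normalize(feature_map, eps=1e-8):
    """Scale a feature map to (approximately) unit L2 norm; eps guards
    against division by zero for an all-zero map."""
    norm = np.linalg.norm(feature_map)
    return feature_map / (norm + eps)

# Toy feature map with L2 norm 5 (from the 3-4-5 triple).
fmap = np.array([[3.0, 4.0], [0.0, 0.0]])
normalized = l2_normalize(fmap)
print(round(float(np.linalg.norm(normalized)), 6))  # 1.0
```

The same operation would apply equally to the observed and the projected ground-view feature maps before the difference is computed.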
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Shi et al., “Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 17010-17020, in view of Pavone, US 2024/0199068, in further view of Hoiem et al., US 2021/0183080.
Shi and Pavone disclose all of the elements of claim 1, including wherein the candidate poses include a first candidate pose (see Shi, section 3, first paragraph, and section 1, first paragraph; note that given an initial pose estimate, the pose is refined). They do not expressly disclose that the instructions further include instructions to determine the first candidate pose by executing an algorithm for simultaneous localization and mapping (SLAM). Hoiem discloses wherein the instructions further include instructions to determine the first candidate pose by executing an algorithm for simultaneous localization and mapping (SLAM) (see paragraph 58; note that SLAM is used to generate an initial pose estimate). The motivation to combine is that the method “can complement the SLAM/VO method” (see section 2, last paragraph of Shi) and that SLAM “takes advantage of the sequence information and the small motion between frames to efficiently match features in subsequent frames” (see paragraph 31 of Hoiem). Therefore, one of ordinary skill in the art could have easily used SLAM, as described in Hoiem, to generate the initial pose of Shi. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shi, Pavone, and Hoiem to reach the aforementioned advantage.
Cited Art
The following is a listing of prior art considered relevant to the application but not cited in the above rejection.
Gou et al US 20250182325 A1 discloses The present teaching is directed to estimating 3D camera pose based on 2D features detected from a 2D image. Virtual 3D camera poses are generated with respect to a 3D model for a target organ and associated anatomical structures. Virtual 2D images are created by projecting the 3D model from perspectives determined based on the virtual 3D camera poses. Each virtual 2D image includes 2D projected target organ and/or 2D structures of some 3D anatomical structures visible from a corresponding perspective. 2D feature/camera pose mapping models are then accordingly obtained based on 2D features extracted from the virtual 2D images and the corresponding virtual 3D camera poses, where the 2D features include a 2D ridge line projected from a 3D ridge on the target organ represented in the 3D model. (See abstract)
Owechko US 20180061123 A1 discloses Examples include methods, systems, and articles for localizing a vehicle relative to an imaged surface configuration. Localizing the vehicle may include selecting pairs of features in an image acquired from a sensor supported by the vehicle having corresponding identified pairs of features in a reference representation of the surface configuration. A three-dimensional geoarc may be generated based on an angle of view of the sensor and the selected feature pair in the reference representation. In some examples, a selected portion of the geoarc disposed a known distance of the vehicle away from the portion of the physical surface configuration may be determined. Locations where the selected portions of geoarcs for selected feature pairs overlap may be identified. In some examples, the reference representation may be defined in a three-dimensional space of volume elements (voxels), and voxels that are included in the highest number of geoarcs may be determined. (see abstract).
Shi et al., Where Am I Looking At? Joint Location and Orientation Estimation by Cross-View Matching, 2020, discloses Cross-view geo-localization is the problem of estimating the position and orientation (latitude, longitude and azimuth angle) of a camera at ground level given a large-scale database of geo-tagged aerial (e.g., satellite) images. Existing approaches treat the task as a pure location estimation problem by learning discriminative feature descriptors, but neglect orientation alignment. It is well-recognized that knowing the orientation between ground and aerial images can significantly reduce matching ambiguity between these two views, especially when the ground-level images have a limited Field of View (FoV) instead of a full field-of-view panorama. Therefore, we design a Dynamic Similarity Matching network to estimate cross-view orientation alignment during localization. In particular, we address the cross-view domain gap by applying a polar transform to the aerial images to approximately align the images up to an unknown azimuth angle. Then, a two-stream convolutional network is used to learn deep features from the ground and polar-transformed aerial images. Finally, we obtain the orientation by computing the correlation between cross-view features, which also provides a more accurate measure of feature similarity, improving location recall. Experiments on standard datasets demonstrate that our method significantly improves state-of-the-art performance. Remarkably, we improve the top-1 location recall rate on the CVUSA dataset by a factor of 1.5x for panoramas with known orientation, by a factor of 3.3x for panoramas with unknown orientation, and by a factor of 6x for 180-degree FoV images with unknown orientation. (See abstract.)
Lentsch et al SliceMatch: Geometry-Guided Aggregation for Cross-View Pose Estimation discloses This work addresses cross-view camera pose estimation, i.e., determining the 3-Degrees-of-Freedom camera pose of a given ground-level image w.r.t. an aerial image of the local area. We propose SliceMatch, which consists of ground and aerial feature extractors, feature aggregators, and a pose predictor. The feature extractors extract dense features from the ground and aerial images. Given a set of candidate camera poses, the feature aggregators construct a single ground descriptor and a set of pose-dependent aerial descriptors. Notably, our novel aerial feature aggregator has a cross-view attention module for ground-view guided aerial feature selection and utilizes the geometric projection of the ground camera's viewing frustum on the aerial image to pool features. The efficient construction of aerial descriptors is achieved using precomputed masks. SliceMatch is trained using contrastive learning and pose estimation is formulated as a similarity comparison between the ground descriptor and the aerial descriptors. Compared to the state-of-the-art, SliceMatch achieves a 19% lower median localization error on the VIGOR benchmark using the same VGG16 backbone at 150 frames per second, and a 50% lower error when using a ResNet50 backbone. (see abstract).
Shi et al Accurate 3-DoF Camera Geo-Localization via Ground-to-Satellite Image Matching IEEE 2022 discloses We address the problem of ground-to-satellite image geo-localization, that is, estimating the camera latitude, longitude and orientation (azimuth angle) by matching a query image captured at the ground level against a large-scale database with geotagged satellite images. Our prior arts treat the above task as pure image retrieval by selecting the most similar satellite reference image matching the ground-level query image. However, such an approach often produces coarse location estimates because the geotag of the retrieved satellite image only corresponds to the image center while the ground camera can be located at any point within the image. To further consolidate our prior research findings, we present a novel geometry-aware geo-localization method. Our new method is able to achieve the fine-grained location of a query image, up to pixel size precision of the satellite image, once its coarse location and orientation have been determined. Moreover, we propose a new geometry-aware image retrieval pipeline to improve the coarse localization accuracy. Apart from a polar transform in our conference work, this new pipeline also maps satellite image pixels to the ground-level plane in the ground-view via a geometry-constrained projective transform to emphasize informative regions, such as road structures, for cross-view geo-localization. Extensive quantitative and qualitative experiments demonstrate the effectiveness of our newly proposed framework. We also significantly improve the performance of coarse localization results compared to the state-of-the-art in terms of location recalls. (see abstract).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T MOTSINGER whose telephone number is (571)270-1237. The examiner can normally be reached 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEAN T MOTSINGER/Primary Examiner, Art Unit 2673