Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy was filed on May 6, 2024.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on April 12, 2024 and October 28, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 15, and 18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zhou et al. (U.S. Patent Publication No. 2022/0164595 A1, hereinafter “Zhou”).
Regarding claim 1, Zhou teaches a feature map generation method, performed by a computer device (¶ 0003: The present disclosure provides a technical solution for vehicle localization, more specifically a method for vehicle localization, an apparatus for vehicle localization, an electronic device and a computer readable storage medium.), comprising:
obtaining a plurality of image frames photographed for a target scene (Figure 1; ¶ 0069: During the capturing, the capturing device (for example, a capturing vehicle) may travel in an area including the external environment 105 and capture a video or a set of images (including the reference image 140 of the external environment 105) of this area during traveling.), separately extracting image feature points from each image frame of the plurality of image frames, and determining corresponding feature descriptors based on a position in a corresponding image at which the extracted image feature points are located (¶ 0071: In some embodiments, the computing device 120 or another entity (for example, another computing device) may have generated and stored a set of keypoints, a set of reference descriptors and a set of spatial coordinates in association for each reference image in the set of reference images of the external environment 105.; ¶ 0073: To obtain the set of reference descriptors 147 associated with the set of keypoints 143, the computing device 120 may first determine a reference descriptor map of the reference image 140 and then obtain, from the reference descriptor map, a plurality of reference descriptors (namely, the set of reference descriptors 147) corresponding to respective keypoints of the set of keypoints 143.);
forming image feature points with a matching relationship in the image feature points of the each image frame into a feature point set (¶ 0084: Referring back to FIG. 2, at block 230, after obtaining the plurality of candidate poses 155 by offsetting the predicted pose 150, and assuming that the vehicle 110 is in the plurality of candidate poses 155, respectively, the computing device 120 may determine a plurality of sets of image descriptors 165 corresponding to the set of spatial coordinates 145, and the plurality of sets of image descriptors 165 belong to the image descriptor map 160.; ¶ 0086: Thereafter, in the image descriptor map 160 of the captured image 130, the computing device 120 may determine an image descriptor 715 corresponding to the projection point 710 to obtain an image descriptor of the set of image descriptors 165-1. Likewise, for other spatial coordinates in the set of spatial coordinates 145, the computing device 120 may determine image descriptors corresponding to these spatial coordinates and thus obtain the set of image descriptors 165-1.);
determining a representative feature point from the feature point set (¶ 0071: After acquiring the reference image 140 of the external environment 105, the computing device 120 may obtain the set of spatial coordinates 145 and the set of reference descriptors 147 corresponding to the set of keypoints 143 in the reference image 140.), and calculating a difference between a feature descriptor corresponding to a remaining image feature point in the feature point set and a feature descriptor corresponding to the representative feature point (¶ 0088: Referring back to FIG. 2, at block 240, the computing device 120 may determine a plurality of similarities 170 between the plurality of sets of image descriptors 165 and the set of reference descriptors 147. In other words, for a set of image descriptors among the plurality of sets of image descriptors 165, the computing device 120 may determine a similarity between the set of image descriptors and the set of reference descriptors 147, thereby determining a similarity of the plurality of similarities 170.);
determining a position error of the feature point set based on the difference (¶ 0092: Subsequent to determining the plurality of differences associated with the plurality of descriptor pairs between the first set of image descriptors 165-1 and the set of reference descriptors 147, the computing device 120 may determine, based on the plurality of differences, a similarity between the first set of image descriptors 165-1 and the set of reference descriptors 147, namely the first similarity 170-1 of the plurality of similarities 170.), iteratively updating the remaining image feature point in the feature point set based on the position error (¶ 0081: For example, if the computing device 120 iteratively updates the predicted pose 150 using the example method 200, the predetermined offset units and the predetermined maximum offset ranges may be reduced gradually in the iterations. ; ¶ 0095: After determining the respective probabilities of the plurality of candidate poses 155 being the real pose, the computing device 120 may determine, from the plurality of candidate poses 155 and their respective probabilities, an expected pose of the vehicle 110 as the updated predicted pose 180. As such, all the candidate poses 155 are accounted for the ultimately updated predicted pose 180 according to respective probabilities, so as to enhance the accuracy of the updated predicted pose 180.), and obtaining an updated feature point set based on an iteration stop condition being satisfied (¶ 0066: Consequently, the computing device 120 may update the predicted pose 150 based on the captured image 130, in order to obtain an updated predicted pose 180 with accuracy greater than the predetermined threshold for use in applications requiring high localization accuracy.); and
determining a space feature point corresponding to the updated feature point set based on a position in the corresponding image at which each image feature point in the updated feature point set is located (¶ 0085: FIG. 7 illustrates a schematic diagram of determining the first set of image descriptors 165-1 by projecting the set of spatial coordinates 145 onto the captured image 130 on the assumption that the vehicle 110 is in the first candidate pose 155-1, according to embodiments of the present disclosure… The computing device 120 may then determine related projection parameters or data for projecting the set of spatial coordinates 145 onto the captured image 130 when the vehicle 110 is in the first candidate pose 155-1. For example, the projection parameters or data may include, but are not limited to, a conversion relation between the coordinate system of the vehicle 110 and the coordinate system of the imaging device of the vehicle 110, a conversion relation between the coordinate system of the vehicle 110 and the spatial coordinate system, various parameters of the imaging device of the vehicle 110, and the like.), and generating a feature map based on the space feature point, the feature map positioning a to-be-positioned moving device in the target scene (¶ 0138: In addition, as discussed above, the trained feature extraction model 310 may be used to generate the localization map 1130, and the generated localization map 1130 may be applied to the real-time localization of the vehicle 110.).
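For clarity of the record only, the following is a minimal, non-limiting sketch of the claimed pipeline as mapped above, written in Python with OpenCV. All function names, parameters, and the choice of ORB features are hypothetical illustrations by the examiner; they do not appear in Zhou or in the instant claims, and the triangulation of space feature points is deliberately omitted.

```python
# Hypothetical sketch of the claimed feature map generation steps; ORB is a
# stand-in extractor -- neither Zhou nor the instant claims require ORB.
import cv2
import numpy as np

def refine_feature_sets(frames, max_iters=10, tol=50.0):
    orb = cv2.ORB_create()

    # Extract image feature points and their descriptors from each frame.
    keypoints, descriptors = [], []
    for frame in frames:
        kp, des = orb.detectAndCompute(frame, None)
        keypoints.append(kp)
        descriptors.append(des)

    # Form feature point sets from matches between consecutive frames
    # (pairwise sets here; the claim contemplates sets spanning frames).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    feature_sets = []
    for i in range(len(frames) - 1):
        for m in matcher.match(descriptors[i], descriptors[i + 1]):
            feature_sets.append([(i, m.queryIdx), (i + 1, m.trainIdx)])

    # For each set: take a representative point, compare the remaining
    # points' descriptors against it, and iteratively drop the worst
    # outlier until a stop condition (small error or iteration cap) holds.
    refined = []
    for fset in feature_sets:
        for _ in range(max_iters):
            rep_f, rep_i = fset[0]  # representative feature point
            rep_des = descriptors[rep_f][rep_i].astype(np.float32)
            errors = [
                np.linalg.norm(descriptors[f][j].astype(np.float32) - rep_des)
                for f, j in fset[1:]
            ]
            if not errors or max(errors) < tol:
                break
            worst = int(np.argmax(errors)) + 1
            fset = fset[:worst] + fset[worst + 1:]
        refined.append(fset)

    # Space feature points would then be triangulated from the 2-D image
    # positions of each updated set (camera poses omitted from this sketch).
    return refined
```

The descriptor-distance threshold used here as the stop condition is a placeholder; the claimed "position error" could equally be a reprojection error once camera geometry is available.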
Regarding claim 15, claim 15 has been analyzed with regard to claim 1 and is rejected for the same reasons of anticipation as set forth above, as well as in accordance with Zhou’s further teaching of:
at least one memory configured to store program code (¶ 0005: The memory stores instructions executable by the at least one processor, and the instructions when executed by the at least one processor cause the at least one processor to:); and
at least one processor configured to read the program code and operate as instructed by the program code (¶ 0005: The memory stores instructions executable by the at least one processor, and the instructions when executed by the at least one processor cause the at least one processor to:)…
Regarding claim 18, claim 18 has been analyzed with regard to claim 1 and is rejected for the same reasons of anticipation as set forth above, as well as in accordance with Zhou’s further teaching of:
A non-transitory computer-readable storage medium storing computer code which, when executed by at least one processor, causes the at least one processor to at least (¶ 0005: The memory stores instructions executable by the at least one processor, and the instructions when executed by the at least one processor cause the at least one processor to; ¶ 0006: According to a third aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions.):
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (U.S. Patent Publication No. 2022/0164595 A1, hereinafter “Zhou”) in view of Pourian et al. (U.S. Patent Publication No. 2019/0156145 A1, hereinafter “Pourian”).
Regarding claim 10, Zhou teaches the feature map generation method according to claim 1.
Zhou does not explicitly teach wherein obtaining the plurality of image frames photographed for the target scene comprises: obtaining a plurality of original image frames photographed for the target scene by a fisheye camera, and performing distortion correction on the plurality of original image frames to obtain the plurality of image frames photographed for the target scene.
However, Pourian does teach wherein obtaining the plurality of image frames photographed for the target scene comprises: obtaining a plurality of original image frames photographed for the target scene by a fisheye camera (¶ 0021: For example, given two corresponding fisheye images taken across different views of the same scene, it is valuable to determine attached sets of features or feature points between the two fisheye images.), and performing distortion correction on the plurality of original image frames to obtain the plurality of image frames photographed for the target scene (¶ 0030: As shown, fisheye images 121, 122 are received by lens distortion correction module 101, in which fisheye images 121, 122 go through a lens distortion correction based on characteristics of the cameras used to attain fisheye images 121, 122.).
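As a minimal sketch of the kind of lens distortion correction Pourian describes (¶ 0030), the following assumes OpenCV's fisheye camera model; the calibration matrix K and distortion coefficients D are placeholder values chosen for illustration, not parameters taken from Pourian.

```python
# Hypothetical fisheye distortion correction using OpenCV's fisheye model.
# K and D are placeholder calibration values, not parameters from Pourian.
import cv2
import numpy as np

K = np.array([[400.0,   0.0, 640.0],   # fx,  0, cx
              [  0.0, 400.0, 360.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
D = np.array([0.1, -0.05, 0.01, -0.002])  # k1..k4 fisheye coefficients

def correct_frames(original_frames):
    """Undistort each original fisheye frame before feature extraction."""
    return [cv2.fisheye.undistortImage(f, K, D, Knew=K)
            for f in original_frames]
```

In practice K and D would come from a calibration of the actual fisheye camera; the corrected frames would then feed the feature extraction step of claim 1.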
Zhou and Pourian are considered to be analogous art, as both pertain to identifying feature points in imaged scenes. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the vehicle localization method of Zhou with the fisheye-image keypoint detection and matching of Pourian. The motivation for this combination is that Pourian's system uses geometry-aware feature matching for fisheye images, together with image-based feature matching and a symmetry test, thereby improving feature matching performance (see ¶ 0026).
This motivation for the combination of Zhou and Pourian is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP § 2141(III).
Allowable Subject Matter
Claims 2-9, 11-14, 16, 17, 19, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Komuro et al. (U.S. Patent Publication No. 2023/0039143 A1) teaches a device for estimating the self-position of a moving body by matching a feature extracted from an acquired image against a database in which position information and features are associated with each other in advance.
Chen et al. (U.S. Patent Publication No. 2020/0342626 A1) teaches a method for camera localization which includes: predicting a location of the camera when shooting a target image, according to location information of the camera when shooting an earlier image, to obtain predicted location information; filtering out at least one feature point in the environment map that is currently not observable by the camera, according to the predicted location information and the location information and viewing-angle area information of each feature point in the environment map; and matching feature points in the target image with the remaining feature points in the environment map to obtain a feature point correspondence, from which the location information of the camera is determined.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW JONES whose telephone number is (703) 756-4573. The examiner can normally be reached Monday through Friday, 8:00-5:00 EST, and is off every other Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW B. JONES/Examiner, Art Unit 2667
/MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667