DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Korea. It is noted, however, that applicant has not filed a certified copy of the Korean application as required by 37 CFR 1.55; an attempt at electronic retrieval of the certified copy failed on 19 September 2024.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 8-9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al., WO 2022/204854 A1 (hereinafter "Wang"), in view of Ok et al., "Simultaneous Tracking and Rendering: Real-time Monocular Localization for MAVs" (hereinafter "Ok").
Regarding claim 1: Wang discloses a relative pose identification unit configured to identify estimated relative pose information between a plurality of images (see FIG. 5, a schematic flowchart of determining a pose relationship between multiple frames of driving images according to an embodiment of the present application. As shown in FIG. 5: First step: image feature detection, which detects the features in the driving image and identifies the road surface features, environmental features, obstacle features, etc. included in the image, so as to facilitate the selection of feature points from those features. Second step: image feature point matching, which matches the same feature points between different driving images to determine the pose relationship between the driving images. Third step: depth estimation, which estimates the distance between the object and the vehicle camera, so as to determine the size, in the world coordinate system, of each unit pixel of the driving image in the camera coordinate system. Fourth step: matching the pixel points and the world points for pose estimation; that is, matching the pixel points in each driving image with points in the world coordinate system to determine the pose relationship between the same world point (in the world coordinate system) shot at two different shooting times, and then determining the pose relationship between the corresponding pixel points. Fifth step: reprojection error optimization, which compares the pose relationships between the multiple frames of driving images, optimizes the results, reduces the error, and finally outputs the pose relationship. In this way, the pose relationship between the multiple frames of traveling images is determined by the feature points between the multiple frames of traveling images, which can improve the accuracy of determining the pose relationship, thereby improving the acquisition efficiency of the blind spot images; p. 18, first full par.) acquired in a chronological order in a real space (the dynamic frame selection module is equivalent to the computer vision system shown in FIG. 1 above or the computer system 212 shown in FIG. 1 above. The dynamic frame selection module can be used to filter out, from one or more frames of historical driving images, the driving images that can fill the blind area image; the specific screening method is described in the method embodiments below. For example: the module receives the multi-frame images with a known relative pose relationship and the current blind spot position, first arranges the multi-frame images in chronological order, and then, for the sorted frame images, obtains the dynamic splicing range of the current blind spot through fast edge judgment, so as to filter out the target frame image to stitch the blind area image; p. 16, fifth full par.);
Wang fails to specifically address and a user pose estimating unit configured to: acquire a three-dimensional space model constructed using spatial information including at least one of inertial information, depth information, and image information about the real space; generate estimated pose candidate information based on the acquired three-dimensional space model; associate the identified estimated pose candidate information and the estimated relative pose information with each other; and estimate the user pose based on the association result.
Ok discloses and a user pose estimating unit configured to: acquire a three-dimensional space model (a laptop, p. 4525, Section III) constructed using spatial information including at least one of inertial information, depth information, and image information about the real space (We use OpenGL to generate synthetic keyframe image Is and associated depth image Id using a colored triangle mesh., p. 4524, Section II B); generate estimated pose candidate information based on the acquired three-dimensional space model (Fig. 7); associate the identified estimated pose candidate information and the estimated relative pose information with each other (Fig. 7); and estimate the user pose based on the association result (While we do not have a ground-truth trajectory for this sequence, the similarity between the camera images and renderings at the pose estimates indicate successful tracking., p. 4527, Section III F).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include and a user pose estimating unit configured to: acquire a three-dimensional space model constructed using spatial information including at least one of inertial information, depth information, and image information about the real space; generate estimated pose candidate information based on the acquired three-dimensional space model; associate the identified estimated pose candidate information and the estimated relative pose information with each other; and estimate the user pose based on the association result in order to provide successful tracking as taught by Ok (p. 4527, Section III F).
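For context, the five-step pipeline quoted from Wang (feature detection, feature matching, depth estimation, pixel-to-world matching, and reprojection error optimization) follows the standard structure of feature-based pose estimation. The following is a minimal sketch of the projection and reprojection-error computation underlying Wang's fifth step; it is an illustration only, not Wang's actual implementation, and the intrinsic matrix K, point values, and function names are hypothetical:

```python
import numpy as np

def project(points_w, R, t, K):
    """Project 3D world points into the image via pose [R|t] and intrinsics K."""
    cam = (R @ points_w.T + t.reshape(3, 1)).T   # world frame -> camera frame
    uv_h = (K @ cam.T).T                          # camera frame -> homogeneous pixels
    return uv_h[:, :2] / uv_h[:, 2:3]             # perspective division

def reprojection_error(points_w, observed_uv, R, t, K):
    """Mean pixel distance between observed and reprojected points (cf. Wang's step 5)."""
    predicted = project(points_w, R, t, K)
    return float(np.mean(np.linalg.norm(predicted - observed_uv, axis=1)))

# Hypothetical pinhole intrinsics and an identity pose, for illustration.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0], [0.5, -0.2, 4.0]])
uv = project(pts, R, t, K)
```

In a full pipeline, the optimizer would adjust R and t (and possibly the 3D points) to minimize this error across all matched frames before outputting the pose relationship.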
Regarding claim 6: Wang in view of Ok satisfies all the elements of claim 1. Wang further discloses at least one of a depth measurement device, an image acquisition device, a wireless communication device, an inertial device (inertial measurement unit (IMU) 224, p. 10, par. 4), or a position information measurement device.
Wang fails to specifically address wherein the spatial information.
Ok discloses wherein the spatial information (We use OpenGL to generate synthetic keyframe image Is and associated depth image Id using a colored triangle mesh., p. 4524, Section II B).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the spatial information in order to generate a synthetic keyframe image Is and an associated depth image Id using a colored triangle mesh, as taught by Ok (p. 4524, Section II B).
Regarding claim 8: Wang in view of Ok satisfies all the elements of claim 1. Wang further discloses the feature-point-based pose estimation pipeline (the selection of feature points from the detected features. Second step: image feature point matching, which matches the same feature points between different driving images to determine the pose relationship between the driving images. Third step: depth estimation, which estimates the distance between the object and the vehicle camera, so as to determine the size, in the world coordinate system, of each unit pixel of the driving image in the camera coordinate system. Fourth step: matching the pixel points and the world points for pose estimation; that is, matching the pixel points in each driving image with points in the world coordinate system to determine the pose relationship between the same world point (in the world coordinate system) shot at two different shooting times, and then determining the pose relationship between the corresponding pixel points. Fifth step: reprojection error optimization, which compares the pose relationships between the multiple frames of driving images, optimizes the results, reduces the error, and finally outputs the pose relationship. In this way, the pose relationship between the multiple frames of traveling images is determined by the feature points between the multiple frames of traveling images, which can improve the accuracy of determining the pose relationship, thereby improving the acquisition efficiency of the blind spot images; p. 18, first full par.).
Wang fails to specifically address wherein the estimated relative pose information is generated by: estimating a relative pose from the plurality of images based on a 3D local map constructed using a local feature as keypoint information between the plurality of images.
Ok discloses wherein the estimated relative pose information is generated by: estimating a relative pose from the plurality of images based on a 3D local map constructed using a local feature as keypoint information (An image and a depth map is essentially a dense 3D point-cloud and the role of the keyframe pose is only to segment out a small portion of the mesh, bounded by the keyframe camera frustum, to use for tracking., Section II C) between the plurality of images (Given an undistorted monocular camera image It at each time step t, we are interested in finding the current camera pose TW t ∈ SE(3) with respect to the world frame of reference W. We leverage a geometrically accurate prior map M of the environment and occasionally render a synthetic camera image Is and a corresponding depth map Id at a desired keyframe pose TW k in the world., Section II).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the estimated relative pose information is generated by: estimating a relative pose from the plurality of images based on a 3D local map constructed using a local feature as keypoint information between the plurality of images in order to provide more accuracy in future estimates as taught by Ok (p. 4525, Section II C).
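For context, Ok's observation that "an image and a depth map is essentially a dense 3D point-cloud" reduces to back-projecting each pixel of the rendered depth map through the camera intrinsics and the keyframe pose. The following is a minimal sketch of that back-projection; it is an illustration only, not Ok's actual implementation, and the intrinsic matrix K, the identity keyframe pose, and the function name are hypothetical:

```python
import numpy as np

def backproject(u, v, depth, K, T_wk):
    """Lift a pixel (u, v) with rendered depth into a 3D world point.

    K is the camera intrinsic matrix and T_wk the 4x4 keyframe pose in the
    world frame; applying this to every pixel of a rendered depth map yields
    the dense local 3D point cloud used for tracking.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # pixel -> normalized camera ray
    p_cam = ray * depth                               # scale ray by rendered depth
    return T_wk[:3, :3] @ p_cam + T_wk[:3, 3]         # camera frame -> world frame

# Hypothetical intrinsics; keyframe placed at the world origin for illustration.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_wk = np.eye(4)
p = backproject(320.0, 240.0, 2.0, K, T_wk)   # principal point, 2 units deep
```

The keyframe pose thus only selects which portion of the prior mesh (bounded by the camera frustum) is lifted into 3D for matching against the current camera image.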
Regarding claim 9: The structural elements of apparatus claim 1 perform all of the steps of method claim 9. Thus, claim 9 is rejected for the same reasons discussed in the rejection of claim 1.
Regarding claim 14: The structural elements of apparatus claim 8 perform all of the steps of method claim 14. Thus, claim 14 is rejected for the same reasons discussed in the rejection of claim 8.
Allowable Subject Matter
Claims 2-5, 7 and 10-13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLOTTE M BAKER whose telephone number is (571)272-7459. The examiner can normally be reached Mon - Fri 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JENNIFER MEHMOOD, can be reached at (571)272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLOTTE M BAKER/Primary Examiner, Art Unit 2664
25 February 2026