Prosecution Insights
Last updated: April 19, 2026
Application No. 18/413,649

APPARATUS AND METHOD FOR ESTIMATING USER POSE IN THREE-DIMENSIONAL SPACE

Status: Non-Final OA (§103)
Filed: Jan 16, 2024
Examiner: BAKER, CHARLOTTE M
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Korea University Research And Business Foundation
OA Round: 1 (Non-Final)

Grant Probability: 93% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 93% (991 granted / 1,067 resolved; +30.9% vs TC avg), above average
Interview Lift: -0.2% (minimal; computed over resolved cases with an interview)
Typical Timeline: 2y 2m average prosecution
Career History: 1,082 total applications across all art units (1,067 resolved, 15 currently pending)
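
As a sanity check, the headline rate follows directly from the career counts: 991 / 1,067 ≈ 0.929, which rounds to the 93% shown, and the 1,067 resolved plus 15 pending cases account for the 1,082 total applications.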

Statute-Specific Performance

§101: 21.6% (-18.4% vs TC avg)
§103: 24.7% (-15.3% vs TC avg)
§102: 27.4% (-12.6% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 1,067 resolved cases.
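
Assuming the deltas are simple arithmetic differences, each implied Tech Center baseline is the examiner's rate plus the magnitude of the delta, and all four statutes point to the same estimated baseline of 40.0% (e.g., §101: 21.6% + 18.4% = 40.0%; §112: 4.3% + 35.7% = 40.0%).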

Office Action

§103 Non-Final (mailed Feb 25, 2026)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Korea. It is noted, however, that applicant has not filed a certified copy of the Korean application as required by 37 CFR 1.55. Electronic retrieval failed on 19 September 2024.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, 8-9 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (hereinafter Wang), WO2022204854A1, in view of Ok et al. (hereinafter Ok), "Simultaneous Tracking and Rendering: Real-time Monocular Localization for MAVs."

Regarding claim 1: Wang discloses a relative pose identification unit configured to identify estimated relative pose information between a plurality of images (FIG. 5 is a schematic flowchart of determining a pose relationship between multiple frames of driving images according to an embodiment of the present application. As shown in FIG. 5: The first step: image feature detection, which detects the features in the driving image and identifies the road surface features, environmental features, obstacle features, etc. included in the image, so as to facilitate selecting feature points from among these features. The second step: image feature point matching, which matches the same feature points between different driving images to determine the pose relationship between the driving images. The third step: depth estimation, which estimates the distance between the object and the vehicle camera, so as to determine the size in the world coordinate system of each unit pixel, in the camera coordinate system, on the driving image. The fourth step: matching pixel points and world points for pose estimation; that is, matching the pixel point in each driving image with the point in the world coordinate system, determining the pose relationship between the same world point (in the world coordinate system) shot at two different shooting times, and then determining the pose relationship between the corresponding pixel points. The fifth step: reprojection error optimization, which compares the pose relationship between multiple frames of driving images, optimizes the results, reduces the error, and finally outputs the pose relationship. In this way, the pose relationship between the multiple frames of driving images is determined by the feature points between the multiple frames of driving images, which can improve the accuracy of determining the pose relationship, thereby improving the acquisition efficiency of the blind spot images; p. 18, first full par.) acquired in a chronological order in a real space (The dynamic frame selection module is equivalent to the computer vision system shown in FIG. 1 above or the computer system 212 shown in FIG. 1 above. The dynamic frame selection module can be used to filter out the driving images that can fill the blind area image from one or more frames of historical driving images; the specific screening method can refer to the relevant description of the following method embodiments. For example: responsible for the multi-frame images with known relative pose relationship and the current blind spot position, firstly arrange the multi-frame images in chronological order, and obtain the dynamic splicing range of the current blind spot for the different frame images that have been sorted through fast edge judgment, so as to filter out the target frame image to stitch the blind area image; p. 16, fifth full par.).

Wang fails to specifically address a user pose estimating unit configured to: acquire a three-dimensional space model constructed using spatial information including at least one of inertial information, depth information, and image information about the real space; generate estimated pose candidate information based on the acquired three-dimensional space model; associate the identified estimated pose candidate information and the estimated relative pose information with each other; and estimate the user pose based on the association result.

Ok discloses a user pose estimating unit configured to: acquire a three-dimensional space model (a laptop, p. 4525, Section III) constructed using spatial information including at least one of inertial information, depth information, and image information about the real space ("We use OpenGL to generate synthetic keyframe image Is and associated depth image Id using a colored triangle mesh.", p. 4524, Section II B); generate estimated pose candidate information based on the acquired three-dimensional space model (Fig. 7); associate the identified estimated pose candidate information and the estimated relative pose information with each other (Fig. 7); and estimate the user pose based on the association result ("While we do not have a ground-truth trajectory for this sequence, the similarity between the camera images and renderings at the pose estimates indicate successful tracking.", p. 4527, Section III F).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include a user pose estimating unit configured to: acquire a three-dimensional space model constructed using spatial information including at least one of inertial information, depth information, and image information about the real space; generate estimated pose candidate information based on the acquired three-dimensional space model; associate the identified estimated pose candidate information and the estimated relative pose information with each other; and estimate the user pose based on the association result, in order to provide successful tracking as taught by Ok (p. 4527, Section III F).

Regarding claim 6: Wang in view of Ok satisfies all the elements of claim 1. Wang further discloses at least one of a depth measurement device, an image acquisition device, a wireless communication device, an inertial device (inertial measurement unit (IMU) 224, p. 10, par. 4), or a position information measurement device. Wang fails to specifically address wherein the spatial information. Ok discloses wherein the spatial information ("We use OpenGL to generate synthetic keyframe image Is and associated depth image Id using a colored triangle mesh.", p. 4524, Section II B). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the spatial information, in order to generate a synthetic keyframe image Is and an associated depth image Id using a colored triangle mesh as taught by Ok (p. 4524, Section II B).

Regarding claim 8: Wang in view of Ok satisfies all the elements of claim 1. Wang further discloses the FIG. 5 multi-frame pipeline quoted above for claim 1: feature point selection, feature point matching, depth estimation, pixel-to-world-point matching for pose estimation, and reprojection error optimization (p. 18, first full par.). Wang fails to specifically address wherein the estimated relative pose information is generated by: estimating a relative pose from the plurality of images based on a 3D local map constructed using a local feature as keypoint information between the plurality of images.

Ok discloses wherein the estimated relative pose information is generated by: estimating a relative pose from the plurality of images based on a 3D local map constructed using a local feature as keypoint information ("An image and a depth map is essentially a dense 3D point-cloud and the role of the keyframe pose is only to segment out a small portion of the mesh, bounded by the keyframe camera frustum, to use for tracking.", Section II C) between the plurality of images ("Given an undistorted monocular camera image It at each time step t, we are interested in finding the current camera pose TWt ∈ SE(3) with respect to the world frame of reference W. We leverage a geometrically accurate prior map M of the environment and occasionally render a synthetic camera image Is and a corresponding depth map Id at a desired keyframe pose TWk in the world.", Section II). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the estimated relative pose information is generated by: estimating a relative pose from the plurality of images based on a 3D local map constructed using a local feature as keypoint information between the plurality of images, in order to provide more accuracy in future estimates as taught by Ok (p. 4525, Section II C).

Regarding claim 9: The structural elements of apparatus claim 1 perform all of the steps of method claim 9. Thus, claim 9 is rejected for the same reasons discussed in the rejection of claim 1.

Regarding claim 14: The structural elements of apparatus claim 8 perform all of the steps of method claim 14. Thus, claim 14 is rejected for the same reasons discussed in the rejection of claim 8.

Allowable Subject Matter

Claims 2-5, 7 and 10-13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLOTTE M BAKER, whose telephone number is (571) 272-7459. The examiner can normally be reached Mon - Fri 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JENNIFER MEHMOOD, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLOTTE M BAKER/
Primary Examiner, Art Unit 2664
25 February 2026
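
For orientation on the technique at issue, below is a minimal sketch of the multi-frame relative pose pipeline the rejection quotes from Wang (feature detection, feature matching, pose estimation with outlier cleanup). Neither Wang nor the Office Action ties the method to any particular library; OpenCV and the ORB detector are used purely as illustrative stand-ins, and `estimate_relative_pose` is a hypothetical helper name.

```python
import cv2
import numpy as np

def estimate_relative_pose(img_prev, img_curr, K):
    """Estimate the relative camera pose (R, t) between two frames.

    img_prev, img_curr: grayscale images taken in chronological order.
    K: 3x3 camera intrinsic matrix (numpy array).
    """
    # Step 1: feature detection (ORB stands in for Wang's generic "features").
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    # Step 2: match the same feature points between the two driving images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Steps 4-5: pose estimation, with RANSAC outlier rejection standing in
    # for the reprojection-error optimization of Step 5.
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
    # Monocular recovery leaves translation known only up to scale, which is
    # why Wang's pipeline adds an explicit depth estimation step (Step 3).
    return R, t
```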
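Claims 1 and 8 also lean on Ok's rendered-keyframe localization, in which a synthetic image and depth map rendered from a prior mesh act as a depth-backed 3D local map bounded by the keyframe frustum. The sketch below makes the same assumptions as above (OpenCV as a stand-in; the OpenGL rendering step is omitted, with the keyframe pose `T_world_kf` and intrinsics `K` supplied by the caller; `localize_against_keyframe` is a hypothetical helper).

```python
import cv2
import numpy as np

def localize_against_keyframe(img_live, img_synth, depth_synth, K, T_world_kf):
    """Estimate the live camera pose by matching against a rendered keyframe.

    img_synth / depth_synth: synthetic image and depth map rendered from the
    prior mesh at keyframe pose T_world_kf (4x4 matrix); K: 3x3 intrinsics.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(img_live, None)
    kp_s, des_s = orb.detectAndCompute(img_synth, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_s, des_l)

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp_s[m.queryIdx].pt
        z = float(depth_synth[int(round(v)), int(round(u))])
        if z <= 0.0:  # no mesh geometry was rendered at this pixel
            continue
        # Lift the keyframe pixel to 3D: first into the keyframe camera
        # frame, then into the world frame. The set of such points is the
        # "3D local map" bounded by the keyframe camera frustum.
        p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
        pts3d.append((T_world_kf @ p_cam)[:3])
        pts2d.append(kp_l[m.trainIdx].pt)

    # Solve PnP with RANSAC: live 2D keypoints vs. depth-lifted 3D map points.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.float32(pts3d), np.float32(pts2d), K, distCoeffs=None)
    return ok, rvec, tvec
```

Matching the live image against the synthetic rendering and then solving PnP on depth-lifted correspondences is, roughly, the "associate and estimate" behavior the rejection maps onto the claimed user pose estimating unit.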

Prosecution Timeline

Jan 16, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602905
A Computer Software Module Arrangement, a Circuitry Arrangement, an Arrangement and a Method for Improved Object Detection Adapting the Detection through Shifting the Image
2y 5m to grant; granted Apr 14, 2026

Patent 12585654
Dynamic Vision System for Robot Fleet Management
2y 5m to grant; granted Mar 24, 2026

Patent 12579900
UAV PERCEPTION VALIDATION BASED UPON A SEMANTIC AGL ESTIMATE
2y 5m to grant; granted Mar 17, 2026

Patent 12548331
TECHNIQUES TO PERFORM TRAJECTORY PREDICTIONS
2y 5m to grant; granted Feb 10, 2026

Patent 12543924
MEDICAL SUPPORT SYSTEM, MEDICAL SUPPORT DEVICE, AND MEDICAL SUPPORT METHOD
2y 5m to grant; granted Feb 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 93%
With Interview: 93% (-0.2%)
Median Time to Grant: 2y 2m
PTA Risk: Low

Based on 1,067 resolved cases by this examiner. Grant probability is derived from the career allow rate.
