Prosecution Insights
Last updated: April 19, 2026
Application No. 18/691,587

MAP CREATION DEVICE, MAP CREATION METHOD, AND MAP CREATION PROGRAM

Non-Final OA §103
Filed: Mar 13, 2024
Examiner: LANTZ, KARSTEN FOSTER
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Aisin Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 19 total applications across all art units; 19 currently pending

Statute-Specific Performance

§103: 73.8% (+33.8% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that this application is a National Stage application of PCT/JP2022/040838. Priority to JP2021-178338 with a priority date of 10/29/2021 is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDSs dated 3/13/2024 and 4/11/2025 have been considered and placed in the application file.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4, and 5 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2016/0037154 A1 (Hung et al.) in view of US Patent Publication 2021/0232151 A1 (Liu et al.) and US Patent Publication 2017/0277197 A1 (Liao et al.).

Claim 1

Regarding Claim 1, Hung et al. teach a map creation device comprising: an optimal value calculating part that calculates, by iterative computation, an optimal value of the homography matrix from the initial value and a luminance value of each pixel included in a road-surface region specified in the plurality of images ("Referring to FIG. 5b, the present invention may not only use the corresponding spatial coordinates of the feature points and the photographed images but also the minimize formula m.sub.y−Hm.sub.l to get the optimal solution of the matrix H (Homography)," par. 49); a camera position and orientation calculating part that calculates an amount of change in camera position and an amount of change in camera orientation of the in-vehicle camera by resolving the optimal value ("After getting the optimal solution of the matrix H to correct the cameras 31, 32, and 33, the image processing system 1 may obtain not only the positions of cameras 31, 32, and 33 in the vehicle 30 but also the extrinsic parameters of the cameras," par. 49).

Hung et al. do not explicitly teach all of: an image obtaining part that obtains a plurality of images from an in-vehicle camera that is mounted on a vehicle and photographs a surrounding of the vehicle, the plurality of images being obtained by photographing different locations; an odometry information calculating part that calculates odometry information indicating an amount of movement of the vehicle; an initial value calculating part that calculates an initial value of a homography matrix between the plurality of images from the odometry information of the vehicle; and a three-dimensional position calculating part that calculates three-dimensional positions of features in the plurality of images from the amount of change in camera position and the amount of change in camera orientation.

However, Liu et al. teach an image obtaining part that obtains a plurality of images from an in-vehicle camera that is mounted on a vehicle and photographs a surrounding of the vehicle, the plurality of images being obtained by photographing different locations ("the processor of the robotic device may be configured to receive a first image frame from an image sensor, receive a second image frame from the image sensor," par. 25; "the term "robotic device" refers any of various types of robotic vehicles," par. 28); an odometry information calculating part that calculates odometry information indicating an amount of movement of the vehicle ("the pre-processor(s) 404 may output processed measurements relating to position and orientation of the robotic device (e.g., acceleration, velocity, odometry information, etc.)," par. 81); and an initial value calculating part that calculates an initial value of a homography matrix between the plurality of images from the odometry information of the vehicle ("the processor of the robotic device may be configured to receive a first image frame from an image sensor, receive a second image frame from the image sensor, generate homograph computation values based on the first and second image frames, and generate a homography matrix based on the homograph computation values," par. 25).

[Figure annotation: Figure 2 shows the pathing of sensor data and other components for use in object detection and navigation.]

Liao et al. teach a three-dimensional position calculating part that calculates three-dimensional positions of features in the plurality of images from the amount of change in camera position and the amount of change in camera orientation ("The navigation application 118 detects a plurality of matching feature points in a first matching image pair, and determines a plurality of corresponding object points in three-dimensional (3D) space from the first image pair. The navigation application 118 tracks the plurality of feature points from the first image pair to a second image pair, and determines the plurality of corresponding object points in 3D space from the second image pair," par. 46).

Therefore, taking the teachings of Hung et al., Liu et al., and Liao et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the optimal homography matrix solution as taught by Hung et al. to use the image receiving and processing as taught by Liu et al. and the 3D position calculating as taught by Liao et al. The suggestion/motivation for doing so would have been that, "To support VSLAM calculations, the robotic device may be equipped with sensors that gather movement or distance information useful for employing VSLAM techniques. The robotic device may use such movement or distance information to determine a distance between captured images or frames, and use such distance information in conjunction with the homography matrix to estimate the dimensions and scale of the objects in the frames. This in turn allows the robotic device to determine its pose with a higher degree of precision and accuracy than if the pose is determined solely based on captured images." as noted by the Liu et al. disclosure in paragraph [0033], which also motivates the combination because the combination would predictably have higher accuracy, as there is a reasonable expectation that the device uses movement or distance information in combination with a homography matrix calculation; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

The rejection of apparatus claim 1 above applies mutatis mutandis to the corresponding limitations of method claim 4 and non-transitory computer readable medium claim 5, while noting that the rejection above cites to both device and method disclosures. Claims 4 and 5 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.

Claim 4

Regarding Claim 4, Hung et al. teach a map creation method comprising: calculating, by iterative computation, an optimal value of the homography matrix from the initial value and a luminance value of each pixel included in a road-surface region specified in the plurality of images ("Referring to FIG. 5b, the present invention may not only use the corresponding spatial coordinates of the feature points and the photographed images but also the minimize formula m.sub.y−Hm.sub.l to get the optimal solution of the matrix H (Homography)," par. 49); calculating an amount of change in camera position and an amount of change in camera orientation of the in-vehicle camera by resolving the optimal value ("After getting the optimal solution of the matrix H to correct the cameras 31, 32, and 33, the image processing system 1 may obtain not only the positions of cameras 31, 32, and 33 in the vehicle 30 but also the extrinsic parameters of the cameras," par. 49).

Hung et al. do not explicitly teach all of: obtaining a plurality of images from an in-vehicle camera that is mounted on a vehicle and photographs a surrounding of the vehicle, the plurality of images being obtained by photographing different locations; calculating odometry information indicating an amount of movement of the vehicle; calculating an initial value of a homography matrix between the plurality of images from the odometry information of the vehicle; and calculating three-dimensional positions of features in the plurality of images from the amount of change in camera position and the amount of change in camera orientation.

However, Liu et al. teach obtaining a plurality of images from an in-vehicle camera that is mounted on a vehicle and photographs a surrounding of the vehicle, the plurality of images being obtained by photographing different locations ("the processor of the robotic device may be configured to receive a first image frame from an image sensor, receive a second image frame from the image sensor," par. 25; "the term "robotic device" refers any of various types of robotic vehicles," par. 28); calculating odometry information indicating an amount of movement of the vehicle ("the pre-processor(s) 404 may output processed measurements relating to position and orientation of the robotic device (e.g., acceleration, velocity, odometry information, etc.)," par. 81); and calculating an initial value of a homography matrix between the plurality of images from the odometry information of the vehicle ("the processor of the robotic device may be configured to receive a first image frame from an image sensor, receive a second image frame from the image sensor, generate homograph computation values based on the first and second image frames, and generate a homography matrix based on the homograph computation values," par. 25).

Liao et al. teach calculating three-dimensional positions of features in the plurality of images from the amount of change in camera position and the amount of change in camera orientation ("The navigation application 118 detects a plurality of matching feature points in a first matching image pair, and determines a plurality of corresponding object points in three-dimensional (3D) space from the first image pair. The navigation application 118 tracks the plurality of feature points from the first image pair to a second image pair, and determines the plurality of corresponding object points in 3D space from the second image pair," par. 46). Hung et al., Liu et al., and Liao et al. are combined as per claim 1.

Claim 5

Regarding Claim 5, Hung et al. teach calculate, by iterative computation, an optimal value of the homography matrix from the initial value and a luminance value of each pixel included in a road-surface region specified in the plurality of images ("Referring to FIG. 5b, the present invention may not only use the corresponding spatial coordinates of the feature points and the photographed images but also the minimize formula m.sub.y−Hm.sub.l to get the optimal solution of the matrix H (Homography)," par. 49); calculate an amount of change in camera position and an amount of change in camera orientation of the in-vehicle camera by resolving the optimal value ("After getting the optimal solution of the matrix H to correct the cameras 31, 32, and 33, the image processing system 1 may obtain not only the positions of cameras 31, 32, and 33 in the vehicle 30 but also the extrinsic parameters of the cameras," par. 49).

Hung et al. do not explicitly teach all of: a map creation program stored on a non-transitory computer readable medium, the map creation program configured to cause for causing a computer to: obtain plurality of images from an in-vehicle camera that is mounted on a vehicle and photographs a surrounding of the vehicle, the plurality of images being obtained by photographing different locations; calculate odometry information indicating an amount of movement of the vehicle; calculate an initial value of a homography matrix between the plurality of images from the odometry information of the vehicle; and calculate three-dimensional positions of features in the plurality of images from the amount of change in camera position and the amount of change in camera orientation.

However, Liu et al. teach a map creation program stored on a non-transitory computer readable medium, the map creation program configured to cause for causing a computer to: ("Further aspects may include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a robotic device to perform operations of the methods summarized above," par. 11); obtain a plurality of images from an in-vehicle camera that is mounted on a vehicle and photographs a surrounding of the vehicle, the plurality of images being obtained by photographing different locations ("the processor of the robotic device may be configured to receive a first image frame from an image sensor, receive a second image frame from the image sensor," par. 25; "the term "robotic device" refers any of various types of robotic vehicles," par. 28); calculate odometry information indicating an amount of movement of the vehicle ("the pre-processor(s) 404 may output processed measurements relating to position and orientation of the robotic device (e.g., acceleration, velocity, odometry information, etc.)," par. 81); and calculate an initial value of a homography matrix between the plurality of images from the odometry information of the vehicle ("the processor of the robotic device may be configured to receive a first image frame from an image sensor, receive a second image frame from the image sensor, generate homograph computation values based on the first and second image frames, and generate a homography matrix based on the homograph computation values," par. 25).

Liao et al. teach calculate three-dimensional positions of features in the plurality of images from the amount of change in camera position and the amount of change in camera orientation ("The navigation application 118 detects a plurality of matching feature points in a first matching image pair, and determines a plurality of corresponding object points in three-dimensional (3D) space from the first image pair. The navigation application 118 tracks the plurality of feature points from the first image pair to a second image pair, and determines the plurality of corresponding object points in 3D space from the second image pair," par. 46). Hung et al., Liu et al., and Liao et al. are combined as per claim 1.

2nd Claim Rejections - 35 USC § 103

Claims 2 and 3 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2016/0037154 A1 (Hung et al.) in view of US Patent Publication 2021/0232151 A1 (Liu et al.), US Patent Publication 2017/0277197 A1 (Liao et al.), and US Patent Publication 2023/0010175 A1 (Kato).

Claim 2

Regarding claim 2, Hung et al., Liu et al., and Liao et al. teach the map creation device according to claim 1 as noted above. Hung et al. also teach a use determining part that determines to use the amount of change in camera position and the amount of change in camera orientation ("After getting the optimal solution of the matrix H to correct the cameras 31, 32, and 33, the image processing system 1 may obtain not only the positions of cameras 31, 32, and 33 in the vehicle 30 but also the extrinsic parameters of the cameras," par. 49). Liao et al. teach when an error is less than a threshold value, the error being represented by an angle between the estimated value of the road surface's normal vector and a value of a road surface's normal vector that is determined in advance by calibration of the in-vehicle camera ("By only updating the stored vehicle pose when the rotation angle or translation exceed the minimum threshold, the accumulation of errors is minimized in vehicle pose transformations calculated between image pairs. If the rotation angle and translation do not exceed the minimum threshold, the vehicle transformation is discarded," par. 33).

Hung et al., Liu et al., and Liao et al. do not explicitly teach all of: wherein the camera position and orientation calculating part further calculates an estimated value of a road surface's normal vector by resolving the optimal value, the road surface's normal vector being a vector in a normal direction of a road surface viewed from the in-vehicle camera.
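The claim 2 limitation mapped above gates use of a computed pose change on the angle between the estimated road-surface normal and a normal fixed in advance by camera calibration. A minimal sketch of that gating test, assuming hypothetical vectors and a hypothetical 5-degree threshold (none taken from the claims or the cited references):

```python
import numpy as np

def use_pose_update(n_est, n_calib, threshold_rad):
    """Accept an estimated pose change only when the angle between the
    estimated road-surface normal and the calibrated normal is below the
    threshold (the claims 2-3 gating logic, sketched)."""
    n_est = n_est / np.linalg.norm(n_est)
    n_calib = n_calib / np.linalg.norm(n_calib)
    # Clip guards against floating-point overshoot outside [-1, 1].
    angle = np.arccos(np.clip(np.dot(n_est, n_calib), -1.0, 1.0))
    return bool(angle < threshold_rad)

# A normal tilted ~2 degrees from the calibrated one passes a 5-degree gate;
# a wildly wrong normal does not.
n_calib = np.array([0.0, -1.0, 0.0])  # assumed camera-frame road normal
n_est = np.array([np.sin(np.radians(2.0)), -np.cos(np.radians(2.0)), 0.0])
print(use_pose_update(n_est, n_calib, np.radians(5.0)))                      # True
print(use_pose_update(np.array([1.0, 0.0, 0.0]), n_calib, np.radians(5.0)))  # False
```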
However, Kato teaches wherein the camera position and orientation calculating part further calculates an estimated value of a road surface's normal vector by resolving the optimal value, the road surface's normal vector being a vector in a normal direction of a road surface viewed from the in-vehicle camera ("the information processing device calculates the normal vector of the plane which approximates the road surface based on the position information of the feature drawn on the road surface," par. 28).

Therefore, taking the teachings of Hung et al., Liu et al., Liao et al., and Kato as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the optimal homography matrix solution as taught by Hung et al. to use the image receiving and processing as taught by Liu et al., the 3D position calculating as taught by Liao et al., and the road surface normal vector calculation as taught by Kato. The suggestion/motivation for doing so would have been that, "Thereby, the information processing device can suitably calculate at least one of the pitch angle or the roll angle of the moving body based on the relation between the calculated normal vector and the orientation of the moving body." as noted by the Kato disclosure in paragraph [0028], which also motivates the combination because the combination would predictably have additional capability, as there is a reasonable expectation that the device will use this information to calculate the orientation of the vehicle; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 3

Regarding claim 3, Hung et al., Liu et al., Liao et al., and Kato teach the map creation device according to claim 2 as noted above. Liao et al. teach wherein the use determining part determines not to use the amount of change in camera position and the amount of change in camera orientation, when the error is greater than or equal to the threshold value ("The rotation angle and translation are determined from the vehicle pose transformation. If the rotation angle or translation exceed a minimum threshold, the stored vehicle pose is updated," par. 32). Hung et al., Liu et al., Liao et al., and Kato are combined as per claim 2.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent Publication 2014/0192145 A1 to Anguelov et al. discloses estimating the orientation of a panoramic camera mounted on a vehicle.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARSTEN LANTZ, whose telephone number is (571) 272-4564. The examiner can normally be reached Monday-Friday, 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.F.L./
Examiner, Art Unit 2664
Date: 1/29/2026

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664
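The rejections above all turn on the road-surface (plane-induced) homography. In normalized camera coordinates, a plane with unit normal n at distance d from the first camera induces the homography H = R + t nᵀ/d between two views related by rotation R and translation t; this relation is why resolving an estimated H can yield the camera pose change (claim 1) and the road-surface normal (claim 2). A minimal numpy sketch verifying the forward relation, with hypothetical pose values not drawn from the cited references:

```python
import numpy as np

def plane_homography(R, t, n, d):
    """Euclidean homography induced by the plane n·X = d between two views
    related by rotation R and translation t (normalized image coordinates)."""
    return R + np.outer(t, n) / d

# Hypothetical pose change: 3-degree yaw plus forward motion, road plane below camera.
theta = np.radians(3.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.0, 0.0, 0.5])    # camera moved 0.5 m forward
n = np.array([0.0, -1.0, 0.0])   # road normal in the first camera frame
d = 1.5                          # camera height above the road (m)

H = plane_homography(R, t, n, d)

# Take a 3D point on the road plane (n·X = d holds for y = -1.5) and check that
# H maps its projection in view 1 onto its projection in view 2.
X = np.array([0.4, -1.5, 6.0])
x1 = X / X[2]            # projection in view 1 (normalized coordinates)
X2 = R @ X + t           # same point expressed in the second camera frame
x2 = X2 / X2[2]          # projection in view 2
x1_mapped = H @ x1
x1_mapped = x1_mapped / x1_mapped[2]
print(np.allclose(x1_mapped, x2))  # True
```

A real pipeline runs the reverse direction: it decomposes an estimated H into candidate (R, t, n) triples (e.g., via SVD-based decomposition) and uses odometry to select the physically valid one, which appears to be what the claims' "resolving the optimal value" language refers to.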

Prosecution Timeline

Mar 13, 2024
Application Filed
Feb 02, 2026
Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
