Prosecution Insights
Last updated: April 19, 2026
Application No. 18/205,241

VEHICLE AND CONTROL METHOD THEREOF

Status: Final Rejection (§103)
Filed: Jun 02, 2023
Examiner: BONANSINGA, AARON TIMOTHY
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Kia Corporation
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76%, above average (19 granted / 25 resolved; +14.0% vs TC avg)
Interview Lift: +33.3% for resolved cases with interview (strong)
Avg Prosecution: 2y 11m typical; 29 applications currently pending
Career History: 54 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 69.6% (+29.6% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)
Tech Center averages are estimates; based on career data from 25 resolved cases.
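As a quick arithmetic check on the deltas above, a short illustrative snippet (not part of the source analytics) recovering the implied Tech Center average from each rate and its reported difference:

```python
# Each row reports the examiner's rate and its delta versus the Tech
# Center average, so the implied TC average is simply rate - delta.
rows = [("§101", 7.4, -32.6), ("§103", 69.6, 29.6),
        ("§102", 10.3, -29.7), ("§112", 9.2, -30.8)]

for statute, rate, delta in rows:
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% for each row
```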

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Arguments

Applicant's arguments (see remarks), filed 11/28/2025, with respect to claims 1-2 and 4-20 have been fully considered but respectfully are unpersuasive. On page 8, applicant argues "Applicant respectfully submits that Choi, Sagong, Lee, Aoki, Singh, Lipchin, Kasami, Kasami, Sakano, and Tam, taken individually or combined, fail to teach or suggest inventive features of the presently claimed invention, wherein the setting of the area around the vanishing point as the template includes: determining a reliability of the template based on a variance value of the template; and changing a position of the template, based on the reliability of the template being low, as is called for by claims 1 and 15". In response, the Office respectfully disagrees. Based on the breadth of the claim language, the prior art by CHOI (US 20210256720 A1) explicitly teaches wherein the setting of the area around the vanishing point as the template (Fig. 10. Paragraph [0107]-CHOI discloses the vanishing point extractor 310 (see FIG. 1) (specifically, the matching point extractor 313 (see FIG. 1)) may identify an area similar to the first template corresponding to each sample point, within the search area corresponding to each sample point) includes: determining a reliability of the template based on a variance value of the template (Fig. 10. Paragraph [0107]-CHOI discloses the vanishing point extractor 310 may perform patch matching to identify an area similar to the first template. An area may be determined to be "similar" to the first template in response to a determination that the pixels of the area match the pixels of the first template within a particular confidence and/or margin (e.g., at least a 90% match between the pixels of the area and the pixels of the first template)). CHOI fails to explicitly teach changing a position of the template, based on the reliability of the template being low. However, AOKI explicitly teaches changing a position of the template (Fig. 1. Paragraph [0108]-AOKI discloses Step 314: the vertical offset correction unit 132 reads the vertical moving amount ΔY1 of the reference image from the geometry correction information storage unit 115, adds a correction amount ΔYc of a vertical offset to the vertical moving amount ΔY1 of the reference image, and stores as the vertical moving amount ΔY1 (ΔY1=ΔY1+ΔYc) of the reference image in the geometry correction information storage unit 115. In addition, a value obtained by adding the correction amount ΔYc of the vertical offset to the vertical moving amount of the reference image is stored as the vertical moving amount ΔY1 of the reference image in the geometry correction modification information storage unit 122. In paragraph [0109]-AOKI discloses the geometry correction unit 125 corrects the vertical offset between the benchmark image (first image) and the reference image (second image) by the correction amount ΔYc (=ΔY1) of the vertical offset.), based on the reliability of the template being low (Fig. 1.
Paragraph [0101]-AOKI discloses Step 312: the vertical offset correction unit 132 sets the difference between the vertical moving amounts of the benchmark image and the reference image to the vertical offset of the benchmark image and the reference image. The vertical offset correction unit 132 calculates the quadratic approximation 603 from the vertical offset (the vertical offset of the maximum number of valid regions 602) at which the number of valid regions of the parallax image is maximum using the data of the number of valid regions of the predetermined range in the number of valid regions 601 indicating an evaluation value of reliability of the parallax image in the vertical offset. The value of the vertical offset where the quadratic approximation of the number of valid regions becomes a maximum value is calculated, and is set to the correction amount ΔYc of the vertical offset. The evaluation value pertaining to reliability of the parallax image indicates a coincidence in the vertical direction between the benchmark image and the reference image (wherein normalized cross correlation (NCC) is used, reliability may be based on template/image region size, and a horizontal/vertical moving amount ΔX2 is calculated such that the vanishing point on the geometry-corrected benchmark image becomes the design value). Please also read paragraphs [0046, 0052, 0066-0070, 0075-0076]). On page 9, applicant argues "Accordingly, Aoki fails to disclose determining the reliability of a template based on a variance value, nor does it disclose adjusting the position of the template on the basis of such reliability, as is presently claimed.". In response, the Office respectfully disagrees for the reasons stated above and below. On page 10, applicant argues "For at least these reasons, Applicant respectfully submits that Choi, Sagong, Lee, Aoki, Singh, Lipchin, Kasami, Kasami, Sakano, and Tam, do neither anticipate nor render obvious independent claim/claims 1 and 15.". In response, the Office respectfully disagrees for the reasons stated above and below. On pages 10-11, applicant argues "Applicant submits that claims 2 and 4-14, which depend from independent claim 1, and claims 16-20, which depend from independent claim 15, are thus likewise allowable over the cited art for at least the same reasons noted above.". In response, the Office respectfully disagrees for the reasons stated above and below.
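To make the disputed limitation easier to follow, here is a minimal illustrative sketch, not drawn from any cited reference, of one way "determining a reliability of the template based on a variance value of the template" and "changing a position of the template" on low reliability could look in code; the window size, threshold, and upward shift are hypothetical choices:

```python
import numpy as np

def template_variance(template: np.ndarray) -> float:
    # A nearly uniform patch (sky, bare road) has low pixel variance and
    # gives template matching little texture to lock onto, so variance
    # can serve as a simple reliability score for the template.
    return float(np.var(template))

def place_template(gray: np.ndarray, cx: int, cy: int,
                   half: int = 16, min_var: float = 100.0,
                   step: int = 8, max_tries: int = 4) -> tuple[int, int]:
    """Pick a template center near the vanishing point (cx, cy), shifting
    the window while its variance (reliability) stays below min_var."""
    for _ in range(max_tries):
        patch = gray[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
        if template_variance(patch) >= min_var:
            break  # reliable enough: keep this position
        cy -= step  # reliability low: change the template position
    return cx, cy
```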
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210256720 A1), hereinafter referenced as CHOI, in view of SAGONG et al. (US 20200125859 A1), hereinafter referenced as SAGONG, and in further view of AOKI et al. (US 20200068186 A1), hereinafter referenced as AOKI.

Regarding claim 1, CHOI explicitly teaches a control method (Fig. 1. Paragraph [0053]-CHOI discloses FIG. 1 is a block diagram showing a vanishing point extraction device. In paragraph [0055]-CHOI discloses the vanishing point extraction device 10 is a device that captures an image, analyzes the captured image, and extracts the vanishing point of the image based on the analysis result. In paragraph [0145]-CHOI discloses the vehicle controller 410 may control one or more elements of the host vehicle 400 to control driving and/or navigation of the host vehicle 400 through the surrounding environment based on the determined vanishing point of the second image. Please also see Fig. 20 and paragraphs [0147-0148]) of a vehicle (Fig. 3, #400 called a host vehicle. Paragraph [0084]-CHOI discloses FIG. 3 is a diagram showing a host vehicle 400 including the vanishing point extraction device 10 of FIG. 1 and/or FIG. 2), the control method comprising: setting, as a template, an area around a vanishing point (Fig. 5. Paragraph [0060]-CHOI discloses when the current image (also referred to herein as a second image IMG2) is received from the image sensor 100 (e.g., in response to such receipt), the vanishing point extractor 310 may extract the vanishing point of the current image using the previous image (also referred to herein as a first image IMG1), which may be an image previously generated by the image sensor 100. Further in paragraph [0090]-CHOI discloses the vanishing point extractor 310 may identify an area of pixels corresponding to the vanishing point VP1 of the first image IMG1 and the object OB1 of the first image IMG1 using the information on the first image IMG1. As shown in FIG. 4, the area corresponding to the object OB1 of the first image IMG1 may be implemented in the form of a bounding box. Please also read paragraphs [0076-0077]) in a previous frame (Fig. 1, #IMG1 called the first image. Paragraph [0060]-CHOI discloses the image sensor 100 may generate the current image (e.g., second image IMG2) subsequently to generating the previous image (e.g., first image IMG1) (wherein the #VP1 is associated with the first image #IMG1 or the previous frame and #VP2 is associated with the second image #IMG2 or the current frame)) of an image input from a camera (Fig. 1, 2 and 20, #100 and #510 called an image sensor. Paragraph [0054]. Further in paragraph [0090]-CHOI discloses the first image IMG1 is an image obtained (e.g., generated by the image sensor 100) by photographing the front of the host vehicle 400 (see FIG. 3) and/or generating an image of at least a portion of the surrounding environment that is proximate to a front of the host vehicle 400 (e.g., in front of the host vehicle 400) and may include an object OB1 recognized as a vehicle); determining, by a controller (Fig. 1-2 and 20, #300, #530 and #550 called processor. Paragraph [0140]-CHOI discloses example embodiments shown in FIG. 19 may be performed by the processor 300 of FIG. 1, the processor 300 of FIG. 2, the processor 530 of FIG. 20, the main processor 550 of FIG. 20, and/or any parts thereof, or the like), a matching area matching with the template by performing template matching in a current frame (Fig. 4.
Paragraph [0076]-CHOI discloses the matching point extractor 313 may obtain a plurality of first templates respectively corresponding to the plurality of sample points SP1 to SPn from the first image IMG1. In paragraph [0077]-CHOI discloses the matching point extractor 313 may compare the plurality of first templates of the first image IMG1 and the second image IMG2, identify an area similar to at least one of the plurality of first templates in (e.g., of) the second image IMG2, and obtain at least one area identified above from the second image IMG2 as a second template (wherein second image #IMG2 is the current frame). The matching point extractor 313 may extract at least one of matching points MP1 to MPm from at least one second template. The matching point extractor 313 may obtain a central point (e.g., center point) of each of the second templates as a matching point. Further in paragraph [0078]-CHOI discloses templates, or the like that are determined to be “similar” may refer to separate areas of one or more images that have a determined correlation that is greater than a correlation value threshold); determining, by the controller (Fig. 1-2 and 20, #300, #530 and #550 called processor. Paragraph [0140]-CHOI discloses example embodiments shown in FIG. 19 may be performed by the processor 300 of FIG. 1, the processor 300 of FIG. 2, the processor 530 of FIG. 20, the main processor 550 of FIG. 20, and/or any parts thereof, or the like), an amount of position change of the vanishing point (Fig. 4-7 and 15, 17-18, #VP1 and #VP2 called vanishing points. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4. In paragraph [0123]-CHOI discloses the vanishing point extractor 310 may calculate an average of coordinates of a plurality of matching points MP1 to MP4 and correct (e.g., adjust) the coordinates of the vanishing point VP1 of the first image IMG1 using the calculated y-coordinate of the average coordinate in order to establish the vanishing point VP2 as the point having the corrected coordinates in the second image IMG2) based on an amount of position change between the template and the matching area (Fig. 4. Paragraph [0076]-CHOI discloses the matching point extractor 313 may obtain a plurality of first templates respectively corresponding to the plurality of sample points SP1 to SPn from the first image IMG1. In paragraph [0077]-CHOI discloses the matching point extractor 313 may compare the plurality of first templates of the first image IMG1 and the second image IMG2, identify an area similar to at least one of the plurality of first templates in (e.g., of) the second image IMG2, and obtain at least one area identified above from the second image IMG2 as a second template (wherein second image #IMG2 is the current frame). The matching point extractor 313 may extract at least one of matching points MP1 to MPm from at least one second template); wherein the setting of the area around the vanishing point as the template (Fig. 10. Paragraph [0107]-CHOI discloses the vanishing point extractor 310 (see FIG. 1) (specifically, the matching point extractor 313 (see FIG. 
1)) may identify an area similar to the first template corresponding to each sample point, within the search area corresponding to each sample point) includes: determining a reliability of the template based on a variance value of the template; (Fig. 10. Paragraph [0107]-CHOI discloses the vanishing point extractor 310 may perform patch matching to identify an area similar to the first template. An area may be determined to be “similar” to the first template in response to a determination that the pixels of the area match the pixels of the first template within a particular confidence and/or margin (e.g., at least a 90% match between the pixels of the area and the pixels of the first template)). Although CHOI explicitly teaches estimating, by the controller, the amount of position change of the vanishing point (Fig. 4. Paragraph [0076]-CHOI discloses the matching point extractor 313 may obtain a plurality of first templates respectively corresponding to the plurality of sample points SP1 to SPn from the first image IMG1. In paragraph [0077]-CHOI discloses the matching point extractor 313 may compare the plurality of first templates of the first image IMG1 and the second image IMG2, identify an area similar to at least one of the plurality of first templates in (e.g., of) the second image IMG2, and obtain at least one area identified above from the second image IMG2 as a second template (wherein second image #IMG2 is the current frame). The matching point extractor 313 may extract at least one of matching points MP1 to MPm from at least one second template. In paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4. In paragraph [0123]-CHOI discloses the vanishing point extractor 310 may calculate an average of coordinates of a plurality of matching points MP1 to MP4 and correct (e.g., adjust) the coordinates of the vanishing point VP1 of the first image IMG1 using the calculated y-coordinate of the average coordinate in order to establish the vanishing point VP2 as the point having the corrected coordinates in the second image IMG2). CHOI fails to explicitly teach estimating, by the controller, a change amount in a pose of the camera based on the amount of position change of the vanishing point; and estimating, by the controller, a pose of the vehicle depending on the change amount in pose of the camera. However, SAGONG discloses estimating, by the controller (Fig. 1, #120 called a processor. Paragraph [0050]), a change amount in a pose (Fig. 1. Paragraph [0051]-SAGONG discloses the sensor 110 generates sensing data. The sensor 110 generates sensing data by sensing information used for estimating a position. In paragraph [0052]-SAGONG discloses the IMU is also referred to as “inertial measurer”. The IMU measures a change in pose) of the camera (Fig. 1, #110 called a sensor. Paragraph [0050]. Further in paragraph [0057]-SAGONG discloses the camera sensor included in the sensor 110 may be attached to the target); and estimating, by the controller (Fig. 1. Paragraph [0049]- SAGONG discloses FIG. 1 is a block diagram illustrating an example of a position), a pose of the vehicle (Fig. 1. 
Paragraph [0049]-SAGONG discloses FIG. 1 is a block diagram illustrating an example of a position estimation apparatus 100. In the description herein related to the position estimation apparatus 100, the term "target" refers to an object of which a pose and position are estimated by the position estimation apparatus 100. The position estimation apparatus 100 may be mounted on a vehicle. The target may be the vehicle. Please also read paragraph [0025]) depending on the change amount in pose (Fig. 1. Paragraph [0051]-SAGONG discloses the sensor 110 generates sensing data. The sensor 110 generates sensing data by sensing information used for estimating a position. In paragraph [0052]-SAGONG discloses the IMU is also referred to as "inertial measurer". The IMU measures a change in pose. In a further paragraph, SAGONG discloses that because the camera sensor makes a same motion as a motion of the target, the image data sensed by the camera sensor includes visual information matching the position and pose of the target) of the camera (Fig. 1, #110 called a sensor. Paragraph [0050]. Further in paragraph [0057]-SAGONG discloses the camera sensor included in the sensor 110 may be attached to the target). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI of having a control method of a vehicle, with the teachings of SAGONG of having estimating, by the controller, a change amount in a pose of the camera; and estimating, by the controller, a pose of the vehicle depending on the change amount in pose of the camera. CHOI's method would thus include estimating, by the controller, a change amount in a pose of the camera based on the amount of position change of the vanishing point; and estimating, by the controller, a pose of the vehicle depending on the change amount in pose of the camera. The motivation behind the modification would have been to obtain a method that improves autonomous driving and vanishing point detection, since both CHOI and SAGONG concern vanishing point detection. Wherein CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAGONG provides systems and methods that improve vanishing-point detection and lane boundary detection models. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAGONG et al. (US 20200125859 A1), Abstract and Paragraph [0109]. CHOI fails to explicitly teach and changing a position of the template, based on the reliability of the template being low. However, AOKI explicitly teaches and changing a position of the template (Fig. 1. Paragraph [0095]-AOKI discloses Step 306: the vertical offset correction unit 132 sets a minimum value ΔY0 in the predetermined range to the vertical moving amount ΔY1 (=ΔY0) of the reference image of the geometry correction modification information storage unit 122 with a vertical moving amount ΔY1 of the reference image read in Step 301 as the center. Herein, in the second and later operations, the vertical offset correction unit 132 sets a value further increased by ΔY from the minimum value in the predetermined range with the vertical moving amount of the reference image as the center to the vertical moving amount ΔY1 (=ΔY1+ΔY) of the reference image of the geometry correction modification information storage unit 122.
In paragraph [0096]-AOKI discloses Step 307: with the similar operation to Step 203, the geometry correction unit 125 geometrically corrects the benchmark image and the reference image after capturing, and moves the reference image in the vertical direction by the predetermined interval ΔY in the predetermined range. Please also read paragraphs [0052, 0075-0076 and 0108-0109]), based on the reliability of the template being low (Fig. 1. Paragraph [0098]-AOKI discloses Step 309: the parallax image evaluation unit 131 counts the number of regions determined as valid in the parallax image. In paragraph [0099]-AOKI discloses Step 310: in a case where the vertical moving amount of the reference image is all set for every predetermined interval ΔY in the predetermined range, and Step 306 to Step 309 are performed, the process proceeds to Step 311. If not, the process proceeds to Step 306. Further in paragraph [0068]-AOKI discloses the parallax calculation unit 126 determines whether the parallax is valid or invalid by the following two determination methods. In a case where it is determined that a minimum value of the obtained SAD is equal to or more than a threshold using the retrieved image 404 on the reference image 402 at the same height as the template image 403, the parallax calculation unit 126 determines that the template 403 and the retrieved image 405 are not matched and the parallax of the region is invalid. In a case where the minimum value of the SAD is less than the threshold, it is determined that the parallax is valid. Please also read paragraph [0070]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG of having a control method of a vehicle, with the teachings of AOKI of having changing a position of the template, based on the reliability of the template being low. CHOI's method would thus include changing a position of the template, based on the reliability of the template being low. The motivation behind the modification would have been to obtain a method that improves autonomous driving and the correction of images, since both CHOI and AOKI concern vehicles and image analysis. Wherein CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while AOKI provides systems and methods that improve the correction of images. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and AOKI et al. (US 20200068186 A1), Abstract and Paragraph [0147].

Regarding claim 15, CHOI explicitly teaches a vehicle (Fig. 3, #400 called a host vehicle. Paragraph [0084]-CHOI discloses FIG. 3 is a diagram showing a host vehicle 400 including the vanishing point extraction device 10 of FIG. 1 and/or FIG. 2), comprising: a camera (Fig. 1, 2 and 20, #100 and #510 called an image sensor. Paragraph [0054]) configured to photograph an area around the vehicle (Fig. 4. Paragraph [0090]-CHOI discloses the first image IMG1 is an image obtained (e.g., generated by the image sensor 100) by photographing the front of the host vehicle 400 (see FIG. 3) and/or generating an image of at least a portion of the surrounding environment that is proximate to a front of the host vehicle 400 (e.g., in front of the host vehicle 400) and may include an object OB1 recognized as a vehicle.); and a controller electrically connected to the camera (Fig. 1-2 and 20, #300, #530 and #550 called processor.
Paragraph [0140]-CHOI discloses example embodiments shown in FIG. 19 may be performed by the processor 300 of FIG. 1, the processor 300 of FIG. 2, the processor 530 of FIG. 20, the main processor 550 of FIG. 20, and/or any parts thereof, or the like), wherein the controller is configured to: set, as a template, an area around a vanishing point (Fig. 5. Paragraph [0060]-CHOI discloses when the current image (also referred to herein as a second image IMG2) is received from the image sensor 100 (e.g., in response to such receipt), the vanishing point extractor 310 may extract the vanishing point of the current image using the previous image (also referred to herein as a first image IMG1), which may be an image previously generated by the image sensor 100. Further in paragraph [0090]-CHOI discloses the vanishing point extractor 310 may identify an area of pixels corresponding to the vanishing point VP1 of the first image IMG1 and the object OB1 of the first image IMG1 using the information on the first image IMG1. As shown in FIG. 4, the area corresponding to the object OB1 of the first image IMG1 may be implemented in the form of a bounding box. Please also read paragraphs [0076-0077]) in a previous frame (Fig. 1, #IMG1 called the first image. Paragraph [0060]-CHOI discloses the image sensor 100 may generate the current image (e.g., second image IMG2) subsequently to generating the previous image (e.g., first image IMG1) (wherein the #VP1 is associated with the first image #IMG1 or the previous frame and #VP2 is associated with the second image #IMG2 or the current frame)) of an image input from the camera (Fig. 4, #100 called an image sensor. Paragraph [0054]. Further in paragraph [0090]-CHOI discloses the first image IMG1 is an image obtained (e.g., generated by the image sensor 100) by photographing the front of the host vehicle 400 (see FIG. 3) and/or generating an image of at least a portion of the surrounding environment that is proximate to a front of the host vehicle 400 (e.g., in front of the host vehicle 400) and may include an object OB1 recognized as a vehicle.), determine a matching area matching with the template by performing template matching in a current frame (Fig. 4. Paragraph [0076]-CHOI discloses the matching point extractor 313 may obtain a plurality of first templates respectively corresponding to the plurality of sample points SP1 to SPn from the first image IMG1. In paragraph [0077]-CHOI discloses the matching point extractor 313 may compare the plurality of first templates of the first image IMG1 and the second image IMG2, identify an area similar to at least one of the plurality of first templates in (e.g., of) the second image IMG2, and obtain at least one area identified above from the second image IMG2 as a second template (wherein second image #IMG2 is the current frame). The matching point extractor 313 may extract at least one of matching points MP1 to MPm from at least one second template. The matching point extractor 313 may obtain a central point (e.g., center point) of each of the second templates as a matching point. Further in paragraph [0078]-CHOI discloses templates, or the like that are determined to be "similar" may refer to separate areas of one or more images that have a determined correlation that is greater than a correlation value threshold), determine an amount of position change of the vanishing point (Fig. 4-7 and 15, 17-18, #VP1 and #VP2 called vanishing points. Paragraph [0122]-CHOI discloses referring to FIG.
15, the vanishing point extractor 310 may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4. In paragraph [0123]-CHOI discloses the vanishing point extractor 310 may calculate an average of coordinates of a plurality of matching points MP1 to MP4 and correct (e.g., adjust) the coordinates of the vanishing point VP1 of the first image IMG1 using the calculated y-coordinate of the average coordinate in order to establish the vanishing point VP2 as the point having the corrected coordinates in the second image IMG2) based on an amount of position change between the template and the matching area (Fig. 4. Paragraph [0076]-CHOI discloses the matching point extractor 313 may obtain a plurality of first templates respectively corresponding to the plurality of sample points SP1 to SPn from the first image IMG1. In paragraph [0077]-CHOI discloses the matching point extractor 313 may compare the plurality of first templates of the first image IMG1 and the second image IMG2, identify an area similar to at least one of the plurality of first templates in (e.g., of) the second image IMG2, and obtain at least one area identified above from the second image IMG2 as a second template (wherein second image #IMG2 is the current frame). The matching point extractor 313 may extract at least one of matching points MP1 to MPm from at least one second template), wherein the controller is further configured to: determine a reliability of the template based on a variance value of the template (Fig. 10. Paragraph [0107]-CHOI discloses the vanishing point extractor 310 (see FIG. 1) (specifically, the matching point extractor 313 (see FIG. 1)) may identify an area similar to the first template corresponding to each sample point, within the search area corresponding to each sample point. The vanishing point extractor 310 may perform patch matching to identify an area similar to the first template. An area may be determined to be "similar" to the first template in response to a determination that the pixels of the area match the pixels of the first template within a particular confidence and/or margin (e.g., at least a 90% match between the pixels of the area and the pixels of the first template)). Although CHOI explicitly teaches the amount of position change of the vanishing point (Fig. 4. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4. In paragraph [0123]-CHOI discloses the vanishing point extractor 310 may calculate an average of coordinates of a plurality of matching points MP1 to MP4 and correct (e.g., adjust) the coordinates of the vanishing point VP1 of the first image IMG1 using the calculated y-coordinate of the average coordinate in order to establish the vanishing point VP2 as the point having the corrected coordinates in the second image IMG2).
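Before turning to the SAGONG combination, CHOI's correction in paragraphs [0122]-[0123] (averaging the matching points and reusing the average y-coordinate) can be sketched in a few lines; the function name and array layout are illustrative, not from the record:

```python
import numpy as np

def update_vanishing_point(vp_prev: tuple[float, float],
                           matching_points: np.ndarray) -> tuple[float, float]:
    """matching_points: (m, 2) array of (x, y) matching-point coordinates
    in the current frame. Following CHOI's [0123], the y-coordinate of the
    prior vanishing point is corrected using the average coordinate of the
    matching points."""
    avg_y = float(matching_points[:, 1].mean())
    return (vp_prev[0], avg_y)
```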
CHOI fails to explicitly teach estimate a change amount in a pose of the camera based on the amount of position change of the vanishing point, and estimate a pose of the vehicle depending on the change amount in pose of the camera. However, SAGONG discloses estimate a change amount in a pose (Fig. 1. Paragraph [0051]-SAGONG discloses the sensor 110 generates sensing data. The sensor 110 generates sensing data by sensing information used for estimating a position. In paragraph [0052]-SAGONG discloses the IMU is also referred to as "inertial measurer". The IMU measures a change in pose) of the camera (Fig. 1, #110 called a sensor. Paragraph [0050]. Further in paragraph [0057]-SAGONG discloses the camera sensor included in the sensor 110 may be attached to the target) based on the amount of position change of the vanishing point (Fig. 1. Paragraph [00]-SAGONG discloses FIG. 2 illustrates a situation in which an initial pose of the initial localization information has a rotation error Δθ with respect to an actual pose. In paragraph [0071]-SAGONG discloses the first rotation reference point 510 is determined from map data 501 based on initial localization information. Based on the initial localization information, an initial position and an initial pose of a target on the map data 501 is determined. Also, the first rotation reference point 510 corresponds to a vanishing point of lane boundary lines captured by an image sensor at the initial position and the initial pose of the target. In paragraph [0072]-SAGONG discloses when the initial localization information, for example, the initial position and the initial pose, has the aforementioned position error and pose error, position/pose parameter correction 520 is performed on the first rotation reference point 510), and estimate a pose of the vehicle (Fig. 1. Paragraph [0049]-SAGONG discloses FIG. 1 is a block diagram illustrating an example of a position estimation apparatus 100. In the description herein related to the position estimation apparatus 100, the term "target" refers to an object of which a pose and position are estimated by the position estimation apparatus 100. The position estimation apparatus 100 may be mounted on a vehicle. The target may be the vehicle. Please also read paragraph [0025]) depending on the change amount in pose (Fig. 1. Paragraph [0051]-SAGONG discloses the sensor 110 generates sensing data. The sensor 110 generates sensing data by sensing information used for estimating a position. In paragraph [0052]-SAGONG discloses the IMU is also referred to as "inertial measurer". The IMU measures a change in pose. In a further paragraph, SAGONG discloses that because the camera sensor makes a same motion as a motion of the target, the image data sensed by the camera sensor includes visual information matching the position and pose of the target) of the camera (Fig. 1, #110 called a sensor. Paragraph [0050]. Further in paragraph [0057]-SAGONG discloses the camera sensor included in the sensor 110 may be attached to the target). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI of having a vehicle, with the teachings of SAGONG of having estimate a change amount in a pose of the camera, and estimate a pose of the vehicle depending on the change amount in pose of the camera. CHOI's vehicle would thus include estimating a change amount in a pose of the camera based on the amount of position change of the vanishing point, and estimating a pose of the vehicle depending on the change amount in pose of the camera. The motivation behind the modification would have been to obtain a vehicle that improves autonomous driving and vanishing point detection, since both CHOI and SAGONG concern vanishing point detection. Wherein CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAGONG provides systems and methods that improve vanishing-point detection and lane boundary detection models. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAGONG et al. (US 20200125859 A1), Abstract and Paragraph [0109].
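As a concrete illustration of the camera-pose step supplied by SAGONG, under a simple pinhole-camera assumption (not something either reference spells out in these terms) a vertical vanishing-point shift of Δv pixels implies a pitch change of roughly atan(Δv / f):

```python
import math

def pitch_change_from_vp_shift(dv_pixels: float, focal_px: float) -> float:
    """Camera pitch change (radians) implied by a vertical shift of the
    vanishing point by dv_pixels, for a pinhole camera whose focal length
    focal_px is expressed in pixels."""
    return math.atan2(dv_pixels, focal_px)

# Example: a 12 px vertical shift with f = 1200 px is about 0.57 degrees.
print(math.degrees(pitch_change_from_vp_shift(12.0, 1200.0)))
```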
Wherein CHOI’s vehicle having estimate a change amount in a pose of the camera based on the amount of position change of the vanishing point, and estimate a pose of the vehicle depending on the change amount in pose of the camera. The motivation behind the modification would have been to obtain a vehicle that improves autonomous driving and vanishing point detection, since both CHOI and SAGONG concern vanishing point detection. Wherein CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAGONG provides systems and methods that improve vanishing-point detection and lane boundary detection model. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAGONG et al. (US 20200125859 A1), Abstract and Paragraph [0109]. CHOI fails to explicitly teach and change a position of the template, based on the reliability of based on the reliability of the template being low. However, AOKI explicitly teaches and change a position of the template (Fig. 1. Paragraph [0095]-AOKI discloses Step 306: the vertical offset correction unit 132 sets a minimum value ΔY0 in the predetermined range to the vertical moving amount ΔY1 (=ΔY0) of the reference image of the geometry correction modification information storage unit 122 with a vertical moving amount ΔY1 of the reference image read in Step 301 as the center. Herein, in the second and later operations, the vertical offset correction unit 132 sets a value further increased by ΔY from the minimum value in the predetermined range with the vertical moving amount of the reference image as the center to the vertical moving amount ΔY1 (=ΔY1+ΔY) of the reference image of the geometry correction modification information storage unit 122. In paragraph [0096]-AOKI discloses Step 307: with the similar operation to Step 203, the geometry correction unit 125 geometrically corrects the benchmark image and the reference image after capturing, and moves the reference image in the vertical direction by the predetermined interval ΔY in the predetermined range. Please also read paragraph [0052, 0075-0076 and 0108-0109]), based on the reliability of based on the reliability of the template being low (Fig. 1. Paragraph [0098]-AOKI discloses Step 309: the parallax image evaluation unit 131 counts the number of regions determined as valid in the parallax image. In paragraph [0099]-AOKI discloses Step 310: in a case where the vertical moving amount of the reference image is all set for every predetermined interval ΔY in the predetermined range, and Step 306 to Step 309 are performed, the process proceeds to Step 311. If not, the process proceeds to Step 306. Further in paragraph [0068]-AOKI discloses the parallax calculation unit 126 determines whether the parallax is valid or invalid by the following two determination methods. In a case where it is determined that a minimum value of the obtained SAD is equal to or more than a threshold using the retrieved image 404 on the reference image 402 at the same height as the template image 403, the parallax calculation unit 126 determines that the template 403 and the retrieved image 405 are not matched and the parallax of the region is invalid. In a case where the minimum value of the SAD is less than the threshold, it is determined that the parallax is valid. Please also read paragraph [0070]). 
Claims 2 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210256720 A1), hereinafter referenced as CHOI, in view of SAGONG et al. (US 20200125859 A1), hereinafter referenced as SAGONG, and in further view of AOKI et al. (US 20200068186 A1), hereinafter referenced as AOKI, and in further view of AKITA et al. (US 20100027844 A1), hereinafter referenced as AKITA.

Regarding claim 2, CHOI in view of SAGONG explicitly teach the control method of claim 1. CHOI in view of SAGONG fail to explicitly teach wherein the setting of the area around the vanishing point as the template further includes changing a position of the template based on a variance value of the template, and wherein the variance value of the template is determined based on a degree of contrast of the image. However, AKITA explicitly teaches wherein the setting of the area around the vanishing point as the template further includes changing a position of the template based on a variance value of the template (Fig. 6. Paragraph [0142]-AKITA discloses the optical flow calculation unit 4 sets a template around the feature point and searches for an area having high correlativity with this template. In paragraph [0143]-AKITA discloses in the normalized cross correlation technique, a template image T(i, j) having (M_T × N_T) pixels is moved on the pixels present within a search area, thereby to search a point in the template image which point has the maximum correlativity coefficient (correlativity value). Further in paragraph [0150], the grouping unit 5 effects grouping of optical flows belonging to one moving object, under predetermined conditions. In paragraph [0151]-AKITA discloses the first condition or requirement is that extension lines L of the optical flow converge at the single point, i.e. the vanishing point FOE (mfoe) within the predetermined constraint range. Please also read paragraphs [0115, 0129]), and wherein the variance value of the template is determined based on a degree of contrast of the image (Fig. 6. Paragraph [0030]-AKITA discloses the optical flow calculation unit calculates the optical flow with utilizing the correlativity value between two images. There sometimes occurs variation in the image quality, e.g. contrast of image captured in an environment where the image is captured. Therefore, variation occurs also in the correlativity value in the calculation of the optical flow.
As the correlativity value tends to decrease in association with reduction in image quality, optical flow reliability can be calculated based upon this correlativity value. Since the recognition reliability is calculated based on the result of at least one image processing operation, the moving object recognizing apparatus can calculate the recognition reliability with use of the optical flow reliability). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG of having a control method of a vehicle, with the teachings of AKITA of having wherein the setting of the area around the vanishing point as the template further includes changing a position of the template based on a variance value of the template, and wherein the variance value of the template is determined based on a degree of contrast of the image. CHOI's method would thus include, in the setting of the area around the vanishing point as the template, changing a position of the template based on a variance value of the template, wherein the variance value of the template is determined based on a degree of contrast of the image. The motivation behind the modification would have been to obtain a method that improves the tracking of vehicles and camera calibration, since both CHOI and AKITA concern vehicles and image analysis. Wherein CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while AKITA provides systems and methods that improve object recognition and the safety of vehicles. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and AKITA (US 20100027844 A1), Abstract and Paragraph [0002 and 0071].

Regarding claim 16, CHOI in view of SAGONG explicitly teach the vehicle of claim 15. CHOI in view of SAGONG fail to explicitly teach wherein the controller is configured to change a position of the template based on the variance value of the template, and wherein the variance value of the template is determined based on a degree of contrast of the image. However, AKITA explicitly teaches wherein the controller is configured to change a position of the template based on the variance value of the template, and wherein the variance value of the template is determined based on a degree of contrast of the image (Fig. 5. Paragraph [0066]-LEE discloses in operation 420, the vanishing point estimation apparatus may determine a bounding box corresponding to a rear side of the target vehicle. In paragraph [0067]-LEE discloses in operation 430, the vanishing point estimation apparatus may track positions of the objects in a world coordinate system by associating the objects detected in operation 420 and current position coordinates of the objects estimated from images of previous time points that precede the current time point (wherein "associating" means matching an object detected in an image of a current time point to an object estimated from an image of a previous time point). In paragraph [0068]-LEE discloses the vanishing point estimation apparatus may predict positions of second bounding boxes corresponding to the estimated current position coordinates.
In paragraph [0070]-LEE discloses in operation 430, the vanishing point estimation apparatus may match first bounding boxes corresponding to the objects and the second bounding boxes (wherein the positions of the objects are tracked based on matching). Further in paragraph [0071]-LEE discloses in operation 440, the vanishing point estimation apparatus may estimate a vanishing point for each of the objects based on the positions of the objects tracked in operation 430. In paragraph [0073]-LEE discloses in operation 450, the vanishing point estimation apparatus may output the vanishing point estimated for each of the objects in operation 440 (wherein the objects are projected onto the image of the current time point)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG of having a vehicle, with the teachings of AKITA of having wherein the controller is configured to change a position of the template based on the variance value of the template, and wherein the variance value of the template is determined based on a degree of contrast of the image. CHOI's vehicle would thus include a controller configured to change a position of the template based on the variance value of the template, wherein the variance value of the template is determined based on a degree of contrast of the image. The motivation behind the modification would have been to obtain a vehicle that improves the tracking of vehicles and camera calibration, since both CHOI and AKITA concern vehicles and image analysis. Wherein CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while AKITA provides systems and methods that improve object recognition and the safety of vehicles. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and AKITA (US 20100027844 A1), Abstract and Paragraph [0002 and 0071].

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210256720 A1), hereinafter referenced as CHOI, in view of SAGONG et al. (US 20200125859 A1), hereinafter referenced as SAGONG, and in further view of AOKI et al. (US 20200068186 A1), hereinafter referenced as AOKI, and in further view of AKITA et al. (US 20100027844 A1), hereinafter referenced as AKITA, and in further view of SINGH et al. (US 20220388535 A1), hereinafter referenced as SINGH.

Regarding claim 6, CHOI in view of SAGONG and in further view of AOKI and in further view of AKITA explicitly teach the control method of claim 2. CHOI in view of SAGONG fail to explicitly teach wherein the changing of the position of the template includes moving the template upward, based on a driving direction of the vehicle not being recognized or the vehicle going straight. However, SINGH explicitly teaches wherein the changing of the position of the template includes moving the template upward, based on a driving direction of the vehicle not being recognized or the vehicle going straight (Fig. 5. Paragraph [0046]-SINGH discloses returning briefly to FIG. 5, translation 506 of bounding box 504 based on motion data of vehicle 110 may not exactly position translated bounding box 508 on object 502.
If DNN 200 fails to detect an object 502 in cropped image 600 based on translated bounding box 508, the bounding box 504 can be incrementally moved left or right and up or down, starting at the location of first bounding box 504 and moving in the direction of the latitudinal and longitudinal motion data indicated by vehicle 110 sensors 116). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI and in further view of AKITA of having a control method of a vehicle, with the teachings of SINGH of having wherein the changing of the position of the template includes moving the template upward, based on a driving direction of the vehicle not being recognized or the vehicle going straight. CHOI's method would thus include moving the template upward, based on a driving direction of the vehicle not being recognized or the vehicle going straight. The motivation behind the modification would have been to obtain a method that improves autonomous driving, camera calibration and DNN operations, since both CHOI and SINGH concern vehicles and image analysis. Wherein CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SINGH provides systems and methods that improve the training, training data generation and operation of DNNs. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SINGH et al. (US 20220388535 A1), Abstract and Paragraph [0060 and 0065].

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210256720 A1), hereinafter referenced as CHOI, in view of SAGONG et al. (US 20200125859 A1), hereinafter referenced as SAGONG, and in further view of AOKI et al. (US 20200068186 A1), hereinafter referenced as AOKI, and in further view of AKITA et al. (US 20100027844 A1), hereinafter referenced as AKITA, and in further view of LIPCHIN et al. (US 20210019914 A1), hereinafter referenced as LIPCHIN.

Regarding claim 4, CHOI in view of SAGONG and in further view of AOKI and in further view of AKITA explicitly teach the control method of claim 2. CHOI in view of SAGONG fail to explicitly teach wherein the changing of the position of the template includes moving the template according to a slope of a horizontal line based on a roll angle at which the camera is mounted. However, LIPCHIN explicitly teaches wherein the changing of the position of the template (Fig. 6. Paragraph [0117]-LIPCHIN discloses the video analytics module 224 detects a person 302 and identifies a bounding box 408 that encloses the person 302 in the image, which is depicted in FIGS. 4A and 4B. The automatic camera calibration method takes multiple bounding boxes 408 as input and determines the following camera 108 parameters: tilt θ, height h_C, roll ρ (if it is not already defined) and focal length f (if it is not already defined). Further in paragraph [0118], FIG. 4A shows an example bounding box 408 that the video analytics module 224 generates in response to detecting a person 302. In the 2D model depicted in FIG. 3A, this bounding box 408 circumscribes a projection of the rectangle 304 into the image plane (x,y). The projected rectangle 304 is accordingly represented as a trapezoid 410. Please also read paragraphs [0126, 0129 and 0163]) includes moving the template according to a slope of a horizontal line based on a roll angle (Fig. 7.
Paragraph [0118]-LIPCHIN discloses the top and bottom sides 414a,b of the trapezoid 410 are parallel to the horizon line 402. In FIG. 4A, the horizon line 402 has a negative slope, although in different embodiments such as those depicted in FIG. 7, the horizon line 402 may have zero slope or a positive slope. A pair of lines 412 extend from the left and right sides of the trapezoid 410 and converge below the bounding box 408 at a vanishing point 406. FIG. 5A is a detailed view of the bounding box 408 and trapezoid 410, with various vertices highlighted as discussed further below. In at least some example embodiments, the horizon line 402 is assumed to be horizontal so that roll angle can be approximated by either ρ=0 or 180° depending on a chosen system of coordinates. In paragraph [0131]-LIPCHIN discloses in the computational graph 1611, the video analytics module 224 obtains all three camera parameters (tilt, roll, and focal length) (block 1600), and from them determines the horizon line 402 (Equation (4)) and vanishing point 406 (Equation (3)) (block 1604)) at which the camera is mounted (Fig. 1, #108 called a video capture device or a camera. Paragraph [0071]-LIPCHIN discloses referring now to FIG. 1, therein illustrated is a block diagram of connected devices of a video capture and playback system 100. In paragraph [0072]-LIPCHIN discloses the video capture and playback system 100 includes at least one video capture device 108 being operable to capture a plurality of images and produce image data representing the plurality of captured images. In paragraph [0073]-LIPCHIN discloses each video capture device 108 includes at least one image sensor 116 for capturing a plurality of images. The video capture device 108 may be a digital video camera and the image sensor 116 may output captured light as digital data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI and in further view of AKITA of having a control method of a vehicle, with the teachings of LIPCHIN of having wherein the changing of the position of the template includes moving the template according to a slope of a horizontal line based on a roll angle at which the camera is mounted. CHOI's method would thus include moving the template according to a slope of a horizontal line based on a roll angle at which the camera is mounted. The motivation behind the modification would have been to obtain a method that improves autonomous driving and calibration accuracy, since both CHOI and LIPCHIN concern vehicle detection and image analysis. Wherein CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while LIPCHIN provides systems and methods that improve calibration accuracy. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and LIPCHIN et al. (US 20210019914 A1), Abstract and Paragraph [0211].

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210256720 A1), hereinafter referenced as CHOI, in view of SAGONG et al. (US 20200125859 A1), hereinafter referenced as SAGONG, and in further view of AOKI et al. (US 20200068186 A1), hereinafter referenced as AOKI, and in further view of AKITA et al. (US 20100027844 A1), hereinafter referenced as AKITA, and in further view of LIPCHIN et al.
(US 20210019914 A1), hereinafter referenced as LIPCHIN, and in further view of KASAMI (US 20170263129 A1), hereinafter referenced as KASAMI.

Regarding claim 5, CHOI in view of SAGONG and in further view of AOKI and in further view of AKITA and in further view of LIPCHIN explicitly teach the control method of claim 4. CHOI in view of SAGONG fail to explicitly teach wherein the changing of the position of the template includes moving the template according to the slope of the horizontal line in a direction opposite to a driving direction of the vehicle. However, KASAMI explicitly teaches wherein the changing of the position of the template includes moving the template according to the slope of the horizontal line in a direction opposite to a driving direction of the vehicle (Fig. 9. Paragraph [0094]-KASAMI discloses as illustrated in FIG. 9, the searching unit 120 moves a two-dimensional information template 211, which is the search target, within the taken image 200 in which the search is to be performed. The searching unit 120 moves the two-dimensional information template 211 in predetermined units in the horizontal direction within the taken image 200, and further moves the two-dimensional information template 211 in predetermined units in the vertical direction within the taken image 200 (wherein the horizontal direction is in a direction opposite to the vehicle). Further in paragraph [0106]-KASAMI discloses in the state in which the two-dimensional information template 213 has moved to the position illustrated in (d) in FIG. 11, the portion 214b representing the remaining portion after clipping the two-dimensional information template 213 according to the boundary line 220 substantially matches with the image 411a, and the degree of similarity S becomes the highest. Please also see Fig. 10-11). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI and in further view of AKITA and in further view of LIPCHIN of having a control method of a vehicle, with the teachings of KASAMI of having wherein the changing of the position of the template includes moving the template according to the slope of the horizontal line in a direction opposite to a driving direction of the vehicle. CHOI's method would thus include moving the template according to the slope of the horizontal line in a direction opposite to a driving direction of the vehicle. The motivation behind the modification would have been to obtain a method that improves autonomous driving and vehicle identification, since both CHOI and KASAMI concern vehicle detection and image analysis. Wherein CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while KASAMI provides systems and methods that improve the ability to identify a position and travel direction of a surrounding vehicle. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and KASAMI (US 20170263129 A1), Abstract.

Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210256720 A1), hereinafter referenced as CHOI, in view of SAGONG et al. (US 20200125859 A1), hereinafter referenced as SAGONG, and in further view of AOKI et al. (US 20200068186 A1), hereinafter referenced as AOKI, and in further view of LIU et al.
(US 20190103026 A1), hereinafter referenced as LIU. Regarding claim 7, CHOI in view of SAGONG and in further view of AOKI explicitly teach the control method of claim 1. CHOI in view of SAGONG fail to explicitly teach wherein the setting of the area around the vanishing point as the template includes changing a size of the template based on a speed of the vehicle. However, LIU explicitly teaches wherein the setting of the area around the vanishing point as the template (Fig. 2B. Paragraph [0027]-LIU discloses FIGS. 2A and 2B are diagrams illustrating cropping a portion of an image based on a vanishing point. The collision warning system 100 may determine a vanishing point 200 of the image frame based on an intersection of the dashed lines shown in FIG. 2A. In paragraph [0028]-LIU discloses FIG. 2B is a diagram of a cropped image frame 210 captured from a forward-facing field of view of a vehicle 140. FIG. 2B shows cropped portions using a first scale 220 and a second scale 230. The scales may vary in size, dimensions, or other attributes. In paragraph [0029]-LIU discloses bounding box 240 indicates a portion of the image frame 210 including a detected vehicle in the same lane. The bounding box 250 indicates another portion of the image frame 210 including another detected vehicle in an adjacent lane. Bounding boxes may vary in size or dimension based on the corresponding vehicle or another type of detected object or the location of the object relative to the vehicle 140. In paragraph [0030]-LIU discloses the collision warning system 100 may center a scale for cropping about the vanishing point 215 of the image frame 210. The collision warning system 100 selectively tracks one or more objects with corresponding bounding boxes that overlap with the vanishing point of the image frame, which indicates that the tracked objects are likely in the same lane as the vehicle) includes changing a size of the template based on a speed (Fig. 2B. Paragraph [0021]-LIU discloses the client device 110 may include various sensors including one or more image sensors and motion sensors. The motion sensors can capture motion data such as an IMU stream, acceleration, velocity, or bearing of the client device 110, e.g., and by extension, the vehicle 140 to which the client device 110 is positioned or attached. Further in paragraph [0053]-LIU discloses example trajectory features include short-term velocity, mid-term velocity, long-term velocity, bounding box (“B-box”) based velocity (e.g., based on rate of change of the bounding box size as an object becomes closer or further)) of the vehicle (Fig. 2B. Paragraph [0029]-LIU discloses a bounding box increases or decreases in size as the “bounded” detected object moves closer toward or further away from the vehicle 140, respectively. In paragraph [0044]-LIU discloses tracker 330 may scale the bounding box 630 based on a vanishing point 640 or projected lines intersecting the vanishing point 640 in the 2D image frame. The tracker 330 may predict how the bounding box 630 may change as the detected object moves closer toward a provider's vehicle 140. In paragraph [0046]-LIU discloses the tracker 330 determines matches based on an affinity score, which accounts for similarity in appearance (e.g., relative position or dimensions in image frames) between a detected object and trajectory, as well as motion consistency between the detected object and trajectory. Using this process, the tracker 330 may match one or more features across multiple frames.
Additionally in paragraph [0058]-LIU discloses the collision warning system 100 updates dimensions (or size) of the bounding box based on motion of the object). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI of having a control method of a vehicle, with the teachings of LIU of having wherein the setting of the area around the vanishing point as the template includes changing a size of the template based on a speed of the vehicle. CHOI's method, as modified, would thus have the setting of the area around the vanishing point as the template include changing a size of the template based on a speed of the vehicle. The motivation behind the modification would have been to obtain a method that improves autonomous driving and vehicle tracking, since both CHOI and LIU concern vehicle detection and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while LIU provides systems and methods that improve the vehicle tracking efficiency. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and LIU et al. (US 20190103026 A1), Abstract and Paragraph [0030]. Regarding claim 8, CHOI in view of SAGONG and in further view of AOKI explicitly teach the control method of claim 1. CHOI in view of SAGONG fail to explicitly teach wherein the determining of the matching area includes performing the template matching using a normalized cross correlation matching. However, LIU explicitly teaches wherein the determining of the matching area includes performing the template matching using a normalized cross correlation matching (Fig. 2B. Paragraph [0040]-LIU discloses the object detector 320 detects objects in cropped portions of image frames of input image data determined by the cropping engine 310. Additionally, the object detector 320 may perform object detection using image parameters received from the cropping engine 310 including image type or resolution, camera or video frame rate, scale used for cropping, etc. The object detector 320 may use one or more types of object detection models or techniques including normalized cross-correlation (NCC)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI of having a control method of a vehicle, with the teachings of LIU of having wherein the determining of the matching area includes performing the template matching using a normalized cross correlation matching. CHOI's method, as modified, would thus have the determining of the matching area include performing the template matching using a normalized cross correlation matching. The motivation behind the modification would have been to obtain a method that improves autonomous driving and vehicle tracking, since both CHOI and LIU concern vehicle detection and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while LIU provides systems and methods that improve the vehicle tracking efficiency. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and LIU et al. (US 20190103026 A1), Abstract and Paragraph [0030].
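For illustration only (not part of the record): a minimal Python sketch of the generic normalized cross-correlation (NCC) template matching that LIU names, applied to a template taken from the previous frame and searched in the current frame. The OpenCV dependency, function name, and the 0.9 threshold are assumptions made for the sketch, not material from any cited reference.

```python
# Hedged sketch of generic NCC template matching between consecutive frames.
# Assumes grayscale numpy arrays and OpenCV; all names are illustrative.
import cv2
import numpy as np

def match_template_ncc(current_frame: np.ndarray, template: np.ndarray):
    """Return (top_left_xy, score) of the best match of `template`
    (cut from the previous frame) inside `current_frame`."""
    # TM_CCOEFF_NORMED computes zero-mean normalized cross-correlation,
    # so scores fall in [-1, 1], with 1 being a perfect match.
    scores = cv2.matchTemplate(current_frame, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)
    return best_xy, best_score

# Usage: accept the matching area only above a similarity threshold,
# analogous to the correlation-threshold language quoted from CHOI.
# top_left, score = match_template_ncc(frame_t, template_from_frame_t_minus_1)
# if score >= 0.9:  # illustrative threshold, not from any reference
#     ...  # treat the matched region as the matching area
```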
Regarding claim 9, CHOI in view of SAGONG and in further view of AOKI explicitly teach the control method of claim 8. CHOI in view of SAGONG fail to explicitly teach wherein the determining of the matching area includes performing the template matching using the normalized cross correlation matching in the current frame consecutive from the previous frame. However, LIU explicitly teaches wherein the determining of the matching area includes performing the template matching using the normalized cross correlation matching (Fig. 2B. Paragraph [0040]-LIU discloses the object detector 320 detects objects in cropped portions of image frames of input image data determined by the cropping engine 310. The object detector 320 may use one or more types of object detection models or techniques including normalized cross-correlation (NCC)) in the current frame consecutive from the previous frame (Fig. 2B. Paragraph [0046]-LIU discloses using synchronous detection and tracking, the tracker 330 determines matches (e.g., data association) between one or more detected objects and one or more trajectories at a given frame. The tracker 330 determines matches based on an affinity score, which accounts for similarity in appearance (e.g., relative position or dimensions in image frames) between a detected object and trajectory, as well as motion consistency between the detected object and trajectory. The tracker 330 may determine motion consistency based on a measure of intersection over a union of the detected object and trajectory in a constant position-based motion model and a constant velocity-based motion model. The tracker 330 may match one or more features across multiple frames, where the features follow a given predicted trajectory. Example features may include objects such as vehicles, people, or structures, as well as low-level features such as a corner or window of a building, or a portion of a vehicle). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI of having a control method of a vehicle, with the teachings of LIU of having wherein the determining of the matching area includes performing the template matching using the normalized cross correlation matching in the current frame consecutive from the previous frame. CHOI's method, as modified, would thus have the determining of the matching area include performing the template matching using the normalized cross correlation matching in the current frame consecutive from the previous frame. The motivation behind the modification would have been to obtain a method that improves autonomous driving and vehicle tracking, since both CHOI and LIU concern vehicle detection and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while LIU provides systems and methods that improve the vehicle tracking efficiency. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and LIU et al. (US 20190103026 A1), Abstract and Paragraph [0030]. Claims 10-14 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210256720 A1), hereinafter referenced as CHOI in view of SAGONG et al. (US 20200125859 A1), hereinafter referenced as SAGONG and in further view of AOKI et al. (US 20200068186 A1), hereinafter referenced as AOKI and in further view of SAKANO et al.
(US 20140085469 A1), hereinafter referenced as SAKANO. Regarding claim 10, CHOI in view of SAGONG and in further view of AOKI explicitly teach the control method of claim 1, although CHOI explicitly teaches the control method wherein the camera (Fig. 1-3 and 20, #100 and #510 called an image sensor and a sensor, respectively. Paragraph [0054]) is a front camera (Fig. 3. Paragraph [0085]-CHOI discloses referring to FIGS. 1 to 3, the host vehicle 400 may include a vanishing point extraction device 10 and a vehicle controller 410. The vanishing point extraction device 10 may be disposed on the upper end of the host vehicle 400, and the image sensor 100 may photograph the front of the host vehicle 400.) configured to obtain image data for a field of view facing a front of the vehicle (Fig. 1. Paragraph [0089]-CHOI discloses the first image IMG1 is an image obtained (e.g., generated by the image sensor 100) by photographing the front of the host vehicle 400 (see FIG. 3) and/or generating an image of at least a portion of the surrounding environment that is proximate to a front of the host vehicle 400 (e.g., in front of the host vehicle 400) and may include an object OB1 recognized as a vehicle), and wherein the determining of the amount of position change of the vanishing point (Fig. 15, #VP1 and #VP2 called vanishing points. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 (specifically, the vanishing point corrector 315 (see FIG. 1)) may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4) includes: determining a change amount in y-axis of the template based on the amount of position change between the template and the matching area (Fig. 15, #VP1 and #VP2 called vanishing points. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 (specifically, the vanishing point corrector 315 (see FIG. 1)) may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4), and determining a change amount in y-axis of the vanishing point based on a change amount in y-axis (Fig. 15, #VP1 and #VP2 called vanishing points. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 (specifically, the vanishing point corrector 315 (see FIG. 1)) may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4). CHOI in view of SAGONG fail to explicitly teach where a roll slope at which the front camera is mounted is compensated from the change amount in y-axis of the template. However, SAKANO explicitly teaches where a roll slope at which the front camera is mounted is compensated from the change amount in y-axis of the template (Fig. 1.
Paragraph [0080]-SAKANO discloses the calibration processing 408 corrects an error in the position and the pose of each of the cameras (camera 1, camera 2, camera 3, and camera 4) based on their design values so that a non-deviated, ideal bird's-eye-view video is obtained. The position refers to the three-dimensional space coordinates (x, y, z), and the pose refers to rolling, pitching, and yawing. In paragraph [0101]-SAKANO discloses the roll correction processing 601 calibrates the roll angles of the cameras. To calibrate the roll angles, the roll correction processing 601 once sets the slopes of the vertical straight lines L1, L2, L3, and L4, shot by the camera 1, camera 2, camera 3, and camera 4, to the same direction and, after that, changes the roll angles of the camera 1, camera 2, camera 3, and camera 4 by the same amount). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI of having a control method of a vehicle, with the teachings of SAKANO of having where a roll slope at which the front camera is mounted is compensated from the change amount in y-axis of the template. CHOI's method, as modified, would thus include determining a change amount in y-axis of the template based on the amount of position change between the template and the matching area, and determining a change amount in y-axis of the vanishing point based on a change amount in y-axis where a roll slope at which the front camera is mounted is compensated from the change amount in y-axis of the template. The motivation behind the modification would have been to obtain a method that improves autonomous driving and camera calibration, since both CHOI and SAKANO concern vehicles and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAKANO provides systems and methods that improve relative positioning accuracy between a vehicle and a calibration index and simplify the preparation for laying down a calibration index. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAKANO et al. (US 20140085469 A1), Abstract and Paragraph [0019-0024]. Regarding claim 11, CHOI in view of SAGONG and in further view of AOKI and in further view of SAKANO explicitly teaches the control method of claim 10, although CHOI explicitly teaches the change amount in y-axis of the vanishing point (Fig. 4-7 and 15, 17-18, #VP1 and #VP2 called vanishing points. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4. In paragraph [0123]-CHOI discloses the vanishing point extractor 310 may calculate an average of coordinates of a plurality of matching points MP1 to MP4 and correct (e.g., adjust) the coordinates of the vanishing point VP1 of the first image IMG1 using the calculated y-coordinate of the average coordinate in order to establish the vanishing point VP2 as the point having the corrected coordinates in the second image IMG2. Please also read paragraph [0076-0079]).
CHOI in view of SAGONG fail to explicitly teach wherein the estimating of the change amount in the pose of the camera includes estimating an amount of pitch change of the front camera based on the change amount in y-axis of the vanishing point. However, SAKANO explicitly teaches wherein the estimating of the change amount in the pose (Fig. 6. Paragraph [0080]-SAKANO discloses the calibration processing 408 corrects an error in the position and the pose of each of the cameras (camera 1, camera 2, camera 3, and camera 4) based on their design values so that a non-deviated, ideal bird's-eye-view video is obtained. The position refers to the three-dimensional space coordinates (x, y, z), and the pose refers to rolling, pitching, and yawing. The calibration processing 408 produces a calibrated parameter 412 (second camera parameter) for the position and the pose that include an error from the design value of each camera) of the camera (Fig. 2, #1 called camera. Paragraph [0080] (wherein camera 1 is the front camera)) includes estimating an amount of pitch change of the front camera based on the change amount in y-axis (Fig. 6. Paragraph [0098]-SAKANO discloses the calibration processing 408 includes the virtual line generation processing 501, pitch correction processing 502, yaw correction processing 504, height correction processing 505, roll correction processing 601, and translation correction processing 602. The calibration processing 408 sequentially obtains the three camera pose parameters and the camera position parameters, corresponding to the three-dimensional space coordinates, by means of the pitch correction processing 502 (pitch angle), yaw correction processing 504 (yaw angle), height correction processing 505 (camera height), roll correction processing 601 (roll angle), and translation correction processing 602 (roll angle). Please also read paragraph [0088]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI and in further view of SAKANO of having a control method of a vehicle, with the teachings of SAKANO of having wherein the estimating of the change amount in the pose of the camera includes estimating an amount of pitch change of the front camera based on the change amount in y-axis. CHOI's method, as modified, would thus have the estimating of the change amount in the pose of the camera include estimating an amount of pitch change of the front camera based on the change amount in y-axis of the vanishing point. The motivation behind the modification would have been to obtain a method that improves autonomous driving and camera calibration, since both CHOI and SAKANO concern vehicles and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAKANO provides systems and methods that improve relative positioning accuracy between a vehicle and a calibration index and simplify the preparation for laying down a calibration index. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAKANO et al. (US 20140085469 A1), Abstract and Paragraph [0019-0024].
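For illustration only (not part of the record): under a standard pinhole-camera approximation, a pure pitch change shifts the vanishing point vertically by roughly f·tan(Δpitch), so the pitch change can be recovered from the vanishing point's y-shift. This textbook relation and the names below are assumptions made for the sketch, not language from CHOI or SAKANO.

```python
# Hedged sketch: recover an approximate camera pitch change from the
# vertical shift of the vanishing point under a simple pinhole model.
import math

def pitch_change_from_vanishing_point(delta_y_px: float,
                                      focal_length_px: float) -> float:
    """Approximate pitch change (radians) from the vanishing point's
    y-shift in pixels, for a camera with the given focal length in
    pixels. Small-angle behavior: delta_y_px / focal_length_px."""
    return math.atan2(delta_y_px, focal_length_px)
```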
Regarding claim 12, CHOI in view of SAGONG and in further view of AOKI and in further view of SAKANO explicitly teaches the control method of claim 11. CHOI in view of SAGONG fail to explicitly teach wherein the estimating of the pose of the vehicle includes estimating a pitch pose of the vehicle based on the amount of pitch change of the front camera. However, SAKANO explicitly teaches wherein the estimating of the pose (Fig. 6. Paragraph [0080]-SAKANO discloses the calibration processing 408 corrects an error in the position and the pose of each of the cameras (camera 1, camera 2, camera 3, and camera 4) based on their design values so that a non-deviated, ideal bird's-eye-view video is obtained. The position refers to the three-dimensional space coordinates (x, y, z), and the pose refers to rolling, pitching, and yawing. The calibration processing 408 produces a calibrated parameter 412 (second camera parameter) for the position and the pose that include an error from the design value of each camera) of the vehicle (Fig. 1. Paragraph [0033]-SAKANO discloses FIG. 1 is a diagram showing an example of a calibration apparatus. In paragraph [0034]-SAKANO discloses the calibration apparatus 100 images the vehicle's peripheral area, which includes a calibration index provided in advance on the surface of the road where a vehicle is positioned, with the use of a plurality of cameras mounted on the vehicle and, using the plurality of imaged images, calibrates the cameras. This calibration apparatus 100 includes a calibration target 101, a camera 1, a camera 2, a camera 3, and a camera 4 all of which are an imaging unit, a camera interface 102, an operation device 103, a RAM 104 that is a storage unit, a ROM 105 that is a storage unit, an input device 106, and a display device 107) includes estimating a pitch pose of the vehicle based on the amount of pitch change (Fig. 6. Paragraph [0098]-SAKANO discloses the calibration processing 408 includes the virtual line generation processing 501, pitch correction processing 502, yaw correction processing 504, height correction processing 505, roll correction processing 601, and translation correction processing 602. The calibration processing 408 sequentially obtains the three camera pose parameters and the camera position parameters, corresponding to the three-dimensional space coordinates, by means of the pitch correction processing 502 (pitch angle), yaw correction processing 504 (yaw angle), height correction processing 505 (camera height), roll correction processing 601 (roll angle), and translation correction processing 602 (roll angle). Please also read paragraph [0080 and 0088]) of the front camera (Fig. 2, #1 called camera. Paragraph [0080] (wherein camera 1 is the front camera)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI and in further view of SAKANO of having a control method of a vehicle, with the teachings of SAKANO of having wherein the estimating of the pose of the vehicle includes estimating a pitch pose of the vehicle based on the amount of pitch change of the front camera. CHOI's method, as modified, would thus have the estimating of the pose of the vehicle include estimating a pitch pose of the vehicle based on the amount of pitch change of the front camera.
The motivation behind the modification would have been to obtain a method that improves autonomous driving and camera calibration, since both CHOI and SAKANO concern vehicles and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAKANO provides systems and methods that improve relative positioning accuracy between a vehicle and a calibration index and simplify the preparation for laying down a calibration index. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAKANO et al. (US 20140085469 A1), Abstract and Paragraph [0019-0024]. Regarding claim 13, CHOI in view of SAGONG and in further view of AOKI explicitly teaches the control method of claim 1, although CHOI explicitly teaches the fused amount of position change of the vanishing point (Fig. 4. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4. In paragraph [0123]-CHOI discloses the vanishing point extractor 310 may calculate an average of coordinates of a plurality of matching points MP1 to MP4 and correct (e.g., adjust) the coordinates of the vanishing point VP1 of the first image IMG1 using the calculated y-coordinate of the average coordinate in order to establish the vanishing point VP2 as the point having the corrected coordinates in the second image IMG2). CHOI in view of SAGONG fail to explicitly teach the control method wherein the camera is a multi-camera configured to obtain image data for a field of view facing a plurality of directions of the vehicle, and wherein the estimating of the change amount in the pose of the camera includes: fusing an amount of position change of a vanishing point corresponding to each camera of the multi-camera, estimating a change amount in pose of each of the cameras of the multi-camera based on the fused amount of position change of the vanishing point, and estimating the pose of the vehicle based on the estimated change amount in the pose of each of the cameras. However, SAKANO explicitly teaches the control method wherein the camera is a multi-camera configured to obtain image data for a field of view facing a plurality of directions of the vehicle (Fig. 1. Paragraph [0034]-SAKANO discloses calibration apparatus 100 images the vehicle's peripheral area, which includes a calibration index provided in advance on the surface of the road where a vehicle is positioned, with the use of a plurality of cameras mounted on the vehicle and, using the plurality of imaged images, calibrates the cameras), and wherein the estimating of the change amount in the pose of the camera includes: fusing an amount of position change corresponding to each camera of the multi-camera (Fig. 1. Paragraph [0080]-SAKANO discloses the calibration processing 408 corrects an error in the position and the pose of each of the cameras (camera 1, camera 2, camera 3, and camera 4) based on their design values so that a non-deviated, ideal bird's-eye-view video is obtained. The position refers to the three-dimensional space coordinates (x, y, z), and the pose refers to rolling, pitching, and yawing.
The calibration processing 408 produces a calibrated parameter 412 (second camera parameter) for the position and the pose that include an error from the design value of each camera. In paragraph [0081]-SAKANO discloses the processing of map generation processing 409 is the same as that of the map generation processing 402. The map contains the correspondence between each pixel in the videos of the camera 1, camera 2, camera 3, and camera 4 and a pixel in the converted bird's-eye-view camera videos. A composite image is generated by generating a bird's-eye-view video according to this correspondence. The map generation processing 409 uses the calibrated parameter 412 when generating the map and therefore correctly knows the poses of the camera 1, camera 2, camera 3, and camera 4), estimating a change amount in pose of each of the cameras of the multi-camera based on the fused amount of position change, and estimating the pose of the vehicle based on the estimated change amount in the pose of each of the cameras (Fig. 6. Paragraph [0097]-SAKANO discloses FIG. 6 is a flowchart showing the detail of the calibration processing 408. In paragraph [0098]-SAKANO discloses the calibration processing 408 includes the virtual line generation processing 501, pitch correction processing 502, yaw correction processing 504, height correction processing 505, roll correction processing 601, and translation correction processing 602. The calibration processing 408 sequentially obtains the three camera pose parameters and the camera position parameters, corresponding to the three-dimensional space coordinates, by means of the pitch correction processing 502 (pitch angle), yaw correction processing 504 (yaw angle), height correction processing 505 (camera height), roll correction processing 601 (roll angle), and translation correction processing 602 (roll angle). Further in paragraph [0099]-SAKANO discloses the pitch correction processing 502 and the yaw correction processing 504 output the pitch angles and the yaw angles for generating a bird's-eye-view video viewed from directly above, the height correction processing 505 outputs the camera heights for generating a bird's-eye-view video in which the distance between the straight lines is equal to that in the calibration target 101, the roll correction processing 601 outputs the roll angles considering the vehicle's angle-parking components and the camera roll angles, and the translation correction processing 602 outputs the camera translation positions). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI of having a control method of a vehicle, with the teachings of SAKANO of having the control method wherein the camera is a multi-camera configured to obtain image data for a field of view facing a plurality of directions of the vehicle, and wherein the estimating of the change amount in the pose of the camera includes: fusing an amount of position change corresponding to each camera of the multi-camera, estimating a change amount in pose of each of the cameras of the multi-camera based on the fused amount of position change, and estimating the pose of the vehicle based on the estimated change amount in the pose of each of the cameras.
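For illustration only (not part of the record): one plausible reading of the claimed "fusing" step is a reliability-weighted combination of the per-camera vanishing-point shifts. The weighting scheme and all names below are assumptions made for the sketch, not material from CHOI or SAKANO.

```python
# Hedged sketch of "fusing" per-camera vanishing-point position changes
# via a weighted average; purely illustrative, not from any reference.

def fuse_vanishing_point_shifts(shifts_px, weights=None):
    """Weighted average of per-camera vanishing-point shift amounts.

    shifts_px: list of (dx, dy) shifts, one per camera of the multi-camera.
    weights: optional per-camera reliabilities; defaults to equal weights.
    """
    if weights is None:
        weights = [1.0] * len(shifts_px)
    total = sum(weights)
    fused_dx = sum(w * dx for w, (dx, _) in zip(weights, shifts_px)) / total
    fused_dy = sum(w * dy for w, (_, dy) in zip(weights, shifts_px)) / total
    return fused_dx, fused_dy
```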
CHOI's method, as modified, would thus be a control method wherein the camera is a multi-camera configured to obtain image data for a field of view facing a plurality of directions of the vehicle, and wherein the estimating of the change amount in the pose of the camera includes: fusing an amount of position change of a vanishing point corresponding to each camera of the multi-camera, estimating a change amount in pose of each of the cameras of the multi-camera based on the fused amount of position change of the vanishing point, and estimating the pose of the vehicle based on the estimated change amount in the pose of each of the cameras. The motivation behind the modification would have been to obtain a method that improves autonomous driving and camera calibration, since both CHOI and SAKANO concern vehicles and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAKANO provides systems and methods that improve relative positioning accuracy between a vehicle and a calibration index and simplify the preparation for laying down a calibration index. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAKANO et al. (US 20140085469 A1), Abstract and Paragraph [0019-0024]. Regarding claim 14, CHOI in view of SAGONG and in further view of AOKI and in further view of SAKANO explicitly teach the control method of claim 13. CHOI fails to explicitly teach wherein the estimating of the pose of the vehicle includes estimating the pose of the vehicle as at least one of rolling, pitching, yawing, height, or going straight. However, SAKANO explicitly teaches wherein the estimating of the pose of the vehicle includes estimating the pose of the vehicle as at least one of rolling, pitching, yawing, height, or going straight (Fig. 6. Paragraph [0034]-SAKANO discloses a calibration apparatus 100 images the vehicle's peripheral area, which includes a calibration index provided in advance on the surface of the road where a vehicle is positioned, with the use of a plurality of cameras mounted on the vehicle and, using the plurality of imaged images, calibrates the cameras. This calibration apparatus 100 includes a calibration target 101, a camera 1, a camera 2, a camera 3, and a camera 4 all of which are an imaging unit, a camera interface 102, an operation device 103, a RAM 104 that is a storage unit, a ROM 105 that is a storage unit, an input device 106, and a display device 107. In paragraph [0080]-SAKANO discloses the calibration processing 408 corrects an error in the position and the pose of each of the cameras (camera 1, camera 2, camera 3, and camera 4) based on their design values so that a non-deviated, ideal bird's-eye-view video is obtained. The position refers to the three-dimensional space coordinates (x, y, z), and the pose refers to rolling, pitching, and yawing. Further in paragraph [0093]-SAKANO discloses the height correction processing 505 calibrates the camera heights. The height correction processing 505 calculates the camera height that makes each of these distances equal to the distance between the virtual straight lines formed by the feature points).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI and in further view of SAKANO of having a vehicle, with the teachings of SAKANO of having wherein the estimating of the pose of the vehicle includes estimating the pose of the vehicle as at least one of rolling, pitching, yawing, height, or going straight. CHOI's method, as modified, would thus have the estimating of the pose of the vehicle include estimating the pose of the vehicle as at least one of rolling, pitching, yawing, height, or going straight. The motivation behind the modification would have been to obtain a method that improves autonomous driving and camera calibration, since both CHOI and SAKANO concern vehicles and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAKANO provides systems and methods that improve relative positioning accuracy between a vehicle and a calibration index and simplify the preparation for laying down a calibration index. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAKANO et al. (US 20140085469 A1), Abstract and Paragraph [0019-0024]. Regarding claim 18, CHOI in view of SAGONG and in further view of AOKI explicitly teach the vehicle of claim 15. CHOI further teaches the vehicle wherein the camera is a front camera (Fig. 1-3 and 20, #100 and #510 called an image sensor and a sensor, respectively. Paragraph [0054]. Further in paragraph [0085]-CHOI discloses referring to FIGS. 1 to 3, the host vehicle 400 may include a vanishing point extraction device 10 and a vehicle controller 410. The vanishing point extraction device 10 may be disposed on the upper end of the host vehicle 400, and the image sensor 100 may photograph the front of the host vehicle 400) configured to obtain image data for a field of view facing a front of the vehicle (Fig. 1. Paragraph [0089]-CHOI discloses the first image IMG1 is an image obtained (e.g., generated by the image sensor 100) by photographing the front of the host vehicle 400 (see FIG. 3) and/or generating an image of at least a portion of the surrounding environment that is proximate to a front of the host vehicle 400 (e.g., in front of the host vehicle 400) and may include an object OB1 recognized as a vehicle), and wherein the controller is configured to: determine a change amount in y-axis of the template (Fig. 4. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4.
In paragraph [0123]-CHOI discloses the vanishing point extractor 310 may calculate an average of coordinates of a plurality of matching points MP1 to MP4 and correct (e.g., adjust) the coordinates of the vanishing point VP1 of the first image IMG1 using the calculated y-coordinate of the average coordinate in order to establish the vanishing point VP2 as the point having the corrected coordinates in the second image IMG2) based on the amount of position change between the template and the matching area (Fig. 4. Paragraph [0076]-CHOI discloses the matching point extractor 313 may obtain a plurality of first templates respectively corresponding to the plurality of sample points SP1 to SPn from the first image IMG1. In paragraph [0077]-CHOI discloses the matching point extractor 313 may compare the plurality of first templates of the first image IMG1 and the second image IMG2, identify an area similar to at least one of the plurality of first templates in (e.g., of) the second image IMG2, and obtain at least one area identified above from the second image IMG2 as a second template (wherein second image #IMG2 is the current frame). The matching point extractor 313 may extract at least one of matching points MP1 to MPm from at least one second template. The matching point extractor 313 may obtain a central point (e.g., center point) of each of the second templates as a matching point. Further in paragraph [0078]-CHOI discloses templates, or the like that are determined to be “similar” may refer to separate areas of one or more images that have a determined correlation that is greater than a correlation value threshold). Although CHOI explicitly teaches determine a change amount in y-axis of the vanishing point based on a change amount in y-axis (Fig. 4. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4. In paragraph [0123]-CHOI discloses the vanishing point extractor 310 may calculate an average of coordinates of a plurality of matching points MP1 to MP4 and correct (e.g., adjust) the coordinates of the vanishing point VP1 of the first image IMG1 using the calculated y-coordinate of the average coordinate in order to establish the vanishing point VP2 as the point having the corrected coordinates in the second image IMG2). CHOI fails to explicitly teach determine a change amount in y-axis of the vanishing point based on a change amount in y-axis where a roll slope at which the front camera is mounted is compensated from the change amount in y-axis of the template, estimate an amount of pitch change of the front camera based on the change amount in y-axis of the vanishing point, and estimate a pitch pose of the vehicle based on the amount of pitch change of the front camera. However, SAKANO explicitly teaches determine a change amount in y-axis (Fig. 4. Paragraph [0080]-SAKANO discloses the calibration processing 408 corrects an error in the position and the pose of each of the cameras (camera 1, camera 2, camera 3, and camera 4) based on their design values so that a non-deviated, ideal bird's-eye-view video is obtained. 
The position refers to the three-dimensional space coordinates (x, y, z), and the pose refers to rolling, pitching, and yawing. The calibration processing 408 produces a calibrated parameter 412 (second camera parameter) for the position and the pose that include an error from the design value of each camera) where a roll slope at which the front camera (Fig. 3. Paragraph [0055]-SAKANO discloses FIG. 3 is a diagram showing the positioning state of a vehicle 301 with respect to the calibration target 101. The camera 1 is mounted on the front side of the vehicle 301, the camera 2 on the rear side, the camera 3 on the left side, and the camera 4 on the right side) is mounted is compensated from the change amount in y-axis of the template (Fig. 4. Paragraph [0062]-SAKANO discloses video acquisition processing 401 acquires the video signals, generated by shooting the calibration target 101 by the camera 1, camera 2, camera 3, and camera 4, from the camera interface 102. Further in paragraph [0098]-SAKANO discloses the calibration processing 408 includes the virtual line generation processing 501, pitch correction processing 502, yaw correction processing 504, height correction processing 505, roll correction processing 601, and translation correction processing 602. The calibration processing 408 sequentially obtains the three camera pose parameters and the camera position parameters, corresponding to the three-dimensional space coordinates, by means of the pitch correction processing 502 (pitch angle), yaw correction processing 504 (yaw angle), height correction processing 505 (camera height), roll correction processing 601 (roll angle), and translation correction processing 602 (roll angle). In paragraph [0101]-SAKANO discloses the roll correction processing 601 calibrates the roll angles of the cameras. While performing this processing, the roll correction processing 601 finds the roll angles that minimize an error in straight lines L1, L2, L3, and L4 in the boundary parts of the cameras in the bird's-eye-view video), estimate an amount of pitch change of the front camera, and estimate a pitch pose of the vehicle based on the amount of pitch change of the front camera (Fig. 4. Paragraph [0088]-SAKANO discloses pitch correction processing 502 calibrates the pitch angles of the cameras. This processing calculates the pitch angles of the camera 1, camera 2, camera 3, and camera 4 so that the parallel straight lines in the calibration target 101 (that is, straight lines L1, L2, L3, and L4 and straight lines L5, L6, L7, and L8) become parallel. In paragraph [0089]-SAKANO discloses the pitch correction processing 502 performs the calculation, for example, for the camera 1 and camera 2 as follows. An error function is designed that produces the minimum value when the angle between the lines of each of all pairs, formed by straight lines L1, L2, L3, and L4, is 0 degree and produces a larger value as the angle deviates from 0 degree. By repetitively changing the pitch angles of the cameras using the optimization technique so that the error function produces the minimum value, the pitch angles that make L1, L2, L3, and L4 parallel, that is, the ideal pitch angles that change the videos to the bird's-eye view videos, can be obtained).
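For illustration only (not part of the record): one plausible reading of the roll-compensation limitation is that a horizon tilted by the mounting roll angle contributes an apparent vertical shift of about x·tan(roll) at the template's horizontal offset, which is subtracted before the residual y-change is attributed to pitch. The geometry and all names below are assumptions made for the sketch, not material from the cited references.

```python
# Hedged sketch of compensating the template's y-shift for the camera's
# mounting roll before estimating pitch; illustrative geometry only.
import math

def compensate_template_dy_for_roll(delta_y_template_px: float,
                                    template_x_offset_px: float,
                                    roll_rad: float) -> float:
    """Remove the vertical shift that a roll-tilted horizon contributes
    at the template's horizontal offset from the principal point,
    leaving the residual y-change to be attributed to pitch."""
    # A horizon tilted by `roll_rad` rises or falls by x * tan(roll)
    # across the image, so subtract that component at offset x.
    return delta_y_template_px - template_x_offset_px * math.tan(roll_rad)
```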
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI of having a vehicle, with the teachings of SAKANO of having determine a change amount in y-axis where a roll slope at which the front camera is mounted is compensated from the change amount in y-axis of the template, estimate an amount of pitch change of the front camera, and estimate a pitch pose of the vehicle based on the amount of pitch change of the front camera. CHOI's vehicle, as modified, would thus determine a change amount in y-axis of the template based on the amount of position change between the template and the matching area, determine a change amount in y-axis of the vanishing point based on a change amount in y-axis where a roll slope at which the front camera is mounted is compensated from the change amount in y-axis of the template, estimate an amount of pitch change of the front camera based on the change amount in y-axis of the vanishing point, and estimate a pitch pose of the vehicle based on the amount of pitch change of the front camera. The motivation behind the modification would have been to obtain a vehicle that improves autonomous driving and camera calibration, since both CHOI and SAKANO concern vehicles and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAKANO provides systems and methods that improve relative positioning accuracy between a vehicle and a calibration index and simplify the preparation for laying down a calibration index. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAKANO et al. (US 20140085469 A1), Abstract and Paragraph [0019-0024]. Regarding claim 19, CHOI in view of SAGONG and in further view of AOKI explicitly teaches the vehicle of claim 15, although CHOI explicitly teaches fuse an amount of position change of a vanishing point (Fig. 4. Paragraph [0122]-CHOI discloses referring to FIG. 15, the vanishing point extractor 310 may obtain a vanishing point VP2 of the second image IMG2 by (e.g., based on) correcting the vanishing point VP1 of the first image IMG1 (e.g., adjusting coordinates of the vanishing point VP1 in the second image to new, corrected coordinates in the second image IMG2 to establish the vanishing point VP2) using a plurality of matching points MP1 to MP4. In paragraph [0123]-CHOI discloses the vanishing point extractor 310 may calculate an average of coordinates of a plurality of matching points MP1 to MP4 and correct (e.g., adjust) the coordinates of the vanishing point VP1 of the first image IMG1 using the calculated y-coordinate of the average coordinate in order to establish the vanishing point VP2 as the point having the corrected coordinates in the second image IMG2). CHOI in view of SAGONG fails to explicitly teach wherein the camera is a multi-camera configured to obtain image data for a field of view facing a plurality of directions of the vehicle, and wherein the controller is configured to: fuse an amount of position change of a vanishing point corresponding to each camera of the multi-camera, estimate a change amount in pose of each of the cameras of the multi-camera based on the fused amount of position change of the vanishing point, and estimate the pose of the vehicle based on the estimated change amount in the pose of each of the cameras.
However, SAKANO explicitly teaches the vehicle (Fig. 1. Paragraph [0034]-SAKANO discloses calibration apparatus 100 images the vehicle's peripheral area, which includes a calibration index provided in advance on the surface of the road where a vehicle is positioned, with the use of a plurality of cameras mounted on the vehicle and, using the plurality of imaged images, calibrates the cameras) wherein the camera is a multi-camera configured to obtain image data for a field of view facing a plurality of directions of the vehicle (Fig. 3, #1-4 called cameras. Paragraph [0055]-SAKANO discloses FIG. 3 is a diagram showing the positioning state of a vehicle 301 with respect to the calibration target 101. The camera 1 is mounted on the front side of the vehicle 301, the camera 2 on the rear side, the camera 3 on the left side, and the camera 4 on the right side), and wherein the controller (Fig. 1. Paragraph [0034]-SAKANO discloses this calibration apparatus 100 includes a calibration target 101, a camera 1, a camera 2, a camera 3, and a camera 4 all of which are an imaging unit, a camera interface 102, an operation device 103, a RAM 104 that is a storage unit, a ROM 105 that is a storage unit, an input device 106, and a display device 107) is configured to: fuse an amount of position change corresponding to each camera of the multi-camera (Fig. 4. Paragraph [0080]-SAKANO discloses the calibration processing 408 produces a calibrated parameter 412 (second camera parameter) for the position and the pose that include an error from the design value of each camera. In paragraph [0081]-SAKANO discloses the processing of map generation processing 409 is the same as that of the map generation processing 402. The map contains the correspondence between each pixel in the videos of the camera 1, camera 2, camera 3, and camera 4 and a pixel in the converted bird's-eye-view camera videos. A composite image is generated by generating a bird's-eye-view video according to this correspondence. The map generation processing 409 uses the calibrated parameter 412 when generating the map and therefore correctly knows the poses of the camera 1, camera 2, camera 3, and camera 4), estimate a change amount in pose of each of the cameras of the multi-camera based on the fused amount of position change (Fig. 6. Paragraph [0080]-SAKANO discloses the calibration processing 408 corrects an error in the position and the pose of each of the cameras (camera 1, camera 2, camera 3, and camera 4) based on their design values so that a non-deviated, ideal bird's-eye-view video is obtained. The position refers to the three-dimensional space coordinates (x, y, z), and the pose refers to rolling, pitching, and yawing), and estimate the pose of the vehicle based on the estimated change amount in the pose of each of the cameras (Fig. 6. Paragraph [0097]-SAKANO discloses FIG. 6 is a flowchart showing the detail of the calibration processing 408. In paragraph [0098]-SAKANO discloses the calibration processing 408 includes the virtual line generation processing 501, pitch correction processing 502, yaw correction processing 504, height correction processing 505, roll correction processing 601, and translation correction processing 602.
The calibration processing 408 sequentially obtains the three camera pose parameters and the camera position parameters, corresponding to the three-dimensional space coordinates, by means of the pitch correction processing 502 (pitch angle), yaw correction processing 504 (yaw angle), height correction processing 505 (camera height), roll correction processing 601 (roll angle), and translation correction processing 602 (roll angle)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI of having a vehicle, with the teachings of SAKANO of having the vehicle wherein the camera is a multi-camera configured to obtain image data for a field of view facing a plurality of directions of the vehicle, and wherein the controller is configured to: fuse an amount of position change of a vanishing point corresponding to each camera of the multi-camera, estimate a change amount in pose of each of the cameras of the multi-camera based on the fused amount of position change of the vanishing point, and estimate the pose of the vehicle based on the estimated change amount in the pose of each of the cameras. CHOI's vehicle, as modified, would thus be a vehicle wherein the camera is a multi-camera configured to obtain image data for a field of view facing a plurality of directions of the vehicle, and wherein the controller is configured to: fuse an amount of position change of a vanishing point corresponding to each camera of the multi-camera, estimate a change amount in pose of each of the cameras of the multi-camera based on the fused amount of position change of the vanishing point, and estimate the pose of the vehicle based on the estimated change amount in the pose of each of the cameras. The motivation behind the modification would have been to obtain a vehicle that improves autonomous driving and camera calibration, since both CHOI and SAKANO concern vehicles and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAKANO provides systems and methods that improve relative positioning accuracy between a vehicle and a calibration index and simplify the preparation for laying down a calibration index. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAKANO et al. (US 20140085469 A1), Abstract and Paragraph [0019-0024]. Regarding claim 20, CHOI in view of SAGONG and in further view of AOKI and in further view of SAKANO explicitly teaches the vehicle of claim 19. CHOI in view of SAGONG fails to explicitly teach wherein the controller is configured for estimating the pose of the vehicle as at least one of rolling, pitching, yawing, height, or going straight. However, SAKANO explicitly teaches wherein the controller is configured for estimating the pose of the vehicle as at least one of rolling, pitching, yawing, height, or going straight (Fig. 6. Paragraph [0080]-SAKANO discloses the calibration processing 408 corrects an error in the position and the pose of each of the cameras (camera 1, camera 2, camera 3, and camera 4) based on their design values so that a non-deviated, ideal bird's-eye-view video is obtained. The position refers to the three-dimensional space coordinates (x, y, z), and the pose refers to rolling, pitching, and yawing.
The calibration processing 408 produces a calibrated parameter 412 (second camera parameter) for the position and the pose that include an error from the design value of each camera. Further in paragraph [0097]-SAKANO discloses FIG. 6 is a flowchart showing the detail of the calibration processing 408. In paragraph [0098]-SAKANO discloses the calibration processing 408 includes the virtual line generation processing 501, pitch correction processing 502, yaw correction processing 504, height correction processing 505, roll correction processing 601, and translation correction processing 602. The calibration processing 408 sequentially obtains the three camera pose parameters and the camera position parameters, corresponding to the three-dimensional space coordinates, by means of the pitch correction processing 502 (pitch angle), yaw correction processing 504 (yaw angle), height correction processing 505 (camera height), roll correction processing 601 (roll angle), and translation correction processing 602 (roll angle)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI and in further view of SAKANO of having a vehicle, with the teachings of SAKANO of having wherein the controller is configured for estimating the pose of the vehicle as at least one of rolling, pitching, yawing, height, or going straight. CHOI's vehicle, as modified, would thus have the controller configured for estimating the pose of the vehicle as at least one of rolling, pitching, yawing, height, or going straight. The motivation behind the modification would have been to obtain a vehicle that improves autonomous driving and camera calibration, since both CHOI and SAKANO concern vehicles and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference actual movement trajectory, while SAKANO provides systems and methods that improve relative positioning accuracy between a vehicle and a calibration index and simplify the preparation for laying down a calibration index. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145] and SAKANO et al. (US 20140085469 A1), Abstract and Paragraph [0019-0024]. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210256720 A1), hereinafter referenced as CHOI in view of SAGONG et al. (US 20200125859 A1), hereinafter referenced as SAGONG and in further view of AOKI et al. (US 20200068186 A1), hereinafter referenced as AOKI and in further view of AKITA et al. (US 20100027844 A1), hereinafter referenced as AKITA and in further view of TAM et al. (US 20230347881 A9), hereinafter referenced as TAM. Regarding claim 17, CHOI in view of SAGONG and in further view of AOKI and in further view of AKITA explicitly teach the vehicle of claim 16. CHOI in view of SAGONG fail to explicitly teach wherein the controller is configured for determining a movement direction of the template based on a driving direction of the vehicle and a roll angle at which the camera is mounted, and for moving the template in the determined movement direction. However, TAM explicitly teaches wherein the controller is configured for determining a movement direction of the template based on a driving direction of the vehicle (Fig. 1, #100 called a vehicle.
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over CHOI (US 20210256720 A1), hereinafter referenced as CHOI, in view of SAGONG et al. (US 20200125859 A1), hereinafter referenced as SAGONG, in further view of AOKI et al. (US 20200068186 A1), hereinafter referenced as AOKI, in further view of AKITA et al. (US 20100027844 A1), hereinafter referenced as AKITA, and in further view of TAM et al. (US 20230347881 A9), hereinafter referenced as TAM.

Regarding claim 17, CHOI in view of SAGONG and in further view of AOKI and in further view of AKITA teaches the vehicle of claim 16. CHOI in view of SAGONG fails to explicitly teach wherein the controller is configured for determining a movement direction of the template based on a driving direction of the vehicle and a roll angle at which the camera is mounted, and for moving the template in the determined movement direction. However, TAM explicitly teaches wherein the controller is configured for determining a movement direction of the template based on a driving direction of the vehicle (Fig. 1, #100 called a vehicle. Paragraph [0059]-TAM discloses the vehicle, such as the vehicle 100, may be an autonomous vehicle or a semi-autonomous vehicle) and a roll angle at which the camera is mounted (Fig. 1, #126 called sensors. Paragraph [0055]. Further, in paragraph [0052]-TAM discloses the location unit 116 may determine geolocation information, including but not limited to longitude, latitude, elevation, direction of travel, or speed, of the vehicle 100. In paragraph [0055]-TAM discloses the sensor 126 includes sensors that are operable to obtain information regarding the physical environment surrounding the vehicle 100. One or more sensors detect road geometry and obstacles, such as fixed obstacles, vehicles, cyclists, and pedestrians. The sensor 126 can be or include one or more video cameras. The sensor 126 and the location unit 116 may be combined. Additionally, in paragraph [0098]-TAM discloses a pose can be defined by variables such as coordinates (x, y, z), roll angle, pitch angle, and/or yaw angle), and for moving the template in the determined movement direction (Fig. 9. Paragraph [0142]-TAM discloses that, as uncertainty may be associated with the sensor data and/or the perceived world objects, a bounding box 905 that expands the actual/real size of the hazard object 906 can be associated with the hazard object 906. The shape representing the uncertainty may be any shape and is not limited to a rectangular box. A lateral pose uncertainty 908 (denoted Δy_p) defines an initially determined (e.g., perceived, identified, set, etc.) lateral size of the object 906. The size of the bounding box of uncertainty may be a function of one or more of range uncertainty, angle (i.e., pose, orientation, etc.) uncertainty, or velocity uncertainty. With respect to range uncertainty (i.e., uncertainty with respect to how far the hazard object 906 is from the vehicle 902), the vehicle 902 (more specifically, a world model 302) can assign a longer bounding box of uncertainty to an object that is perceived to be farther away than to a closer object. Angle uncertainty (i.e., uncertainty with respect to the orientation/pose of the hazard object 906) can result in assigning different widths to the bounding box of uncertainty. In paragraph [0181]-TAM discloses that at 1102, the dynamic properties of the dynamic hazard objects are determined. These properties may include the pose (e.g., heading) and the speed (i.e., velocity) of the dynamic hazard object. Please also see Fig. 5 and paragraph [0156]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHOI in view of SAGONG and in further view of AOKI and in further view of AKITA of having a vehicle with the teachings of TAM of having wherein the controller is configured for determining a movement direction of the template based on a driving direction of the vehicle and a roll angle at which the camera is mounted, and for moving the template in the determined movement direction. The motivation behind the modification would have been to obtain a vehicle that improves autonomous driving safety and camera calibration, since both CHOI and TAM concern vehicles and image analysis. CHOI provides systems and methods that improve the ability to calibrate cameras and cross-reference the actual movement trajectory, while TAM provides systems and methods that improve autonomous vehicle safety by circumventing and correcting driver errors. Please see CHOI (US 20210256720 A1), Abstract and Paragraph [0145], and TAM et al. (US 20230347881 A9), Abstract and Paragraphs [0003] and [0059].
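The claim 17 limitation (movement direction of the template from the driving direction plus the camera's mounting roll) can be sketched compactly. The snippet below is a hypothetical illustration: driving forward is assumed to move the template up in the image and reversing to move it down, and that base direction is rotated by the roll angle. The names and the forward/reverse convention are assumptions, not taken from the application or TAM.

```python
import math

def template_move_direction(driving_forward, roll_rad):
    """Unit vector (du, dv) in image coordinates for moving the
    template: a base direction chosen from the driving direction,
    rotated in the image plane by the camera's mounting roll."""
    base = (0.0, -1.0) if driving_forward else (0.0, 1.0)
    c, s = math.cos(roll_rad), math.sin(roll_rad)
    return (base[0] * c - base[1] * s,
            base[0] * s + base[1] * c)

def move_template(top_left, step_px, driving_forward, roll_rad):
    # Shift the template's top-left corner one step along the
    # roll-corrected movement direction.
    du, dv = template_move_direction(driving_forward, roll_rad)
    return top_left[0] + step_px * du, top_left[1] + step_px * dv

# Example: vehicle driving forward, camera mounted with a 2-degree roll.
print(move_template((320.0, 240.0), step_px=4.0,
                    driving_forward=True, roll_rad=math.radians(2.0)))
```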
Conclusion

Listed below is the prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure.

OKADA et al. (US 20130286205 A1): An approaching object detection device that detects moving objects approaching a vehicle on the basis of images generated by an image pickup unit that captures images of the surroundings of the vehicle at certain time intervals. The device includes a processor and a memory storing instructions which, when executed by the processor, cause the processor to execute: detecting moving object regions that each include a moving object from an image; obtaining a moving direction of each of the moving object regions; and determining whether or not the moving object included in each of the moving object regions is approaching the vehicle on the basis of at least either an angle between the moving direction of each of the moving object regions in the image and a horizon in the image, or a ratio of an area of a subregion. Please see Fig. 5-7 and Abstract.

KIM et al. (US 20200250470 A1): A method for enhancing the accuracy of object distance estimation based on a subject camera by performing pitch calibration of the subject camera more precisely with additional information acquired through V2V communication. The method includes steps of: (a) a computing device performing (i) a process of instructing an initial pitch calibration module to apply a pitch calculation operation to the reference image, to thereby generate an initial estimated pitch, and (ii) a process of instructing an object detection network to apply a neural network operation to the reference image, to thereby generate reference object detection information; and (b) the computing device instructing an adjusting pitch calibration module to (i) select a target object, (ii) calculate an estimated target height of the target object, (iii) calculate an error corresponding to the initial estimated pitch, and (iv) determine an adjusted estimated pitch of the subject camera by using the error. Please see Fig. 3-4 and Abstract.

COX et al. (US 20220270358 A1): Devices, systems, and methods for a vehicular sensor system that detects and classifies objects, determines a vanishing point and aspect ratios (which enable detecting a misalignment of a lens focus and determining quality metrics), and calibrates the sensor system. Please see Fig. 3, 5, 7-8 and Abstract.

UEDA et al. (US 20220036099 A1): A moving body obstruction detection device including a detection section that detects a predetermined moving body within an image captured by an imaging section provided at a vehicle, and an inferring section that infers a moving body state relating to the moving body crossing a road, based on a position of a bounding box that surrounds the detected moving body. Please see Fig. 2, 4 and Abstract.

Jaehnisch et al. (US 10424081 B2): In a method and an apparatus for calibrating a camera system of a motor vehicle, in which the calibration parameters comprise the rotation angle, pitch angle, yaw angle, and roll angle as well as the height of the camera above the road, the rotation angle is determined by ascertaining the vanishing point from a first optical flow between a first and a second successive camera image, and the height of the camera is determined from a second optical flow between a first and a second successive camera image. To determine the first optical flow, a regular grid is placed over the first camera image, correspondences of the regular grid are searched for in the second camera image, and the first optical flow is determined from the movement of the grid over the camera images (a toy sketch of this idea follows the list). Please see Fig. 1-3 and Abstract.

HSIEN et al. (US 20140327765 A1): A camera image calibrating system applicable to a transportation vehicle, including at least one image capturing unit, direction sensing units, and a processing unit. The image capturing unit is disposed on the transportation vehicle at a given height to preview an image. The direction sensing units are disposed on the transportation vehicle and the image capturing unit to obtain a vehicle directional angle and an image capturing directional angle. The processing unit calculates an image transformation relationship and causes the image to comply with an image preset condition. In a static calibration procedure, the processing unit determines an offset angle of the image capturing unit according to the image, the vehicle directional angle, and the image capturing directional angle, and further calculates the image transformation relationship according to the height, the offset angle, and the image. A method of calibrating camera images is also provided. Please see Fig. 1-3 and Abstract.
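Of the references above, Jaehnisch's determination of the vanishing point from grid-sampled optical flow is the most directly algorithmic, so here is a toy Python sketch of that idea under stated assumptions: each flow vector defines a line through its grid point, and the vanishing point (focus of expansion) is recovered as the least-squares intersection of those lines. The grid, the flow values, and the least-squares formulation are illustrative, not Jaehnisch's actual implementation.

```python
import math

def vanishing_point(points, flows):
    """Least-squares intersection of the lines p + t*d defined by grid
    points p and their optical-flow directions d: minimizes the summed
    squared perpendicular distance from the solution to each line."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (fx, fy) in zip(points, flows):
        n = math.hypot(fx, fy)
        dx, dy = fx / n, fy / n
        # Projector onto the line's normal direction: I - d d^T.
        m11, m12, m22 = 1.0 - dx * dx, -dx * dy, 1.0 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12  # assumes the flows are not all parallel
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)

# Toy flow field expanding away from (320, 240), as under forward motion.
pts = [(100.0, 100.0), (500.0, 120.0), (150.0, 400.0), (520.0, 380.0)]
flo = [(px - 320.0, py - 240.0) for px, py in pts]
print(vanishing_point(pts, flo))  # approximately (320.0, 240.0)
```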
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached Monday-Friday, 9:00 a.m. - 6:00 p.m. ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON TIMOTHY BONANSINGA/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Jun 02, 2023
Application Filed
Aug 25, 2025
Non-Final Rejection — §103
Nov 28, 2025
Response Filed
Jan 31, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555249
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM FOR SUPPORTING VIRTUAL GOLF SIMULATION
2y 5m to grant · Granted Feb 17, 2026
Patent 12548171
INFORMATION PROCESSING APPARATUS, METHOD AND MEDIUM
2y 5m to grant · Granted Feb 10, 2026
Patent 12541822
METHOD AND APPARATUS OF PROCESSING IMAGE, COMPUTING DEVICE, AND MEDIUM
2y 5m to grant · Granted Feb 03, 2026
Patent 12505503
IMAGE ENHANCEMENT
2y 5m to grant · Granted Dec 23, 2025
Patent 12482106
METHOD AND ELECTRONIC DEVICE FOR SEGMENTING OBJECTS IN SCENE
2y 5m to grant · Granted Nov 25, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+33.3%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
