Prosecution Insights
Last updated: April 19, 2026
Application No. 18/123,824

VEHICLE AND METHOD OF CONTROLLING THE SAME

Final Rejection §103
Filed: Mar 20, 2023
Examiner: DHOOGE, DEVIN J
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: Kia Corporation
OA Round: 2 (Final)

Grant probability: 70% (Favorable)
Expected OA rounds: 3-4
Expected time to grant: 3y 5m
With interview: 99%

Examiner Intelligence

Career allow rate: 70%, above average (50 granted / 71 resolved; +8.4% vs TC avg)
Interview lift: +42.9% for resolved cases with interview (strong)
Avg prosecution: 3y 5m
Currently pending: 48
Total applications: 119 (across all art units)

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§102: 35.8% (-4.2% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 71 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This communication is filed in response to the action filed on 10/31/2025. Claims 1, 6-7, 9, and 14-15 are currently amended. Claims 5 and 13 are canceled. Claims 1-4, 6-12, and 14-16 are pending.

Response to Arguments

Applicant's arguments filed on 10/31/2025, on pages 7-11 under REMARKS, with respect to the 35 U.S.C. 102 and 103 rejections of claims 1-16 have been fully considered and are persuasive. The rejections of the claims have been withdrawn. However, upon further consideration, a new ground of rejection is made under 35 U.S.C. 103 using US 2023/0068798 A1.

Claim Rejections - 35 USC § 103

Claims 1, 7, 9, and 15 are rejected under 35 U.S.C. § 103 as being obvious over US 2013/0251194 A1 to SCHAMP (hereinafter "SCHAMP") in view of US 2023/0068798 A1 to ETCHART et al. (hereinafter "ETCHART").
As per claim 1, SCHAMP discloses a vehicle comprising: a first camera configured to obtain a first image (a computing system resident on a vehicle comprising a stereo camera which comprises a first camera to capture first images; abstract; figs 1-2, 4a-b; paragraphs [0069-0070]); a second camera configured to obtain a second image captured in a different field of view from the first camera (and a second camera of the stereo camera to capture second images spaced apart a distance "r" from the first camera of the stereo camera system, wherein the separation distance "r" provides a different field of view for the first and second cameras; abstract; figs 1-2, 4a-b; paragraphs [0069-0070]); and a controller configured to obtain a distance between the vehicle and an object by processing images obtained by the first and second cameras (a stereo vision system 16 configured to act as a controller and obtain a downrange distance DR of objects imaged by the first 40.1 or the second 40.2 stereo camera of the vehicle 12, wherein the DR distance is equivalent to the object's distance from the vehicle bumper; figs 4a-b, 14a; paragraph [0086]), recognize a first object in a frame of the image (the stereo vision camera system is adapted to recognize objects 50, 50.1, 50.2, 50.3 respectively in a frame of images captured from camera 1 and camera 2 of the stereo camera of the vehicle; paragraphs [0069-0070], [0082]), obtain a height of the first object (the system is adapted to capture a height of an object; paragraph [0071]), an aspect ratio of the first object (an aspect ratio of the objects is captured and found; paragraph [0073]), and a distance from the first object (and a downrange distance DR from the vehicle's bumper is found for each respective object; paragraph [0086]), assign the height, the aspect ratio, and the distance to 3D coordinate values (based on individual coordinate values in a three-dimensional space, the objects are assigned locations in the coordinate space and assigned their respective values; paragraphs [0086-0087]), generate a 3D straight line based on a plurality of 3D coordinate values in each frame (the system is adapted to generate a centerline through objects based on the data values recorded; paragraphs [0086-0087]), store a characteristic relationship between the recognized first object and the 3D straight line in a memory (in the memory, a characteristic mathematical relation is stored in order to align the recognized object to the centerline module to produce a centerline through the object of the image; paragraphs [0069-0070], [0075], [0082]), obtain a height of a second object and an aspect ratio of the second object by image processing, upon recognizing the second object of the same model as the first object through the multi-cameras (based on the images provided from the stereo camera's camera 1 and camera 2, a height, aspect ratio, and downrange distance value are found for each respective object 50, 50.1, 50.2, etc.; paragraphs [0071], [0073], [0082]), and estimate a distance from the second object based on the characteristic relationship (based on the coordinates provided, the distance of the objects from one another may be calculated by calculating the DR coordinates of each centroid 116 associated with a respective object based on a region of interest assigned to said object, as seen in fig 9; paragraphs [0082], [0086], [0105]).
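For illustration only (not part of the office action record), the stereo ranging SCHAMP relies on reduces to depth-from-disparity under a rectified pinhole model. The focal length, baseline, and disparity values below are hypothetical, not taken from SCHAMP:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# All numeric values are hypothetical examples.

def downrange_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth Z in meters given focal length (pixels), baseline (m), disparity (px)."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 0.30 m baseline, 21 px disparity -> 10.0 m.
z = downrange_distance(700.0, 0.30, 21.0)
```

The separation distance "r" cited above plays the role of the baseline B: a wider baseline yields larger disparities and therefore finer depth resolution at range.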
SCHAMP fails to disclose wherein the controller is configured to generate 3D coordinate values for each of a plurality of frames corresponding to the height, the aspect ratio, and the distance of the first object in a 3D Euclidean space, accumulate the generated 3D coordinate values, and fit the accumulated 3D coordinate values to the 3D straight line, and wherein the controller is configured to determine a necessity for automated online calibration (AOC) based on a first characteristic relationship for the first object and a second characteristic relationship for the second object, and process the AOC for the first image and second image based on the determined necessity for the AOC.

ETCHART discloses wherein the controller is configured to generate 3D coordinate values for each of a plurality of frames corresponding to the height (in this example the object is a user, and coordinate values of the user's mouth are generated; this could be done with any object, including a vehicle and its respective parts, as the technology is not exclusive to human users; one of the coordinate values generated is the height value; fig 10A; paragraphs [0180-0185], [0204]; CLAIM 6), the aspect ratio (another generated coordinate value is the aspect ratio; fig 10A; paragraphs [0180-0185], [0204]; CLAIM 6), and the distance of the first object in a 3D Euclidean space (the third coordinate value generated is the straight-line distance in a 3D morphable model of a width distance in a straight line of the user's mouth region; fig 10A; paragraphs [0178], [0180-0185], [0204]; CLAIM 6), accumulate the generated 3D coordinate values (the system generates and then saves the coordinate values to the system for the respective identified user and user mouth; fig 10A; paragraphs [0178], [0180-0185], [0204]; CLAIM 6), and fit the accumulated 3D coordinate values to the 3D straight line (the generated values are fit to a binary linear classifier, wherein linear is interpreted to mean straight line; fig 10A; paragraphs [0178], [0180-0185], [0204], [0230]; CLAIM 6), and wherein the controller is configured to determine a necessity for automated online calibration (AOC) based on a first characteristic relationship for the first object and a second characteristic relationship for the second object (the system is adapted to automatically adjust parameters and templates (calibration is performed), and the system is connected to the internet, allowing it to be online for the parameter adjustments based on an intensity profile; it is adapted to perform individual adjustments based on various vector and coordinate values, including those of the first and second users acting as the first and second objects of interest; fig 10A; paragraphs [0092], [0115], [0151], [0180-0185], [0204]; CLAIM 6), and process the AOC for the first image and second image based on the determined necessity for the AOC (process and make the desired adjustments to the parameters and associated templates in order to have respective calibrated templates for each respective object/user profile saved to the system; fig 10A; paragraphs [0092], [0115], [0151], [0180-0185], [0204]; CLAIM 6).
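As an illustrative sketch (not the applicant's or the references' implementation), the claim 1 accumulate-and-fit step can be modeled by treating per-frame (height, aspect ratio, distance) triples as points in a 3D Euclidean space, fitting a straight line via SVD, and then estimating distance for a second same-model object from its observed height and aspect ratio alone. All numeric values below are hypothetical:

```python
# Accumulate per-frame (height, aspect ratio, distance) triples, fit a 3D line,
# and use the fitted line (the "characteristic relationship") to estimate distance.
# Hypothetical data; illustrative sketch only.
import numpy as np

def fit_3d_line(points):
    """Least-squares 3D line through points (N, 3): returns (centroid, unit direction)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)  # vt[0] = direction of max variance
    return centroid, vt[0]

def estimate_distance(centroid, direction, height, aspect):
    """Project the (height, aspect) observation onto the line; read off the distance."""
    t = ((height - centroid[0]) * direction[0] + (aspect - centroid[1]) * direction[1]) \
        / (direction[0] ** 2 + direction[1] ** 2)
    return centroid[2] + t * direction[2]

# Accumulated frames of the first object: (height, aspect ratio, distance).
frames = np.array([[2.0, 0.5, 10.0],
                   [1.5, 0.4, 15.0],
                   [1.0, 0.3, 20.0],
                   [0.5, 0.2, 25.0]])
centroid, direction = fit_3d_line(frames)              # characteristic relationship
est = estimate_distance(centroid, direction, 1.0, 0.3)  # ~20.0 for this data
```

The result is sign-invariant with respect to the SVD's arbitrary direction sign, since flipping the direction flips both the projection parameter and the distance component.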
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP to have wherein the controller is configured to generate 3D coordinate values for each of a plurality of frames corresponding to the height, the aspect ratio, and the distance of the first object in a 3D Euclidean space, accumulate the generated 3D coordinate values, and fit the accumulated 3D coordinate values to the 3D straight line, and wherein the controller is configured to determine a necessity for automated online calibration (AOC) based on a first characteristic relationship for the first object and a second characteristic relationship for the second object, and process the AOC for the first image and second image based on the determined necessity for the AOC, of the ETCHART reference. The suggestion/motivation for doing so would have been to provide the ability to maximize or minimize image features; for example, the system allows an energy function to be minimized by iteratively adjusting the parameters of the template to fit the image, as suggested by ETCHART paragraph [0151], and the technology would be able to be applied to object recognition, as the parameters may be adjusted to track and detect different image features related to the scenario at hand, whether that scenario includes objects which are users or objects which are vehicles. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ETCHART with SCHAMP to obtain the invention as specified in claim 1.

As per claim 7, SCHAMP in view of ETCHART discloses the vehicle of claim 1.
Modified SCHAMP further discloses wherein, upon any one of the first camera or the second camera being turned in a yawing direction, the controller is further configured to correct the height of the first object based on a calibration result value stored in the memory, and to store the characteristic relationship for the first object based on the corrected height (the stereo camera system of the vehicle is adapted to move at a desired elevation angle to view the environment at an angle theta, wherein theta may be adjusted in the yaw direction and the position of the angle theta may be stored in the memory; upon the adjustment of the angle theta, the system performs calibration related to recognized objects' height values based on stored mathematical relationships 3 and 4; paragraphs [0082-0084], [0086-0088], [0093], [0228]).

As per claim 9, SCHAMP discloses a method of controlling a vehicle with multiple cameras including a first camera and a second camera (a computing system, and a corresponding image processing method performed by said system, resident on a vehicle comprising a stereo camera which comprises a first camera to capture first images and a second camera to capture second images; abstract; figs 1-2, 4a-b; paragraphs [0069-0070]), the method comprising: obtaining a first image through the first camera (and a second camera of the stereo camera to capture second images spaced apart a distance "r" from the first camera of the stereo camera system, wherein the separation distance "r" provides a different field of view for the first and second cameras; abstract; figs 1-2, 4a-b; paragraphs [0069-0070]); obtaining a second image captured in a different field of view from the first camera through the second camera (the second camera of the stereo camera captures second images spaced apart a distance "r" from the first camera of the stereo camera system, wherein the separation distance "r" provides a different field of view for the first and second cameras; abstract; figs 1-2, 4a-b; paragraphs [0069-0070]); recognizing a first object in a frame of the image (the stereo vision camera system is adapted to recognize objects 50, 50.1, 50.2, 50.3 respectively in a frame of images captured from camera 1 and camera 2 of the stereo camera of the vehicle; paragraphs [0069-0070], [0082]); obtaining a height of the first object (the system is adapted to capture a height of an object; paragraph [0071]), an aspect ratio of the first object (an aspect ratio of the objects is captured and found; paragraph [0073]), and a distance from the first object (and a downrange distance DR from the vehicle's bumper is found for each respective object; paragraph [0086]); assigning the height, the aspect ratio, and the distance to 3D coordinate values (based on individual coordinate values in a three-dimensional space, the objects are assigned locations in the coordinate space and assigned their respective values; paragraphs [0086-0087]); generating a 3D straight line based on a plurality of 3D coordinate values in each frame (the system is adapted to generate a centerline through objects based on the data values recorded; paragraphs [0086-0087]); storing a characteristic relationship between the recognized first object and the 3D straight line in a memory (in the memory, a characteristic mathematical relation is stored in order to align the recognized object to the centerline module to produce a centerline through the object of the image; paragraphs [0069-0070], [0075], [0082]); obtaining a height of a second object and an aspect ratio of the second object by image processing, upon recognizing the second object of the same model as the first object through the multiple cameras (based on the images provided from the stereo camera's camera 1 and camera 2, a height, aspect ratio, and downrange distance value are found for each respective object 50, 50.1, 50.2, etc.; paragraphs [0071], [0073], [0082]); and estimating a distance from the second object based on the characteristic relationship (based on the coordinates provided, the distance of the objects from one another may be calculated by calculating the DR coordinates of each centroid 116 associated with a respective object based on a region of interest assigned to said object, as seen in fig 9; paragraphs [0082], [0086], [0105]).

SCHAMP fails to disclose wherein the generating of the 3D straight line includes generating 3D coordinate values for each of a plurality of frames corresponding to the height, the aspect ratio, and the distance of the first object in a 3D Euclidean space, accumulating the generated 3D coordinate values, and fitting the accumulated 3D coordinate values to the 3D straight line, and wherein the estimating of the distance includes determining a necessity for automated online calibration (AOC) based on a first characteristic relationship for the first object and a second characteristic relationship for the second object, and processing the AOC for the first image and second image based on the determined necessity for the AOC.
ETCHART discloses wherein the generating of the 3D straight line includes generating 3D coordinate values for each of a plurality of frames corresponding to the height (in this example the object is a user, and coordinate values of the user's mouth are generated; this could be done with any object, including a vehicle and its respective parts, as the technology is not exclusive to human users; one of the coordinate values generated is the height value; fig 10A; paragraphs [0180-0185], [0204]; CLAIM 6), the aspect ratio (another generated coordinate value is the aspect ratio; fig 10A; paragraphs [0180-0185], [0204]; CLAIM 6), and the distance of the first object in a 3D Euclidean space (the third coordinate value generated is the straight-line distance in a 3D morphable model of a width distance in a straight line of the user's mouth region; fig 10A; paragraphs [0178], [0180-0185], [0204]; CLAIM 6), accumulating the generated 3D coordinate values (the system generates and then saves the coordinate values to the system for the respective identified user and user mouth; fig 10A; paragraphs [0178], [0180-0185], [0204]; CLAIM 6), and fitting the accumulated 3D coordinate values to the 3D straight line (the generated values are fit to a binary linear classifier, wherein linear is interpreted to mean straight line; fig 10A; paragraphs [0178], [0180-0185], [0204], [0230]; CLAIM 6), and wherein the estimating of the distance includes determining a necessity for automated online calibration (AOC) based on a first characteristic relationship for the first object and a second characteristic relationship for the second object (the system is adapted to automatically adjust parameters and templates (calibration is performed), and the system is connected to the internet, allowing it to be online for the parameter adjustments based on an intensity profile; it is adapted to perform individual adjustments based on various vector and coordinate values, including those of the first and second users acting as the first and second objects of interest; fig 10A; paragraphs [0092], [0115], [0151], [0180-0185], [0204]; CLAIM 6), and processing the AOC for the first image and second image based on the determined necessity for the AOC (process and make the desired adjustments to the parameters and associated templates in order to have respective calibrated templates for each respective object/user profile saved to the system; fig 10A; paragraphs [0092], [0115], [0151], [0180-0185], [0204]; CLAIM 6).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP to have wherein the generating of the 3D straight line includes generating 3D coordinate values for each of a plurality of frames corresponding to the height, the aspect ratio, and the distance of the first object in a 3D Euclidean space, accumulating the generated 3D coordinate values, and fitting the accumulated 3D coordinate values to the 3D straight line, and wherein the estimating of the distance includes determining a necessity for automated online calibration (AOC) based on a first characteristic relationship for the first object and a second characteristic relationship for the second object, and processing the AOC for the first image and second image based on the determined necessity for the AOC, of the ETCHART reference. The suggestion/motivation for doing so would have been to provide the ability to maximize or minimize image features; for example, the system allows an energy function to be minimized by iteratively adjusting the parameters of the template to fit the image, as suggested by ETCHART paragraph [0151], and the technology would be able to be applied to object recognition, as the parameters may be adjusted to track and detect different image features related to the scenario at hand.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine ETCHART with SCHAMP to obtain the invention as specified in claim 9.

As per claim 15, SCHAMP in view of ETCHART discloses the method of claim 9, wherein, upon any one of the first camera or the second camera being turned in a yawing direction, the determining of the necessity for the AOC comprises: correcting the height of the first object based on a calibration result value stored in the memory; and storing the characteristic relationship for the first object based on the corrected height (the stereo camera system of the vehicle is adapted to move at a desired elevation angle to view the environment at an angle theta, wherein theta may be adjusted in the yaw direction and the position of the angle theta may be stored in the memory; upon the adjustment of the angle theta, the system performs calibration related to recognized objects' height values based on stored mathematical relationships 3 and 4; paragraphs [0082-0084], [0086-0088], [0093], [0228]).

Claims 2-4, 6, 10-12, and 14 are rejected under 35 U.S.C. § 103 as being obvious over US 2013/0251194 A1 to SCHAMP (hereinafter "SCHAMP") in view of US 2023/0068798 A1 to ETCHART et al. (hereinafter "ETCHART"), and further in view of US 2022/0277472 A1 to BIRCHFIELD et al. (hereinafter "BIRCHFIELD").

As per claim 2, SCHAMP discloses the vehicle of claim 1. SCHAMP fails to disclose further comprising an inertial measurement unit (IMU) configured to determine a posture and acceleration state of the vehicle, wherein the controller is further configured to determine a necessity for vehicle dynamic compensation (VDC) based on a frequency output from the IMU.
BIRCHFIELD discloses further comprising an inertial measurement unit (IMU) configured to determine a posture and acceleration state of the vehicle, wherein the controller is further configured to determine a necessity for vehicle dynamic compensation (VDC) based on a frequency output from the IMU (the system comprises an inertial measurement unit (IMU) to determine motion information for the vehicle and the attached stereo camera system, in order to perform corrections related to the motion, generate absolute camera motion information, and process the one or more relative dimension values to calculate an absolute scale of the object in the images; paragraphs [0136], [0182], [0253]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP to further comprise an inertial measurement unit (IMU) as in the BIRCHFIELD reference. The suggestion/motivation for doing so would have been to provide an IMU in order to measure movement/motion information of the camera so that transformations can be performed on the captured images during motion, as suggested by BIRCHFIELD paragraph [0136]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BIRCHFIELD with SCHAMP to obtain the invention as specified in claim 2.

As per claim 3, SCHAMP in view of BIRCHFIELD discloses the vehicle of claim 2. SCHAMP fails to disclose wherein the controller is further configured to control the memory not to store the characteristic relationship for the first object, upon the frequency output from the IMU, when the magnitude of the frequency output is higher than or equal to a predetermined value.
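For illustration only, the threshold gating recited in claims 3 and 4 amounts to storing the characteristic relationship only while the IMU's frequency output stays below a predetermined value. The threshold below is a hypothetical assumption, not a value from the application:

```python
# Hypothetical sketch of the claims 3-4 gate: suppress storage of the
# characteristic relationship when vehicle dynamics (IMU frequency output)
# meet or exceed a predetermined value. Threshold is an assumption.

VDC_FREQ_THRESHOLD = 2.0  # hypothetical predetermined value

def should_store_relationship(imu_freq_magnitude: float) -> bool:
    """Store below the threshold (claim 4); discard at or above it (claim 3)."""
    return imu_freq_magnitude < VDC_FREQ_THRESHOLD
```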
BIRCHFIELD discloses wherein the controller is further configured to control the memory not to store the characteristic relationship for the first object, upon the frequency output from the IMU, when the magnitude of the frequency output is higher than or equal to a predetermined value (the sensor units include ring oscillators, which oscillate at a frequency proportional or equal to a temperature relationship, wherein the temperature value is the predetermined value and tracks component temperatures of the vehicle/system; paragraphs [0224], [0242], [0409], [0438], [0598]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP to, upon receiving the frequency output from the IMU, compare whether the magnitude of the frequency output is higher than or equal to a predetermined value, as in the BIRCHFIELD reference. The suggestion/motivation for doing so would have been that, if the temperature relationship being tracked exceeds a certain value, the vehicle may trigger a safe stop mode, as suggested by BIRCHFIELD paragraph [0224]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BIRCHFIELD with SCHAMP to obtain the invention as specified in claim 3.

As per claim 4, SCHAMP in view of BIRCHFIELD discloses the vehicle of claim 2. SCHAMP fails to disclose wherein the controller is further configured to control the memory to store the characteristic relationship for the first object, upon the frequency output from the IMU, when the magnitude of the frequency output is lower than a predetermined value.
BIRCHFIELD discloses wherein the controller is further configured to control the memory to store the characteristic relationship for the first object, upon the frequency output from the IMU, when the magnitude of the frequency output is lower than a predetermined value (the memory component of the computing system is adapted to store a mathematical relationship related to loss, wherein one or more neural networks comprise a loss value below a loss threshold (the predetermined value), the loss being measured as provided by equation L1; paragraphs [0099-0100], [0114], [0222-0224], [0438]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP to store the characteristic relationship for the first object, upon the frequency output from the IMU, when the magnitude of the frequency output is lower than a predetermined value, as in the BIRCHFIELD reference. The suggestion/motivation for doing so would have been to utilize the provided loss-function mathematical relationships to minimize loss in the system and corresponding image processing method, as suggested by paragraph [0100] of BIRCHFIELD. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BIRCHFIELD with SCHAMP to obtain the invention as specified in claim 4.

As per claim 6, SCHAMP discloses the vehicle of claim 1. SCHAMP fails to disclose wherein the controller is further configured to calculate a gradient variation between a gradient of a 3D straight line equation based on the first characteristic relationship and a gradient of a 3D straight line equation based on the second characteristic relationship, and to determine an amount of calibration for the AOC based on the gradient variation.
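As an illustrative sketch of the gradient-variation comparison recited in claims 6 and 14, one can measure the angular deviation between the direction vectors (gradients) of the two fitted 3D straight lines and decide from it how much calibration the AOC requires. The tolerance and example gradients below are hypothetical assumptions:

```python
# Hypothetical sketch: angular deviation between the gradients of two fitted
# 3D straight lines (first vs. second characteristic relationship).
import math

def gradient_variation_deg(g1, g2):
    """Angle in degrees between two 3D line gradients (orientation-insensitive)."""
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = math.sqrt(sum(a * a for a in g1))
    n2 = math.sqrt(sum(b * b for b in g2))
    cos_theta = min(1.0, abs(dot) / (n1 * n2))  # |.| since a line has no sign
    return math.degrees(math.acos(cos_theta))

# Compare the two characteristic relationships; larger variation -> more calibration.
variation = gradient_variation_deg((1.0, 0.2, -10.0), (1.0, 0.2, -10.5))
needs_aoc = variation > 1.0  # hypothetical tolerance in degrees
```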
BIRCHFIELD discloses wherein the controller is further configured to calculate a gradient variation between a gradient of a 3D straight line equation based on the first characteristic relationship and a gradient of a 3D straight line equation based on the second characteristic relationship, and to determine an amount of calibration for the AOC based on the gradient variation (the system includes arithmetic logic units (ALUs) to perform mathematical relationship calculations relating to various parameter weight values assigned in the relationship, including operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in code and/or data storage 1005 and included in the ALUs; paragraphs [0079], [0146], [0150], [0154], [0577]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP to be further configured to calculate a gradient variation between a gradient of a 3D straight line equation based on the first characteristic relationship, as in the BIRCHFIELD reference. The suggestion/motivation for doing so would have been to provide various weights to different mathematical relationships using the hyperparameters used in the ALUs, which include gradient values, as suggested by BIRCHFIELD paragraph [0146]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BIRCHFIELD with SCHAMP to obtain the invention as specified in claim 6.

As per claim 10, SCHAMP discloses the method of claim 9.
SCHAMP fails to disclose further comprising determining a necessity for vehicle dynamic compensation (VDC) based on a frequency output from an inertial measurement unit (IMU).

BIRCHFIELD discloses further comprising determining a necessity for vehicle dynamic compensation (VDC) based on a frequency output from an inertial measurement unit (IMU) (the system comprises an inertial measurement unit (IMU) to determine motion information for the vehicle and the attached stereo camera system, in order to perform corrections related to the motion, generate absolute camera motion information, and process the one or more relative dimension values to calculate an absolute scale of the object in the images; paragraphs [0136], [0182], [0253]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP to comprise determining a necessity for vehicle dynamic compensation (VDC) based on a frequency output from an inertial measurement unit, as in the BIRCHFIELD reference. The suggestion/motivation for doing so would have been to provide an IMU in order to measure movement/motion information of the camera so that transformations can be performed on the captured images during motion, as suggested by BIRCHFIELD paragraph [0136]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BIRCHFIELD with SCHAMP to obtain the invention as specified in claim 10.

As per claim 11, SCHAMP in view of BIRCHFIELD discloses the method of claim 10.
SCHAMP fails to disclose wherein the determining of the necessity for the VDC comprises controlling the memory not to store the characteristic relationship for the first object, upon the frequency output from the IMU, when the magnitude of the frequency output is higher than or equal to a predetermined value.

BIRCHFIELD discloses wherein the determining of the necessity for the VDC comprises controlling the memory not to store the characteristic relationship for the first object, upon the frequency output from the IMU, when the magnitude of the frequency output is higher than or equal to a predetermined value (the sensor units include ring oscillators, which oscillate at a frequency proportional or equal to a temperature relationship, wherein the temperature value is the predetermined value and tracks component temperatures of the vehicle/system; paragraphs [0224], [0242], [0409], [0438], [0598]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP such that the determining of the necessity for the VDC comprises controlling the memory not to store the characteristic relationship for the first object, upon the frequency output from the IMU, when the magnitude of the frequency output is higher than or equal to a predetermined value, as in the BIRCHFIELD reference. The suggestion/motivation for doing so would have been that, if the temperature relationship being tracked exceeds a certain value, the vehicle may trigger a safe stop mode, as suggested by BIRCHFIELD paragraph [0224]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BIRCHFIELD with SCHAMP to obtain the invention as specified in claim 11.
As per claim 12, SCHAMP in view of BIRCHFIELD discloses the method of claim 10. SCHAMP fails to disclose wherein the determining of the necessity for the VDC comprises controlling the memory to store the characteristic relationship for the first object, upon the frequency output from the IMU, when the magnitude of the frequency output is lower than a predetermined value. BIRCHFIELD discloses this limitation (the memory component of the computing system is adapted to store a mathematical relationship related to loss, wherein one or more neural networks comprise a loss value below a loss threshold (the predetermined value), the loss being measured by equation L1; paragraphs [0099-0100], [0114], [0222-0224], [0438]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP to control the memory to store the characteristic relationship for the first object when the magnitude of the IMU frequency output is lower than a predetermined value, as taught by BIRCHFIELD. The suggestion/motivation for doing so would have been to utilize the provided loss-function mathematical relationships to minimize loss in the system and corresponding image processing method, as suggested by paragraph [0100] of BIRCHFIELD. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine BIRCHFIELD with SCHAMP to obtain the invention as specified in claim 12.

As per claim 14, SCHAMP discloses the method of claim 9. SCHAMP fails to disclose wherein the determining of the necessity for the AOC comprises: calculating a gradient variation between a gradient of a 3D straight line equation based on the first characteristic relationship and a gradient of a 3D straight line equation based on the second characteristic relationship; and determining an amount of calibration for the AOC based on the gradient variation. BIRCHFIELD discloses calculating such a gradient variation (the system includes arithmetic logic units (AMUs) that perform calculations of mathematical relationships involving weighted values, operands, and other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in code and/or data storage 1005 and included in the AMUs; paragraphs [0079], [0146], [0150], [0154], [0577]) and determining an amount of calibration for the AOC based on the gradient variation (the calibration equations provided by the AMUs are used, based on the selected hyperparameters, to calibrate the system to the gradient of the vehicle and stereo camera system; paragraphs [0079], [0146], [0150], [0154], [0577]).
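The claim-14 gradient-variation step can be sketched numerically as follows, assuming each characteristic relationship yields a 3D straight line whose unit direction vector serves as its gradient. The function names and the proportional-gain calibration rule are illustrative assumptions, not taken from BIRCHFIELD.

```python
import math

# Hypothetical sketch of the claim-14 AOC check: compare the gradients
# (direction vectors) of two 3D straight lines and scale a calibration
# amount by the angular variation between them. All names are assumptions.

def line_gradient(p0, p1):
    """Unit direction vector (gradient) of the 3D straight line through p0 and p1."""
    d = [b - a for a, b in zip(p0, p1)]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]

def gradient_variation(g1, g2):
    """Angle in radians between the gradients of the two 3D lines."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(g1, g2))))
    return math.acos(dot)

def aoc_calibration_amount(g1, g2, gain=1.0):
    """Assumed rule: calibration amount proportional to the gradient variation."""
    return gain * gradient_variation(g1, g2)
```

A zero variation (identical gradients) yields a zero calibration amount, consistent with the claim's framing of the variation as the trigger for determining how much calibration the AOC needs.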
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP to calculate a gradient variation between the gradients of 3D straight line equations based on the first and second characteristic relationships and to use that variation to determine the calibration for the automated systems, as taught by BIRCHFIELD. The suggestion/motivation for doing so would have been to apply various weights to different mathematical relationships using the hyperparameters employed in the AMUs, which include gradient values, as suggested by BIRCHFIELD at paragraph [0146]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BIRCHFIELD with SCHAMP to obtain the invention as specified in claim 14.

Claims 8 and 16 are rejected under 35 U.S.C. § 103 as being obvious over US 2013/0251194 A1 to SCHAMP (hereinafter "SCHAMP") in view of US 2023/0068798 A1 to ETCHART et al. (hereinafter "ETCHART") and further in view of US 6,985,619 B1 to SETA et al. (hereinafter "SETA").

As per claim 8, SCHAMP discloses the vehicle of claim 1. SCHAMP fails to disclose wherein the memory is further configured to store parallax information between the first camera and the second camera, and the controller is further configured to recognize an object in the first image and an object in the second image as the same image, upon the parallax information matching a difference between a 3D straight line of the object recognized in the first image and a 3D straight line of the object recognized in the second image.
SETA discloses wherein the memory is further configured to store parallax information between the first camera and the second camera (analogue interface 3 includes a memory adapted to store parallax information calculated by a stereo calculating circuit 6, which calculates parallax values and data relating to each camera of the stereo camera system; abstract; column 3, lines 4-23; column 4, lines 9-28), and the controller is further configured to recognize an object in the first image and an object in the second image as the same image, upon the parallax information matching a difference between a 3D straight line of the object recognized in the first image and a 3D straight line of the object recognized in the second image (the interface 3, acting as a system controller, recognizes an object in images captured by camera 1 and camera 2 of the stereo camera system, determines parallax values for both via the stereo calculating circuit 6, and matches a difference value of straight-line road markers recognized in both images, which is used to orient the vehicle; in step 3, the reliability of the left and right lane markers is verified, including whether the difference between the position of the lane marker in the previous cycle and its position in the present cycle is greater than a specified value; column 4, lines 9-28; column 9, lines 2-60). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP such that the memory is further configured to store parallax information between the first camera and the second camera, as taught by SETA.
The suggestion/motivation for doing so would have been to verify the reliability of the left and right lane markers used in the image by comparing the difference value to a reliability threshold, as suggested in SETA at column 10, lines 60-66 and column 11, line 66 through column 12, line 2. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine SETA with SCHAMP to obtain the invention as specified in claim 8.

As per claim 16, SCHAMP discloses the method of claim 9. SCHAMP fails to disclose further comprising recognizing an object in the first image and an object in the second image as the same image, upon parallax information matching a difference between a 3D straight line of the object recognized in the first image and a 3D straight line of the object recognized in the second image, wherein the parallax information comprises a geometrical relationship between the first camera and the second camera.
SETA discloses recognizing an object in the first image and an object in the second image as the same image, upon parallax information matching a difference between a 3D straight line of the object recognized in the first image and a 3D straight line of the object recognized in the second image (analogue interface 3 includes a memory adapted to store parallax information calculated by a stereo calculating circuit 6, which calculates parallax values and data relating to each camera of the stereo camera system; abstract; column 3, lines 4-23; column 4, lines 9-28), wherein the parallax information comprises a geometrical relationship between the first camera and the second camera (the interface 3, acting as a system controller, recognizes an object in images captured by camera 1 and camera 2 of the stereo camera system, determines parallax values for both via the stereo calculating circuit 6, and matches a difference value of straight-line road markers recognized in both images, which is used to orient the vehicle; in step 3, the reliability of the left and right lane markers is verified, including whether the difference between the position of the lane marker in the previous cycle and its position in the present cycle is greater than a specified value; column 4, lines 9-28; column 9, lines 2-60). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SCHAMP such that the memory is further configured to store parallax information between the first camera and the second camera, as taught by SETA.
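The parallax-matching test recited in claims 8 and 16 can be sketched as follows. For illustration, the stored parallax information is reduced to an expected pixel disparity computed from the cameras' geometric relationship (the classic stereo relation disparity = f * B / Z), and the difference between the two recognized 3D straight lines is reduced to a horizontal pixel offset. All names and the tolerance are assumptions, not drawn from SETA.

```python
# Hypothetical sketch of the same-object test mapped to claims 8 and 16:
# two detections are treated as the same object when the offset between
# the 3D straight lines recognized in each image matches the parallax
# expected from the stereo pair's geometric relationship.

def expected_parallax(baseline_m: float, focal_px: float, depth_m: float) -> float:
    """Expected disparity in pixels for an object at depth_m (d = f * B / Z)."""
    return focal_px * baseline_m / depth_m

def same_object(line1_offset_px: float, line2_offset_px: float,
                parallax_px: float, tol_px: float = 2.0) -> bool:
    """Treat the detections in the two images as the same object when the
    offset between their 3D straight lines matches the stored parallax."""
    observed = abs(line1_offset_px - line2_offset_px)
    return abs(observed - parallax_px) <= tol_px
```

For example, a 0.3 m baseline, 800 px focal length, and 12 m depth predict a 20 px disparity, so two detections offset by roughly 20 px would be treated as the same object.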
The suggestion/motivation for doing so would have been to verify the reliability of the left and right lane markers used in the image by comparing the difference value to a reliability threshold, as suggested in SETA at column 10, lines 60-66 and column 11, line 66 through column 12, line 2. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine SETA with SCHAMP to obtain the invention as specified in claim 16.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE, whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677

/Jonathan S Lee/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Mar 20, 2023
Application Filed
Jul 29, 2025
Non-Final Rejection — §103
Oct 31, 2025
Response Filed
Jan 02, 2026
Final Rejection — §103
Apr 08, 2026
Request for Continued Examination
Apr 10, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602773
Deep-Learning-based T1-Enhanced Selection of Linear Coefficients (DL-TESLA) for PET/MR Attenuation Correction
2y 5m to grant Granted Apr 14, 2026
Patent 12579780
HYPERSPECTRAL TARGET DETECTION METHOD OF BINARY-CLASSIFICATION ENCODER NETWORK BASED ON MOMENTUM UPDATE
2y 5m to grant Granted Mar 17, 2026
Patent 12524982
NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM, VISUALIZATION METHOD AND INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Jan 13, 2026
Patent 12517146
IMAGE-BASED DECK VERIFICATION
2y 5m to grant Granted Jan 06, 2026
Patent 12505673
MULTIMODAL GAME VIDEO SUMMARIZATION WITH METADATA
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+42.9%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 71 resolved cases by this examiner. Grant probability derived from career allow rate.
