Prosecution Insights
Last updated: April 19, 2026
Application No. 18/199,209

VEHICLE AND CONTROL METHOD THEREOF

Final Rejection — §101, §102, §103
Filed: May 18, 2023
Examiner: SANTOS, KIRSTEN JADE M
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 2 (Final)
Grant Probability: 53% (Moderate)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 53% (32 granted / 60 resolved; +1.3% vs TC avg)
Interview Lift: +34.6% (strong; allowance with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 1m average prosecution; 32 applications currently pending
Career History: 92 total applications across all art units

Statute-Specific Performance

§101: 26.2% (-13.8% vs TC avg)
§103: 44.1% (+4.1% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Comparison baseline: Tech Center average estimate • Based on career data from 60 resolved cases
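Each row above pairs the examiner's rate with a delta against the Tech Center average. As a quick consistency check (a minimal sketch that assumes every delta is measured against the same baseline; the numbers are taken from the rows above and nothing here comes from the tool's internals), the implied baseline can be recovered as rate minus delta:

```python
# Statute rates and "vs TC avg" deltas copied from the table above.
rows = {
    "§101": (26.2, -13.8),
    "§103": (44.1, 4.1),
    "§102": (22.0, -18.0),
    "§112": (5.8, -34.2),
}

for statute, (examiner_rate, delta_vs_tc) in rows.items():
    # If the delta is relative to the TC average, the baseline is rate - delta.
    implied_tc_avg = examiner_rate - delta_vs_tc
    print(f"{statute}: examiner {examiner_rate:.1f}%, implied TC average {implied_tc_avg:.1f}%")
```

Every row works out to the same 40.0% implied baseline, which is consistent with a single Tech Center average estimate sitting behind the comparison.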

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments regarding the rejection of claims 1-20 under 35 U.S.C. 101 have been fully considered but they are not persuasive. The examiner has carefully considered applicant's argument regarding the estimation step being presented as an abstract idea that can be practically performed in the human mind. Applicant argues that processing the signals received by the various sensors is not something that the human mind is equipped to do. Specifically, the examiner would like to note that the "estimate" step is a form of calculation or deduction made based on the data received, and that in this case, the various sensors (acceleration, speed, steering angle, yaw rate, etc.) are considered generic components that merely implement the abstract idea at the "apply it" level. Regarding the control step utilizing a display to emphasize an image region depending on the direction and magnitude of the identified longitudinal and lateral gradient, despite the control signal being present, it is clear that the estimated information is still merely being projected/displayed on a screen, which is still considered a post-solution action, a form of insignificant extra-solution activity. Lastly, to address applicant's arguments towards an improvement, an improved abstract idea (e.g., improved calculating, an improved mental process, etc.) is not enough to make a claim that is directed to an abstract idea eligible. See Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151, 120 USPQ2d 1473, 1483 (Fed. Cir. 2016) ("a *new* abstract idea is still an abstract idea") (emphasis in original). As such, the examiner respectfully disagrees and the rejection is maintained.

Applicant's arguments regarding the rejection of claims 1-20 under 35 U.S.C. 102 have been fully considered but they are not persuasive. The examiner has carefully considered applicant's arguments regarding Kazuya failing to disclose "estimating a longitudinal gradient and a lateral gradient of a road surface based on a sensor value received from at least one of a longitudinal/lateral acceleration sensor..." and respectfully disagrees. There are specific examples in the disclosure that note the detection of a lateral and longitudinal gradient, such as Figs. 6-7, which are indicative of a vehicle actively traversing an incline. Values indicative of lateral and longitudinal gradients, such as pitch and roll angle, indicate whether the vehicle is present on a horizontal surface or an incline. Additionally, applicant argues that Kazuya fails to disclose displaying a front region or lateral region of a surround view image based on whether the vehicle is traveling on an inclined surface and merely displays a posture via a symbol of the entire vehicle. The examiner respectfully disagrees. The cited reference expressly discloses that the display area is divided into a plurality of regions in which "images of various directions" and "a clinometer indicating posture of the vehicle" are displayed ([0062]-[0063], Fig. 12). Based on information obtained from the acceleration sensors regarding pitch and roll, it is indicated whether the vehicle is traveling on an incline, and there is an event in which the driver is notified whether a gradient on the road surface is non-traversable.
Moreover, applicant argues that Kazuya fails to disclose wherein the controller is configured to control the display to emphasize an image region associated with a blind spot area of a driver depending on direction and magnitude of the longitudinal gradient and lateral gradient. However, Kazuya describes a periphery monitoring device in which the display area is divided into a plurality of display regions, each corresponding to different directions around the vehicle (front, lateral, rear, etc.), and further discloses that in each divided area, an average value of a gradient is calculated and the display color is determined by comparing the gradient value with a comparison gradient. For example, when the average gradient indicates an area that the vehicle may be unable to travel, the divided area is emphasized in red on the display, and when a gradient indicates a region requiring the driver's attention, the area is displayed in yellow ([0082]-[0083]). As such, the examiner respectfully disagrees and the rejection is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

STEP 1

Yes, the claims are directed toward a vehicle and a method, which both fall within the statutory categories.

STEP 2A (PRONG 1)

Claim 1
A vehicle comprising:
at least one sensor
a display
a surround view monitor system configured to obtain a surround view image
a controller configured to estimate a gradient of a road surface based on a sensor value received from the at least one sensor
control the display to display at least a part of the surround view image based on a preset condition related to the gradient of the road surface being satisfied

The examiner submits that the foregoing bolded limitation constitutes a mental process because under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. Highlighted above, the limitation merely consists of a process of perceiving information relative to obtained sensor data. The estimating step is equivalent to a person perceiving the obtained sensor data of road information and additionally forming a simple judgement or performing a calculation to determine a gradient depending on the acquired sensor data. As such, claim 1 recites a mental process.

Claim 11
A control method of a vehicle comprising:
estimating a gradient of a road surface based on a sensor value received from at least one sensor provided in the vehicle
displaying at least a part of a surround view image based on a preset condition related to the gradient of the road surface being satisfied

As with the claim 1 analysis, the examiner submits that the foregoing bolded limitation constitutes a mental process because under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. Highlighted above, the limitation merely consists of a process of perceiving information relative to obtained sensor data.
The estimating step is equivalent to a person perceiving the obtained sensor data of road information and additionally forming a simple judgement or performing a calculation to determine a gradient depending on the acquired sensor data. As such, claim 11 recites a mental process.

STEP 2A (PRONG 2)

Claim 1
A vehicle comprising:
at least one sensor
a display
a surround view monitor system configured to obtain a surround view image
a controller configured to estimate a gradient of a road surface based on a sensor value received from the at least one sensor
control the display to display at least a part of the surround view image based on a preset condition related to the gradient of the road surface being satisfied

The examiner submits that the above identified additional limitations do not integrate the previously discussed abstract idea into a practical application. Regarding the additional limitations of "obtain a surround view image" and "display at least a part of…", the examiner submits that these limitations are insignificant extra-solution activities that merely use the generic computer components, such as "a sensor," "surround view monitor system," and "a display," to perform the processes. Specifically, the "obtain a surround view image" step is recited at a high level of generality (i.e., as a general means of receiving, obtaining, or acquiring information for use in a storing and processing step, or a determination) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Additionally, the "display at least a part" step is also recited at a high level of generality (i.e., as a general means of transmitting, outputting, or displaying) and encompasses a post-solution action, which is also a form of insignificant extra-solution activity. As discussed, the recited facilities (sensor, surround view system, and display) are generic components meant to implement the abstract idea on a computer and merely "apply" the mental judgements in a general-purpose vehicle control environment. Thus, it is clear that the abstract ideas have not been integrated into a practical application.

Claim 11
A control method of a vehicle comprising:
estimating a gradient of a road surface based on a sensor value received from at least one sensor provided in the vehicle
displaying at least a part of a surround view image based on a preset condition related to the gradient of the road surface being satisfied

Similar to the claim 1 analysis, the examiner submits that the above identified additional limitations do not integrate the previously discussed abstract idea into a practical application. Regarding the additional limitation of "displaying at least a part of…", the examiner submits that this limitation is an insignificant extra-solution activity that merely uses the generic computer components to perform the processes. The "displaying at least a part" step is recited at a high level of generality (i.e., as a general means of transmitting, outputting, or displaying) and encompasses a post-solution action, which is also a form of insignificant extra-solution activity. Thus, it is clear that the abstract ideas have not been integrated into a practical application.

STEP 2B

Claims 1 and 11 do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception for the same reasons as those discussed above.
The additional elements, such as a processor and memory to perform the steps, amount to nothing more than applying the exception using a generic computer component. General application of an exception using a generic computer component cannot provide an inventive concept.

Conclusion

Thus, since claims 1 and 11: (a) are directed towards abstract ideas, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, it is clear that claims 1 and 11 are directed towards non-statutory subject matter.

Dependent claims 2-10 and 12-20 do not recite any further limitations that cause the claims to be patent eligible. The limitations of the dependent claims are directed towards additional aspects of the judicial exception and/or additional elements that do not integrate the judicial exception into a practical application. Regarding some of the other examples of additional limitations in the dependent claims, such as "determine that an uphill…" (claim 2), "estimate a lateral gradient…" (claim 4), "determine a blind spot" (claim 6), "estimate the gradient of the road surface" (claim 12), and "determining that the preset condition" (claim 14), the examiner submits that these limitations are additional abstract ideas that can be practically performed in the human mind. For example, determining and estimating steps, in the context of the claims, encompass a person looking at data collected (acquired, obtained, etc.) and forming a simple judgement (determination, estimation, analysis, comparison, etc.) either mentally or using a pen and paper. Similar to the claim 1 and 11 analysis, these steps are equivalent to a person perceiving the obtained sensor data of road information and additionally forming a simple judgement or performing a calculation to determine a gradient or condition depending on the acquired sensor data. Furthermore, some other examples of additional limitations in the dependent claims, such as "stop outputting…" (claim 3), "control the display to stop" (claim 5), and "control the display to display a first region" (claim 7), are directed towards additional aspects of the judicial exception and additional elements that do not integrate the judicial exception into a practical application. The examiner submits that these limitations are insignificant extra-solution activities that merely use the generic computer components, such as "a sensor," "surround view monitor system," and "a display," to perform the processes. Additionally, the steps are also recited at a high level of generality (i.e., as a general means of transmitting, outputting, or displaying) and encompass a post-solution action, which is also a form of insignificant extra-solution activity. As such, claims 1-20 are rejected under 35 U.S.C. 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Watanabe Kazuya et al. (US20190163988A1), hereinafter referred to as Kazuya. Regarding claim 1, Kazuya discloses: A vehicle (see at least Kazuya, Fig.1, Item 1, “vehicle”) comprising: at least one sensor (see at least Kazuya, Fig.3, which discloses all the components broken down within the periphery monitoring system including sensors; ¶¶ [0038] discloses a periphery monitoring system with various types of sensors such as, acceleration, steering angle, and torque sensors) a display (see at least Kazuya, Fig.1, Item 8, “display”) a surround view monitor system configured to obtain a surround view image (see at least Kazuya, Fig.2, “periphery monitoring device” mounted around the system; ¶¶ [0037] imaging units to transmit surrounding information/images are placed around the vehicle at various positions such as, but not exclusive to, the front, end, right, and left sides of the vehicle) a controller configured to estimate a longitudinal and lateral gradient of a road surface based on a sensor value received from the at least one of longitudinal/lateral acceleration sensor, a wheel speed sensor, a steering angle sensor, or a yaw rate sensor (see at least Kazuya, ¶¶ [0040] discloses an electronic control unit capable of calculating the gradient state of a road surface; [0046]-[0047] discloses the sensor data retrieved to calculate ascending and descending gradients data, [0058]-[0060]) control the display to display a front region or a lateral region of the surround view image based on whether the vehicle is travelling on an inclined road surface (see at least Kazuya, Fig.12 which discloses an example of the display presenting gradient information in the form of a surrounding image of the vehicle by the periphery monitoring device; Fig.13 discloses an example of a preset condition (a processing procedure) to display the image; ¶¶ [0037] discloses the configuration of a periphery monitoring system that executes processing and image processing on the basis of the captured image data obtained by the imaging units; examples of such images include, but are not limited to: image of a wider viewing angle or a virtual bird's eye view image that is viewing the vehicle from above, [0062]-[0063] discloses the display area divided into a plurality of regions in which, images of various directions, and a clinometer indicating posture of the vehicle are displayed relative to the measured values from the acceleration sensors) wherein the controller is configured to control the display to emphasize an image region associated with a blind spot area of a driver depending on direction and magnitude of the longitudinal gradient and lateral gradient (see at least Kazuya, Fig.12 ¶¶ [0062]-[0063], [0082]-[0083] which discloses periphery monitoring device in which the display area is divided into a plurality of display regions, each corresponding to different directions around the vehicle (front, lateral, rear, etc.) 
and further discloses that in each divided area, an average value of a gradient is calculated and the display color is determined by comparing the gradient value with a comparison gradient. For example, when the average gradient indicates an area that the vehicle may be unable to travel, the divided area is emphasized in red on the display, and when a gradient indicates a region requiring the driver's attention, the area is displayed in yellow this means the controller is configured to control the display to emphasize an image region associated with a blind spot area of a driver depending on direction and magnitude of the longitudinal gradient and lateral gradient) Regarding claim 2, Kazuya discloses: The vehicle according to claim 1, wherein the controller is configured to: estimate a longitudinal gradient of the road surface in real time (see at least Kazuya, ¶¶ [0049]-[0050]) determine that an uphill condition is satisfied based on determining that the longitudinal gradient of the road surface is greater than a first threshold value (see at least Kazuya, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained pitch (lateral) and roll (longitudinal) sensor data) determine a maximum value of the longitudinal gradient in response to determining that the uphill condition is satisfied (see at least Kazuya, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained longitudinal sensor data; [0071] discloses the referenced approach angle serving as a comparison or maximum value for the determination unit to compare against the difference in order to determine an uphill condition) determine a first preset condition when a difference value between the maximum value of the longitudinal gradient and the longitudinal gradient of the road surface estimated in real time is greater than a second threshold (see at least Kazuya, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained longitudinal sensor data; [0071] discloses the referenced approach angle serving as a comparison or maximum value for the determination unit to compare against the difference in order to determine an uphill condition based on comparison of the difference value and approach angle M1; Fig.13 illustrates the processing method of determining an ascending/descending condition relative to the gradient) Regarding claim 3, Kazuya discloses: The vehicle according to claim 2, wherein the controller is configured to control the display to stop outputting the surround view image based on determining that the 
longitudinal gradient of the road surface estimated in real time has reached a third threshold value after the uphill condition is satisfied, and wherein the third threshold value is less than the first threshold value (see at least Kazuya, ¶¶ [0076]-[0079] which discloses the termination condition where the process determines whether a longitudinal gradient after it is determined to be an uphill/descending condition is equal to or larger than the alarming gradient along the route which indicates a continuous uphill condition; otherwise the termination condition is a condition where it is determined that the presentation of gradient information, the periphery view via the imaging units, is not necessary) Regarding claim 4, Kazuya discloses: The vehicle according to claim 1, wherein the controller is configured to: estimate the lateral gradient of the road surface in real time (see at least Kazuya, ¶¶ [0046] which discloses the calculation of a lateral gradient (inclination pitch angle around the lateral axis) in real time as a vehicle traverses a route, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained pitch (lateral) and roll (longitudinal) sensor data) determine a second preset condition when a magnitude of the lateral gradient of the road surface is greater than a first threshold value (see at least Kazuya, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained longitudinal sensor data; [0071] discloses the referenced approach angle serving as a comparison or maximum value for the determination unit to compare against the difference in order to determine an uphill condition) Regarding claim 5, Kazuya discloses: The vehicle according to claim 4, wherein the controller is configured to control the display to stop outputting the surround view image based on determining that the magnitude of the lateral gradient of the road surface estimated in real time has reached a second threshold value as the magnitude of the lateral gradient of the road surface becomes greater than the first threshold value, and wherein the second threshold value is smaller than the first threshold value (see at least Kazuya, ¶¶ [0076]-[0079] which discloses the termination condition where the process determines whether a gradient after it is determined to be an uphill/descending condition is equal to or larger than the alarming gradient along the route which indicates a continuous uphill condition; otherwise the termination condition is a condition where it is determined that the presentation of gradient information, the periphery view via the imaging units, is not necessary) Regarding claim 6, Kazuya discloses: The vehicle according to claim 1, wherein the controller is configured to: determine a blind spot region in the surround view image based on the longitudinal gradient and lateral gradient of the road surface (see at least 
Kazuya, Fig.12, which discloses the display view image of the front, rear, and blind spots of the vehicle; ¶¶ [0037] which discloses the imaging unit and the type of images displayed depending on the location of the imaging unit on the vehicle; [0085]-[0086] discloses emphasizing the gradient position either in the front, rear, or periphery and highlighting where on the display that the driver needs to pay attention to) control the display to display a visual indicator indicating the blind spot region on the surround view image (see at least Kazuya, Fig.12, which discloses the display view image of the front, rear, and the specific blind spots of the vehicle; [0085]-[0086] discloses emphasizing the gradient position either in the front, rear, or periphery and highlighting where on the display that the driver needs to pay attention to) Regarding claim 7, Kazuya discloses: The vehicle according to claim 1, wherein: the controller is configured to control the display to display a first region of the surround view image based on a first preset condition being satisfied (see at least Kazuya, Fig.12 which discloses an example of the display presenting gradient information in the form of a surrounding image of various regions the vehicle by the periphery monitoring device; Fig.13 discloses an example of a preset condition (a processing procedure) to display the image; ¶¶ [0037] discloses the configuration of a periphery monitoring system that executes processing and image processing on the basis of the captured image data obtained by the imaging units; examples of such images include, but are not limited to: image of a wider viewing angle or a virtual bird's eye view image that is viewing the vehicle from above, [0058]) control the display to display a second region of the surround view image based a second preset condition being satisfied (see at least Kazuya, Fig.12 which discloses an example of the display presenting gradient information in the form of a surrounding image of various regions the vehicle by the periphery monitoring device; Fig.13 discloses an example of a preset condition (a processing procedure) to display the image; ¶¶ [0037] discloses the configuration of a periphery monitoring system that executes processing and image processing on the basis of the captured image data obtained by the imaging units; examples of such images include, but are not limited to: image of a wider viewing angle or a virtual bird's eye view image that is viewing the vehicle from above, [0058]) Regarding claim 8, Kazuya discloses: The vehicle according to claim 1, wherein the controller is configured to control the display to display the surround view image based whether the vehicle is traveling on an inclined road surface only when an automatic display function is activated (see at least Kazuya, ¶¶ [0065] which discloses the display mode determination unit displaying according to the display mode set by the user in the operation unit) Regarding claim 9, Kazuya discloses: The vehicle according to claim 1, wherein the controller is configured to control the display to display the surround view image based whether the vehicle is traveling on an inclined road surface only when a transmission gear is position at a drive (D) stage and a vehicle speed is equal to or less than a preset speed (see at least Kazuya ¶¶ [0070] discloses the display mode of the device is not set to display when the speed of the vehicle is equal to or slower than a predetermined speed; it will be assumed that the vehicle is traveling 
on a road surface with a small change in gradient and there is no need to provide gradient information; Fig. 13, the condition whether S100 is fulfilled) Regarding claim 10, Kazuya discloses: The vehicle according to claim 1, wherein the controller is configured to control the display to stop outputting the surround view image based on reception of a user input to interrupt the output of the surround view image (see at least Kazuya, ¶¶ [0065] which discloses the display mode determination unit displaying according to the display mode set by the user in the operation unit or reception of user input in the operation input) Regarding claim 11, Kazuya discloses: A control method of a vehicle (see at least Kazuya, Fig.13 which discloses the control processing method) comprising: estimating a longitudinal gradient and lateral gradient of a road surface based on a sensor value received from at least one of a longitudinal/lateral acceleration, a wheel speed sensor, a steering angle sensor, and a yaw rate sensor provided in the vehicle (see at least Kazuya, ¶¶ [0040] discloses an electronic control unit capable of calculating the gradient state of a road surface; [0046]-[0047] discloses the sensor data retrieved to calculate ascending and descending gradients data) displaying at least a part of a surround view image based whether the vehicle is traveling on an inclined road surface, including emphasizing an image region associated with a blind spot area of a driver depending on direction and magnitude of the longitudinal gradient and lateral gradient (see at least Kazuya, Fig.12 which discloses an example of the display presenting gradient information in the form of a surrounding image of the vehicle by the periphery monitoring device; Fig.13 discloses an example of a preset condition (a processing procedure) to display the image; ¶¶ [0037] discloses the configuration of a periphery monitoring system that executes processing and image processing on the basis of the captured image data obtained by the imaging units; examples of such images include, but are not limited to: image of a wider viewing angle or a virtual bird's eye view image that is viewing the vehicle from above, [0062]-[0063], [0082]-[0083] which discloses periphery monitoring device in which the display area is divided into a plurality of display regions, each corresponding to different directions around the vehicle (front, lateral, rear, etc.) and further discloses that in each divided area, an average value of a gradient is calculated and the display color is determined by comparing the gradient value with a comparison gradient. 
For example, when the average gradient indicates an area that the vehicle may be unable to travel, the divided area is emphasized in red on the display, and when a gradient indicates a region requiring the driver's attention, the area is displayed in yellow this means the controller is configured to control the display to emphasize an image region associated with a blind spot area of a driver depending on direction and magnitude of the longitudinal gradient and lateral gradient) Regarding claim 12, Kazuya discloses: The control method according to claim 11, wherein estimating the gradient of the road surface comprises estimating the longitudinal gradient and the lateral gradient of the road surface in real time (see at least Kazuya, ¶¶ [0049]-[0050]), and the control method further comprises: determining that an uphill condition is satisfied based on determining that the longitudinal gradient of the road surface is greater than a first threshold value (see at least Kazuya, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained pitch (lateral) and roll (longitudinal) sensor data) determining a maximum value of the longitudinal gradient in response to determining that the uphill condition is satisfied (see at least Kazuya, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained longitudinal sensor data; [0071] discloses the referenced approach angle serving as a comparison or maximum value for the determination unit to compare against the difference in order to determine an uphill condition) determining a first preset condition is satisfied based on determining that a difference value between the maximum value of the longitudinal gradient and the longitudinal gradient of the road surface estimated in real time is greater than a second threshold (see at least Kazuya, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained longitudinal sensor data; [0071] discloses the referenced approach angle serving as a comparison or maximum value for the determination unit to compare against the difference in order to determine an uphill condition based on comparison of the difference value and approach angle M1; Fig.13 illustrates the processing method of determining an ascending/descending condition relative to the gradient) Regarding claim 13, Kazuya discloses: The control method according to claim 12, further comprising: stopping outputting the surround view image based on determining that the longitudinal gradient of the road surface estimated in real time has reached a third 
threshold value after the uphill condition is satisfied, wherein the third threshold value is less than the first threshold value (see at least Kazuya, ¶¶ [0076]-[0079] which discloses the termination condition where the process determines whether a longitudinal gradient after it is determined to be an uphill/descending condition is equal to or larger than the alarming gradient along the route which indicates a continuous uphill condition; otherwise the termination condition is a condition where it is determined that the presentation of gradient information, the periphery view via the imaging units, is not necessary) Regarding claim 14, Kazuya discloses: The control method according to claim 11, wherein estimating the gradient of the road surface comprises: estimating the longitudinal and the lateral gradient of the road surface in real time (see at least Kazuya, ¶¶ [0046] which discloses the calculation of a lateral gradient (inclination pitch angle around the lateral axis) in real time as a vehicle traverses a route, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained pitch (lateral) and roll (longitudinal) sensor data) the control method further comprises determining that a second preset condition is satisfied based on determining that a magnitude of the lateral gradient of the road surface is greater than a first threshold value (see at least Kazuya, ¶¶ [0053]-[0054] discloses a determination of an uphill condition when the gradient difference (G1 – G0) is equal to or larger than the approach angle M1 (first threshold/reference value); it is then determined that there is an ascending gradient that the vehicle may, or may not be able to climb along a route; Fig.7 illustrates an example of an uphill condition being determined based on the obtained longitudinal sensor data; [0071] discloses the referenced approach angle serving as a comparison or maximum value for the determination unit to compare against the difference in order to determine an uphill condition) Regarding claim 15, Kazuya discloses: The control method according to claim 14, further comprising: stopping outputting the surround view image based on determining that the magnitude of the lateral gradient of the road surface estimated in real time has reached a second threshold value as the magnitude of the lateral gradient of the road surface becomes greater than the first threshold value, wherein the second threshold value is smaller than the first threshold value (see at least Kazuya, ¶¶ [0076]-[0079] which discloses the termination condition where the process determines whether a gradient after it is determined to be an uphill/descending condition is equal to or larger than the alarming gradient along the route which indicates a continuous uphill condition; otherwise the termination condition is a condition where it is determined that the presentation of gradient information, the periphery view via the imaging units, is not necessary) Regarding claim 16, Kazuya discloses: The control method according to claim 11, further comprising: determining a blind spot region in the surround view image based on the longitudinal and the lateral gradient of the road surface (see at least 
Kazuya, Fig.12, which discloses the display view image of the front, rear, and blind spots of the vehicle; ¶¶ [0037] which discloses the imaging unit and the type of images displayed depending on the location of the imaging unit on the vehicle; [0085]-[0086] discloses emphasizing the gradient position either in the front, rear, or periphery and highlighting where on the display that the driver needs to pay attention to) displaying a visual indicator indicating the blind spot region on the surround view image (see at least Kazuya, Fig.12, which discloses the display view image of the front, rear, and the specific blind spots of the vehicle; [0085]-[0086] discloses emphasizing the gradient position either in the front, rear, or periphery and highlighting where on the display that the driver needs to pay attention to) Regarding claim 17, Kazuya discloses: The control method according to claim 11, wherein displaying at least a part of the surround view image comprises displaying a first region of the surround view image based a first preset condition being satisfied (see at least Kazuya, Fig.12 which discloses an example of the display presenting gradient information in the form of a surrounding image of various regions the vehicle by the periphery monitoring device; Fig.13 discloses an example of a preset condition (a processing procedure) to display the image; ¶¶ [0037] discloses the configuration of a periphery monitoring system that executes processing and image processing on the basis of the captured image data obtained by the imaging units; examples of such images include, but are not limited to: image of a wider viewing angle or a virtual bird's eye view image that is viewing the vehicle from above, [0058]) displaying a second region of the surround view image based on a second preset condition being satisfied (see at least Kazuya, Fig.12 which discloses an example of the display presenting gradient information in the form of a surrounding image of various regions the vehicle by the periphery monitoring device; Fig.13 discloses an example of a preset condition (a processing procedure) to display the image; ¶¶ [0037] discloses the configuration of a periphery monitoring system that executes processing and image processing on the basis of the captured image data obtained by the imaging units; examples of such images include, but are not limited to: image of a wider viewing angle or a virtual bird's eye view image that is viewing the vehicle from above, [0058]) Regarding claim 18, Kazuya discloses: The control method according to claim 11, wherein displaying the at least the part of the surround view image based on whether the vehicle is traveling on an inclined road surface is performed only when an automatic display function is activated (see at least Kazuya, ¶¶ [0065] which discloses the display mode determination unit displaying according to the display mode set by the user in the operation unit) Regarding claim 19, Kazuya discloses: The control method according to claim 11, wherein displaying the at least the part of the surround view image based whether the vehicle is traveling on an inclined road surface is performed only when a transmission gear is position at a drive (D) stage and a vehicle speed is equal to or less than a preset speed (see at least Kazuya ¶¶ [0070] discloses the display mode of the device is not set to display when the speed of the vehicle is equal to or slower than a predetermined speed; it will be assumed that the vehicle is traveling on a road surface with a 
small change in gradient and there is no need to provide gradient information; Fig. 13, the condition whether S100 is fulfilled) Regarding claim 20, Kazuya discloses: The control method according to claim 11, further comprising: stopping outputting the surround view image based on reception of a user input to interrupt the output of the surround view image (see at least Kazuya, ¶¶ [0065] which discloses the display mode determination unit displaying according to the display mode set by the user in the operation unit or reception of user input in the operation input) Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIRSTEN JADE M SANTOS whose telephone number is (571)272-7442. The examiner can normally be reached Monday: 8:00 am - 4:00 pm, 6:00-8:00 pm (+ with flex). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi can be reached at (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KIRSTEN JADE M SANTOS/Examiner, Art Unit 3664 /RACHID BENDIDI/Supervisory Patent Examiner, Art Unit 3664
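The claim mappings in the rejection turn on a small amount of control logic: estimate longitudinal and lateral gradients from vehicle sensors, compare them against reference values (the approach-angle style thresholds discussed above), and emphasize the surround-view region that corresponds to the gradient's direction and magnitude, with red for a non-traversable gradient and yellow for one requiring attention. The sketch below illustrates only that pattern; every name, sign convention, and threshold value in it is hypothetical and is not taken from the application, the Kazuya reference, or the Office Action.

```python
from dataclasses import dataclass

@dataclass
class GradientEstimate:
    longitudinal_deg: float  # uphill/downhill slope of the road surface
    lateral_deg: float       # side slope of the road surface

# Hypothetical thresholds standing in for the reference values
# (e.g., approach angle) discussed in the rejection.
ATTENTION_THRESHOLD_DEG = 10.0   # gradient treated as requiring attention
NON_TRAVERSABLE_DEG = 20.0       # gradient treated as possibly non-traversable

def emphasize_regions(est: GradientEstimate) -> dict[str, str]:
    """Map gradient direction/magnitude to per-region display emphasis.

    Mirrors the logic the Office Action attributes to the cited reference:
    divide the display into directional regions and color each one by
    comparing its gradient against reference values (red = may be unable
    to travel, yellow = requires attention, none = no emphasis).
    """
    emphasis = {"front": "none", "rear": "none", "left": "none", "right": "none"}

    # Longitudinal gradient selects the front or rear region (sign convention assumed).
    region = "front" if est.longitudinal_deg >= 0 else "rear"
    magnitude = abs(est.longitudinal_deg)
    if magnitude >= NON_TRAVERSABLE_DEG:
        emphasis[region] = "red"
    elif magnitude >= ATTENTION_THRESHOLD_DEG:
        emphasis[region] = "yellow"

    # Lateral gradient selects the left or right region (sign convention assumed).
    region = "right" if est.lateral_deg >= 0 else "left"
    magnitude = abs(est.lateral_deg)
    if magnitude >= NON_TRAVERSABLE_DEG:
        emphasis[region] = "red"
    elif magnitude >= ATTENTION_THRESHOLD_DEG:
        emphasis[region] = "yellow"

    return emphasis

# Example: a steep climb with a mild side slope emphasizes the front region.
print(emphasize_regions(GradientEstimate(longitudinal_deg=22.0, lateral_deg=5.0)))
# {'front': 'red', 'rear': 'none', 'left': 'none', 'right': 'none'}
```

A real implementation would compare against vehicle-specific reference values such as the approach angle cited in the rejection rather than the fixed constants used here; the two tiers are only meant to mirror the red/yellow distinction in the cited paragraphs.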

Prosecution Timeline

May 18, 2023
Application Filed
Mar 20, 2025
Non-Final Rejection — §101, §102, §103
Jul 07, 2025
Response Filed
Oct 17, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566072
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant · Granted Mar 03, 2026
Patent 12552255
VEHICULAR DISPLAY HAVING RECHARGING MODULE WITH ANNEXATION INTERFACE
2y 5m to grant · Granted Feb 17, 2026
Patent 12530931
DISTRIBUTED DIAGNOSTICS ARCHITECTURE FOR A VEHICLE
2y 5m to grant · Granted Jan 20, 2026
Patent 12522483
APPARATUS AND METHOD FOR AUTOMATICALLY DETERMINING THE MOVEMENT SPACE AND AUTONOMOUSLY OPTIMIZING THE DRIVING BEHAVIOR OF AN OPERATING AUTOMATED GUIDED VEHICLE COMPRISING LOADING IN DYNAMIC PRODUCTION AND LOGISTICS ENVIRONMENTS
2y 5m to grant · Granted Jan 13, 2026
Patent 12454272
METHOD FOR ESTIMATING AN ACCIDENT RISK OF AN AUTONOMOUS VEHICLE
2y 5m to grant · Granted Oct 28, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 53%
With Interview: 88% (+34.6%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 60 resolved cases by this examiner. Grant probability derived from career allow rate.
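The headline projections are consistent with simple arithmetic on the examiner's career counts shown above. A minimal sketch, assuming the grant probability is just the career allow rate and the interview lift is applied as an additive percentage-point adjustment (the tool's actual model is not disclosed here):

```python
# Figures taken from the examiner statistics on this page.
granted = 32           # granted cases among resolved cases
resolved = 60          # total resolved cases
interview_lift = 34.6  # percentage-point lift reported for cases with an interview

allow_rate = 100 * granted / resolved           # 53.3%, displayed as 53%
with_interview = allow_rate + interview_lift    # 87.9%, displayed as 88%

print(f"Career allow rate: {allow_rate:.1f}%")
print(f"Projected grant probability with interview: {with_interview:.1f}%")
```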
