Prosecution Insights
Last updated: April 19, 2026
Application No. 17/990,173

APPARATUS FOR CONTROLLING AN AUTONOMOUS DRIVING, VEHICLE SYSTEM HAVING THE SAME METHOD THEREOF

Status: Final Rejection (§103)
Filed: Nov 18, 2022
Examiner: LEVY, MERRITT E
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 4 (Final)
Grant Probability: 33% (At Risk)
Expected OA Rounds: 5-6
Median Time to Grant: 3y 7m
Grant Probability with Interview: 70%

Examiner Intelligence

Career Allow Rate: 33% (26 granted / 78 resolved; -18.7% vs TC avg)
Interview Lift: a strong +36.6% among resolved cases with interview [chart: allow rate without vs. with an interview]
Avg Prosecution (typical timeline): 3y 7m; 56 applications currently pending
Total Applications (career history): 134, across all art units
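These headline figures can be reproduced directly from the raw counts above. Below is a minimal sketch in Python; treating the interview lift as an additive difference in percentage points (rather than a ratio) is an assumption inferred from the page's numbers, not something the page states:

```python
# Reproduce the dashboard's headline examiner statistics from its raw counts.
granted, resolved = 26, 78            # "26 granted / 78 resolved"

allow_rate = granted / resolved       # 0.333... -> "33% Career Allow Rate"
with_interview = 0.70                 # "70% With Interview"

# Assumed reading: the lift is the with-interview rate minus the baseline
# allow rate, expressed in percentage points.
lift_points = with_interview - allow_rate

print(f"Career allow rate: {allow_rate:.1%}")       # 33.3%
print(f"Interview lift:   +{lift_points:.1%} pts")  # +36.7% pts; the page's
                                                    # +36.6% likely reflects
                                                    # unrounded inputs
```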

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)
Deltas measured against the Tech Center average estimate • Based on career data from 78 resolved cases
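The per-statute deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline in every row, a quick sanity check on the table. A minimal sketch, assuming delta = examiner rate minus TC average:

```python
# Back out the Tech Center average implied by each statute row:
# delta = examiner_rate - tc_avg  =>  tc_avg = examiner_rate - delta
rows = {                      # statute: (examiner rate %, delta vs TC avg, pts)
    "§101": (9.3, -30.7),
    "§103": (54.0, +14.0),
    "§102": (16.3, -23.7),
    "§112": (20.0, -20.0),
}
for statute, (rate, delta) in rows.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
# Every row yields 40.0%, matching a single Tech Center average estimate.
```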

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office action is in response to the amendments filed on June 30, 2025. Claims 1-3, 5, 7-9, and 11-19 are currently pending, with Claims 1, 11, and 17-18 being amended, and Claim 10 being canceled.

Response to Amendments

In response to Applicant’s amendments, filed October 21, 2025, the Examiner maintains the previous claim interpretation and withdraws the previous 35 U.S.C. 102 and 103 rejections.

Response to Arguments

Applicant’s arguments, filed February 28, 2025, with respect to the rejections of Claims 1-3, 5, 7-9, and 11-19 under Watanabe-606, in view of Menig, Watanabe-955, and Bhat, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Watanabe-606, in view of Watanabe-955, Menig, and Bhat.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

Claims 1 and 16 recite “an interface device …”

Claim 17 recites “a sensing device …” and “an autonomous driving control apparatus …”

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof:

Regarding the limitation of “an interface device …”, the instant Specification at Paragraphs [0058] and [0059] at least states “the interface device 130 may comprise an input means for receiving a control command from a user and an output means for outputting an operation state of the autonomous driving control apparatus … may comprise a key button, a mouse, a keyboard, a touch screen, a microphone, a joystick …” and “the interface device 130 may be implemented as a head-up display (HUD), a cluster, an audio video navigation (AVN), a human machine interface (HM), a user setting menu (USM) …”. As such, the Examiner is interpreting “an interface device” to be any device capable of receiving input and providing output based on the received input, such as a touchscreen display.

Regarding the limitation of “a sensing device …”, the instant Specification at Paragraph [0079] at least states “The sensing device 200 may be configured to detect an environment around the vehicle … and may comprise a plurality of sensors for detecting objects …”. As such, the Examiner is interpreting “a sensing device” to be any device, program, software, or hardware capable of receiving environmental data.

Regarding the limitation of “an autonomous driving control apparatus …”, the instant Specification at Paragraphs [0051]-[0053] at least states that “The autonomous driving control apparatus 100 may comprise a communication device 110, a storage 120, an interface device 130, and a processor 140” and “the autonomous driving control apparatus 100 may be configured to differently display recognized objects … and may be controlled to drive while maintaining an inter-vehicle distance …”. As such, the Examiner is interpreting “an autonomous driving control apparatus” to be a program, module, software, or hardware associated with the vehicle to control or execute certain vehicle functions.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 5, 7-9, 11-14, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2020/0290606 A1, to Watanabe et al. (hereinafter referred to as Watanabe-606; previously of record), and further in view of U.S. Patent Publication No. 2018/0198955 A1, to Watanabe (hereinafter referred to as Watanabe-955; previously of record).

As per Claim 1, Watanabe-606 discloses the features of an autonomous driving control apparatus (e.g. Paragraph [0048]; where an autonomous driving assistance device (100) is provided on a vehicle to avoid road hazards), comprising: an interface device (e.g. Paragraph [0048]; where the display device (3), the voice input portion (4), and the state detection device (5) configure a human-machine interface (HMI) functioning as an information input portion used by a user) configured to display a driving path of a vehicle and one or more vehicle surrounding objects (Paragraphs [0049], [0068]; Figure 8; where the display device (3) displays images gathered by vehicle sensors and cameras, and can display the host vehicle and its path, as well as surveillance objects); and a processor (e.g.
Paragraphs [0044], [0053]-[0054], [0218]; where the autonomous driving assistance device (100) comprises a control portion (1), which includes an integrated hazard prediction portion (10) as an avoidance processing portion to analyze and process video and images to predict hazards during driving; and where the controllers are implemented by a processor configured to execute one or more functions embodied in computer programs stored in memory) configured to visually classify and display the one or more vehicle surrounding objects on the interface device (e.g. Paragraphs [0090]-[0091]; Figures 7-8, 21-22, 25; where the system displays and classifies objects, such as pedestrians, bicycles, and other vehicles) based on a recognition status of the one or more vehicle surrounding objects (e.g. Paragraphs [0076], [0082], [0109]-[0110]; where when an object is recognized, it is assigned an ID) and a risk level of the vehicle surrounding objects (e.g. Paragraphs [0112]-[0114], [0120]; where the control portion (1) determines whether the specified surveillance area/object indicates a large hazard level and whether the subject vehicle (P) is highly likely to collide with the specified object; and where a hazard level for the surveillance area/object is determined), based on recognition information of the one or more vehicle surrounding objects while driving (e.g. Paragraphs [0076], [0082], [0109]-[0110]; where when an object is recognized, it is assigned an ID); wherein the recognition status includes a recognition accuracy (e.g. Paragraphs [0120]-[0122], [0169]; where the control portion (1) performs image recognition to extract an outline, find the probability of object detection, predict the movement, and estimate the probability of colliding with a surveillance area/object; and where the control portion (1) determines whether a recheck is necessary to identify an object when the initial hazard level is determined to be higher than or equal to a predetermined value; and where the system performs the recheck to accurately calculate the hazard level) and the risk level includes a possibility of collision with the vehicle (e.g. Paragraphs [0112]-[0114], [0120]; where the control portion (1) determines whether the specified surveillance area/object indicates a large hazard level and whether the subject vehicle (P) is highly likely to collide with the specified object; and where a hazard level for the surveillance area/object is determined); wherein the processor, when a vehicle surrounding object, of the one or more vehicle surrounding objects, is determined as an undetermined control target that is detected, but not recognized because the recognition accuracy of the one or more vehicle surrounding objects is low, is further configured to determine the vehicle surrounding object as a control target detected and recognized by at least the user or the autonomous driving control apparatus, using a user command based on the information displayed on the interface device (e.g.
Paragraphs [0044], [0109]-[0110], [0131]; where the system receives user input about a driving operation of the vehicle, determines if areas or objects input by the user are considered monitoring targets, tracks the monitoring target, and sets at least one hazard avoidance processing portion based on the user input; and where a user recognizes a situation and specifies it as a surveillance area, or recognizes a person or vehicle and specifies it as a surveillance object; where, if a user-specified surveillance area/object is recognized, the control portion (1) displays the recognized surveillance area/object in a highlighted color, and if the area/object is not recognized, the control portion (1) determines whether the surveillance area/object is trailed); controls the vehicle to avoid collision with the vehicle surrounding object determined as the control target (e.g. Paragraphs [0048], [0058], [0061]; where the control portion (1) controls the implementation of the avoidance remedy, and controls vehicle operations based on control information that is supplied from the integrated hazard prediction portion (10) that is needed to avoid hazards), ‘…’ wherein the processor controls a vehicle surrounding object to be marked in a second color or in a second shape on a driving path when the vehicle surrounding object is determined as the control target based on the user command (e.g. Paragraphs [0066], [0109]-[0110], [0114]; where the system determines if there is user-input information, prompts the user to select an item, and, if the user-specified surveillance area/object is recognized, displays the recognized surveillance area/object in a highlight color; and if the user-specified surveillance area/object is determined as safe, the system changes the highlight color to a safety color (i.e. changes the color based on a user selection of an item and based on the classification of that item when it is identified)).

Watanabe-606 fails to disclose every feature of wherein the processor is configured to: control the one or more vehicle surrounding objects to be displayed in a first color or a first shape when recognition accuracy of the one or more vehicle surrounding objects is lower than a predetermined reference value. However, Watanabe-955, in a similar field of endeavor, teaches the features of wherein the processor is configured to: control the one or more vehicle surrounding objects to be displayed in a first color or a first shape when recognition accuracy of the one or more vehicle surrounding objects is lower than a predetermined reference value. Watanabe-955 teaches an image display system for a vehicle, where the marking image (35) changes its display attributes such as shape, size, and color in accordance with the traveling state of the target preceding vehicle (32), the positional relationship with the vehicle, etc.; where the color of the marking image is changed to a more conspicuous color (for example, from yellow to orange or red), and where the color of the marking image (35) can be changed from blue, green, yellow, orange, and red; and where the color of the contour line can be determined and changed according to the risk level of the object (e.g. Paragraphs [0066]-[0067], [0118]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the autonomous driving assistance device of Watanabe-606 with the feature of using different color schemes in the system of Watanabe-955, in order to provide a driver an intuitive way to determine potential vehicle risk (see at least Paragraphs [0012] and [0014] of Watanabe-955).

As per Claim 2, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 1, and Watanabe-606 further discloses the features of wherein the processor, based on the recognition status of the one or more vehicle surrounding objects and the risk level of the vehicle surrounding objects, is configured to: control the one or more vehicle surrounding objects to be classified and displayed according to one or more of the following: colors of the one or more vehicle surrounding objects; and shapes of the one or more vehicle surrounding objects (e.g. Paragraphs [0148], [0150]; Figures 13, 18-19; where the user-specified surveillance area/object may be represented in a “safety” or “caution” color, and various shapes to determine the object).

As per Claim 5, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 1, and Watanabe-606 further discloses the features of wherein the processor is configured to: control the one or more vehicle surrounding objects to be displayed together with unique IDs by assigning the unique IDs to the one or more vehicle surrounding objects (e.g. Paragraphs [0078]-[0079]; Figures 20-21; where the recognition IDs are linked with moving objects, caution items, etc., and are displayed in order of priority).

As per Claim 7, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 1, and Watanabe-606 further discloses the features of wherein the processor is configured to: recognize the user command through one or more of the following: voice command recognition; a button operation; and a touch input (e.g. Paragraphs [0048], [0050]; where the autonomous driving assistance device (100) includes a voice input portion (4), which is used for voice input; and includes a touch panel (32) for inputting information).

As per Claim 8, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 1, and Watanabe-606 further discloses the features of wherein, when the user command is inputted as a voice command, the voice command comprises: a unique ID of the vehicle surrounding object (e.g. Paragraphs [0048]-[0050], [0064]-[0065], [0217]; where the autonomous driving assistance device (100) includes a voice input portion (4), which is used for voice input; and where the system may import input information by detecting the operations or voice of the user; and where the user can specify an area or object and the system can assign an ID to the user-specified object); and an action to be performed (e.g. Paragraphs [0064]-[0066]; where the user can utter a phrase such as “right of the next signalized intersection”, and the system receives the utterance as input and determines what to display).

As per Claim 9, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 8, and Watanabe-606 further discloses the features of wherein: the unique ID is given based on type information of the vehicle surrounding object (e.g. Paragraph [0066]; where the AI vision based driver assistance system analyzer generates and annotates (e.g.
with a label) an AI-based boundary box around an object recognized in the image frame), and the action to be performed comprises one or more of: determining the vehicle surrounding object as the control target; releasing the control target of an object determined as the control target; and deleting a misrecognized vehicle surrounding object (e.g. Paragraphs [0045], [0099], [0213]; where the determination portion identifies the input information about an area or an object as a surveillance target, i.e., a monitoring target, or tracking target, based on the situation or place at the time of processing).

As per Claim 11, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 1, and Watanabe-606 teaches the features of control the vehicle surrounding object to be displayed ‘…’ when it is necessary to track the vehicle surrounding object, based on the user command (e.g. Paragraphs [0044], [0064], [0109], [0114]; Figures 2, 18-22; where a display device receives user touch input and determines the area as a monitoring object in order to track it, and where the user-specified surveillance object is recognized and highlighted in a specific color, and the system changes the color if it determines the object warrants a safety color or a caution color).

Watanabe-606 fails to disclose every feature of control the vehicle surrounding object to be displayed in a third color or in a third shape when it is necessary to track the vehicle surrounding object. However, Watanabe-955, in a similar field of endeavor, teaches an image display system for a vehicle, where the marking image (35) changes its display attributes such as shape, size, and color in accordance with the traveling state of the target preceding vehicle (32), the positional relationship with the vehicle, etc.; where the color of the marking image is changed to a more conspicuous color (for example, from yellow to orange or red), and where the color of the marking image (35) can be changed from blue, green, yellow, orange, and red; and where the color of the contour line can be determined and changed according to the risk level of the object (e.g. Paragraphs [0066]-[0067], [0118]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the autonomous driving assistance device of Watanabe-606 with the feature of using different color schemes in the system of Watanabe-955, in order to provide a driver an intuitive way to determine potential vehicle risk (see at least Paragraphs [0012] and [0014] of Watanabe-955).

As per Claim 12, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 11, and Watanabe-955 further teaches the features of wherein the processor is configured to: control the vehicle to maintain a safe distance with a tracking target; and continue to track the vehicle surrounding object when the vehicle surrounding object is determined as the tracking target.
Watanabe-955, in a similar field of endeavor, teaches an image display system for a vehicle, where the constant speed/inter-vehicle distance control system (ACC) maintains an inter-vehicle distance; where the ACC system includes a control unit comprising a follow-up travel control unit (71), a constant speed travel control unit (72), and a target preceding vehicle determination unit (73); where the follow-up running control unit (71) executes a follow-up running mode in which the host vehicle follows the preceding vehicle so as to maintain the actual inter-vehicle distance with respect to the preceding vehicle; and where the preceding vehicle detection unit (11) continuously tracks the target preceding vehicle (32) and continuously collects data such as the inter-vehicle distance, relative speed, and direction, which is provided to the ACC (e.g. Paragraphs [0040], [0059], [0061], [0071]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the autonomous driving assistance device of Watanabe-606 with the feature of maintaining a safe inter-vehicle distance in the system of Watanabe-955, in order to reduce anxiety of and alert the user to changes in the vehicle driving behavior (see at least Paragraphs [0078] and [0081] of Watanabe-955).

As per Claim 13, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 11, and Watanabe-606 further discloses the features of wherein the processor is configured to: control the vehicle surrounding object to be marked in the first color or the first shape on the driving path (e.g. Paragraphs [0148], [0150]; Figures 13, 18-19; where the user-specified surveillance area/object may be represented in a “safety” or “caution” color, and various shapes to determine the object); control a unique ID of an object for providing information to be displayed together (e.g. Paragraphs [0078]-[0079]; Figures 20-21; where the recognition IDs are linked with moving objects, caution items, etc., and are displayed in order of priority) when: ‘…’ control the vehicle surrounding object to be displayed ‘…’ when it is necessary to track the vehicle surrounding object, based on the user command (e.g. Paragraphs [0044], [0064], [0109], [0114]; Figures 2, 7-8; where a display device receives user touch input and determines the area as a monitoring object in order to track it, and where the user-specified surveillance object is recognized and highlighted in a specific color, and the system changes the color if it determines the object warrants a safety color or a caution color; and where objects are displayed for informational purposes, such as buildings (BD)).

Watanabe-606 fails to disclose every feature of recognition accuracy of the vehicle surrounding object is lower than a predetermined reference value; wherein the vehicle surrounding object is determined as a target for providing information rather than a control target; and control the vehicle surrounding object to be displayed in a fourth color or a fourth shape when the vehicle surrounding object is determined as an information usage target, based on the user command. Watanabe-955 teaches the features of recognition accuracy of the vehicle surrounding object is lower than a predetermined reference value.
Watanabe-955 teaches an image display system for a vehicle, where the marking image (35) changes its display attributes such as shape, size, and color in accordance with the traveling state of the target preceding vehicle (32), the positional relationship with the vehicle, etc.; where the color of the marking image is changed to a more conspicuous color (for example, from yellow to orange or red), and where the color of the marking image (35) can be changed from blue, green, yellow, orange, and red; and where the color of the contour line can be determined and changed according to the risk level of the object (e.g. Paragraphs [0066]-[0067], [0118]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the autonomous driving assistance device of Watanabe-606 with the feature of using different color schemes in the system of Watanabe-955, in order to provide a driver an intuitive way to determine potential vehicle risk (see at least Paragraphs [0012] and [0014] of Watanabe-955).

Watanabe-955 further teaches the features of wherein the vehicle surrounding object is determined as a target for providing information rather than a control target. Watanabe-955 teaches an image display system for a vehicle, where the objects recognized by the driving environment recognition system (42) are provided on the display; and where the contour detected object (such as a guard rail) can be changed in thickness so that the side closer to the own vehicle is thicker and the far side is thinner, and can continuously change from a distant color to a more conspicuous color, for example, from blue or green to yellow, orange, or red, depending on the size and proximity of the object to the vehicle (e.g. Paragraphs [0128], [0130]; Figures 9(a) and 9(b)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the autonomous driving assistance device of Watanabe-606 with the feature of determining an object as providing information in the system of Watanabe-955, in order to provide a perspective of the road environment to the driver (see at least Paragraph [0130] of Watanabe-955).

Watanabe-955 further teaches the features of control the vehicle surrounding object to be displayed in a fourth color or a fourth shape when the vehicle surrounding object is determined as an information usage target, based on the user command. Watanabe-955 teaches an image display system for a vehicle, where the marking image (35) changes its display attributes such as shape, size, and color in accordance with the traveling state of the target preceding vehicle (32), the positional relationship with the vehicle, etc.; where the color of the marking image is changed to a more conspicuous color (for example, from yellow to orange or red), and where the color of the marking image (35) can be changed from blue, green, yellow, orange, and red; and where the color of the contour line can be determined and changed according to the risk level of the object (e.g. Paragraphs [0066]-[0067], [0118]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the autonomous driving assistance device of Watanabe-606 with the feature of using different color schemes in the system of Watanabe-955, in order to provide a driver an intuitive way to determine potential vehicle risk (see at least Paragraphs [0012] and [0014] of Watanabe-955).

As per Claim 14, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 13, and Watanabe-955 further teaches the features of wherein the processor is configured to: control an object around the vehicle to be marked in a fifth color or fifth shape on the driving path when the vehicle surrounding object is not the control target. Watanabe-955 teaches an image display system for a vehicle, where the marking image (35) changes its display attributes such as shape, size, and color in accordance with the traveling state of the target preceding vehicle (32), the positional relationship with the vehicle, etc.; where the color of the marking image is changed to a more conspicuous color (for example, from yellow to orange or red), and where the color of the marking image (35) can be changed from blue, green, yellow, orange, and red; and where the color of the contour line can be determined and changed according to the risk level of the object (e.g. Paragraphs [0066]-[0067], [0118]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the autonomous driving assistance device of Watanabe-606 with the feature of using different color schemes in the system of Watanabe-955, in order to provide a driver an intuitive way to determine potential vehicle risk (see at least Paragraphs [0012] and [0014] of Watanabe-955).

As per Claim 16, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 1, and Watanabe-606 further discloses the features of a head-up display (HUD); a cluster; an audio video navigation (AVN); a human machine interface (HM); a user setting menu (USM); a monitor; or an AR-enabled windshield display (e.g. Paragraphs [0049], [0217]; where a head-up display may be used for display portion (31)).

As per Claim 17, Watanabe-606 discloses the features of a vehicle system (e.g. Paragraph [0048]; where the vehicle has an autonomous driving assistance device (100) which performs an avoidance remedy during automatic operation of the system), comprising: a sensing device (e.g. Paragraphs [0163]-[0165]; where the system detects a vehicle or obstacle ahead by using radar as a device to provide an obstacle detection function; and uses a camera to measure position and distance to an object (i.e. sensing devices)) configured to acquire: vehicle surrounding object information (e.g. Paragraphs [0163]-[0165]; Figures 11-12; where the system detects a vehicle or obstacle ahead by using radar as a device to provide an obstacle detection function; and uses a camera to measure position and distance to an object (i.e. surrounding objects)); and vehicle surrounding environment information (e.g. Paragraph [0049]; Figures 11-12; where the vehicle camera system (2) captures the forward view and surroundings of the vehicle); and an autonomous driving control apparatus (e.g. Paragraph [0048]; where an autonomous driving assistance device (100) is provided on a vehicle to avoid road hazards) including a processor (e.g.
Paragraphs [0044], [0053]-[0054], [0218]; where the autonomous driving assistance device (100) comprises a control portion (1), which includes an integrated hazard prediction portion (10) as an avoidance processing portion to analyze and process video and images to predict hazards during driving; and where the controllers are implemented by a processor configured to execute one or more functions embodied in computer programs stored in memory) configured to visually classify and display the one or more vehicle surrounding objects (e.g. Paragraphs [0090]-[0091]; Figures 7-8, 21-22, 25; where the system displays and classifies objects, such as pedestrians, bicycles, and other vehicles) on an interface device (e.g. Paragraph [0048]; where the display device (3), the voice input portion (4), and the state detection device (5) configure a human-machine interface (HMI) functioning as an information input portion used by a user) based on a recognition status of the one or more vehicle surrounding objects (e.g. Paragraphs [0076], [0082], [0109]-[0110]; where when an object is recognized, it is assigned an ID) and a risk level of the one or more vehicle surrounding objects (e.g. Paragraphs [0112]-[0114], [0120]; where the control portion (1) determines whether the specified surveillance area/object indicates a large hazard level and whether the subject vehicle (P) is highly likely to collide with the specified object; and where a hazard level for the surveillance area/object is determined), based on recognition information of the vehicle surrounding objects while driving (e.g. Paragraphs [0076], [0082], [0109]-[0110]; where when an object is recognized, it is assigned an ID); wherein the recognition status includes a recognition accuracy (e.g. Paragraphs [0120]-[0122], [0169]; where the control portion (1) performs image recognition to extract an outline, find the probability of object detection, predict the movement, and estimate the probability of colliding with a surveillance area/object; and where the control portion (1) determines whether a recheck is necessary to identify an object when the initial hazard level is determined to be higher than or equal to a predetermined value; and where the system performs the recheck to accurately calculate the hazard level) and the risk level includes a possibility of collision with the vehicle (e.g. Paragraphs [0112]-[0114], [0120]; where the control portion (1) determines whether the specified surveillance area/object indicates a large hazard level and whether the subject vehicle (P) is highly likely to collide with the specified object; and where a hazard level for the surveillance area/object is determined); wherein the autonomous driving control apparatus, when a vehicle surrounding object of the one or more vehicle surrounding objects is determined as an undetermined control target detected, but not recognized because recognition accuracy of the one or more vehicle surrounding objects is low, determines the vehicle surrounding object as a control target detected and recognized by at least the user or the autonomous driving control apparatus, using a user command based on the information displayed on the interface device (e.g.
Paragraphs [0044], [0109]-[0110], [0131]; where the system receives user input about a driving operation of the vehicle, determines if areas or objects input by the user are considered monitoring targets, tracks the monitoring target, and sets at least one hazard avoidance processing portion based on the user input; and where a user recognizes a situation and specifies it as a surveillance area, or recognizes a person or vehicle and specifies it as a surveillance object; where, if a user-specified surveillance area/object is recognized, the control portion (1) displays the recognized surveillance area/object in a highlighted color, and if the area/object is not recognized, the control portion (1) determines whether the surveillance area/object is trailed); wherein the autonomous driving control apparatus further controls the vehicle to avoid collision with the vehicle surrounding object determined as the control target (e.g. Paragraphs [0048], [0058], [0061]; where the control portion (1) controls the implementation of the avoidance remedy, and controls vehicle operations based on control information that is supplied from the integrated hazard prediction portion (10) that is needed to avoid hazards), ‘…’ wherein the processor controls a vehicle surrounding object to be marked in a second color or in a second shape on a driving path when the vehicle surrounding object is determined as the control target based on the user command (e.g. Paragraphs [0066], [0109]-[0110], [0114]; where the system determines if there is user-input information, prompts the user to select an item, and, if the user-specified surveillance area/object is recognized, displays the recognized surveillance area/object in a highlight color; and if the user-specified surveillance area/object is determined as safe, the system changes the highlight color to a safety color (i.e. changes the color based on a user selection of an item and based on the classification of that item when it is identified)).

Watanabe-606 fails to disclose every feature of wherein the processor is configured to: control the one or more vehicle surrounding objects to be displayed in a first color or a first shape when recognition accuracy of the one or more vehicle surrounding objects is lower than a predetermined reference value. However, Watanabe-955, in a similar field of endeavor, teaches the features of wherein the processor is configured to: control the one or more vehicle surrounding objects to be displayed in a first color or a first shape when recognition accuracy of the one or more vehicle surrounding objects is lower than a predetermined reference value. Watanabe-955 teaches an image display system for a vehicle, where the marking image (35) changes its display attributes such as shape, size, and color in accordance with the traveling state of the target preceding vehicle (32), the positional relationship with the vehicle, etc.; where the color of the marking image is changed to a more conspicuous color (for example, from yellow to orange or red), and where the color of the marking image (35) can be changed from blue, green, yellow, orange, and red; and where the color of the contour line can be determined and changed according to the risk level of the object (e.g. Paragraphs [0066]-[0067], [0118]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the autonomous driving assistance device of Watanabe-606 with the feature of using different color schemes in the system of Watanabe-955, in order to provide a driver an intuitive way to determine potential vehicle risk (see at least Paragraphs [0012] and [0014] of Watanabe-955).

As per Claim 18, Watanabe-606 discloses the features of an autonomous driving control method (e.g. Paragraph [0048]; where an autonomous driving assistance device (100) is provided on a vehicle to avoid road hazards), comprising: acquiring, by a processor (e.g. Paragraphs [0044], [0053]-[0054], [0218]; where the autonomous driving assistance device (100) comprises a control portion (1), which includes an integrated hazard prediction portion (10) as an avoidance processing portion to analyze and process video and images to predict hazards during driving; and where the controllers are implemented by a processor configured to execute one or more functions embodied in computer programs stored in memory), recognition information of one or more vehicle surrounding objects while driving (e.g. Paragraphs [0076], [0082], [0109]-[0110]; where when an object is recognized, it is assigned an ID); and visually classifying, by the processor, and displaying the one or more vehicle surrounding objects on an interface device (e.g. Paragraphs [0090]-[0091]; Figures 7-8, 21-22, 25; where the system displays and classifies objects, such as pedestrians, bicycles, and other vehicles) based on a recognition status of the one or more vehicle surrounding objects (e.g. Paragraphs [0076], [0082], [0109]-[0110]; where when an object is recognized, it is assigned an ID) and a risk level of the one or more vehicle surrounding objects (e.g. Paragraphs [0112]-[0114], [0120]; where the control portion (1) determines whether the specified surveillance area/object indicates a large hazard level and whether the subject vehicle (P) is highly likely to collide with the specified object; and where a hazard level for the surveillance area/object is determined), based on recognition information of the one or more vehicle surrounding objects (e.g. Paragraphs [0076], [0082], [0109]-[0110]; where when an object is recognized, it is assigned an ID); wherein the recognition status includes a recognition accuracy (e.g. Paragraphs [0120]-[0122], [0169]; where the control portion (1) performs image recognition to extract an outline, find the probability of object detection, predict the movement, and estimate the probability of colliding with a surveillance area/object; and where the control portion (1) determines whether a recheck is necessary to identify an object when the initial hazard level is determined to be higher than or equal to a predetermined value; and where the system performs the recheck to accurately calculate the hazard level) and the risk level includes a possibility of collision with the vehicle (e.g.
Paragraphs [0112]-[0114], [0120]; where the control portion (1) determines whether the specified surveillance area/object indicates a large hazard level and whether the subject vehicle (P) is highly likely to collide with the specified object); when a vehicle surrounding object of the one or more vehicle surrounding objects is determined as an undetermined control target detected, but not recognized because recognition accuracy of the one or more vehicle surrounding objects is low, the processor is further configured to determine the vehicle surrounding object as a control target detected and recognized by at least the user or an autonomous driving control apparatus using a user command based on the information displayed on the interface device (e.g. Paragraphs [0044], [0109]-[0110], [0131]; where the system receives user input about a driving operation of the vehicle, determines if areas or objects input by the user are considered monitoring targets, tracks the monitoring target, and sets at least one hazard avoidance processing portion based on the user input; and where a user recognizes a situation and specifies it as a surveillance area, or recognizes a person or vehicle and specifies it as a surveillance object; where, if a user-specified surveillance area/object is recognized, the control portion (1) displays the recognized surveillance area/object in a highlighted color, and if the area/object is not recognized, the control portion (1) determines whether the surveillance area/object is trailed); wherein the processor additionally controls the vehicle to avoid collision with the vehicle surrounding object determined as the control target (e.g. Paragraphs [0048], [0058], [0061]; where the control portion (1) controls the implementation of the avoidance remedy, and controls vehicle operations based on control information that is supplied from the integrated hazard prediction portion (10) that is needed to avoid hazards), ‘…’ wherein the processor controls a vehicle surrounding object to be marked in a second color or in a second shape on a driving path when the vehicle surrounding object is determined as the control target based on the user command (e.g. Paragraphs [0066], [0109]-[0110], [0114]; where the system determines if there is user-input information, prompts the user to select an item, and, if the user-specified surveillance area/object is recognized, displays the recognized surveillance area/object in a highlight color; and if the user-specified surveillance area/object is determined as safe, the system changes the highlight color to a safety color (i.e. changes the color based on a user selection of an item and based on the classification of that item when it is identified)).

Watanabe-606 fails to disclose every feature of wherein the processor is configured to: control the one or more vehicle surrounding objects to be displayed in a first color or a first shape when recognition accuracy of the one or more vehicle surrounding objects is lower than a predetermined reference value. However, Watanabe-955, in a similar field of endeavor, teaches the features of wherein the processor is configured to: control the one or more vehicle surrounding objects to be displayed in a first color or a first shape when recognition accuracy of the one or more vehicle surrounding objects is lower than a predetermined reference value.
Watanabe-955 teaches an image display system for a vehicle, where the marking image (35) changes its display attributes such as shape, size, and color in accordance with the traveling state of the target preceding vehicle (32), the positional relationship with the vehicle, etc.; where the color of the marking image is changed to a more conspicuous color (for example, from yellow to orange or red), and where the color of the marking image (35) can be changed from blue, green, yellow, orange, and red; and where the color of the contour line can be determined and changed according to the risk level of the object (e.g. Paragraphs [0066]-[0067], [0118]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the Applicant’s invention, with a reasonable expectation of success, to modify the autonomous driving assistance device of Watanabe-606 with the feature of using different color schemes in the system of Watanabe-955, in order to provide a driver an intuitive way to determine potential vehicle risk (see at least Paragraphs [0012] and [0014] of Watanabe-955).

As per Claim 19, Watanabe-606 discloses the features of Claim 18, and Watanabe-606 discloses the features of wherein the visually classifying and displaying of the one or more vehicle surrounding objects, by a processor, based on the recognition status of the one or more vehicle surrounding objects or the risk level of the one or more vehicle surrounding objects, comprises: controlling one or more vehicle surrounding objects to be classified and displayed according to at least one of colors of the one or more vehicle surrounding objects or shapes of the one or more vehicle surrounding objects (e.g. Paragraphs [0044], [0109]-[0110], [0131]; where a user recognizes a situation and specifies it as a surveillance area, or recognizes a person or vehicle and specifies it as a surveillance object; where, if a user-specified surveillance area/object is recognized, the control portion (1) displays the recognized surveillance area/object in a highlighted color, and if the area/object is not recognized, the control portion (1) determines whether the surveillance area/object is trailed).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Watanabe-606, in view of Watanabe-955, as applied to Claim 1, and further in view of U.S. Patent Publication No. 2001/0012976 A1, to Menig (hereinafter referred to as Menig; previously of record).

As per Claim 3, Watanabe-606, in view of Watanabe-955, teaches the features of Claim 1, but the combination of Watanabe-606, in view of Watanabe-955, fails to teach every feature of wherein the processor is configured to: control the one or more vehicle surrounding objects to be notified to a user by using one or more of tactile, vibration, and auditory, based on the recognition status of the one or more vehicle surrounding objects or the risk level of the one or more vehicle surrounding objects.
Menig, in a similar field of endeavor, teaches an integrated message display system for a vehicle, where the integrated message center (ICU) and its message center act as the driver interface for the collision warning system; when the collision warning system (CWS) detects a collision warning condition, it communicates the condition to the integrated message center (ICU), which, in turn, generates the appropriate message from the message center, which typically includes a visual and an accompanying auditory warning; and as the closing distance between the truck and the vehicle in …

Prosecution Timeline

Nov 18, 2022: Application Filed
Nov 25, 2024: Non-Final Rejection — §103
Feb 28, 2025: Response Filed
Mar 24, 2025: Final Rejection — §103
Jun 24, 2025: Examiner Interview Summary
Jun 24, 2025: Applicant Interview (Telephonic)
Jun 30, 2025: Request for Continued Examination
Jul 02, 2025: Response after Non-Final Action
Jul 14, 2025: Non-Final Rejection — §103
Oct 21, 2025: Response Filed
Nov 17, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601596: Estimation of Target Location and Sensor Misalignment Angles (2y 5m to grant; granted Apr 14, 2026)
Patent 12603005: DRIVER ASSISTANCE MODULE FOR A MOTOR VEHICLE (2y 5m to grant; granted Apr 14, 2026)
Patent 12594944: METHOD AND SYSTEM FOR VEHICLE DRIVE MODE SELECTION (2y 5m to grant; granted Apr 07, 2026)
Patent 12594960: NAVIGATIONAL CONSTRAINT CONTROL SYSTEM (2y 5m to grant; granted Apr 07, 2026)
Patent 12583382: SYNCHRONIZED LIGHTING FOR ELECTRIC VEHICLES (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 33%
With Interview: 70% (+36.6%)
Median Time to Grant: 3y 7m
PTA Risk: High

Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
