DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 29, 2025 has been entered.
Response to Arguments
The amendment filed December 2, 2025 has been entered with the RCE filed December 29, 2025. Claims 1-18 have been amended. Claim 20 is new. The remaining claim, claim 19, is in previously presented form. Therefore, claims 1-20 are pending in the application. Claims 1, 9, and 18 are the independent claims.
The applicant’s Remarks, filed December 2, 2025, have been fully considered. The applicant argues, under the heading “Rejections Under 35 U.S.C. § 112,” that the claims have been amended in response to the rejections given in the last detailed action, which was the Final Rejection dated October 2, 2025. That rejection included rejections under 35 U.S.C. 112(a) and 35 U.S.C. 112(b), but no art rejections.
The applicant argues that the amendments have support in at least Fig. 6 and paragraphs 0078-0089 and 0091 of the present disclosure, overcome the rejections, and therefore the claims are now in condition for allowance.
Claim 1 now recites, with the examiner’s broadest reasonable interpretations in brackets:
An apparatus comprising:
a plurality of vehicle controllers configured to control a vehicle;
a hacking monitoring system configured to determine whether a hacking activity associated with the vehicle is detected; and
a vehicle control device configured to detect, via the hacking monitoring system, hacking activity associated with the vehicle;
identify at least one first vehicle controller, of the plurality of vehicle controllers, corresponding to the hacking activity; [the brake controller for example]
determine, based on the hacking activity, a risk level based on a type of the at least one first vehicle controller; and [this broadly and reasonably means that the system can determine that if the brake controller in particular is hacked, for example, that comes with a particular risk level.]
adjust, based on the risk level, a performance of at least one second vehicle controller of the plurality of vehicle controllers; and [this broadly and reasonably means that if the brake controller has been hacked, for example, this could indicate a particular risk level. If a different controller was hacked, that might also indicate the same particular risk level as when the brake controller was hacked. When this clause says “based on the risk level,” this means, in one broad reasonable interpretation, not based on which particular controller was hacked, but based on the risk level determined from the type of controller that was hacked, as stated in the previous clause (there is a difference, as made clear in the last detailed action). Based on this determined risk level, the system adjusts the performance of a second vehicle controller, i.e., one that has not been hacked, broadly and reasonably.]
an advanced driver assistance system (ADAS)
control, based on the hacking activity, at least one ADAS operation associated with the at least one second vehicle controller [this clause does not say “control, based on the” fact that the first controller has been hacked. But it does say, “control, based on the hacking activity, an ADAS operation associated with the at least one second vehicle controller,” which is the controller that has had its performance adjusted “based on the risk level,” which itself was determined “based on a type of the at least one first vehicle controller” in which hacking activity has taken place.]
So the claim can broadly and reasonably be summarized as teaching that, if a first controller (such as a braking controller) is hacked, the system adjusts the performance of a second controller and controls, “based on the hacking activity,” an ADAS operation associated with the second controller.
The present published disclosure, Kim (US 2024/0202327 A1), in one broad reasonable interpretation, teaches that when a controller is hacked, the system can determine the risk level associated with that. For example, paragraph 0079 teaches that “Table 2 relates to an emergency driving level, and shows control of the vehicle controller 210 depending on a risk level.” In Fig. 6, if the gateway controller is hacked, this results in an LV2 risk level (aka “emergency level”), and the “countermeasures” are to brake the vehicle, otherwise restrict the braking and steering, and send a notification to nearby vehicles. This countermeasure is not necessarily directly based on which controller is hacked, but on the risk level associated with that controller being hacked. In the top two lines of Fig. 6, whether the engine controller is hacked (line 1) or the brake controller is hacked (line 2), the risk level (aka “emergency level”) is the same: emergency driving Lv2. Furthermore, the “countermeasures” in Fig. 6 for both of the top two lines end up being the same because the risk level is the same. The countermeasure is: “guide autonomous driving system to safety zone.” So countermeasures (such as the urgent stop of risk level “Lv2”) can be the same, even though the controllers that were hacked are different. So the disclosure supports the clauses in claim 1 that read:
determine, based on the hacking activity, a risk level based on a type of the at least one first vehicle controller; and
adjust, based on the risk level, a performance of at least one second vehicle controller of the plurality of vehicle controllers;
The system in the disclosure and in the claims detects hacking in a controller, looks up a risk level, then determines a response to that risk level. This response is then implemented in the ADAS using a non-hacked controller.
As an example, if the V2V controller is hacked, the ADAS, “based on the hacking activity,” can perform an operation associated with a second, non-hacked controller.
It is not uncommon in the art to detect hacking and then make an emergency stop. But present claim 1 is narrower than that. It teaches, as just explained, detecting which controller the hacking has occurred in, looking up a risk level associated with that controller having been hacked, determining a response associated with that risk level, and then commanding non-hacked controllers to respond based on that risk level. That is much more specific.
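For clarity of the record, the flow just described can be illustrated with a minimal sketch. The controller names, risk levels, and countermeasures below are hypothetical examples chosen by the examiner for illustration and are not taken from the claims or from the cited references:

```python
# Illustrative sketch of the claimed flow: detect which controller was
# hacked, look up a risk level by the *type* of that controller, then
# select a countermeasure keyed to the risk level (not to the controller
# identity), carried out by a different, non-hacked controller.
# All names and values here are hypothetical.

# Risk level depends on the type of the hacked controller.
RISK_BY_CONTROLLER_TYPE = {
    "brake": "high",
    "engine": "high",  # a different controller may map to the same level
    "avn": "low",
}

# Countermeasure depends only on the risk level, so two different hacked
# controllers with the same risk level yield the same countermeasure.
MITIGATION_BY_RISK = {
    "high": "guide_to_safety_zone",
    "low": "notify_driver",
}

def respond_to_hack(hacked_controller_type: str) -> tuple[str, str]:
    """Return (risk_level, mitigation) for a detected hack."""
    risk = RISK_BY_CONTROLLER_TYPE[hacked_controller_type]
    return risk, MITIGATION_BY_RISK[risk]
```

Under this sketch, hacking the brake controller and hacking the engine controller produce the same countermeasure because both map to the same risk level, which is the distinction the claim draws between responding to the risk level and responding to the particular hacked controller.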
One close prior art reference is Tamura et al. (US 2021/0237665), used in the rejection of the Non-Final Rejection dated June 6, 2025. The system of Tamura can detect a hack (see paragraph 0044) and perform a mitigation action, such as entering a failsafe driving mode (see paragraph 0182). But rather than look up a specific response based on a specific risk level, as in the present claim 1, Tamura detects improper commands and overwrites them (see paragraph 0186). And whereas the present disclosure maintains some level of ADAS operation, Tamura, at least in one embodiment, shuts down the ADAS completely (see paragraph 0182).
Another close prior art reference is Konrardy (US 2021/0116256). See paragraph 0217 for a component malfunction leading to “placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” This is generally related to the present claim.
Konrardy also teaches “hacking attempts, [and] cyber attacks” in paragraph 0138. In paragraph 0143, the system can determine if the host vehicle can continue within “predetermined safety parameters (i.e., having risk levels for such operation below predetermined safe operation threshold levels of risk)”. This “risk” can be based on “damage” from a “cyber-attack” on the vehicle controller. The “placing restrictions or limits on the use of the one or more autonomous operation features” and “engaging additional components to compensate for the malfunction” in Konrardy are all very similar to present claim 1.
So in Konrardy the system can detect a hack of a controller, determine specific predetermined risk levels, restrict some autonomous features, and engage other components. Paragraph 0118, among others, teaches pulling the vehicle to the shoulder to minimize the negative effects of an incident. This is summarized in part in Fig. 5 as well as Fig. 8.
See Konrardy paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s current software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component. The evaluated “severity” can be, for example, “low, mid, high, critical, etc.” according to paragraph 0213. See paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. This makes sense because a component without a controller cannot incur a cyberattack; a component that merely includes a gearbox, for example, cannot be hacked. The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level.
Paragraph 0217 goes on to teach that once there are “identified occurrences of component [with processor] malfunctioning,” the system can seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.”
It is difficult to read this as not relating to present claim 1. In the paragraphs just cited in Konrardy, the system can reasonably determine that the steering component with the steering controller, for example, has suffered a cyber attack, determine this as a “high” or “critical” risk, decide to restrict the steering controller from use, and bring the vehicle to a stop in the lane using the braking controller, or perhaps move the vehicle to the shoulder using differential braking and engine control, but not the steering control, since the steering controller has been hacked. The system detects a hack, determines a risk, and performs an ADAS function with a different component’s controller to mitigate the hack. That anticipates present claim 1, in the examiner’s understanding of it. Therefore, the examiner respectfully does not agree with the applicant’s argument that the claims overcome the prior art. The examiner finds the present claims to be a new embodiment of the previously presented claims. The examiner agrees that the present claims overcome the 35 U.S.C. 112(a) and 35 U.S.C. 112(b) rejections made in the last detailed action and withdraws those rejections. Due to amendment, the grounds for rejection have changed. Please see the rejections below.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 and 9-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Konrardy (US 2021/0116256).
Regarding claim 1, Konrardy discloses:
An apparatus comprising (see Fig. 1A):
a plurality of vehicle controllers configured to control a vehicle (see Konrardy paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s current software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component. That is based on paragraph 0211, which recites the “extent of use of the component in the autonomous vehicle,” implying that some components are used more than others and that there are a plurality of them. See also paragraph 0208 for “components” (plural) related to “distinct autonomous operation features” and “hardware components associated therefore (e.g….controllers)”.);
a hacking monitoring system configured to determine whether a hacking activity associated with the vehicle is detected (see paragraph 0138 for an on-board computer detecting “hacking attempts, [and] cyber attacks”.); and
a vehicle control device configured to detect, via the hacking monitoring system, hacking activity associated with the vehicle (see paragraph 0138 for an on-board computer detecting “hacking attempts, [and] cyber attacks”.);
identify at least one first vehicle controller, of the plurality of vehicle controllers, corresponding to the hacking activity (see paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component of the plurality of components mentioned in at least paragraph 0208. See paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. This makes sense because a component without a controller cannot incur a cyberattack; a component that merely includes a gearbox, for example, cannot be hacked.);
determine, based on the hacking activity, a risk level based on a type of the at least one first vehicle controller (see paragraph 0213 for detecting a software hack and then evaluating the “severity,” which can be “low, mid, high, critical, etc.” The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level.); and
adjust, based on the risk level, a performance of at least one second vehicle controller of the plurality of vehicle controllers (see paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level. Paragraph 0217 goes on to teach that once there are “identified occurrences of component [with processor] malfunctioning,” the system can seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.”); and
an advanced driver assistance system (ADAS) (see paragraph 0215 for determining that the result of a hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.”);
control, based on the hacking activity, at least one ADAS operation associated with the at least one second vehicle controller (see paragraph 0217, which teaches that once there are “identified occurrences of component [with processor] malfunctioning” (due to a hack), the system will seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 9, Konrardy discloses:
A method comprising:
determining, by a computing device, whether hacking activity associated with a vehicle is detected (see paragraph 0138 for an on-board computer detecting “hacking attempts, [and] cyber attacks”.);
identifying at least one first vehicle controller, of a plurality of vehicle controllers configured to control the vehicle, corresponding to the hacking activity (see paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component of the plurality of components mentioned in at least paragraph 0208. See paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. This makes sense because a component without a controller cannot incur a cyberattack; a component that merely includes a gearbox, for example, cannot be hacked.);
determining, based on the hacking activity, a risk level based on a type of the at least one first vehicle controller (see paragraph 0213 for detecting a software hack and then evaluating the “severity,” which can be “low, mid, high, critical, etc.” The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level.);
adjusting, based on the risk level, a performance of at least one second vehicle controller of the vehicle (see paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level. Paragraph 0217 goes on to teach that once there are “identified occurrences of component [with processor] malfunctioning,” the system can seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.”); and
controlling, based on the hacking activity, at least one advanced driver assistance system (ADAS) operation associated with the at least one second vehicle controller (see paragraph 0215 for determining that the result of a hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” See paragraph 0217, which teaches that once there are “identified occurrences of component [with processor] malfunctioning” (due to a hack), the system will seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 10, Konrardy discloses the method of claim 9.
Konrardy further discloses:
The method of claim 9, wherein an ADAS of the vehicle comprises:
a plurality of function applications to perform ADAS functions (see paragraph 0213 for “an inability to operate in a fully autonomous or semi-autonomous mode.” See paragraph 0093 for the autonomous vehicle having adaptive cruise control and automatic lane centering.), and
wherein the plurality of function applications comprises at least one of: Forward Collision-Avoidance Assist (FCA) to assist forward collision-avoidance, Lane Keeping Assist (LKA) to assist lane keeping, Blind-Spot Collision-Avoidance Assist (BCA) to assist rearward collision-avoidance, Intelligent Speed Limit Assist (ISLA) to assist intelligent speed limit, Smart Cruise Control (SCC) to perform smart cruise control, Navigation-based Smart Cruise Control (NSCC) to perform navigation-based smart cruise control, Lane Following Assist (LFA) to assist lane following, or Highway Driving Assist (HDA) to assist highway driving (see paragraph 0093).
Regarding claim 11, Konrardy discloses the method of claim 10.
Konrardy further discloses:
The method of claim 10, wherein
the at least one first vehicle controller comprises: a power train controller to control longitudinally accelerating (see paragraph 0093. A fully autonomous vehicle uses a controller to control acceleration.), and
wherein the controlling of the at least one ADAS operation (see Konrardy paragraph 0147 for on-board computer 114 detecting situations and moving a host vehicle out of a lane and to a shoulder. See paragraph 0217, which teaches that once there are “identified occurrences of component [with processor] malfunctioning” (due to a hack), the system will seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 12, Konrardy discloses the method of claim 10.
Konrardy further discloses:
The method of claim 10, wherein
the at least one first vehicle controller comprises: a brake controller to control longitudinally decelerating (see paragraph 0094 for autonomous braking for collision avoidance. See paragraph 0279 for “different hardware features for automatic braking, different computer instructions for automatic steering”. See paragraph 0049.), and
wherein the controlling of the at least one ADAS operation (see paragraph 0124 for enabling or disabling autonomous features, including adaptive cruise control. See paragraph 0093. See paragraph 0094 for autonomous braking for collision avoidance. See paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component of the plurality of components mentioned in at least paragraph 0208. See paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level. Paragraph 0217 goes on to teach that once there are “identified occurrences of component [with processor] malfunctioning,” the system can seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0279 for “different hardware features for automatic braking, different computer instructions for automatic steering”. See paragraph 0049.).
Regarding claim 13, Konrardy discloses the method of claim 10.
Konrardy further discloses:
The method of claim 10, wherein
the at least one first vehicle controller comprises: a steering controller to control lateral driving (see paragraph 0279 for “different hardware features for automatic braking, different computer instructions for automatic steering”. See paragraph 0049. See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.), and
wherein the controlling of the at least one ADAS operation (see paragraph 0124 for enabling or disabling autonomous features, including adaptive cruise control. See paragraph 0093. See paragraph 0094 for autonomous braking for collision avoidance. See paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component of the plurality of components mentioned in at least paragraph 0208. See paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level. Paragraph 0217 goes on to teach that once there are “identified occurrences of component [with processor] malfunctioning,” the system can seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0279 for “different hardware features for automatic braking, different computer instructions for automatic steering”. See paragraph 0049.).
Regarding claim 14, Konrardy discloses the method of claim 10.
Konrardy further discloses:
The method of claim 10, wherein
the at least one first vehicle controller comprises: a gateway for vehicle networking (see Fig. 4B and paragraph 0120 for a controller and a server with a network 130 connecting them. See paragraph 0114 for control gates.), and
wherein the controlling of the at least one ADAS operation (see paragraph 0055. See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 15, Konrardy discloses the method of claim 10.
Konrardy further discloses:
The method of claim 10, wherein
the at least one first vehicle controller comprises: a vehicle to everything (V2X) controller for communication with an external device (see paragraph 0285 for using V2V), and
wherein the controlling of the at least one ADAS operation (see paragraph 0213 for detecting a software hack and then evaluating the “severity,” which can be “low, mid, high, critical, etc.” See paragraph 0217, which teaches that once there are “identified occurrences of component [with processor] malfunctioning” (due to a hack), the system will seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 16, Konrardy discloses the method of claim 10.
Konrardy further discloses:
The method of claim 10, wherein
the at least one first vehicle controller comprises: an audio, video, and navigation (AVN) controller to control a user interface (see paragraph 0063 for a vehicle telephone, entertainment, navigation, or information system of the vehicle.), and
wherein the controlling of the at least one ADAS operation (see paragraph 0055 for a navigation application 144. See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 18, Konrardy discloses:
An apparatus for a vehicle, the apparatus comprising:
a plurality of vehicle controllers each configured to control at least one operation of the vehicle (see Konrardy paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s current software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component. That is based on paragraph 0211, which recites the “extent of use of the component in the autonomous vehicle,” implying that some components are used more than others and that there are a plurality of them. See also paragraph 0208 for “components” (plural) related to “distinct autonomous operation features” and “hardware components associated therefore (e.g….controllers)”.);
at least one processor (see Fig. 1B, item 181.1); and
a memory storing instructions that, when executed by the at least one processor, are configured to cause the apparatus to (see paragraph 0053):
control at least one of an autonomous driving operation or an advanced driver assistance system (ADAS) operation (see paragraph 0213 for “fully autonomous or semi-autonomous mode.” See paragraph 0093 for the autonomous vehicle having adaptive cruise control and automatic lane centering.);
detect hacking activity associated with the vehicle (see paragraph 0138 for an on-board computer detecting “hacking attempts, [and] cyber attacks”.);
identify at least one first vehicle controller, of the plurality of vehicle controllers, corresponding to the hacking activity (see paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component of the plurality of components mentioned in at least paragraph 0208. See paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. This makes sense because a component without a controller cannot incur a cyberattack; a component that merely includes a gearbox, for example, cannot be hacked.);
determine, based on the hacking activity, a risk level based on a type of the at least one first vehicle controller (see paragraph 0213 for detecting a software hack and then evaluating the “severity,” which can be “low, mid, high, critical, etc.” The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level.);
adjust, based on the risk level, a performance of at least one second vehicle controller of the plurality of vehicle controllers (see paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level. Paragraph 0217 goes on to teach that once there are “identified occurrences of component [with processor] malfunctioning,” the system can seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.”); and
control, based on the hacking activity, at least one of an autonomous driving operation associated with the at least one second vehicle controller or an ADAS operation associated with the at least one second vehicle controller (see paragraph 0217, which teaches that once there are “identified occurrences of component [with processor] malfunctioning” (due to a hack) the system will seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 19, Konrardy discloses the apparatus of claim 18.
Konrardy further discloses:
The apparatus of claim 18, further comprising
an autonomous driving system configured to perform the at least one of the autonomous driving operation or the ADAS operation (see paragraph 0213 for operating in a “fully autonomous or semi-autonomous mode.” See paragraph 0093 for the autonomous vehicle having adaptive cruise control and automatic lane centering.).
Regarding claim 20, Konrardy discloses the apparatus of claim 18.
Konrardy further discloses:
The apparatus of claim 18, wherein
the at least one first vehicle controller comprises an engine controller (see paragraph 0093. A fully autonomous vehicle uses a controller to control acceleration. See paragraph 0279 for “different hardware features for automatic braking, different computer instructions for automatic steering”. See paragraph 0049. See paragraph 0292.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2-8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Konrardy in view of Cha (US 2021/0114534 A1).
Regarding claim 2, Konrardy teaches the apparatus of claim 1.
Konrardy further discloses:
An apparatus wherein the ADAS comprises:
a plurality of function applications to perform … (see paragraph 0213 for “an inability to operate in a fully autonomous or semi-autonomous mode.” See paragraph 0093 for the autonomous vehicle having adaptive cruise control and automatic lane centering.); and
an ADAS control device to control … (see paragraph 0147 for on-board computer 114 detecting situations and moving a host vehicle out of a lane. See paragraph 0093 for the autonomous vehicle having adaptive cruise control and automatic lane centering.).
Yet Konrardy does not explicitly further teach:
a sensor fusion configured to receive an external sensing signal.
However Cha teaches:
a sensor fusion configured to receive an external sensing signal (the examiner submits that a sensor fusion system is inherent in an ADAS system that performs lane keeping, automatic braking, etc. But the examiner cites Cha here because Cha explicitly teaches this. See Cha Fig. 1 for item 117 and paragraph 0018 for “a sensor fusion detection controller 117”. Note that Cha is directed toward the same general idea as Konrardy. See Cha paragraph 0016 for a “hacker” that tries to hack the host vehicle 102. See paragraph 0028 for the system being able to “determine whether an attack has occurred or not.” See Fig. 3, step 308 and paragraph 0044 for flagging and discarding values that come from a hacker and not the vehicle system.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Konrardy, to add the additional features of a sensor fusion configured to receive an external sensing signal, as taught by Cha. The motivation for doing so would be to navigate autonomously with a secure system, as recognized by Cha (see paragraph 0006).
This conclusion of obviousness corresponds to KSR rationale “A”: it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.
Regarding claim 3, Konrardy and Cha teach the apparatus of claim 2.
Konrardy further teaches:
An apparatus wherein the plurality of function applications comprises at least one of:
Forward Collision-Avoidance Assist (FCA) to assist forward collision-avoidance, Lane Keeping Assist (LKA) to assist lane keeping, Blind-Spot Collision-Avoidance Assist (BCA) to assist rearward collision-avoidance, Intelligent Speed Limit Assist (ISLA) to assist intelligent speed limit, Smart Cruise Control (SCC) to perform smart cruise control, Navigation-based Smart Cruise Control (NSCC) to perform navigation-based smart cruise control, Lane Following Assist (LFA) to assist lane following, or Highway Driving Assist (HDA) to assist highway driving (see paragraph 0093),
wherein the at least one vehicle controller comprises a power train controller to control longitudinally accelerating (see paragraph 0093. A fully autonomous vehicle uses a controller to control acceleration.), and
wherein the ADAS control device is configured to stop, based on … (in a broad reasonable interpretation, HDA includes ACC and automatic lane-keeping or lane-keeping assist. With that in mind, see Konrardy paragraph 0147 for on-board computer 114 detecting situations and moving a host vehicle out of a lane and to a shoulder. See paragraph 0217, which teaches that once there are “identified occurrences of component [with processor] malfunctioning” (due to a hack) the system will seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 4, Konrardy and Cha teach the apparatus of claim 2.
Konrardy further teaches:
The apparatus of claim 2, wherein the at least one first vehicle controller comprises:
a brake controller to control longitudinally decelerating (See paragraph 0094 for autonomous braking for collision avoidance. See paragraph 0279 for “different hardware features for automatic braking, different computer instructions for automatic steering”. See paragraph 0049.), and
wherein the ADAS control device is configured to stop, based on … (see paragraph 0124 for enabling or disabling autonomous features, including adaptive cruise control. See paragraph 0093. See paragraph 0094 for autonomous braking for collision avoidance. See paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component of the plurality of components mentioned in at least paragraph 0208. See paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level. Paragraph 0217 goes on to teach that once there are “identified occurrences of component [with processor] malfunctioning” the system can seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0279 for “different hardware features for automatic braking, different computer instructions for automatic steering”. See paragraph 0049.).
Regarding claim 5, Konrardy and Cha teach the apparatus of claim 2.
Konrardy further teaches:
The apparatus of claim 2, wherein the at least one first vehicle controller comprises:
a steering controller to control lateral driving (see paragraph 0279 for “different hardware features for automatic braking, different computer instructions for automatic steering”. See paragraph 0049. See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.), and
wherein the ADAS control device is configured to stop, based on … (see paragraph 0279 for “different hardware features for automatic braking, different computer instructions for automatic steering”. See paragraph 0049. See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident. See paragraph 0124 for enabling or disabling autonomous features, including adaptive cruise control. See paragraph 0093. See paragraph 0094 for autonomous braking for collision avoidance. See paragraph 0213 for a system that can determine the “vulnerabilities” in an autonomous vehicle’s software that is “executing on the component,” as stated in paragraph 0214. The phrase “the component” can reasonably be interpreted as a specific component of the plurality of components mentioned in at least paragraph 0208. See paragraph 0215 for determining an “identified occurrence of the component malfunctioning.” This reasonably means that “the component” of the vehicle has been hacked, including the component’s controller. The “impact” of this software hack may be “an inability to operate in a fully autonomous or semi-autonomous mode.” According to paragraphs 0216 and 0217, the response to the hack depends on the determined “risk” level. Paragraph 0217 goes on to teach that once there are “identified occurrences of component [with processor] malfunctioning” the system can seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.”).
Regarding claim 6, Konrardy and Cha teach the apparatus of claim 2.
Konrardy further teaches:
The apparatus of claim 2, wherein the at least one first vehicle controller comprises:
a gateway for vehicle networking (see Fig. 4B and paragraph 0120 for a controller and a server with a network 130 connecting them. See paragraph 0114 for control gates), and
wherein the ADAS control device is configured to stop, based on … (see paragraph 0055. See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 7, Konrardy and Cha teach the apparatus of claim 2.
Konrardy further teaches:
The apparatus of claim 2, wherein
the at least one first vehicle controller comprises:
a vehicle to everything (V2X) controller for communication with an external device (see paragraph 0285 for using V2V), and
wherein the ADAS control device is configured to maintain, based on … (see paragraph 0213 for detecting a software hack and then evaluating the “severity,” which can be “low, mid, high, critical, etc.” See paragraph 0217, which teaches that once there are “identified occurrences of component [with processor] malfunctioning” (due to a hack) the system will seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.” See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 8, Konrardy and Cha teach the apparatus of claim 2.
Konrardy further teaches:
The apparatus of claim 2, wherein the at least one first vehicle controller comprises:
an audio, video, and navigation (AVN) controller to control a user interface (see paragraph 0063 for a vehicle telephone, entertainment, navigation, or information system of the vehicle.), and
wherein the ADAS control device is configured to stop, based on … (see paragraph 0055 for a navigation application 144. See paragraph 0118, among others, for pulling the vehicle to the shoulder to minimize the negative effects of an incident.).
Regarding claim 17, Konrardy and Cha teach the apparatus of claim 3.
Konrardy further discloses:
The apparatus of claim 3, wherein
the ADAS control device is configured to: while maintaining at least one autonomous driving function active, stop, based on the … (see paragraph 0217 for the teaching that when there are “identified occurrences of component [with processor] malfunctioning” the system can seek “mitigation”. This mitigation may include “making adjustments to the operation of one or more autonomous operation features associated with the malfunctioning component, placing restrictions or limits on the use of the one or more autonomous operation features, or engaging additional components to compensate for the malfunction.”).
Additional Art
The prior art made of record here, though not relied upon, is considered pertinent to the present disclosure.
Tsurumi et al. (US2020/0334926) teaches at least detecting hacking in ADAS and reducing vehicle functions accordingly.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL M. ROBERT whose telephone number is (571)270-5841. The examiner can normally be reached M-F 7:30-4:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry can be reached at 571-272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL M. ROBERT/Primary Examiner, Art Unit 3665