Prosecution Insights
Last updated: April 19, 2026
Application No. 17/506,441

VEHICLE

Final Rejection §103
Filed: Oct 20, 2021
Examiner: MILLER, LEAH NICOLE
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Hyundai Autoever Corp.
OA Round: 6 (Final)
Grant Probability: 56% (Moderate)
OA Rounds: 7-8
To Grant: 3y 4m
With Interview: 48%

Examiner Intelligence

Career Allow Rate: 56% (18 granted / 32 resolved), +4.3% vs TC avg
Interview Lift: -8.3% for resolved cases with interview (minimal negative lift)
Avg Prosecution (typical timeline): 3y 4m
Total Applications (career history): 64 across all art units, 32 currently pending
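The card arithmetic above is internally consistent; a minimal Python sketch (variable names are illustrative, and the TC average is only inferred from the stated +4.3% delta):

    # Illustrative arithmetic only; figures taken from the stats shown above.
    granted, resolved, pending = 18, 32, 32

    career_allow_rate = granted / resolved               # 0.5625 -> shown as 56%
    implied_tc_average = career_allow_rate - 0.043       # from "+4.3% vs TC avg" -> ~51.9%
    with_interview = 0.48                                # "With Interview: 48%"
    interview_lift = with_interview - career_allow_rate  # -0.0825 -> shown as about -8.3%
    total_applications = resolved + pending              # 64, matching "Total Applications"

    print(f"{career_allow_rate:.1%} {implied_tc_average:.1%} "
          f"{interview_lift:+.1%} {total_applications}")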

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§112: 27.3% (-12.7% vs TC avg)

TC avg = Tech Center average estimate. Based on career data from 32 resolved cases.
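Each row's delta pairs with its rate to imply the same Tech Center baseline; a short sketch (assuming delta = examiner rate minus TC average) makes that visible:

    # Recover the implied TC baseline from each statute's rate and stated delta.
    rates  = {"§101": 9.3, "§103": 38.3, "§102": 23.6, "§112": 27.3}   # percent
    deltas = {"§101": -30.7, "§103": -1.7, "§102": -16.4, "§112": -12.7}
    for statute, rate in rates.items():
        print(f"{statute}: implied TC avg = {rate - deltas[statute]:.1f}%")
    # Every statute implies a 40.0% baseline, suggesting a single TC-level
    # estimate rather than per-statute Tech Center data.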

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the amendments filed on 10 November 2025. Claims 1-4 and 6-8 are presently pending and are presented for examination. Claim 5 was previously cancelled.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in the Republic of Korea on 10 December 2020 (KR10-2020-0172549). Applicant cannot rely upon the certified copy of the foreign priority application to overcome potential future rejections made using references falling between the filing date and the foreign priority date, because a translation of said application has not been made of record in accordance with 37 CFR 1.55. When an English language translation of a non-English language foreign application is required, the translation must be that of the certified copy (of the foreign application as filed) submitted together with a statement that the translation of the certified copy is accurate. See MPEP §§ 215 and 216. No action is required by Applicant at this time.

Response to Arguments

Applicant's arguments, see Remarks filed 10 November 2025, have been fully considered but they are not persuasive.

Applicant argues, see Remarks, pg. 6-7, that the amended limitations to claim 1 that define an optimized range for camera and RADAR sensors of an autonomous vehicle sensing system constitute "more than a mere design choice" and "represent[] a deliberate optimization…" Examiner respectfully disagrees. First, it is noted that Applicant's argument does not establish a showing of criticality of the claimed combination of ranges. Applicant can rebut a prima facie case of obviousness by showing the criticality of the range. "The law is replete with cases in which the difference between the claimed invention and the prior art is some range or other variable within the claims… In such a situation, the applicant must show that the particular range is critical, generally by showing that the claimed range achieves unexpected results relative to the prior art range." In re Woodruff, 919 F.2d 1575, 16 USPQ2d 1934 (Fed. Cir. 1990). Additionally, it would have been obvious to one of ordinary skill in the art, at the time of the application, to choose a sensor range that best fits the needs of an autonomous vehicle, in order to yield the most usable, reliable results of autonomous vehicle control. US-20180244195-A1 ("Haight") teaches that various maximum distances can be used, and adjusted in real time, for camera, RADAR, and LiDAR sensors on an autonomous vehicle, based on need (see Haight, para. 0041). It has been held that where the general conditions of a claim are disclosed in the prior art, discovering the optimum or workable ranges involves only routine skill in the art; see In re Aller, 105 USPQ 233. Furthermore, US-20200064483-A1 ("Li") discloses sensor distances in the same ranges as the amended limitations (see Li, para. 0130 and 0219). For these reasons, Examiner is unpersuaded and maintains the corresponding rejections.

Applicant argues, see Remarks, pg. 7-8, that Haight "is silent as to distances up to which the main forward camera 112 b, the narrow forward camera 112 a, and the wide forward camera 112 c are configured to acquire surrounding information…is silent as to a distance up to which the rear view camera 112 f is configured to acquire surroundings information…fails to teach or suggest 'a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera…'" Examiner respectfully disagrees. Haight teaches that various maximum distances can be used, and adjusted in real time, for camera, RADAR, and LiDAR sensors on an autonomous vehicle, based on need (see Haight, para. 0041). For the same reasons as discussed above, in the previous paragraph, discovering optimum or workable ranges involves only routine skill in the art. For these reasons, Examiner is unpersuaded and maintains the corresponding rejections.

Applicant argues, see Remarks, pg. 8-9, that when designing a sensing system for an autonomous vehicle, one of ordinary skill in the art would not find a rear camera with a field of view (FOV) shorter than a main forward camera FOV an obvious variant of a rear camera with a FOV longer than a main forward camera FOV, and that the sensor ranges are "more than mere design choices, but rather represent[] a deliberate optimization…" Examiner respectfully disagrees. Haight teaches that various maximum distances can be used, and adjusted in real time, for camera, RADAR, and LiDAR sensors on an autonomous vehicle, based on need (see Haight, para. 0041). An example of how a need informs relative distance differences in various sensor FOVs is whether a vehicle is operating in a forward gear or a reverse gear. Just as a human driver turns their head rearward in a vehicle, uses a rear-view mirror, and/or uses a screen displaying an image of a rear view of a vehicle when operating a vehicle in a reverse gear, the relative importance (i.e., relative distance, range, or angle of the FOV) in the rear direction of the vehicle is higher than the relative importance in the forward direction. Therefore, it would be obvious to one of ordinary skill in the art, at the time of the application, that various relative FOV distance relationships amongst autonomous vehicle sensors can be used to optimize system performance. For the same reasons as discussed above, in the previous two paragraphs, discovering optimum or workable ranges involves only routine skill in the art. For these reasons, Examiner is unpersuaded and maintains the corresponding rejections.

The remaining arguments are essentially the same as those addressed above and/or below and are unpersuasive for at least the same reasons. Therefore, Examiner is unpersuaded and maintains the corresponding rejections.

Claim Objections

Claim 1 is objected to because of the following informalities:
- "the road condition information, the vehicle travelling information" should be "the road condition information, the vehicle traveling information";
- "radar and the LiDAR as the specific, and" should be "radar and the LiDAR as the specific area, and"; and
- "and the rear observation camera is configured" should be "and the rear camera is configured".
Appropriate correction is required.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1 and 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over US-10,479,376, hereinafter "Meyhofer" (previously of record), in view of US-2008/0252482-A1, hereinafter "Stopczynski" (previously of record), US-8676488-B2 (citations from US-20120065841-A1), hereinafter "Nagata" (previously of record), JP-2005100336-A, hereinafter "Inoue" (previously of record), and US-20180244195-A1, hereinafter "Haight" (previously of record).

Regarding claim 1:

Meyhofer discloses a vehicle performing autonomous driving (Meyhofer, FIG. 1: Self-Driving Vehicle 10), the vehicle comprising:

a communication part wirelessly connected to an external server and external devices, and configured to receive signals from the external server and the external devices (Meyhofer, FIG. 4: Communication Interface 450, Network 480; col. 14, line 65 – col. 15, line 7: "In an example of FIG. 4, the computer system 400 can include a communication interface 450 [i.e., communication part] that can enable communications over a network 480 [i.e., configured to receive signals]. In one implementation, the communication interface 450 can also provide a data bus or other local links to electro-mechanical interfaces of the vehicle, such as wireless [i.e., wirelessly connected] or wired links to and from control mechanisms 420 (e.g., via a control interface 422) [i.e., external devices], sensor systems 430 [i.e., external devices], and can further provide a network link to a backend transport management system (implemented on one or more datacenters) [i.e., external server] over one or more networks 480.");

a driving part including an engine configured to drive the vehicle and acquire information about an element that drives the vehicle (Meyhofer, FIG. 4: Control Mechanisms 420, with Control Interfaces 422, Acceleration 422, Braking 424, Steering 426 and Signaling Systems 428; col. 5, lines 22-29: "For example, the control system 100 can analyze the sensor data 115 to generate low level commands 158 executable by the acceleration system 172 [i.e., an engine; Note: It would be obvious to one of ordinary skill in the art, at the time of the application, that an acceleration system could include an engine, a battery pack, a fuel cell stack or an equivalent propulsion system, or a combination of two or more of those propulsion system components.], steering system 157, and braking system 176 of the SDV 10. Execution of the commands 158 by the control mechanisms 170 can result in throttle inputs, braking inputs, and steering inputs that collectively cause the SDV 10 to operate along sequential road segments to a particular destination."; col. 15, lines 38-49: "Execution of the control instructions 462 can cause the processing resources 410 to generate control commands 415 in order to autonomously operate the SDV's acceleration 422, braking 424, steering 426, and signaling systems 428 (collectively, the control mechanisms 420). Thus, in executing the control instructions 462, the processing resources 410 can receive sensor data 432 from the sensor systems 430, dynamically compare the sensor data 432 to a current localization map 464, and generate control commands 415 for operative control over the acceleration, steering, and braking of the SDV.");

an information acquisition part including a camera, a radar and a LiDAR (Meyhofer, FIG. 1: Sensors 102, with Camera 101, LiDAR 103 and RADAR 105); and

a control part (Meyhofer, FIG. 1: SDV Control System 100) configured to:

determine road condition information of a road on which the vehicle travels based on a signal acquired from the communication part (Meyhofer, FIG. 2 & 3: Sensor Selection Component 220 receives Contextual Information 213 from Network Service 260 and col. 13, lines 55-67: "In addition to detecting conditions from the sensor data, the condition detection logic 230 can receive contextual information (314) from a network service 260. A region-specific network service 260 can record location-based contextual information about a region, and a combination of sensor data and position information of the SDV can be correlated to accurately determine environment conditions. By way of example, contextual information can include labels or descriptors, or numeric equivalents or correlations of parameters, which indicate one or more of the following: road construction, traffic, emergency situations, local weather, time and date, accumulated precipitation on road surfaces, etc.");

determine vehicle traveling information of the vehicle based on information acquired from the driving part (Meyhofer, FIG. 3: "Detect conditions relating to the operation of a self-driving vehicle 310" has an input of "Vehicle Conditions 301;" and col. 13, lines 47-50: "Some examples of vehicle conditions (301) are the speed of the vehicle, acceleration, direction of movement (i.e., forward or reverse), traction, sensor status, and vehicle status (i.e., parked or moving).");

receive a recognition result of the information acquisition part (Meyhofer, FIG. 2: Sensor Selection Component 220 with Condition Detection 230 that receives Sensor Data 211 and col. 10, lines 44-56: "According to one aspect, vehicle sensor interfaces obtain raw sensor data from the various sensors, and sensor analysis components of the vehicle control system implement functionality such as object detection, image recognition, image processing, and other sensor processes in order to detect hazards, objects, or other notable events in the roadway. The sensor analysis components can be implemented by multiple different processes, each of which analyzes different sensor profile data sets. In this aspect, the condition detection logic 230 receives the analyzed sensor data 211 from the sensor analysis components. Therefore, the condition detection logic 230 can detect conditions based on not only raw sensor data 211, but also analyzed sensor data 211."); and

determine a required performance based on the road condition information, the vehicle travelling information, and the recognition result (Meyhofer, col. 6, lines 32-45: "The sensor selection component 120 represents logic that prioritizes the processing or use of sensor data 115 by type (e.g., by sensor device) based on a predetermined condition or set of conditions. In some examples, the predetermined condition or set of conditions may relate to the operation of the SDV, and include for example, (i) telemetry information of the vehicle, including a velocity or acceleration of the vehicle; (ii) environment conditions in the region above the roadway, such as whether active precipitation (e.g., rainfall or snow fall) or fog is present; (iii) environment conditions that affect the roadway surface, including the presence of precipitation (e.g., soft rain, hard rain, light snowfall, active snowfall, ice); and/or (iv) the type of roadway in use by the vehicle (e.g., highway, main thoroughfare, residential road)."; col. 7, lines 2-6: "Although some of the sensors 102 may offer superior performance in good weather conditions or at slower speeds, it is important for the SDV 10 to recognize adverse conditions and analyze sensor data 115 with those conditions and the performance characteristics of the sensors 102 in mind."; and col. 8, lines 9-19: "The perception output 129 can provide input into the motion planning component 130. The motion planning component 130 includes logic to detect dynamic objects of the vehicle's environment from the perceptions. When dynamic objects are detected, the motion planning component 130 may utilize the location output 121 of the localization component 122 to determine a response trajectory 125 of the vehicle for steering the vehicle outside of the current sensor horizon. The response trajectory 125 can be used by the vehicle control interface 128 in advancing the vehicle forward safely.").

Meyhofer does not appear to explicitly disclose the following: change an object recognition performance of the information acquisition part based on the required performance at a specific area or areas; and in a situation in which the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle, improve a classification characteristic of a part corresponding to the one area in an image acquired by the camera to a predetermined range, in a situation in which a brake pedal is operated, the control part determines a front area of the radar and LiDAR as the specific area, in a situation in which an accelerator pedal is operated, the control part determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area, and in a situation in which a steering wheel or a steering wheel pedal of the vehicle is operated, the control part determines side areas of the radar and the LiDAR as the specific areas, wherein the camera includes a narrow-angle front camera, a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera, a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera, and a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera, and wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera, and wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle.
However, in the same field of endeavor, Stopczynski teaches: change an object recognition performance of the information acquisition part based on the required performance at a specific area or areas (Stopczynski, FIG. 2; para. 0004: "The system comprises a Blind Spot Detection System equipped with radar sensors having multiple beam selection control and programmable range capability to allow said radar sensors to define a specific region of interest for detection of vehicle within a blind spot area; and a lane departure warning system having a vision sensor for determining host vehicle offset to road lane markings, edge of road, guard rails or other obstacles.").

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to modify the invention disclosed by Meyhofer with the concept of changing an object recognition performance of information acquisition components based on a performance requirement of a specific area, taught by Stopczynski, in order to improve the accuracy of recognizing any objects detected by the information acquisition components in specific areas and limit false detection of objects in specific areas (Stopczynski, para. 0002: "Blind Spot Detection Systems with programmable range capability have set a fixed programmable maximum limit to avoid false detection of objects in the lane or road beyond the adjacent lanes, such as guardrails, vehicles in lanes beyond the adjacent lane to the host vehicle, etc.").

Meyhofer and Stopczynski do not appear to explicitly teach the following: in a situation in which the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle, improve a classification characteristic of a part corresponding to the one area in an image acquired by the camera to a predetermined range, in a situation in which a brake pedal is operated, the control part determines a front area of the radar and LiDAR as the specific area, in a situation in which an accelerator pedal is operated, the control part determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area, and in a situation in which a steering wheel or a steering wheel pedal of the vehicle is operated, the control part determines side areas of the radar and the LiDAR as the specific areas, wherein the camera includes a narrow-angle front camera, a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera, a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera, and a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera, and wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera, and wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle.

However, in the same field of endeavor, Nagata teaches: in a situation in which the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle, improve a classification characteristic of a part corresponding to the one area in an image acquired by the camera to a predetermined range (Nagata, Note: Examiner is interpreting "improving a classification characteristic" as changing the priority or region of interest of the camera.; para. 0037: "Instead of a millimeter-wave radar, an image sensor, such as a camera [i.e., improving a classification characteristic], a laser radar, or the like may be applied."; para. 0009: "In this case, the control unit may set higher priority on a monitoring sensor which monitors an area near the traveling direction of the host vehicle [i.e., the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle] than a monitoring sensor which monitors an area apart from the traveling direction of the host vehicle on the basis of one of the traveling state of the host vehicle and the state of the driver of the host vehicle detected by the state detection unit."; para. 0010: "With this configuration, the control unit sets higher priority on a monitoring sensor [i.e., improve a classification characteristic] which monitors an important area near the traveling direction of the host vehicle [i.e., a part corresponding to the one area in an image acquired by the camera to a predetermined range] than a monitoring sensor which monitors a less important area apart from the traveling direction of the host vehicle on the basis of one of the traveling state of the host vehicle and the state of the driver of the host vehicle detected by the state detection unit. Therefore, it is possible to appropriately set priority in accordance with the importance of the monitoring sensors."),

in a situation in which a brake pedal is operated, the control part determines a front area of the radar and LiDAR as the specific area (Note: "a front area of the radar and LiDAR" is being interpreted as either the front, rear or sides of the vehicle upon which those sensors are mounted, depending on the direction the mounted sensors are pointing. For example, a rear-of-vehicle facing sensor has a front area of the sensor that corresponds with the rear area of the vehicle.) (Nagata, FIG. 1: rear area millimeter-wave radar 14, right side millimeter-wave radar 15, and left side millimeter-wave radar 16 (see annotated FIG. 1, below); FIG. 3: steps S107 and S109; para. 0045: "When the determination result on whether the traveling direction is front or rear is deceleration [i.e., brake pedal is operated], the obstacle detection method determination ECU 41 increments +1 in the priority flags of the rear area millimeter-wave radar 14 [i.e., rear area of the vehicle, because that is the direction radar 14 is mounted on the vehicle, but it also corresponds with the front area of the radar], the right side millimeter-wave radar 15, and the left side millimeter-wave radar 16 (S107, S109)."), …

[Nagata, annotated FIG. 1]

…in a situation in which a steering wheel or a steering wheel pedal of the vehicle is operated, the control part determines side areas of the radar and the LiDAR as the specific areas (Nagata, FIG. 1; FIG. 3: steps S103, S105, and S106; para. 0044: "When the determination result on whether the traveling direction is left or right is the right direction [i.e., steering wheel…is operated], the obstacle detection method determination ECU 41 increments +1 in the priority flags of the front right side millimeter-wave radar 12, the rear right side millimeter-wave radar 15, and the right dead angle millimeter-wave radar 17 (S103, S105). When the determination result on whether the traveling direction is left or right is the left direction [i.e., steering wheel…is operated], the obstacle detection method determination ECU 41 increments +1 in the priority flags of the front left side millimeter-wave radar 13, the rear left side millimeter-wave radar 16, and the left dead angle millimeter-wave radar 18 (S103, S106)."), …

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to further modify the invention disclosed by Meyhofer, as modified by Stopczynski, with driver input considerations, taught by Nagata, for the benefit of improving object recognition, and thus vehicle safety (Nagata, para. 0002: "In order to improve the safety or convenience of automobiles… For this reason, a technique which enables the recognition of information relating to obstacles, such as other vehicles which are traveling around the host vehicle, with satisfactory precision has become important.").

Meyhofer, Stopczynski, and Nagata do not appear to explicitly teach the following: …in a situation in which an accelerator pedal is operated, the control part determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area, and… …wherein the camera includes a narrow-angle front camera, a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera, a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera, and a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera, and wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera, and wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle.

However, in the same field of endeavor, Inoue teaches: …in a situation in which an accelerator pedal is operated, the control part determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area (translated document of Inoue, para. 0039: "The flowchart in FIG. 5 is for the case where the acceleration / deceleration intention detection means M4 detects the driver's intention to accelerate [i.e., an accelerator pedal is operated]... In step S23, the detection area setting means M2 [i.e., control part] sets a first detection area [i.e., the front area of the radar and the LiDAR], that is, a detection area corresponding to a normal lane width along the estimated travel path."; para. 0040: "When the acceleration / deceleration intention detection means M4 detects the driver's intention to accelerate [i.e., an accelerator pedal is operated] in the next step S28, the detection area setting means M2 [i.e., control part] sets a second detection area narrower than the first detection area [i.e., determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area] in step S29."; Note: One of ordinary skill in the art, at the time of the application, would know that narrowing the detection area or beam of a radar is equivalent to elongating the range or distance of the detection area or beam ("Pub. 1310: Radar Navigation Manual," (1985), pg. 19: "For a given amount of transmitted power, the main lobe of the radar beam extends to a greater distance at a given power level with greater concentration of power in narrower beam widths. To increase maximum detection range capabilities, the energy is concentrated into as narrow a beam as is feasible."); Additionally, one of ordinary skill in the art, at the time of the application, would know that radar, cameras and lidar are obvious variants and their fields of view would similarly be controlled.), and …

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to modify the invention disclosed by Meyhofer, as modified by Stopczynski and Nagata, with the concept of increasing the detectable range or distance of a sensor, installed on a vehicle, when the vehicle accelerates, taught by Inoue, in order to detect an object as early as possible, without exceeding computational resource capability, to provide enough reaction time, or braking distance, to react to the object, which increases driving safety (translated document of Inoue, para. 0005: "The present invention has been made in view of the above-described circumstances, and an object thereof is to appropriately set the detection area of the object detection means so that an obstacle can be reliably detected.").
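For orientation, the pedal-to-area mapping recited in claim 1 and argued across Nagata (brake, steering) and Inoue (accelerator) can be summarized in a minimal sketch; the enum, function, and area names are hypothetical and appear in neither the claims nor the cited references:

    # Hypothetical sketch of the claimed mapping from driver inputs to the
    # "specific area(s)" whose recognition performance is to be boosted.
    from enum import Enum, auto

    class SpecificArea(Enum):
        FRONT = auto()          # brake pedal -> front area of the radar/LiDAR
        DISTANT_FRONT = auto()  # accelerator -> area farther than the front area
        SIDES = auto()          # steering input -> side areas of the radar/LiDAR

    def select_specific_areas(brake: bool, accelerator: bool, steering: bool) -> list:
        areas = []
        if brake:
            areas.append(SpecificArea.FRONT)
        if accelerator:
            areas.append(SpecificArea.DISTANT_FRONT)
        if steering:
            areas.append(SpecificArea.SIDES)
        return areas

    print(select_specific_areas(brake=False, accelerator=True, steering=True))
    # [<SpecificArea.DISTANT_FRONT: 2>, <SpecificArea.SIDES: 3>]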
Meyhofer, Stopczynski, Nagata, and Inoue do not appear to explicitly teach the following: …wherein the camera includes a narrow-angle front camera, a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera, a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera, and a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera, and wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera, and wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle.

However, in the same field of endeavor, Haight teaches: …wherein the camera includes a narrow-angle front camera (Haight, para. 0041: "The vehicle 100 is equipped with…a narrow forward camera 112a [i.e., the camera includes a narrow-angle front camera]…."), a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera (Haight, FIG. 2: narrow forward camera 112a, main forward camera 112b [i.e., a main front camera configured to acquire surrounding information at a shorter distance], see annotated figure, below; para. 0043: "The main forward camera 112 b provides a field of view wider than the narrow forward camera 112 a [i.e., over a wider angle range than the narrow-angle front camera]…"),

[Haight, annotated FIG. 2]

a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera (Haight, FIG. 2: main forward camera 112b, wide forward camera 112c [i.e., a wide-angle front camera configured to acquire surrounding information at a shorter distance], see annotated figure, below; para. 0043: "The wide forward camera 112c [i.e., wide-angle front camera] provides a field of view wider than the main forward camera 112b [i.e., a wide-angle front camera configured to acquire surrounding information…over a wider angle range than the main front camera]."), and

[Haight, annotated FIG. 2]

a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera (Haight, FIG. 2: rear view camera 112f, wide forward camera 112c, main forward camera 112b [i.e., a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera], see annotated figure, below; para. 0041: "Note also that various maximum distances listed in FIG. 2 are illustrative and can be adjusted higher or lower, based on need, such as via using other devices, device types, or adjusting range, whether manually or automatically, including in real-time [i.e., a rear camera configured to acquire rear surrounding information…and at a longer distance than the wide-angle front camera]."; Note: it would be obvious to one of ordinary skill in the art, at the time of the application, that a rear camera with a field of view shorter than a main forward camera field of view is an obvious variant of a rear camera with a field of view longer than a main forward camera field of view.), and

[Haight, annotated FIG. 2]

wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera (Haight, FIG. 2: narrow forward camera 112a, radar 110, see annotated figure, below), and

[Haight, annotated FIG. 2]

wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle (Haight, para. 0041: "Note also that various maximum distances listed in FIG. 2 are illustrative and can be adjusted higher or lower, based on need, such as via using other devices, device types, or adjusting range, whether manually or automatically, including in real-time."; Note: Haight teaches that various maximum distances of the cameras and RADAR can be used based on need. It would have been obvious to one of ordinary skill in the art, at the time of the application, to choose a sensor range that best fits the needs of an autonomous vehicle, in order to yield the most usable, reliable results for autonomous vehicle control. It has been held that where the general conditions of a claim are disclosed in the prior art, discovering the optimum or workable ranges involves only routine skill in the art; see In re Aller, 105 USPQ 233.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to modify the invention disclosed by Meyhofer, as modified by Stopczynski, Nagata, and Inoue, with the concept of implementing various perception sensors (i.e., cameras, RADAR, LiDAR, etc.) around a vehicle with fields of view with different ranges and distances, taught by Haight, in order to be able to detect objects in a full 360 degree zone around the vehicle and leverage the relative strengths of each type of sensor (Haight, para. 0041: "Note that this configuration provides a 360 degree monitoring zone around the vehicle 100."; para. 0037: "The radar 110 includes a transmitter producing an electromagnetic wave such as in a radio or microwave spectrum, a transmitting antenna, a receiving antenna, a receiver, and a processor (which may be the same as the processor 104) to determine a property of a target. The same antenna may be used for transmitting and receiving as is common in the art. The transmitter antenna radiates a radio wave (pulsed or continuous) from the transmitter to reflect off the target and return to the receiver via the receiving antenna, giving information to the processor about the target's location, speed, angle, and other characteristics."; para. 0038: "The camera 112 may capture images to enable the processor 104 to perform various image processing techniques, such as compression, image and video analysis, telemetry, or others. For example, image and video analysis can comprise object recognition, object tracking, any known computer vision or machine vision analytics, or other analysis.").

Regarding claim 6:

Meyhofer, Stopczynski, Nagata, Inoue, and Haight teach the vehicle of claim 1, and Meyhofer further discloses the following: wherein the control part is configured to, among pieces of surrounding information about a specific area acquired by a plurality of modules forming the information acquisition part, in response to an existence of at least one module having acquired different surrounding information about the specific area, perform control to cause the information acquisition part to acquire the surrounding information by assigning a high weight to the at least one module that has acquired the different surrounding information (Meyhofer, col. 6, line 61 - col. 7, line 21: "Examples recognize that certain operating conditions present significant challenges to self-driving vehicles. In particular, weather such as fog, mist, rain, or snow can impair the ability of some of the sensors 102 to collect sensor data 115 with sufficient accuracy to reliably navigate the SDV 10 through an environment. In addition, as the SDV 10 increases in speed while driving, there is less time to detect and avoid potential hazards safely or comfortably. Although some of the sensors 102 may offer superior performance in good weather conditions or at slower speeds, it is important for the SDV 10 to recognize adverse conditions and analyze sensor data 115 with those conditions and the performance characteristics of the sensors 102 in mind. Therefore, a sensor selection component 120 detects conditions which have a bearing on the performance characteristics of the sensors 102 and other conditions that may influence the importance of sensor data 115 from one sensor over another. In addition, the sensor selection component 120 prioritizes, through either a weighting or selection process, each of the sensors 102 using a set of sensor priority rules that are based on expected performance characteristics of each of the sensors 102 in the detected conditions. Components of the SDV control system 100, such as the localization component 122, perception component 124, prediction engine 126, and motion planning logic 130, can use the resulting sensor priority 127 to weight or select sensor data 115 when analyzing the current sensor state to perform vehicle operations.").

Regarding claim 7:

Meyhofer, Stopczynski, Nagata, Inoue, and Haight teach the vehicle of claim 1, and Meyhofer further discloses the following: wherein the control part is configured to, based on a performance of at least one module that forms the information acquisition part (Meyhofer, col. 2, lines 9-24: "A sensor selection component detects conditions which have a bearing on the performance characteristics of the sensors and other conditions that may influence the importance of sensor data from one sensor over another. In addition, the sensor selection component prioritizes, through either a weighting or selection process, each of the sensors using a set of sensor priority rules that are based on expected performance characteristics of each of the sensors in the detected conditions. These performance characteristics can be determined from a combination of technical specifications for the sensors and testing performed with each of the sensors in the relevant conditions. Components of the SDV control system can use the resulting sensor priority to weight or select sensor data when analyzing the current sensor state to perform vehicle operations."), determine the required performance for changing a recognition weight of the at least one module (Meyhofer, col. 2, lines 9-24, quoted above); and change the object recognition performance of the information acquisition part based on the required performance (Meyhofer, col. 2, lines 9-24, quoted above).

Regarding claim 8:

Meyhofer, Stopczynski, Nagata, Inoue, and Haight teach the vehicle of claim 1, and Meyhofer further discloses the following: wherein the control part is configured to, based on a type of an object included in the surrounding image of the vehicle acquired by the information acquisition part, determine the required performance for changing a weight of the surrounding image of the vehicle corresponding to the object (Meyhofer, col. 3, lines 8-17: "In some aspects, a number of sensor priority rules are applied to the detected conditions to select the set of sensors. The sensor priority rules include weights to apply to the sensor data from the number of sensors, and the control system prioritizes the sensor data based on the weights. In addition, the sensor priority rules and/or weights can be based on performance characteristics of each of the sensors in the detected conditions, and aspects relating to the operation of the SDV include detecting objects in an environment around the SDV." and col. 10, lines 44-56: "According to one aspect, vehicle sensor interfaces obtain raw sensor data from the various sensors, and sensor analysis components of the vehicle control system implement functionality such as object detection, image recognition, image processing, and other sensor processes in order to detect hazards, objects, or other notable events in the roadway. The sensor analysis components can be implemented by multiple different processes, each of which analyzes different sensor profile data sets. In this aspect, the condition detection logic 230 receives the analyzed sensor data 211 from the sensor analysis components. Therefore, the condition detection logic 230 can detect conditions based on not only raw sensor data 211, but also analyzed sensor data 211.").
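The Meyhofer passages quoted for claims 6-8 describe weighting or selecting sensor data based on detected conditions; a toy sketch of that idea follows (all rule values and condition names are invented for illustration, not taken from Meyhofer):

    # Invented illustration of condition-based sensor prioritization; the
    # weights and conditions are placeholders, not Meyhofer's actual rules.
    def sensor_weights(fog: bool, high_speed: bool) -> dict:
        weights = {"camera": 1.0, "radar": 1.0, "lidar": 1.0}
        if fog:               # cameras degrade in fog, so favor radar
            weights["radar"] += 0.5
            weights["camera"] -= 0.5
        if high_speed:        # favor longer-range sensing at speed
            weights["radar"] += 0.25
            weights["lidar"] -= 0.25
        return weights

    print(sensor_weights(fog=True, high_speed=False))
    # {'camera': 0.5, 'radar': 1.5, 'lidar': 1.0}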
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Meyhofer, in view of Stopczynski, Nagata, Inoue, Haight, and US 2020/0355820, hereinafter "Zeng" (previously of record).

Regarding claim 2:

Meyhofer, Stopczynski, Nagata, Inoue, and Haight teach the vehicle of claim 1; however, they do not appear to explicitly teach the following: wherein the control part, when the required performance is related to improving a recognition accuracy of one area of a surrounding area of the vehicle, changes a recognition area of the radar to a vicinity of the one area.

However, in the same field of endeavor, Zeng does teach wherein the control part, when the required performance is related to improving a recognition accuracy of one area of a surrounding area of the vehicle, changes a recognition area of the radar to a vicinity of the one area (Zeng, para. 0026: "Because the LiDAR 26 has a steerable beam, a single laser source can be rapidly pointed in different directions, to bounce light off different surfaces within the scene and thus capture what is called a point cloud image of the scene. While it is possible to steer the laser beam in a raster pattern, resembling the pattern captured by camera 24, the LiDAR 26 is by no means restricted to such a predefined steering pattern. Rather, the beam steering processor within the LiDAR can steer the beam in virtually any user-defined or software-defined direction."; para. 0029: "While LiDAR sensors been illustrated as the verifying sensor in the disclosed implementation, RADAR sensors can also be used."; para. 0033: "One objective of the disclosed attention mechanism is to inform the sensors within the Sensor layer 34 where attention should be focused, and conversely, where attention can be suppressed or withheld."; and para. 0108: "The processor uses this pose and extent-for-detection assessment to populate a tracking environment model hypothesis, at 116, of where the vehicle and all moving objects within the extent-for-detection range will be as time unfolds. The processor stores this information in an environment model data store 118, which the processor uses to generate the ROI bitmap mask (attention bitmap) at 120. This attention bitmap provides the seed used to inform the processor at step 102 where to command the sensors to turn their attention.").

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to modify the invention disclosed by Meyhofer, as modified by Stopczynski, Nagata, Inoue, and Haight, with the control capabilities taught by Zeng, for the benefit of improving object recognition accuracy (Zeng, para. 0004: "The systems and methods disclosed here provide a selective attention mechanism to steer the perception sensor (e.g., LiDAR laser beam, or in some instances the camera region of interest) to regions within the scene where deeper visual acuity is warranted.").

Regarding claim 3:

Meyhofer, Stopczynski, Nagata, Inoue, and Haight teach the vehicle of claim 1; however, they do not appear to explicitly teach the following: wherein the control part, when the required performance is related to acquiring information about a moving object around the vehicle, changes a recognition area of the radar to a vicinity of the moving object.

However, in the same field of endeavor, Zeng does teach wherein the control part, when the required performance is related to acquiring information about a moving object around the vehicle, changes a recognition area of the radar to a vicinity of the moving object (Zeng, para. 0029: "While LiDAR sensors been illustrated as the verifying sensor in the disclosed implementation, RADAR sensors can also be used."; para. 0033: "One objective of the disclosed attention mechanism is to inform the sensors within the Sensor layer 34 where attention should be focused, and conversely, where attention can be suppressed or withheld."; and para. 0108: "The processor uses this pose and extent-for-detection assessment to populate a tracking environment model hypothesis, at 116, of where the vehicle and all moving objects within the extent-for-detection range will be as time unfolds. The processor stores this information in an environment model data store 118, which the processor uses to generate the ROI bitmap mask (attention bitmap) at 120. This attention bitmap provides the seed used to inform the processor at step 102 where to command the sensors to turn their attention.").

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to modify the invention disclosed by Meyhofer, as modified by Stopczynski, Nagata, Inoue, and Haight, with the control capabilities taught by Zeng, for the benefit of improving object recognition and collision avoidance (Zeng, para. 0004: "The systems and methods disclosed here provide a selective attention mechanism to steer the perception sensor (e.g., LiDAR laser beam, or in some instances the camera region of interest) to regions within the scene where deeper visual acuity is warranted.").

Regarding claim 4:

Meyhofer, Stopczynski, Nagata, Inoue, and Haight teach the vehicle of claim 1; however, they do not appear to explicitly teach the following: wherein the control part, when the required performance is related to improving a resolution to acquire information about one area of a surrounding area of the vehicle, changes a recognition area of the LiDAR to a center of the one area.
However, in the same field of endeavor, Zeng does teach wherein the control part, when the required performance is related to improving a resolution to acquire information about one area of a surrounding area of the vehicle, changes a recognition area of the LiDAR to a center of the one area (Zeng, FIG. 1 and para. 0021: "The LiDAR 26 produces a narrow laser beam 30 that is steerable by electronics within the perception sensor package.", para. 0026: "While it is possible to steer the laser beam in a raster pattern, resembling the pattern captured by camera 24, the LiDAR 26 is by no means restricted to such a predefined steering pattern. Rather, the beam steering processor within the LiDAR can steer the beam in virtually any user-defined or software-defined direction.", para. 0033: "One objective of the disclosed attention mechanism is to inform the sensors within the Sensor layer 34 where attention should be focused, and conversely, where attention can be suppressed or withheld."). Since the sensors of Zeng are capable of being steered in any user- or software-defined direction, the direction of the LiDAR sensor can also be directed towards the center, or any other position, of the recognition area requiring improved resolution. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to modify the invention disclosed by Meyhofer, as modified by Stopczynski, Nagata, Inoue, and Haight, with the control capabilities of Zeng, for the benefit of improving object recognition. (Zeng, para. 0004: "The systems and methods disclosed here provide a selective attention mechanism to steer the perception sensor (e.g., LiDAR laser beam, or in some instances the camera region of interest) to regions within the scene where deeper visual acuity is warranted.") Alternatively, claim(s) 1 is/are rejected under 35 U.S.C. 103 as being unpatentable over US-10,479,376, hereinafter “Meyhofer” (previously of record), in view of US-2008/0252482-A1, hereinafter “Stopczynski” (previously of record), US-8676488-B2 (citations from US-20120065841-A1), hereinafter “Nagata” (previously of record), JP-2005100336-A, hereinafter “Inoue” (previously of record), US-20180244195-A1, hereinafter “Haight” (previously of record), and US-20200064483-A1, hereinafter “Li” (newly of record). Regarding claim 1: Meyhofer discloses a vehicle performing autonomous driving (Meyhofer, FIG. 1: Self-Driving Vehicle 10), the vehicle comprising: a communication part wirelessly connected to an external server and external devices, and configured to receive signals from the external server and the external devices (Meyhofer, FIG. 4: Communication Interface 450, Network 480; col. 14, line 65 – col. 15, line 7: “In an example of FIG. 4, the computer system 400 can include a communication interface 450 [i.e., communication part] that can enable communications over a network 480 [i.e., configured to receive signals]. 
"In an example of FIG. 4, the computer system 400 can include a communication interface 450 [i.e., communication part] that can enable communications over a network 480 [i.e., configured to receive signals]. In one implementation, the communication interface 450 can also provide a data bus or other local links to electro-mechanical interfaces of the vehicle, such as wireless [i.e., wirelessly connected] or wired links to and from control mechanisms 420 (e.g., via a control interface 422) [i.e., external devices], sensor systems 430 [i.e., external devices], and can further provide a network link to a backend transport management system (implemented on one or more datacenters) [i.e., external server] over one or more networks 480.");

a driving part including an engine configured to drive the vehicle and acquire information about an element that drives the vehicle (Meyhofer, FIG. 4: Control Mechanisms 420, with Control Interfaces 422, Acceleration 422, Braking 424, Steering 426 and Signaling Systems 428; col. 5, lines 22-29: "For example, the control system 100 can analyze the sensor data 115 to generate low level commands 158 executable by the acceleration system 172 [i.e., an engine; Note: It would be obvious to one of ordinary skill in the art, at the time of the application, that an acceleration system could include an engine, a battery pack, a fuel cell stack or an equivalent propulsion system, or a combination of two or more of those propulsion system components.], steering system 157, and braking system 176 of the SDV 10. Execution of the commands 158 by the control mechanisms 170 can result in throttle inputs, braking inputs, and steering inputs that collectively cause the SDV 10 to operate along sequential road segments to a particular destination."; col. 15, lines 38-49: "Execution of the control instructions 462 can cause the processing resources 410 to generate control commands 415 in order to autonomously operate the SDV's acceleration 422, braking 424, steering 426, and signaling systems 428 (collectively, the control mechanisms 420). Thus, in executing the control instructions 462, the processing resources 410 can receive sensor data 432 from the sensor systems 430, dynamically compare the sensor data 432 to a current localization map 464, and generate control commands 415 for operative control over the acceleration, steering, and braking of the SDV.");

an information acquisition part including a camera, a radar and a LiDAR (Meyhofer, FIG. 1: Sensors 102, with Camera 101, LiDAR 103 and RADAR 105); and

a control part (Meyhofer, FIG. 1: SDV Control System 100) configured to: determine road condition information of a road on which the vehicle travels based on a signal acquired from the communication part (Meyhofer, FIG. 2 & 3: Sensor Selection Component 220 receives Contextual Information 213 from Network Service 260 and col. 13, lines 55-67: "In addition to detecting conditions from the sensor data, the condition detection logic 230 can receive contextual information (314) from a network service 260. A region-specific network service 260 can record location-based contextual information about a region, and a combination of sensor data and position information of the SDV can be correlated to accurately determine environment conditions. By way of example, contextual information can include labels or descriptors, or numeric equivalents or correlations of parameters, which indicate one or more of the following: road construction, traffic, emergency situations, local weather, time and date, accumulated precipitation on road surfaces, etc.");

determine vehicle traveling information of the vehicle based on information acquired from the driving part
3: "Detect conditions relating to the operation of a self-driving vehicle 310" has an input of "Vehicle Conditions 301;" and col. 13, lines 47-50: "Some examples of vehicle conditions (301) are the speed of the vehicle, acceleration, direction of movement (i.e., forward or reverse), traction, sensor status, and vehicle status (i.e., parked or moving).); receive a recognition result of the information acquisition part (Meyhofer, FIG. 2: Sensor Selection Component 220 with Condition Detection 230 that receives Sensor Data 211 and col. 10, lines 44-56: "According to one aspect, vehicle sensor interfaces obtain raw sensor data from the various sensors, and sensor analysis components of the vehicle control system implement functionality such as object detection, image recognition, image processing, and other sensor processes in order to detect hazards, objects, or other notable events in the roadway. The sensor analysis components can be implemented by multiple different processes, each of which analyzes different sensor profile data sets. In this aspect, the condition detection logic 230 receives the analyzed sensor data 211 from the sensor analysis components. Therefore, the condition detection logic 230 can detect conditions based on not only raw sensor data 211, but also analyzed sensor data 211."); determine a required performance based on the road condition information, the vehicle travelling information, and the recognition result (Meyhofer, col. 6, lines 32-45: "The sensor selection component 120 represents logic that prioritizes the processing or use of sensor data 115 by type (e.g., by sensor device) based on a predetermined condition or set of conditions. In some examples, the predetermined condition or set of conditions may relate to the operation of the SDV, and include for example, (i) telemetry information of the vehicle, including a velocity or acceleration of the vehicle; (ii) environment conditions in the region above the roadway, such as whether active precipitation (e.g., rainfall or snow fall) or fog is present; (iii) environment conditions that affect the roadway surface, including the presence of precipitation (e.g., soft rain, hard rain, light snowfall, active snowfall, ice); and/or (iv) the type of roadway in use by the vehicle (e.g., highway, main thoroughfare, residential road).", col. 7, lines 2-6: "Although some of the sensors 102 may offer superior performance in good weather conditions or at slower speeds, it is important for the SDV 10 to recognize adverse conditions and analyze sensor data 115 with those conditions and the performance characteristics of the sensors 102 in mind." and col. 8, lines 9-19: “The perception output 129 can provide input into the motion planning component 130. The motion planning component 130 includes logic to detect dynamic objects of the vehicle's environment from the perceptions. When dynamic objects are detected, the motion planning component 130 may utilize the location output 121 of the localization component 122 to determine a response trajectory 125 of the vehicle for steering the vehicle outside of the current sensor horizon. 
Meyhofer does not appear to explicitly disclose the following: change an object recognition performance of the information acquisition part based on the required performance at a specific area or areas; and in a situation in which the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle, improve a classification characteristic of a part corresponding to the one area in an image acquired by the camera to a predetermined range, in a situation in which a brake pedal is operated, the control part determines a front area of the radar and LiDAR as the specific area, in a situation in which an accelerator pedal is operated, the control part determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area, and in a situation in which a steering wheel or a steering wheel pedal of the vehicle is operated, the control part determines side areas of the radar and the LiDAR as the specific areas, wherein the camera includes a narrow-angle front camera, a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera, a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera, and a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera, and wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera, and wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle.

However, in the same field of endeavor, Stopczynski teaches: change an object recognition performance of the information acquisition part based on the required performance at a specific area or areas
0004: "The system comprises a Blind Spot Detection System equipped with radar sensors having multiple beam selection control and programmable range capability to allow said radar sensors to define a specific region of interest for detection of vehicle within a blind spot area; and a lane departure warning system having a vision sensor for determining host vehicle offset to road lane markings, edge of road, guard rails or other obstacles."); and Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable likelihood of success to modify the invention disclosed by Meyhofer, with the concept of changing an object recognition performance of information acquisition components based on a performance requirement of a specific area, taught by Stopczynski, in order to improve the accuracy of recognizing any objects detected by the information acquisition components in specific areas and limit false detection of objects in specific areas (Stopczynski, para. 0002: “Blind Spot Detection Systems with programmable range capability have set a fixed programmable maximum limit to avoid false detection of objects in the lane or road beyond the adjacent lanes, such as guardrails, vehicles in lanes beyond the adjacent lane to the host vehicle, etc.”). Meyhofer and Stopczynski do not appear to explicitly teach the following: in a situation in which the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle, improve a classification characteristic of a part corresponding to the one area in an image acquired by the camera to a predetermined range, in a situation in which a brake pedal is operated, the control part determines a front area of the radar and LiDAR as the specific area, in a situation in which an accelerator pedal is operated, the control part determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area, and in a situation in which a steering wheel or a steering wheel pedal of the vehicle is operated, the control part determines side areas of the radar and the LiDAR as the specific areas, wherein the camera includes a narrow-angle front camera, a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera, a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera, and a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera, and wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera, and wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide- angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m 
However, in the same field of endeavor, Nagata teaches: in a situation in which the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle, improve a classification characteristic of a part corresponding to the one area in an image acquired by the camera to a predetermined range (Nagata; Note: Examiner is interpreting "improving a classification characteristic" as changing the priority or region of interest of the camera; para. 0037: "Instead of a millimeter-wave radar, an image sensor, such as a camera [i.e., improving a classification characteristic], a laser radar, or the like may be applied."; para. 0009: "In this case, the control unit may set higher priority on a monitoring sensor which monitors an area near the traveling direction of the host vehicle [i.e., the required performance is related to improving a classification characteristic of an object corresponding to one area around the vehicle] than a monitoring sensor which monitors an area apart from the traveling direction of the host vehicle on the basis of one of the traveling state of the host vehicle and the state of the driver of the host vehicle detected by the state detection unit."; para. 0010: "With this configuration, the control unit sets higher priority on a monitoring sensor [i.e., improve a classification characteristic] which monitors an important area near the traveling direction of the host vehicle [i.e., a part corresponding to the one area in an image acquired by the camera to a predetermined range] than a monitoring sensor which monitors a less important area apart from the traveling direction of the host vehicle on the basis of one of the traveling state of the host vehicle and the state of the driver of the host vehicle detected by the state detection unit. Therefore, it is possible to appropriately set priority in accordance with the importance of the monitoring sensors."),

in a situation in which a brake pedal is operated, the control part determines a front area of the radar and LiDAR as the specific area (Note: "a front area of the radar and LiDAR" is being interpreted as either the front, rear or sides of the vehicle upon which those sensors are mounted, depending on the direction the mounted sensors are pointing. For example, a rear-of-vehicle facing sensor has a front area of the sensor that corresponds with the rear area of the vehicle.) (Nagata, FIG. 1: rear area millimeter-wave radar 14, right side millimeter-wave radar 15, and left side millimeter-wave radar 16 (see annotated FIG. 1, below); FIG. 3: steps S107 and S109; para. 0045: "When the determination result on whether the traveling direction is front or rear is deceleration [i.e., brake pedal is operated], the obstacle detection method determination ECU 41 increments +1 in the priority flags of the rear area millimeter-wave radar 14 [i.e., rear area of the vehicle, because that is the direction radar 14 is mounted on the vehicle, but it also corresponds with the front area of the radar], the right side millimeter-wave radar 15, and the left side millimeter-wave radar 16 (S107, S109).")…

[Nagata, annotated FIG. 1]

…in a situation in which a steering wheel or a steering wheel pedal of the vehicle is operated, the control part determines side areas of the radar and the LiDAR as the specific areas
0044: "When the determination result on whether the traveling direction is left or right is the right direction [i.e., steering wheel…is operated], the obstacle detection method determination ECU 41 increments +1 in the priority flags of the front right side millimeter-wave radar 12, the rear right side millimeter-wave radar 15, and the right dead angle millimeter-wave radar 17 (S103, S105). When the determination result on whether the traveling direction is left or right is the left direction [i.e., steering wheel…is operated], the obstacle detection method determination ECU41 increments +1 in the priority flags of the front left side millimeter-wave radar 13, the rear left side millimeter-wave radar 16, and the left dead angle millimeter-wave radar 18 (S103, S106).")… Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to further modify the invention disclosed by Meyhofer, as modified by Stopczynski, with driver input considerations, taught by Nagata, for the benefit of improving object recognition, and thus vehicle safety (Nagata, para. 0002: “In order to improve the safety or convenience of automobiles… For this reason, a technique which enables the recognition of information relating to obstacles, such as other vehicles which are traveling around the host vehicle, with satisfactory precision has become important.”). does not appear to disclose the following: Meyhofer, Stopczynski, and Nagata do not appear to explicitly teach the following: …in a situation in which an accelerator pedal is operated, the control part determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area, and… …wherein the camera includes a narrow-angle front camera, a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera, a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera, and a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera, and wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera, and wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide- angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle. However, in the same field of endeavor, Inoue teaches: …in a situation in which an accelerator pedal is operated, the control part determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area (translated document of Inoue, para. 0039: “The flowchart in FIG. 
(translated document of Inoue, para. 0039: "The flowchart in FIG. 5 is for the case where the acceleration / deceleration intention detection means M4 detects the driver's intention to accelerate [i.e., an accelerator pedal is operated]... In step S23, the detection area setting means M2 [i.e., control part] sets a first detection area [i.e., the front area of the radar and the LiDAR], that is, a detection area corresponding to a normal lane width along the estimated travel path."; para. 0040: "When the acceleration / deceleration intention detection means M4 detects the driver's intention to accelerate [i.e., an accelerator pedal is operated] in the next step S28, the detection area setting means M2 [i.e., control part] sets a second detection area narrower than the first detection area [i.e., determines a distant area farther from the vehicle than the front area of the radar and the LiDAR as the specific area] in step S29."; Note: One of ordinary skill in the art, at the time of the application, would know that narrowing the detection area or beam of a radar is equivalent to elongating the range or distance of the detection area or beam ("Pub. 1310: Radar Navigation Manual," (1985), pg. 19: "For a given amount of transmitted power, the main lobe of the radar beam extends to a greater distance at a given power level with greater concentration of power in narrower beam widths. To increase maximum detection range capabilities, the energy is concentrated into as narrow a beam as is feasible."); additionally, one of ordinary skill in the art, at the time of the application, would know that radar, cameras and lidar are obvious variants and their fields of view would similarly be controlled.).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to modify the invention disclosed by Meyhofer, as modified by Stopczynski and Nagata, with the concept of increasing the detectable range or distance of a sensor, installed on a vehicle, when the vehicle accelerates, taught by Inoue, in order to detect an object as early as possible, without exceeding computational resource capability, to provide enough reaction time, or braking distance, to react to the object, which increases driving safety (translated document of Inoue, para. 0005: "The present invention has been made in view of the above-described circumstances, and an object thereof is to appropriately set the detection area of the object detection means so that an obstacle can be reliably detected.").
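Read together, the Nagata priority flags and the Inoue accelerate-to-narrow behavior amount to a simple mapping from driver inputs to sensor focus areas. The following sketch is a hypothetical reconstruction of that combined logic; the function name, the flag scheme, and the 5-degree steering threshold are assumptions, not anything disclosed by the references.

    # Hypothetical combination of the driver-input logic described above:
    # brake -> near front area (and, per Nagata S107/S109, the rear-facing sensors),
    # accelerator -> narrowed beam reaching a more distant front area (Inoue),
    # steering -> side sensors on the turn side (Nagata S103/S105/S106).
    def select_specific_areas(brake_pedal: bool,
                              accelerator_pedal: bool,
                              steering_angle_deg: float) -> dict:
        priority = {"front": 0, "front_distant": 0,
                    "left": 0, "right": 0, "rear": 0}
        if brake_pedal:
            priority["front"] += 1
            priority["rear"] += 1   # Nagata increments the rear radar flags
        if accelerator_pedal:
            # Narrowing the beam extends its reach (the Radar Navigation
            # Manual point quoted above), so focus shifts to a distant area.
            priority["front_distant"] += 1
        if steering_angle_deg > 5.0:        # illustrative threshold
            priority["right"] += 1
        elif steering_angle_deg < -5.0:
            priority["left"] += 1
        return priority

A downstream controller could then allocate beam width, dwell time, or processing budget to whichever areas carry the highest flags.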
Meyhofer, Stopczynski, Nagata, and Inoue do not appear to explicitly teach the following: …wherein the camera includes a narrow-angle front camera, a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera, a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera, and a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera, and wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera, and wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle.

However, in the same field of endeavor, Haight teaches: …wherein the camera includes a narrow-angle front camera (Haight, para. 0041: "The vehicle 100 is equipped with…a narrow forward camera 112a [i.e., the camera includes a narrow-angle front camera]…."), a main front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the narrow-angle front camera (Haight, FIG. 2: narrow forward camera 112a, main forward camera 112b [i.e., a main front camera configured to acquire surrounding information at a shorter distance], see annotated figure, below; para. 0043: "The main forward camera 112 b provides a field of view wider than the narrow forward camera 112 a [i.e., over a wider angle range than the narrow-angle front camera]…"),

[Haight, annotated FIG. 2]

a wide-angle front camera configured to acquire surrounding information at a shorter distance and over a wider angle range than the main front camera (Haight, FIG. 2: main forward camera 112b, wide forward camera 112c [i.e., a wide-angle front camera configured to acquire surrounding information at a shorter distance], see annotated figure, below; para. 0043: "The wide forward camera 112c [i.e., wide-angle front camera] provides a field of view wider than the main forward camera 112b [i.e., a wide-angle front camera configured to acquire surrounding information…over a wider angle range than the main front camera]."),

[Haight, annotated FIG. 2]

and a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera and at a longer distance than the wide-angle front camera (Haight, FIG. 2: rear view camera 112f, wide forward camera 112c, main forward camera 112b [i.e., a rear camera configured to acquire rear surrounding information at a shorter distance than the main front camera], see annotated figure, below; para. 0041:
"Note also that various maximum distances listed in FIG. 2 are illustrative and can be adjusted higher or lower, based on need, such as via using other devices, device types, or adjusting range, whether manually or automatically, including in real-time [i.e., a rear camera configured to acquire rear surrounding information…and at a longer distance than the wide-angle front camera]."; Note: it would be obvious to one of ordinary skill in the art, at the time of the application, that a rear camera with a field of view shorter than a main forward camera field of view is an obvious variant of a rear camera with a field of view longer than a main forward camera field of view.),

[Haight, annotated FIG. 2]

and wherein the radar is configured to acquire surrounding information within a same angle range and at a shorter distance than the narrow-angle front camera (Haight, FIG. 2: narrow forward camera 112a, radar 110, see annotated figure, below).

[Haight, annotated FIG. 2]

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to modify the invention disclosed by Meyhofer, as modified by Stopczynski, Nagata, and Inoue, with the concept of implementing various perception sensors (i.e., cameras, RADAR, LiDAR, etc.) around a vehicle with fields of view with different ranges and distances, taught by Haight, in order to be able to detect objects in a full 360 degree zone around the vehicle and leverage the relative strengths of each type of sensor (Haight, para. 0041: "Note that this configuration provides a 360 degree monitoring zone around the vehicle 100."; para. 0037: "The radar 110 includes a transmitter producing an electromagnetic wave such as in a radio or microwave spectrum, a transmitting antenna, a receiving antenna, a receiver, and a processor (which may be the same as the processor 104) to determine a property of a target. The same antenna may be used for transmitting and receiving as is common in the art. The transmitter antenna radiates a radio wave (pulsed or continuous) from the transmitter to reflect off the target and return to the receiver via the receiving antenna, giving information to the processor about the target's location, speed, angle, and other characteristics."; para. 0038: "The camera 112 may capture images to enable the processor 104 to perform various image processing techniques, such as compression, image and video analysis, telemetry, or others. For example, image and video analysis can comprise object recognition, object tracking, any known computer vision or machine vision analytics, or other analysis.").

Meyhofer, Stopczynski, Nagata, Inoue and Haight do not appear to explicitly teach the following: wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle.
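For reference, the range hierarchy recited in the amended claim can be written out as a small configuration; the distances come directly from the claim language above, while the dictionary layout itself is only an illustration.

    # Maximum sensing distances recited in amended claim 1 (meters).
    SENSOR_MAX_RANGE_M = {
        "narrow_angle_front_camera": 250,  # longest reach, narrowest view
        "radar":                     160,  # same angle range as narrow camera
        "main_front_camera":         150,
        "rear_camera":               100,  # rear-facing
        "wide_angle_front_camera":    60,  # widest view, shortest reach
    }

    # The claimed ordering: each wider forward-facing sensor reaches less far.
    assert (SENSOR_MAX_RANGE_M["narrow_angle_front_camera"]
            > SENSOR_MAX_RANGE_M["radar"]
            > SENSOR_MAX_RANGE_M["main_front_camera"]
            > SENSOR_MAX_RANGE_M["wide_angle_front_camera"])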
However, in the same field of endeavor, Li teaches: wherein the narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle, the radar is configured to acquire surrounding information up to a distance of 160 m in front of the vehicle, the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle, the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m in front of the vehicle, and the rear observation camera is configured to acquire surrounding information up to a distance of 100 m behind the vehicle (Li, para. 0130: “In one example, a vehicle 100 may comprise a sensing system with a radar having a first detectable range 103 a (e.g., 180 meters or more), sonar having a second detectable range 103 b (e.g., 7 meters or more), 1080p cameras having a third detectable range 103 c (e.g., 100 meters or more) [i.e., the main front camera is configured to acquire surrounding information up to a distance of 150 m in front of the vehicle,], 4 k camera having a fourth detectable range 103 d (e.g., 200 meters or more) [i.e., narrow-angle front camera is configured to acquire surrounding information up to a distance of 250 m in front of the vehicle], and/or lidar units having a fifth detectable range.”; para. 0118: “A fourth sensor type may comprise radar, such as millimeter wave radar [i.e., the radar]. One or more radar systems may be provided on-board the vehicle. The one or more radar systems may collectively have a fourth detectable range 101 d. The fourth detectable range may have a distance range of d4. The distance range d4 may represent the maximum range of the fourth detectable range. In some embodiments, d4 may be about 180 m. In some embodiments, the detectable range may have a maximum value about 20 m, 30 m, 50 m, 75 m, 100 m, 120 m, 150 m, 160 m, [i.e., the radar is configured to acquire surrounding information up to a distance of 160 m] 170 m, 180 m, 190 m, 200 m, 220 m, 250 m, 300 m, or 500 m. In some embodiments, the detectable range by encompass a front region of the vehicle [i.e., in front of the vehicle].”; para. 0120: “A sixth sensor type may comprise a camera [i.e., the wide-angle front camera], such as a monocular camera. One or more monocular cameras may be provided on-board the vehicle. The one or more monocular cameras may collectively have a sixth detectable range 101 f. The sixth detectable range may have a distance range of d6. The distance range d6 may represent the maximum range of the sixth detectable range. In some embodiments, d6 may be about 230 m. In some embodiments, the detectable range may have a maximum value about 20 m, 30 m, 50 m, [i.e., the wide-angle front camera is configured to acquire surrounding information up to a distance of 60 m] 75 m, 100 m, 120 m, 150 m, 160 m, 170 m, 180 m, 200 m, 210 m, 220 m, 225 m, 230 m, 240 m, 250 m, 270 m, 300 m, or 500 m. In some embodiments, the detectable range by encompass a front region of the vehicle [i.e., in front of the vehicle].”; para. 0121: “A seventh sensor type may comprise a second radar, such as millimeter wave radar, a second monocular camera [i.e., the rear observation camera], an additional long range lidar unit, or any other type of sensor. The sensor may be a rear-facing sensor [i.e., behind the vehicle]. The one or more rear facing sensors may collectively have a seventh detectable range 101 g. 
The seventh detectable range may have a distance range of d7. The distance range d7 may represent the maximum range of the fourth detectable range [i.e., the rear observation camera is configured to acquire surrounding information up to a distance of 100 m; the fourth detectable range is described in para. 0118: "The distance range d4 may represent the maximum range of the fourth detectable range...the detectable range may have a maximum value about 20 m, 30 m, 50 m, 75 m, 100 m…"]. The distance value may be any of the distance values described elsewhere herein. In some embodiments, the detectable range by encompass a rear region of the vehicle.").

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, and with a reasonable likelihood of success, to modify the invention disclosed by Meyhofer, as modified by Stopczynski, Nagata, Inoue and Haight, with the concept of using perception sensors (cameras, LiDAR, RADAR, etc.) with specific ranges and fields of view on an autonomous vehicle, taught by Li, in order to provide perception sensors with ranges and fields of view that are the most useful for detecting objects of interest around an autonomous vehicle (Li, para. 0122: "Detection ranges of a multi-sensor system are shown. Data from various sensors can be fused before feeding to a detection algorithm. As illustrated, different sensors and/or sensor types may have different detectable ranges that may collectively encompass the vehicle. Some sensors may have different distance ranges than others. For instance, some sensors may be able to reach greater distances than others. Some sensors may encompass different angular ranges than others. Some sensors may encompass wider ranges around the vehicle, while some sensors may have more narrow angular ranges. In some instances, some of the sensors with a greater distance range may focus on the front and/or rear of the vehicle. This may be useful for detecting objects of interest as the vehicle drives.").

Alternatively, claim(s) 2-4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Meyhofer, in view of Stopczynski, Nagata, Inoue, Haight, Li, and Zeng (previously of record). See the rejections of claim(s) 2-4, above.

Alternatively, claim(s) 6-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Meyhofer (previously of record), in view of Stopczynski (previously of record), Nagata (previously of record), Inoue (previously of record), Haight (previously of record), and Li (newly of record). See the rejections of claim(s) 6-8, above.

Additional Relevant Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US-20200064483-A1 (2020-02-27): teaches sensor ranges; relevant to amended claim 1.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Leah N Miller whose telephone number is (703) 756-1933. The examiner can normally be reached M-Th 8:30am - 5:30pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Flynn, can be reached at (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.N.M./ Examiner, Art Unit 3663
/ABBY J FLYNN/ Supervisory Patent Examiner, Art Unit 3663
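The reply windows described in the conclusion can be made concrete using the Feb 17, 2026 mailing date shown in the timeline below. This is a simplified sketch: plain same-day-of-month arithmetic, with no weekend or holiday rollover.

    # Simplified reply-deadline arithmetic for this final action.
    from datetime import date

    def add_months(d: date, months: int) -> date:
        month = d.month - 1 + months
        return d.replace(year=d.year + month // 12, month=month % 12 + 1)

    mailed = date(2026, 2, 17)                       # Final Rejection mailed
    print(add_months(mailed, 2))   # 2026-04-17: two-month advisory window
    print(add_months(mailed, 3))   # 2026-05-17: shortened statutory period
    print(add_months(mailed, 6))   # 2026-08-17: absolute statutory cutoff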

Prosecution Timeline

Oct 20, 2021: Application Filed
Nov 21, 2023: Non-Final Rejection (§103)
Feb 08, 2024: Interview Requested
Feb 15, 2024: Examiner Interview Summary
Feb 15, 2024: Applicant Interview (Telephonic)
Feb 23, 2024: Response Filed
May 03, 2024: Final Rejection (§103)
Aug 20, 2024: Request for Continued Examination
Aug 21, 2024: Response after Non-Final Action
Sep 25, 2024: Non-Final Rejection (§103)
Dec 30, 2024: Response Filed
Apr 09, 2025: Final Rejection (§103)
Jul 24, 2025: Request for Continued Examination
Jul 29, 2025: Response after Non-Final Action
Aug 04, 2025: Non-Final Rejection (§103)
Nov 10, 2025: Response Filed
Feb 17, 2026: Final Rejection (§103) [current]

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585279: Navigating a robotic mower along a guide wire (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579894: MULTI-LANE TRAFFIC MANAGEMENT SYSTEM FOR PLATOONS OF AUTONOMOUS VEHICLES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12565229: SYSTEM FOR CONTROLLING VEHICLE BASED ON STATE OF CONTROLLER AND SYSTEM FOR CONTROLLING VEHICLE BASED ON COMMUNICATION STATE (granted Mar 03, 2026; 2y 5m to grant)
Patent 12560930: IDENTIFYING TRANSPORT STRUCTURES (granted Feb 24, 2026; 2y 5m to grant)
Patent 12552361: HYBRID VEHICLE (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 56%
Grant Probability With Interview: 48% (-8.3% lift)
Median Time to Grant: 3y 4m
PTA Risk: High

Based on 32 resolved cases by this examiner. Grant probability derived from career allow rate.
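The headline figures tie together with straightforward arithmetic. The sketch below assumes the interview lift is applied additively, which is an inference about this tool's methodology rather than a documented formula.

    # 18 granted of 32 resolved -> 56% career allow rate; -8.3% interview lift.
    career_allow_rate = 18 / 32                  # 0.5625
    base = round(career_allow_rate * 100)        # 56 (Grant Probability)
    with_interview = base - 8.3                  # 47.7 -> displayed as 48%
    print(base, round(with_interview))           # 56 48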
