Prosecution Insights
Last updated: April 19, 2026
Application No. 18/816,252

AUTONOMOUS DRIVING VEHICLE AND CONTROL METHOD THEREOF

Non-Final OA: §101, §103
Filed: Aug 27, 2024
Examiner: PAIGE, TYLER D
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 91% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91%, above average (1166 granted / 1276 resolved; +39.4% vs TC avg)
Interview Lift: +8.2% in resolved cases with interview (moderate lift)
Avg Prosecution: 2y 1m (fast prosecutor; 28 currently pending)
Career History: 1304 total applications across all art units
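As a sanity check, the headline figures in this panel follow directly from the raw career counts. The snippet below is illustrative only; the "implied TC average" is simply back-computed from the stated +39.4% delta, not an independently sourced number.

```python
# Reproduce the dashboard's headline examiner statistics from raw counts.
granted, resolved = 1166, 1276

allow_rate = granted / resolved * 100   # career allowance rate, percent
implied_tc_avg = allow_rate - 39.4      # back-computed from the "+39.4% vs TC avg" delta

print(f"Career allow rate: {allow_rate:.1f}%")        # ~91.4%, shown rounded as 91%
print(f"Implied TC 3600 average: {implied_tc_avg:.1f}%")
```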

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 29.8% (-10.2% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 1276 resolved cases

Office Action

Grounds: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to an application filed on 08/27/2024. The applicant does not submit an Information Disclosure Statement. The applicant does not make a claim for domestic priority. The applicant does make a claim for foreign priority to a Korean application filed on 08/31/2023.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to an abstract idea of organized human activity of evaluation without significantly more. The claims are evaluated with respect to the MPEP and the 2019 Subject Matter Eligibility Guidance (herein, the guidance). Under the guidance, Example 40 is the reference used for evaluation.

Step 1

The claims recite a method of controlling an autonomous vehicle and an autonomous vehicle. The claims pass the first step by stating one of the four statutory categories.

Step 2A Prong I

Independent claim 1 is reproduced below with the abstract idea identified in italics and the pre/post-solution activity denoted in bold.
Claim 1: A method of controlling an autonomous vehicle, the method comprising: determining whether an actual lane line of a driving road is recognized by receiving sensing information from a plurality of sensors on the autonomous vehicle; (Mental process with extra-solution activity of data gathering) initially recognizing a target vehicle driving behind the autonomous vehicle and on another lane based on a result of the determining; (Mental process) and secondly recognizing the target vehicle by varying priorities of the plurality of sensors based on an environment of the driving road. (Mental process with extra-solution activity of data gathering)

The independent claims and dependent claims do not identify the specific type of data collected for identifying an actual lane pursuant to MPEP 2106.07. The specification in paragraph 0024 and drawings Figs. 4A, 4B, and 5 do not identify what the collected data is used for as it relates to the operation of the vehicle. The dependent claims do not claim how the vehicle is controlled based upon the observations of the sensors (camera or radar).

With respect to the MPEP, the claims are evaluated under 2106.04(a)(2)(III) under the mental-process analysis. The claims show the collection of data based upon radar and a camera. However, the claims do not identify the data collected and what the data is used for. Under the analysis of 2106.04(a)(2)(III)(B), a person is able to observe the lane markings while in the vehicle and identify where the other vehicle is in relation to the vehicle.

With respect to the 2019 guidance, under the analysis with respect to Example 40, the claims do not identify the collected data with specificity. In addition, the dependent claims do not identify what the data is used for in the operation of the autonomous vehicle.
Step 2A Prong II

This judicial exception is not integrated into a practical application because the claimed invention does not satisfy the requirements stated in MPEP 2106.04(d)(1). The claims do not identify how the collected data is used. The operator of the vehicle is able to observe the lane markings. Thus, the claims fail Step 2A Prong II.

Step 2B

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims fail to state any of the features of MPEP 2106.05(a)-(h). Therefore, the claims fail to state features, structure, or operations that are significantly more than an abstract idea. Thus, the claims fail Step 2B.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 5-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Maheshwari (US 2021/0209941) in view of Hummelshoj (US 2019/0227549).
As per claim 1: A method of controlling an autonomous vehicle, the method comprising: determining whether an actual lane line of a driving road is recognized by receiving sensing information from a plurality of sensors on the autonomous vehicle; (Maheshwari paragraph 0027 discloses, “the methods of the present disclosure may provide lane marking detection and identification capabilities for autonomous vehicles.” And paragraph 0082 discloses, “the perception component 102, specifically the lane marking detection module 104, may link the pixels represented in the point cloud 600 together to create an approximation of the actual lane marking.”) initially recognizing a target vehicle driving behind the autonomous vehicle and on another lane based on a result of the determining; (Maheshwari paragraph 0115 discloses, “The lane detection module 104 may then identify the region of interest for the vehicle (e.g., +/−100 m from the vehicle's current position), and establish a source node at 100 m behind the vehicle and a target node at +100 m ahead of the vehicle. Establishing the source node behind the vehicle may yield a heightened predictive capability by ensuring consistency in the linked lane markings.”) and (Hummelshoj paragraph 0047 teaches, “In one approach, the selection module 220 prioritizes minimizing an amount of the sensor data 250 that is to be analyzed to achieve sufficient perception.”) and secondly recognizing the target vehicle by varying priorities of the plurality of sensors based on an environment of the driving road. (Hummelshoj paragraph 0021 teaches, “In one embodiment, the human-based perception model is a machine learning algorithm such as a neural network that accepts the sensor data as input and selects how the sensor data is to be efficiently processed or directly processes the sensor data according to a particular set of perception techniques.” And paragraph 0026 discloses, “the vehicle 100 includes a perception system 170 that is implemented to perform methods and other functions as disclosed herein relating to controlling the perception of the vehicle 100 by selectively adapting which perception techniques are applied to incoming sensor data according to a human-based perception model.”)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.

As per claim 2: The method of claim 1, wherein the secondly recognizing of the target vehicle comprises setting a determination weight for each of the plurality of sensors based on an error rate for each of the plurality of sensors.
(Maheshwari paragraph 0038 discloses, “the system may determine an accuracy of the predicted lane marking locations, referenced in the present disclosure as a “likelihood score” (“L”).” and paragraph 0160 discloses the weight of the data)

As per claim 3: The method of claim 2, further comprising, in response to the actual lane line of the driving road being not recognized, generating at least one virtual lane line based on a driving trajectory of the autonomous vehicle, wherein the at least one virtual lane line comprises a first line of a first lane along which the autonomous vehicle is driving and a second line of a neighboring lane. (Maheshwari paragraph 0046 discloses, “The lane marking detection and tracking architecture 100 also includes a lane tracking and prediction module 105, which processes the perception signals 103 to generate prediction signals 106 descriptive of one or more predicted future states of the vehicle's environment. For example, the lane tracking and prediction module 105 may analyze the positions and directions of the lane markings identified in the perception signals 103 generated by the lane marking detection module 104 to predict one or more extensions of the lane markings.”)

As per claim 5: The method of claim 4, wherein the environment of the road includes one of a first environment that is a normal road environment, a second environment that is a dark road environment, and a third environment that is a heavy rain/heavy snow or diffusely reflected road environment. (Hummelshoj paragraph 0034 teaches, “the pre-processing mechanism 305 may scan for attributes that are correlated with particular conditions of the surrounding environment, traffic, and so on. For example, the attributes may include light conditions (e.g., bright vs. dark), weather (snow, rain, etc.), traffic density, road type, proximity of various objects/obstacles, patterns of motion, and so on.”)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.

As per claim 6: The method of claim 5, wherein the secondly recognizing of the target vehicle comprises, in a case of the first environment, secondly recognizing the target vehicle by operating the plurality of sensors on a same priority. (Maheshwari paragraph 0115 discloses, “The lane detection module 104 may then identify the region of interest for the vehicle (e.g., +/−100 m from the vehicle's current position), and establish a source node at 100 m behind the vehicle and a target node at +100 m ahead of the vehicle. Establishing the source node behind the vehicle may yield a heightened predictive capability by ensuring consistency in the linked lane markings.”) and (Hummelshoj paragraph 0047 teaches, “In one approach, the selection module 220 prioritizes minimizing an amount of the sensor data 250 that is to be analyzed to achieve sufficient perception.”) and (Hummelshoj paragraph 0021 teaches, “In one embodiment, the human-based perception model is a machine learning algorithm such as a neural network that accepts the sensor data as input and selects how the sensor data is to be efficiently processed or directly processes the sensor data according to a particular set of perception techniques.” And paragraph 0026 discloses, “the vehicle 100 includes a perception system 170 that is implemented to perform methods and other functions as disclosed herein relating to controlling the perception of the vehicle 100 by selectively adapting which perception techniques are applied to incoming sensor data according to a human-based perception model.”)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.

As per claim 7: The method of claim 5, wherein the secondly recognizing of the target vehicle comprises, in a case of the second environment, secondly recognizing the target vehicle by operating a radar on a first priority among the plurality of sensors.
(Hummelshoj paragraph 0021 teaches, “In one embodiment, the human-based perception model is a machine learning algorithm such as a neural network that accepts the sensor data as input and selects how the sensor data is to be efficiently processed or directly processes the sensor data according to a particular set of perception techniques.” And paragraph 0026 discloses, “the vehicle 100 includes a perception system 170 that is implemented to perform methods and other functions as disclosed herein relating to controlling the perception of the vehicle 100 by selectively adapting which perception techniques are applied to incoming sensor data according to a human-based perception model.”)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.

As per claim 8: The method of claim 5, wherein the secondly recognizing of the target vehicle comprises, in a case of the third environment, secondly recognizing the target vehicle by operating a camera on a first priority among the plurality of sensors. (Maheshwari paragraph 0044 discloses, “The sensors 101 may all be of the same type, or may include a number of different sensor types (e.g., multiple lidar devices with different viewing perspectives, and/or a combination of lidar, camera, radar, and thermal imaging devices, etc.).”) and (Hummelshoj paragraphs 0021 and 0026)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.

As per claim 9: The method of claim 2, further comprising calculating a position of the target vehicle based on the setting of the determination weight. (Hummelshoj paragraph 0047 teaches, “In further aspects, the selection module 220 can priority a particular perception technique that is less computationally intensive and/or that operates more quickly. In still further aspects, the selection module 220 can weigh both factors when selecting the perception techniques. Of course, when implementing the human-based perception model 260, the particular weights between how the various techniques are selected are learned by the model 260 according to the training data.”)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.

As per claim 10: A non-transitory computer-readable storage medium storing instructions that, by being executed by a processor, cause the processor to perform the method of any one of claim 1.
(Maheshwari paragraph 0072 discloses, “The vehicle controller 322 may include one or more CPUs, GPUs, and a non-transitory memory with persistent components (e.g., flash memory, an optical disk) and/or non-persistent components (e.g., RAM).”)

As per claim 11: An autonomous vehicle comprising: one or more processors; (Maheshwari paragraph 0006 discloses, “The method further comprises partitioning, by the one or more processors,” and paragraph 0026 discloses, “The vehicle may be a fully self-driving or “autonomous” vehicle, a vehicle controlled by a human driver, or some hybrid of the two.”) and a storage medium storing computer-readable instructions that, when executed by the one or more processors, enable the one or more processors to: determine whether an actual lane line of a driving road is recognized by receiving sensing information from a plurality of sensors on the autonomous vehicle, (Maheshwari paragraph 0027 discloses, “the methods of the present disclosure may provide lane marking detection and identification capabilities for autonomous vehicles.” And paragraph 0082 discloses, “the perception component 102, specifically the lane marking detection module 104, may link the pixels represented in the point cloud 600 together to create an approximation of the actual lane marking.”) initially recognize a target vehicle driving behind the autonomous vehicle and on another lane based on a result of the determining, (Maheshwari paragraph 0115 discloses, “The lane detection module 104 may then identify the region of interest for the vehicle (e.g., +/−100 m from the vehicle's current position), and establish a source node at 100 m behind the vehicle and a target node at +100 m ahead of the vehicle. Establishing the source node behind the vehicle may yield a heightened predictive capability by ensuring consistency in the linked lane markings.”) and (Hummelshoj paragraph 0047 teaches, “In one approach, the selection module 220 prioritizes minimizing an amount of the sensor data 250 that is to be analyzed to achieve sufficient perception.”) and secondly recognize the target vehicle by varying priorities of the plurality of sensors based on an environment of the driving road. (Hummelshoj paragraph 0021 teaches, “In one embodiment, the human-based perception model is a machine learning algorithm such as a neural network that accepts the sensor data as input and selects how the sensor data is to be efficiently processed or directly processes the sensor data according to a particular set of perception techniques.” And paragraph 0026 discloses, “the vehicle 100 includes a perception system 170 that is implemented to perform methods and other functions as disclosed herein relating to controlling the perception of the vehicle 100 by selectively adapting which perception techniques are applied to incoming sensor data according to a human-based perception model.”)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.
As per claim 12: The autonomous vehicle of claim 11, wherein the instructions further enable the one or more processors to, for secondly recognizing the target vehicle, set a determination weight for each of the plurality of sensors based on an error rate for each of the plurality of sensors. (Maheshwari paragraph 0038 discloses, “the system may determine an accuracy of the predicted lane marking locations, referenced in the present disclosure as a “likelihood score” (“L”).” and paragraph 0160 discloses the weight of the data)

As per claim 13: The autonomous vehicle of claim 12, wherein the instructions further enable the one or more processors to, in response to the actual lane line of the driving road being not recognized, generate at least one virtual lane line based on a driving trajectory of the autonomous vehicle. (Maheshwari paragraph 0046 discloses, “The lane marking detection and tracking architecture 100 also includes a lane tracking and prediction module 105, which processes the perception signals 103 to generate prediction signals 106 descriptive of one or more predicted future states of the vehicle's environment. For example, the lane tracking and prediction module 105 may analyze the positions and directions of the lane markings identified in the perception signals 103 generated by the lane marking detection module 104 to predict one or more extensions of the lane markings.”)

As per claim 14: The autonomous vehicle of claim 13, wherein the at least one virtual lane comprises a first line of a first lane along which the autonomous vehicle is driving and a second line of a neighboring lane. (Maheshwari paragraph 0028 discloses, “A system for detecting lane edges may first receive a set of pixels associated with roadway lanes. Each received pixel may include an identification, such that a particular subset of pixels will include lane identifications.” And paragraph 0080 discloses, “Two adjacent lane boundaries define the edges of a lane in which a vehicle may travel. Moreover, a “left boundary” and a “right boundary” may correspond to the left and right edges of a lane boundary, respectively. For example, adjacent lane boundaries may be separated by a distance of approximately 12 feet, corresponding to the width of a lane, and the left and right boundaries may be separated by approximately 6 inches, corresponding to the width of typical markings defining a lane boundary.”)

As per claim 16: The autonomous vehicle of claim 15, wherein the environment of the driving road comprises one of a first environment that is a normal road environment, a second environment that is a dark road environment, and a third environment that is a heavy rain/heavy snow or diffusely reflected road environment. (Hummelshoj paragraph 0034 teaches, “the pre-processing mechanism 305 may scan for attributes that are correlated with particular conditions of the surrounding environment, traffic, and so on. For example, the attributes may include light conditions (e.g., bright vs. dark), weather (snow, rain, etc.), traffic density, road type, proximity of various objects/obstacles, patterns of motion, and so on.”)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.
As per claim 17: The autonomous vehicle of claim 16, wherein the instructions further enable the one or more processors to, in a case of the first environment, secondly recognize the target vehicle by operating the plurality of sensors on a same priority. (Maheshwari paragraph 0115 discloses, “The lane detection module 104 may then identify the region of interest for the vehicle (e.g., +/−100 m from the vehicle's current position), and establish a source node at 100 m behind the vehicle and a target node at +100 m ahead of the vehicle. Establishing the source node behind the vehicle may yield a heightened predictive capability by ensuring consistency in the linked lane markings.”) and (Hummelshoj paragraph 0047 teaches, “In one approach, the selection module 220 prioritizes minimizing an amount of the sensor data 250 that is to be analyzed to achieve sufficient perception.”) and (Hummelshoj paragraph 0021 teaches, “In one embodiment, the human-based perception model is a machine learning algorithm such as a neural network that accepts the sensor data as input and selects how the sensor data is to be efficiently processed or directly processes the sensor data according to a particular set of perception techniques.” And paragraph 0026 discloses, “the vehicle 100 includes a perception system 170 that is implemented to perform methods and other functions as disclosed herein relating to controlling the perception of the vehicle 100 by selectively adapting which perception techniques are applied to incoming sensor data according to a human-based perception model.”)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.

As per claim 18: The autonomous vehicle of claim 16, wherein the instructions further enable the one or more processors to, in a case of the second environment, secondly recognize the target vehicle by operating a radar on a first priority among the plurality of sensors. (Maheshwari paragraph 0044 discloses, “The sensors 101 may all be of the same type, or may include a number of different sensor types (e.g., multiple lidar devices with different viewing perspectives, and/or a combination of lidar, camera, radar, and thermal imaging devices, etc.).”) and (Hummelshoj paragraphs 0021 and 0026)

As per claim 19: The autonomous vehicle of claim 16, wherein the instructions further enable the one or more processors to, in a case of the third environment, secondly recognize the target vehicle by operating a camera on a first priority among the plurality of sensors. (Maheshwari paragraph 0044 discloses, “The sensors 101 may all be of the same type, or may include a number of different sensor types (e.g., multiple lidar devices with different viewing perspectives, and/or a combination of lidar, camera, radar, and thermal imaging devices, etc.).”) and (Hummelshoj paragraphs 0021 and 0026)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et al. into the invention of Maheshwari. Such incorporation is motivated to maintain driving function.
As per claim 20, The autonomous vehicle of claim 12, wherein the instructions further enable the one or more processors to calculate a position of the target vehicle based on the setting of the determination weight. (Hummelshoj paragraph 0047 teaches, “In further aspects, the selection module 220 can priority a particular perception technique that is less computationally intensive and/or that operates more quickly. In still further aspects, the selection module 220 can weigh both factors when selecting the perception techniques. Of course, when implementing the human-based perception model 260, the particular weights between how the various techniques are selected are learned by the model 260 according to the training data.”) Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose varying the priority of sensors based upon an environment of the driving road. Hummelshoj teaches of varying the priority of sensors based upon an environment of the driving road. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Hummelshoj et.al. into the invention of Maheshwari. Such incorporation is motivated is motivated to maintain driving function. Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Maheshwari US 2021/0209941 in view of Hummelshoj US 2019/0227549 in view Switkes US 2010/0145575. As per claim 4, The method of claim 3, further comprising determining a lateral position and a heading angle of the target vehicle based on outer sides of front and rear tires of the target vehicle and a distance between the target vehicle and the actual lane line of the driving road or between the target vehicle and the at least one virtual lane line. (Switkes paragraph 0046) Maheshwari discloses a lane detection and tracking technique for imaging systems. 
Maheshwari does not disclose the front and rear tires of a target vehicle and the distance between the target vehicle and the actual lane. Switkes discloses the front and rear tires of a target vehicle and the distance between the target vehicle and the actual lane. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Switkes et al. into the invention of Maheshwari. Such incorporation is motivated by the goal of maintaining driving function.

As per claim 15: The autonomous vehicle of claim 13, wherein the instructions further enable the one or more processors to calculate a lateral position of the target vehicle and a heading angle of the target vehicle, based on an outer side of front and rear tires of the target vehicle and a distance between the target vehicle and the actual lane line of the driving road or between the target vehicle and the at least one virtual lane line, and recognize the target vehicle based on a result of the calculating. (Switkes paragraph 0046)

Maheshwari discloses a lane detection and tracking technique for imaging systems. Maheshwari does not disclose the front and rear tires of a target vehicle and the distance between the target vehicle and the actual lane. Switkes discloses the front and rear tires of a target vehicle and the distance between the target vehicle and the actual lane. Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Switkes et al. into the invention of Maheshwari. Such incorporation is motivated by the goal of maintaining driving function.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TYLER D PAIGE, whose telephone number is (571) 270-5425. The examiner can normally be reached M-F, 7:00am - 6:00pm (MST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kito Robinson, can be reached at (571) 270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TYLER D PAIGE/Primary Examiner, Art Unit 3664

Prosecution Timeline

Aug 27, 2024
Application Filed
Feb 02, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597357
AUTOMATIC AIRCRAFT TAXIING
2y 5m to grant Granted Apr 07, 2026
Patent 12592102
OPERATION DATA SUPPORT SYSTEM FOR INDUSTRIAL MACHINERY
2y 5m to grant Granted Mar 31, 2026
Patent 12586424
DRIVING DIAGNOSIS DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12586425
RARE EVENT DETECTION SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12579849
DETECTING AN UNUSUAL OPERATION OF A VEHICLE OUTSIDE OF A TIME FENCE AND NOTIFYING NEIGHBORING VEHICLES
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
91%
Grant Probability
99%
With Interview (+8.2%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 1276 resolved cases by this examiner. Grant probability derived from career allow rate.
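As a quick arithmetic check on the headline figure, the 91% grant probability is simply the examiner's career allow rate computed from the granted/resolved counts reported on this page:

```python
# Grant probability = career allow rate: granted cases / resolved cases.
# Counts are the figures shown on this page for this examiner.
granted, resolved = 1166, 1276
allow_rate = granted / resolved
print(f"{allow_rate:.0%}")  # 91%
```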
