Prosecution Insights
Last updated: April 19, 2026
Application No. 18/336,840

DETERMINING A LOCATION OF A TARGET VEHICLE RELATIVE TO A LANE

Non-Final OA: §101, §102, §103
Filed: Jun 16, 2023
Examiner: LEE, JUSTIN S
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74%, above average (342 granted / 462 resolved; +22.0% vs TC avg)
Interview Lift: strong, +26.1% among resolved cases with an interview
Avg Prosecution: 3y 3m typical timeline; 20 applications currently pending
Total Applications: 482 across all art units (career history)

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 462 resolved cases

Office Action

§101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-19, and 21-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

Claim 1 is directed to an apparatus for determining at least one location of at least one target vehicle relative to a lane using a captured digital image. Therefore, claim 1 is within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Claim 1 recites:

1. An apparatus for determining at least one location of at least one target vehicle relative to a lane, the apparatus comprising: at least one memory; and at least one processor coupled to the at least one memory and configured to: obtain a position of a target vehicle within an image; obtain one or more positions of a lane boundary within the image; determine a distance between the target vehicle and the lane boundary based on the position of the target vehicle within the image and the one or more positions of the lane boundary within the image; and adjust a position of the target vehicle in a map based on the distance between the target vehicle and the lane boundary and a position of the lane boundary in the map.

The examiner submits that the foregoing emphasized limitations constitute a "mental process" because, under the broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, "determine…" and "adjust…" in the context of this claim encompass estimating a distance between a vehicle and a lane boundary by looking at an image, and updating a given map by marking the position of the vehicle using pen and paper. Accordingly, the claim recites at least one abstract idea.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether each claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception.
The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application." In the present case, the additional limitations beyond the above-noted abstract idea are the recitations, in the claim reproduced above, of an apparatus with at least one memory and at least one processor, and the "obtain…" steps.

For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of "obtain…," the examiner submits that these limitations are insignificant extra-solution activities that merely use an obtained image to perform the process. In particular, each obtaining step is recited at a high level of generality (i.e., as a general means of gathering data from an image), merely performs its intended function, and amounts to mere data gathering, which is a form of insignificant extra-solution activity.

Also, the mere presence of the memory and processor does not integrate the judicial exception into a practical application, because the claims merely use a computer as a tool to perform an abstract idea. These additional elements are specified at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.

Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field; apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim; effect a transformation or reduction of a particular article to a different state or thing; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
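For readers weighing the Prong II dispute, the "determine" and "adjust" limitations of claim 1 amount to a simple geometric computation. The sketch below is purely illustrative: the function names, the flat pixels-per-meter scale, and the one-dimensional treatment are assumptions for clarity, not anything disclosed in the application or the cited art.

```python
# Purely illustrative sketch of the claim 1 "determine"/"adjust" steps.
# All names and constants are hypothetical; a real system would use
# calibrated camera geometry rather than a flat pixels-per-meter scale.

PX_PER_METER = 40.0  # assumed image scale


def lateral_distance_m(vehicle_px_x: float, boundary_px_x: float) -> float:
    """Claim 1 'determine': lateral gap between the target vehicle and
    the lane boundary, measured in the image and converted to meters."""
    return abs(vehicle_px_x - boundary_px_x) / PX_PER_METER


def adjusted_map_x(boundary_map_x: float, distance_m: float, side: int = 1) -> float:
    """Claim 1 'adjust': place the vehicle in the map at the boundary's
    known map position, offset by the image-derived distance."""
    return boundary_map_x + side * distance_m


d = lateral_distance_m(vehicle_px_x=480.0, boundary_px_x=400.0)  # 2.0 m
x = adjusted_map_x(boundary_map_x=12.5, distance_m=d)            # 14.5
```

Whether arithmetic of this kind is a "mental process" is exactly what the Step 2A analysis contests; the sketch only makes the computation concrete.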
101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, the additional element of "obtain…" amounts to nothing more than applying the exception using a generic computer component, and generally applying an exception using a generic computer component cannot provide an inventive concept. Also as discussed above, the examiner submits that the additional limitations of "obtain…" are insignificant extra-solution activities. Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitations of "obtain…" are well-understood, routine, and conventional activities because the background section of the specification (par. 3) recites that object detection used to identify objects from a digital image is well known, and the specification further does not provide any indication that the processor is anything other than a conventional computer within a vehicle (specification, paragraph 84).

Dependent claims 8-12 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception (e.g., additional insignificant extra-solution activities) and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 8-12 are not patent eligible under the same rationale as provided for in the rejection of claim 1.

Dependent claim 2 further recites "planning a path of a tracking vehicle…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.

Dependent claim 3 further recites "determine a lane association…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.

Dependent claim 5 further recites "determine a lane association…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.

Dependent claim 6 further recites "obtain…," which is a form of insignificant extra-solution activity. Its "associate…" limitation can also be performed as a human mental process and therefore constitutes an abstract idea.

Dependent claim 7 further recites "determine a cost…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.

Dependent claim 8 further recites "determine the one or more positions…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.

Dependent claim 9 further recites "track…," which can be performed as a human mental process and therefore constitutes an abstract idea. The additional element of a "Kalman filter" is recited at a high level of generality and in an "apply it" format, where it provides nothing more than mere instructions to implement the abstract idea on a generic computer; there is no specific recitation of how the Kalman filter functions. Therefore, this limitation does not integrate a judicial exception into a practical application and does not provide an inventive concept.

Dependent claim 10 further recites "update tracked positions…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.
Dependent claim 11 further recites "determine the distance…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.

Dependent claim 12 further recites "track the distance…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.

Dependent claim 13 further recites "track…the distance," which can be performed as a human mental process and therefore constitutes an abstract idea. The additional element of a "Kalman filter" is recited at a high level of generality and in an "apply it" format, where it provides nothing more than mere instructions to implement the abstract idea on a generic computer; there is no specific recitation of how the Kalman filter functions. Therefore, this limitation does not integrate a judicial exception into a practical application nor provide an inventive concept.

Dependent claim 14 further recites "determine the position…," which can be performed as a human mental process and therefore constitutes an abstract idea. Claim 14 also recites "obtain…," which is a form of insignificant extra-solution activity.

Dependent claim 15 further recites "determine the position…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.

Dependent claim 16 further recites "determine…," which can be performed as a human mental process. Therefore, this limitation constitutes an abstract idea.

Claims 17-19 and 21-30 are similar in scope to claims 1-3 and 5-16; therefore, they are rejected under a similar rationale as set forth above.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-8, 10-12, 14-24, 26-27, and 29-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ichinokawa (US 9494438 B1).

In regards to claim 1, Ichinokawa teaches an apparatus for determining at least one location of at least one target vehicle relative to a lane, the apparatus comprising (see abstract, fig. 6): at least one memory (see fig. 1, memory 112); and at least one processor coupled to the at least one memory (see fig. 1, vehicle control unit 104 coupled with memory 112) and configured to: obtain a position of a target vehicle within an image, and obtain one or more positions of a lane boundary within the image (see col. 9, lines 53-67: upon the plurality of cameras 118 capturing the images of the surrounding environment of the vehicle 102, the vehicle camera system 108 can evaluate the images and can execute the preprogrammed camera logic to extract details regarding one or more image attributes. In one embodiment, the one or more image attributes can include lane boundary image coordinates of recognized lane boundary attributes that can include, but are not limited to, lane markers, guardrails, construction barrels, concrete dividers, and the like. In particular, the lane boundary image coordinates can be utilized by the map data verification application 106 to determine attributes regarding the right and/or left boundaries (e.g., edges) of the traveling lane. In one embodiment, the vehicle camera system 108 can digitize/package the lane boundary image coordinates into image data. Also see figs. 6-7 and col. 15, line 59 through col. 16, line 6: the vehicle 102 is traveling on the roadway 600 within the traveling lane 602. Upon receiving the image data from the image/coordinate reception module 126, the lane image measurement module 130 can be configured to evaluate the image data to determine a set of image coordinates associated with the right lane boundary, which can include the right lane marker 604 of the traveling lane 602, and a set of image coordinates associated with the left lane boundary, which can include the left lane marker 606 of the traveling lane 602. The lane image measurement module 130 can be configured to utilize the lane image coordinates to determine a distance 608, 614 between the right lane marker 604 (i.e., the right lane boundary) and the center portion 612 of the vehicle 102); determine a distance between the target vehicle and the lane boundary based on the position of the target vehicle within the image and the one or more positions of the lane boundary within the image (see figs. 6-7 and col. 16, lines 3-41: determining the distance between the right/left lane marker and the center portion of the vehicle); and adjust a position of the target vehicle in a map based on the distance between the target vehicle and the lane boundary and a position of the lane boundary in the map (see col. 17, lines 12-20: the map data verification module 132 can be configured to update the map data based on the lane image measurement. In particular, the map data verification module 132 can be configured to update the map data stored on the map database 122 and/or the third party mapping service 124 with updated lane coordinate data that is modified based on the lane image measurement; in other words, the real-time image data is used to update the map data. Also see col. 8, lines 15-26: retrieve map data that pertains to the surrounding environment of the vehicle 102 (i.e., map data associated with the locational coordinates of the vehicle 102); the navigation system 110 can then determine an estimated actual position of the vehicle 102 traveling within a lane of the roadway based on the vehicle's positioning on a map. This indicates that when the map is updated, the vehicle's positioning is also updated).

In regards to claim 2, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to perform an operation, wherein the operation is at least one of: planning a path of a tracking vehicle based on the map; or controlling the tracking vehicle based on the map (see col. 5, lines 34-39: the VCU 104 can communicate with the navigation system 110 of the vehicle 102 to determine vehicle locational directives (e.g., turn by turn, position to position, coordinate to coordinate directions) that can be partially utilized to guide the vehicle 102 (e.g., autonomously) based on map data that is provided to the navigation system 110).

In regards to claim 3, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to determine a lane association of the target vehicle based on the distance between the target vehicle and the lane boundary (see figs. 6-7 and the associated columns and lines: the lane measurement L1+R1 is determined, clearly acknowledging that the target vehicle is in lane 702, and not in lanes 704 and 706).

In regards to claim 4, Ichinokawa teaches the apparatus of claim 3, wherein the at least one processor is further configured to control a tracking vehicle based, at least in part, on the lane association of the target vehicle (see col. 6, lines 12-20: the vehicle camera system 108 can supply image data to the VCU 104 to provide vehicle lane keep assist and/or vehicle lane departure warning systems. When providing one or more of these systems, the VCU 104 can send commands to control the vehicle 102 to maintain running within the traveling lane; in other words, the VCU 104 can send commands to control the vehicle 102 to be prevented from inadvertently and/or unintentionally departing from the traveling lane. Also see col. 5, lines 19-45).

In regards to claim 5, Ichinokawa teaches the apparatus of claim 1, wherein, to determine the distance between the target vehicle and the lane boundary, the at least one processor is further configured to determine a distance between a center point of a bottom plane of a bounding box of the target vehicle and the lane boundary (see figs. 6-7 and the associated columns and lines: determining a distance between the center point of a vehicle and a lane boundary; the frame of vehicle 102 represents the claimed bounding box. The examiner requests further clarification of "bounding box" in the claim language).

In regards to claim 6, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to: obtain one or more prior positions of one or more lane boundaries determined based on one or more prior images; and associate the one or more positions of the lane boundary with one or more prior positions of a lane boundary of the one or more lane boundaries (see col. 5, lines 40-45: the map data verification application 106 can verify the map data against real-time image data based on images provided from the vehicle camera system 108 to ensure that the VCU 104 is not utilizing incorrect vehicle locational directives and sending incorrect commands to the numerous vehicle systems and components based on the incorrect map data. Also see col. 17, lines 13-20: update the map data based on the lane image measurement; in particular, the map data verification module 132 can be configured to update the map data stored on the map database 122 and/or the third party mapping service 124 with updated lane coordinate data that is modified based on the lane image measurement; in other words, the real-time image data is used to update the map data. During the map verification, prior positions of the lane boundaries determined based on one or more prior images are obtained (e.g., the current map is obtained). Also see fig. 2, step 210, and col. 16, lines 53-67: upon determining the lane image measurement (at block 208), at block 210 the method includes verifying the map data based on the lane coordinate measurement and the lane image measurement. In one embodiment, upon receiving the lane coordinate measurement from the lane coordinate measurement module 128 and the lane image measurement from the lane image measurement module 130, the map data verification module 132 can be configured to compare the lane coordinate measurement and the lane image measurement to determine if the two measurements are equivalent).
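Ichinokawa's verification step cited above compares a map/GPS-derived lane measurement against an image-derived one. A minimal sketch of that comparison follows; the meters-per-pixel calibration constant and function names are assumptions for illustration, and the default tolerance uses the 0.1 m example error range the reference mentions.

```python
# Sketch of the cited map-verification idea: convert an image-space pixel
# gap to meters, then check agreement with the map/GPS-derived measurement.
# METERS_PER_PIXEL is an assumed calibration constant, not a value from
# the reference.

METERS_PER_PIXEL = 0.011


def lane_image_measurement_m(pixel_count: int) -> float:
    """Meters between the vehicle side and the lane boundary, from the
    number of image pixels spanning that gap."""
    return pixel_count * METERS_PER_PIXEL


def map_data_verified(coord_measurement_m: float,
                      image_measurement_m: float,
                      tolerance_m: float = 0.1) -> bool:
    """True when the two measurements agree within the error range, in
    which case the stored map data needs no update."""
    return abs(coord_measurement_m - image_measurement_m) <= tolerance_m


img = lane_image_measurement_m(100)   # about 1.1 m
map_data_verified(1.05, img)          # within tolerance: map kept as-is
map_data_verified(1.30, img)          # outside tolerance: map gets updated
```

The dynamic error range the reference describes would simply vary `tolerance_m` with GPS conditions.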
In regards to claim 7, Ichinokawa teaches the apparatus of claim 6, wherein the at least one processor is further configured to determine a cost of associating the one or more positions of the lane boundary with the one or more prior positions of the lane boundary of the one or more lane boundaries, wherein the cost is proportional to a pixel distance in the image between the one or more positions of the lane boundary and the one or more prior positions of the lane boundary of the one or more lane boundaries divided by a width of the image (see col. 17, lines 21-37: the map data verification module 132 can be configured to compare the lane coordinate measurement and the lane image measurement to determine if the two measurements fall within a predetermined error range (e.g., 0.1 m). The predetermined error range can be dynamic and can account for incorrect or skewed positioning of the vehicle 102 by the global positioning sensors 114 that can occur within certain locations based on interference between the global positioning sensors 114 and the global positioning satellites. Also see col. 14, lines 55-67: the lane image measurement module 130 can be configured to calculate a number of pixels based on a digitally constructed line from the right side portion of the vehicle 102 to a point (e.g., a parallel point) of the right lane boundary. More specifically, the lane image measurement module 130 can be configured to determine a number of pixels of the image(s) that include the space between the right lane boundary and the right side portion of the vehicle 102, and to convert that number of pixels into a first measurement (e.g., a metric measurement value, such as 1.1 m). Lastly, see col. 8, lines 46-50).

In regards to claim 8, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to determine the one or more positions of the lane boundary within the image based on a plurality of images (see col. 6, lines 3-11: one or more images. Also see col. 9, lines 53-62: upon the plurality of cameras 118 capturing the images of the surrounding environment of the vehicle 102, the vehicle camera system 108 can evaluate the images and can execute the preprogrammed camera logic to extract details regarding one or more image attributes; in one embodiment, the one or more image attributes can include lane boundary image coordinates of recognized lane boundary attributes that can include, but are not limited to, lane markers, guardrails, construction barrels, and concrete dividers).

In regards to claim 10, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to update tracked positions of the one or more positions of the lane boundary based on the image (see col. 17, lines 13-20: update the map data based on the lane image measurement; in particular, the map data verification module 132 can be configured to update the map data stored on the map database 122 and/or the third party mapping service 124 with updated lane coordinate data that is modified based on the lane image measurement; in other words, the real-time image data is used to update the map data).

In regards to claim 11, Ichinokawa teaches the apparatus of claim 1, wherein, to determine the distance between the target vehicle and the lane boundary, the at least one processor is further configured to determine a distance between the target vehicle and a point of the lane boundary between the one or more positions of the lane boundary (see figs. 6-7 and the associated columns and lines: measuring the distance at one point of the lane boundary and the center point of the vehicle).

In regards to claim 12, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to track the distance between the target vehicle and the lane boundary over a plurality of images (see col. 17, lines 12-20: the real-time image data is used to update the map data; the map is consistently verified/updated by analyzing a plurality of images. Also see figs. 6-7 and the associated columns and lines: measuring the distance at one point of the lane boundary and the center point of the vehicle. Lastly, see fig. 2, steps 206-210).

In regards to claim 14, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to obtain the image and determine the position of the target vehicle within the image (see figs. 2, 6-7, and the associated columns and lines: the position of the target vehicle is determined based on the image, namely the distance between lane markings).

In regards to claim 15, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to determine the position of the target vehicle within the image based on a plurality of images (see col. 17, lines 12-20: the real-time image data is used to update the map data; the map is consistently verified/updated by analyzing a plurality of images. Also see figs. 6-7, col. 17, lines 21-45, and col. 8, lines 30-43: detecting the positioning of the vehicle at each map verification).

In regards to claim 16, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to determine a bounding box associated with the target vehicle based on a plurality of images and determine the position of the target vehicle based on the bounding box (see figs. 6-7: the frame of vehicle 102 represents the claimed bounding box; the examiner requests further clarification of "bounding box" in the claim language. Based on the frame, the vehicle position is measured using the center of the frame (e.g., fig. 2, step 208, and the associated columns and lines)).

Claims 17-24, 26-27, and 29-30 are similar in scope to claims 1-8, 10-11, 14, and 16; therefore, they are rejected under a similar rationale as set forth above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 9, 13, 25, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Ichinokawa (US 9494438 B1) in view of Liu Wenzhi et al. (JP 2021508901 A). The examiner herein relies on the copy of Liu attached to this Office Action for citations.

In regards to claim 9, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to track, … the one or more positions of the lane boundary within the image based on a plurality of images (see col. 6, lines 3-11: one or more images. Also see col. 9, lines 53-62: upon the plurality of cameras 118 capturing the images of the surrounding environment of the vehicle 102, the vehicle camera system 108 can evaluate the images and can execute the preprogrammed camera logic to extract details regarding one or more image attributes; in one embodiment, the one or more image attributes can include lane boundary image coordinates of recognized lane boundary attributes that can include, but are not limited to, lane markers, guardrails, construction barrels, and concrete dividers). Ichinokawa does not specifically teach using a Kalman filter. Liu further teaches using a Kalman filter (see page 10: a tracker is created for each marking line to track the marking line, based on the marking line identified by the image of the first frame participating in the marking line detection in the video; … the parameter value in the lane marking information of the image of the current frame is updated to the tracker of the same lane marking specified by the image of the previous frame, and the Kalman filter is applied to the lane marking information of the same marking line in the image of the current frame).
Therefore, it would have been obvious by one of ordinary skilled in the art before the time the invention was effectively filed to modify the apparatus of Ichinokawa to further comprise apparatus taught by Liu in order to improve the accuracy, robustness, and stability of lane detection and tracking in dynamic driving environments. Also, safety can be further improved (see abstract). In regards to claim 13, Ichinokawa teaches the apparatus of claim 1, wherein the at least one processor is further configured to track, …the distance between the target vehicle and the lane boundary over a plurality of images. (See col. 6, lines 3-11, one or more images…col. 9, lines 53-62, upon the plurality of cameras 118 capturing the images of the surrounding environment of the vehicle 102, the vehicle camera system 108 can evaluate the images and can execute the preprogrammed camera logic to extract details regarding one or more image attributes. In one embodiment, the one or more image attributes can include lane boundary image coordinates of recognized lane boundary attributes that can include, but are not limited to, lane markers, guardrails, construction barrels, concrete dividers. Also see figs. 6-7, associated column and lines) Ichinokawa does not specifically teach using a Kalman filter Liu further teaches, using a Kalman filter (See page 10, It is possible to improve the accuracy of the lane departure information by performing Kalman filtering on the parameter value of, and subsequently accurately identify information such as the distance between the vehicle and the lane marking, and accurately deviate from the lane of the vehicle. Contributes to early warning.) 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the apparatus of Ichinokawa to further comprise the apparatus taught by Liu in order to improve the accuracy, robustness, and stability of lane detection and tracking in dynamic driving environments. Safety can also be further improved (see abstract).

Claims 25 and 28 are similar in scope to claims 9 and 13, respectively; therefore, they are rejected under a similar rationale as set forth above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUSTIN S LEE, whose telephone number is (571) 272-2674. The examiner can normally be reached Monday - Friday, 8-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAMES J LEE, can be reached at (571) 270-5965. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/JUSTIN S LEE/Primary Examiner, Art Unit 3668

Prosecution Timeline

Jun 16, 2023
Application Filed
Oct 14, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597247
UNDERWATER DEVICE FOR ACQUIRING IMAGES OF A WATER BOTTOM
2y 5m to grant Granted Apr 07, 2026
Patent 12597300
INTEGRATED VEHICLE HEALTH MANAGEMENT SYSTEMS AND METHODS USING AN ENHANCED FAULT MODEL FOR A DIAGNOSTIC REASONER
2y 5m to grant Granted Apr 07, 2026
Patent 12596373
SYSTEM AND METHOD FOR EVALUATING THE PERFORMANCE OF A VEHICLE OPERATED BY A DRIVING AUTOMATION SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12583540
A METHOD FOR CONTROLLING ASSEMBLY OF A VEHICLE FROM A SET OF MODULES, A CONTROL DEVICE, A SYSTEM, A VEHICLE, A COMPUTER PROGRAM AND A COMPUTER-READABLE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12548456
Methods and Apparatus for Enhancing Unmanned Aerial Vehicle Management Using a Wireless Network
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+26.1%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 462 resolved cases by this examiner. Grant probability derived from career allow rate.
