Prosecution Insights
Last updated: April 19, 2026
Application No. 17/883,951

COLLISION AVOIDANCE METHOD AND APPARATUS

Final Rejection (§102, §103)

Filed: Aug 09, 2022
Examiner: COOLEY, CHASE LITTLEJOHN
Art Unit: 3662
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Hyundai Mobis Co., Ltd.
OA Round: 4 (Final)

Grant Probability: 67% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 3y 1m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 67% (above average; 116 granted / 173 resolved; +15.1% vs TC avg)
Interview Lift: +20.4% (strong) across resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 46 currently pending
Career History: 219 total applications across all art units

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)

Baseline is the Tech Center average estimate. Based on career data from 173 resolved cases.
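The statute percentages above are internally consistent with a single Tech Center baseline. A minimal sketch (illustrative only; it simply re-uses the figures shown in the table above) back-solves the implied TC average and recomputes the career allow rate:

```python
# Figures copied from the dashboard table above (percent).
examiner_rate = {"101": 12.7, "103": 52.6, "102": 19.0, "112": 14.2}
delta_vs_tc = {"101": -27.3, "103": 12.6, "102": -21.0, "112": -25.8}

# Back-solve the Tech Center baseline: examiner rate minus delta.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute back-solves to the same 40.0% baseline

# Career allow rate: granted / resolved, as reported above.
allow_rate = 116 / 173
print(round(allow_rate * 100))  # prints 67
```

Each statute's delta is measured against the same estimated 40.0% Tech Center average, so the four "vs TC avg" figures reflect one baseline estimate rather than four independent comparisons.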

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in response to the amendments filed on 10/30/2025, in which claims 1, 10, 11, and 20 are amended. Claims 1-20 are rejected.

Response to Arguments

Applicant's arguments, see REMARKS filed 10/30/2025, with respect to the rejection of claims 11-19 under 35 USC §112(b), have been fully considered and are persuasive. Therefore, the previous rejections have been withdrawn.

Applicant's arguments with respect to the rejection of claims 1-8 and 10 under 35 USC §103 have been fully considered and are persuasive. Therefore, the previous rejections under 35 USC §103 have been withdrawn. However, a new rejection in view of Rubin et al. is presented below.

Applicant's arguments with respect to the rejection of claims 11-15, 18, and 20 under 35 USC §102 have been fully considered and are persuasive. Therefore, the previous rejections under 35 USC §102 have been withdrawn. However, a new rejection in view of Takabayashi et al. and Rubin et al. is presented below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-5, 8, 10, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gallagher (US 10,595,176 B1, "Gallagher") in view of Takabayashi et al. (US 2018/0182245 A1, "Takabayashi") and in further view of Rubin et al. (US 2013/0293394 A1, "Rubin").

Regarding claims 1, 10, and 20, Gallagher discloses virtual lane lines for connected vehicles and teaches:

A collision avoidance method, comprising: (the controller 101 can receive information and data from the various vehicle components…to provide or enhance functionality related to adaptive cruise control, automatic parking, parking assist, automatic emergency braking (AEB), etc. – See at least Col. 6, ln. 49-66)

sensing, by a sensor, a forward vehicle and a lane of a front road; (The controller 101 may be in communication with various sensors, modules, and vehicle systems both within and remote from the vehicle. The system 100 may include such sensors, such as various cameras, a light detection and ranging (LIDAR) sensor, a radar sensor, an ultrasonic sensor, and other sensors for detecting information about the surrounding of the vehicle, including, for example, other vehicles, lane lines, guard rails, objects in the roadway, buildings, pedestrians, etc. In the example shown in Fig.
1, the system 100 may include a forward LIDAR sensor 103, a forward radar sensor 104, a forward camera 107; - See at least Col. 3, ln. 1-20)

receiving, by a communicator comprising a transceiver and a processing unit, global positioning system (GPS) information (The system 100 may also include a global positioning system (GPS) 113 that detects or determines the current position of the vehicle – See at least Col. 4, ln. 33-36) and vehicle specification information from the forward vehicle; (The system 100 may also include a vehicle-to-vehicle or vehicle-to-infrastructure module (e.g. V2X transceiver) to send and receive data from objects proximate to the vehicle – See at least Col. 5, ln. 31-59)

upon failing to detect the lane of the front road, generating, by a processor, a virtual lane predicted based on the trajectory and movement pattern of the forward vehicle, using real-time GPS received directly from the forward vehicle and vehicle specification information including an entire width and an entire length of the forward vehicle (Fig. 2 illustrates an example flow chart for executing a virtual lane line application in a connected vehicle that acts as a host vehicle. The application may also include virtual nearby vehicles during conditions when the driver cannot see them, or an AV cannot detect them using non-V2X sensors. For example, the lane lines may be projected when lane lines can't be seen by a driver or identified by traditional non-DSRC sensors (e.g., LIDAR, cameras, radar, etc.). Furthermore, it could help identify vehicles that the driver cannot see if visibility is good (e.g., non-line-of-sight vehicle) – See at least Col. 7, ln. 6-25)

using the forward vehicle's predicted trajectory derived from real-time GPS received directly from the forward vehicle (The system 100 may also include a global positioning system (GPS) 113 that detects and determines a current position of the vehicle – See at least Col. 5, ln.
60-62; Examiner notes that the current position would be real-time GPS data and that the breadcrumb data, i.e., the forward vehicle path/trajectory data, is determined based on, i.e., derived from, a distance from the current vehicle, e.g., within 200 m.)

and vehicle specification information, (The V2X data (e.g., breadcrumb data) may also be able to identify where vehicles in the vicinity of the host-vehicle are located, their direction of travel, speed of travel, etc. – See at least Col. 5, ln. 31-54)

and performing, by the processor, a control operation to avoid collision with the forward vehicle based on the generated virtual lane. (For example, data collected by the in-vehicle camera 103, 109, and the forward camera 107 may be utilized in context with the GPS data and map data to provide or enhance functionality related to adaptive cruise control, automatic parking, parking assist, automatic emergency braking (AEB), etc. – See at least Col. 6, ln. 52-60)

Gallagher does not explicitly teach that the location "bread crumbs" include real-time GPS received directly from the forward vehicle. However, Takabayashi discloses a route prediction system and teaches:

a processor configured to generating, by a processor, (Processing circuitry embodies each function of the observation unit 1, the vehicle detection unit 2, the hypothesis generation unit 3, the likelihood calculation unit 4, the lane detection unit 5, the tracking processing unit 6, the collision detection unit 7, the route prediction unit 8, the hypothesis likelihood calculation unit 9, and the predicted route analysis unit 10 in the route prediction system 100 described in Embodiment 1 – See at least ¶ [0027])

[] lane predicted based on the trajectory and movement pattern of the forward vehicle, (According to Embodiment 1, the vehicle detection unit 2 includes the lane detection unit 5 to detect a lane where the host vehicle is located, on the basis of the observation results observed by the observation unit 1,
the tracking processing unit 6 to track the vehicles surrounding the host vehicle on the basis of the observation results observed by the observation unit 1, and the collision detection unit 7 to detect, among the surrounding vehicles tracked by the tracking processing unit 6, the at least two of the surrounding vehicles having collision possibilities – See at least ¶ [0064])

using real-time GPS received directly from the forward vehicle and vehicle specification information [] (The observation unit 1 in the route prediction system 100 in FIG. 1 observes an area including the host vehicle and other moving vehicles to measure the positions and speeds of surrounding vehicles and pedestrians, using sensors such as a millimeter wave radar, a laser radar, an optical camera, and an infrared camera, and a communication device or the like that receives GPS positions of surrounding vehicles and pedestrians (S101) – See at least ¶ [0032]; Examiner notes that because the system distinguishes between vehicles and pedestrians it is using vehicle specification information, i.e., the vehicle is a vehicle and not a pedestrian.)

and perform a control operation to avoid collision with the forward vehicle based on the generated [] lane. (For example, as in FIG. 5, in a case where the host vehicle and vehicles A to E exist, the vehicles A and B collide, and the vehicles C and E collide, in order to avoid the collisions, the following vehicles A and C will take avoiding actions in accordance with the collision avoidance models – See at least ¶ [0049])

In summary, Gallagher discloses tracking vehicle locations and trajectories using a "bread crumb" strategy. Gallagher further teaches that the system receives V2X communications from the environment to further help track and identify objects in the environment. While GPS-based bread crumb tracking is a technique used in vehicle control, Gallagher does not explicitly teach GPS-based bread crumb tracking.
However, Takabayashi discloses a route prediction system and teaches the use of V2X communication to receive GPS data directly from the objects in the environment, e.g., vehicles and pedestrians. This GPS data is then used to model the objects' trajectories and the environment, and to determine control to avoid collision.

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the virtual lane lines for connected vehicles of Gallagher to provide for the route prediction system, as taught in Takabayashi, to calculate, in a case where plural surrounding vehicles may collide in the future, predicted routes of the plural vehicles without contradiction while reducing calculation load. (At Takabayashi ¶ [0011])

The combination of Takabayashi and Gallagher does not teach that the received vehicle specification information includes an entire width and an entire length of the forward vehicle. However, Rubin discloses operational efficiency in a vehicle-to-vehicle communications system and teaches:

[] vehicle specification information including an entire width and an entire length of the forward vehicle [] (The Vehicle size sub-message is described below in Table 12. The vehicle length, width, corner radius, projections, and height are in units of cm and are the maximum, such that the plan-view shape defined by these fields fully encompasses the vehicle – See at least ¶ [0391])

In summary, Gallagher discloses tracking vehicle locations and trajectories using a "bread crumb" strategy. Gallagher further teaches that the system receives V2X communications from the environment to further help track and identify objects in the environment. Takabayashi discloses receiving real-time GPS data from surrounding vehicles and objects. The combination of Gallagher and Takabayashi does not explicitly disclose that vehicle length and width are received from the forward vehicle.
However, Rubin discloses operational efficiency in a vehicle-to-vehicle communications system and teaches vehicle size data being shared in the vehicle-to-vehicle network. Rubin further teaches that this size data includes vehicle lengths and widths.

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the virtual lane lines for connected vehicles of Gallagher and Takabayashi to provide for the operational efficiency in a vehicle-to-vehicle communications system, as taught in Rubin, for the following reasons: first, other vehicles may take advantage of these predictions to plan their own lane changes or other behavior, such as slowing down (particularly when behind a driver who slows to turn, but fails to signal). Second, vehicles may use these predictions to improve the behavior of the driver of the vehicle, such as automatic deployment of turn signals. Third, signals may use this information to closely estimate the number of vehicles desiring each light phase and the location of those vehicles, in advance of finalizing phase timing. (At Rubin ¶ [0588])

Regarding claim 2, Gallagher further teaches:

wherein the processor generates a virtual vehicle corresponding to the forward vehicle based on the GPS information (GPS with map data is used to determine lane information, e.g., lane markings, number of lanes, etc. – See at least Col. 7, ln. 35-40) and the vehicle specification information upon failing to detect the lane of the front road, and (Fig. 2 illustrates an example flow chart for executing a virtual lane line application in a connected vehicle that acts as a host vehicle. The application may also include virtual nearby vehicles during conditions when the driver cannot see them, or an AV cannot detect them using non-V2X sensors.
For example, the lane lines may be projected when lane lines can't be seen by a driver or identified by traditional non-DSRC sensors (e.g., LIDAR, cameras, radar, etc.). Furthermore, it could help identify vehicles that the driver cannot see if visibility is good (e.g., non-line-of-sight vehicle) – See at least Col. 7, ln. 6-25)

generates the virtual lane based on the generated virtual vehicle. (Additionally, a forward-facing camera may not identify a vehicle ahead, but if radar or the V2X transceiver detects the vehicle, i.e., based on the generated virtual vehicle, this may activate the virtual lanes – See at least Col. 10, ln. 62-65)

Regarding claim 3, Gallagher further teaches:

wherein the processor generates the virtual lane based on a width of a lane in which the virtual vehicle is traveling and an entire width of the virtual vehicle. (The virtual lane lines may indicate an outer boundary of the lanes that cannot be crossed, as well as lines that may be crossed by the moving vehicle. The HUD may also output and overlay the virtual lane lines "full-sized", i.e., based on a width of the lane, on the windshield to align where they should be aligned on the road – See at least Col. 8, ln. 45-50; Examiner notes that the system identifies the entire width of the virtual vehicle – See at least Fig. 3 A-J #307)

Regarding claim 4, Gallagher further teaches:

wherein the communicator receives the GPS information (The system 100 may also include a global positioning system (GPS) 113 that detects or determines the current position of the vehicle – See at least Col. 4, ln. 33-36) and the vehicle specification information from each of a plurality of forward vehicles based on presence of the plurality of forward vehicles, (Fig. 2 illustrates an example flow chart for executing a virtual lane line application in a connected vehicle that acts as a host vehicle.
The application may also include virtual nearby vehicles during conditions when the driver cannot see them, or an AV cannot detect them using non-V2X sensors – See at least Col. 7, ln. 6-25)

wherein the processor generates a plurality of virtual vehicles corresponding to the plurality of forward vehicles, respectively, and (The virtual line application may also identify objects that may be difficult to see. For example, a first-colored box 303 (e.g., any color such as red) may indicate an object that is traveling in an opposite path (e.g., oncoming path) of the host vehicle…Additionally, the virtual lane line application may identify objects moving in the same path utilizing a dashed box 307. The dashed box may indicate other vehicles that are driving on the same road. – See at least Col. 9, ln. 16-25)

generates a plurality of virtual lanes corresponding to the plurality of generated virtual vehicles, respectively. (As shown in Fig. 3F, the virtual lane lines may include two outer boundary virtual lines 301 that indicate where the lanes cannot merge or end (e.g., where the road's shoulder may start). The virtual lane lines may also display a dashed line 302 that represents a separable lane for driving in the same direction or could be used to delineate the lanes driving in an opposite direction – See at least Col. 11, ln. 14-20)

Regarding claim 5, Gallagher further teaches:

wherein the processor determines whether the plurality of virtual lanes are straight lanes. (The virtual lines are a "full-sized" representation of the actual lane lines. As shown in Fig. 3D, when the actual lane lines are straight, so are the virtual lanes that represent the actual lanes, and when the actual lanes curve, so do the virtual lanes. Therefore, the system is determining whether the plurality of virtual lines are straight.)
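The mechanics that claims 1-5 recite (offsetting a forward vehicle's GPS breadcrumb trajectory into left/right virtual lane lines, then testing whether those lines are straight) can be sketched in simplified two-dimensional form. This is an illustrative sketch only, not code from the application or any cited reference; the function names, flat-ground geometry, and deviation tolerance are assumptions:

```python
import math

def virtual_lane(breadcrumbs, lane_width, vehicle_width):
    """Offset a forward vehicle's GPS breadcrumb trajectory laterally by
    half the lane width to form left/right virtual lane lines.
    breadcrumbs: list of (x, y) points, oldest first (simplified 2-D)."""
    assert lane_width > vehicle_width  # the lane must fit the vehicle (cf. claim 3)
    half = lane_width / 2.0
    left, right = [], []
    for (x0, y0), (x1, y1) in zip(breadcrumbs, breadcrumbs[1:]):
        heading = math.atan2(y1 - y0, x1 - x0)          # heading of each segment
        nx, ny = -math.sin(heading), math.cos(heading)  # left-hand unit normal
        left.append((x1 + half * nx, y1 + half * ny))
        right.append((x1 - half * nx, y1 - half * ny))
    return left, right

def is_straight(points, tol=0.1):
    """Claim-5-style check: maximum perpendicular deviation of the points
    from the chord joining the endpoints, against a tolerance in meters."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    chord = math.hypot(dx, dy)
    if len(points) <= 2 or chord == 0.0:
        return True
    dev = max(abs((x - x0) * dy - (y - y0) * dx) / chord
              for x, y in points[1:-1])
    return dev <= tol
```

For a forward vehicle driving straight along the x-axis with a 3.5 m lane and a 1.9 m vehicle, the sketch yields boundary lines at y = +1.75 m and y = -1.75 m, and is_straight() reports True, mirroring the straight-lane determination discussed for claim 5.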
Regarding claim 8, Gallagher further teaches:

wherein the processor receives curvature information of the front road from the navigation system, (Such ADAS map information may include detailed lane information, slope information, road curvature data, lane marking characteristics, etc. – See at least Col. , ln. 45-50)

and generates the virtual lane in correspondence to the curvature information. (The virtual lines are a "full-sized" representation of the actual lane lines. As shown in Fig. 3B, when the actual lane lines curve, so do the virtual lanes that represent the actual lanes.)

Claim(s) 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Gallagher in view of Takabayashi and Rubin, as applied to claim 1, and in further view of Mizoguchi (US 2019/0375405 A1, "Mizoguchi").

Regarding claim 6, Gallagher does not explicitly teach wherein the processor generates virtual lanes of an entire road by fusing the plurality of virtual lanes, based on the plurality of virtual lanes being the straight lanes. However, Mizoguchi discloses a vehicle traveling control apparatus and teaches:

wherein the processor generates virtual lanes of an entire road by fusing the plurality of virtual lanes, based on the plurality of virtual lanes being the straight lanes. (Thereafter, the external environment recognizer 10 may generate approximate models of right and left lane lines by processing time-series data on the candidate points of the lane lines in a spatial coordinate system. The time-series data may be based on a displacement of the own vehicle per unit time. The external environment recognizer 10 may recognize the lane lines on the basis of the generated approximate models of the lane lines.
The approximate models of the lane lines may be generated by connecting straight line components obtained through the Hough transform or approximating into a curve of a quadratic equation, for example – See at least ¶ [0024])

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the virtual lane lines for connected vehicles of Gallagher, Takabayashi, and Rubin to provide for the vehicle traveling control apparatus, as taught in Mizoguchi, to provide a traveling control apparatus that makes it possible to recover a vehicle speed after decelerating in the curve zone without causing a feeling of strangeness of a driver. (At Mizoguchi ¶ [0018])

Regarding claim 7, Gallagher does not explicitly teach, but Mizoguchi further teaches:

the processor disregards non-straight virtual lanes when some of the plurality of virtual lanes are not the straight lanes, and generates virtual lanes of an entire road by fusing a plurality of virtual lanes except for the disregarded virtual lanes. (Thereafter, the external environment recognizer 10 may generate approximate models of right and left lane lines by processing time-series data on the candidate points of the lane lines in a spatial coordinate system. The time-series data may be based on a displacement of the own vehicle per unit time. The external environment recognizer 10 may recognize the lane lines on the basis of the generated approximate models of the lane lines. The approximate models of the lane lines may be generated by connecting straight line components obtained through the Hough transform or approximating into a curve of a quadratic equation, for example – See at least ¶ [0024]; Examiner notes that the Hough transform is explicitly designed to find straight lines and disregard non-straight lines.)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the virtual lane lines for connected vehicles of Gallagher, Takabayashi, and Rubin to provide for the vehicle traveling control apparatus, as taught in Mizoguchi, to provide a traveling control apparatus that makes it possible to recover a vehicle speed after decelerating in the curve zone without causing a feeling of strangeness of a driver. (At Mizoguchi ¶ [0018])

Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over Gallagher in view of Takabayashi and Rubin, as applied to claim 1, and in further view of Bostick et al. (US 10,410,523 B1, "Bostick").

Regarding claim 9, Gallagher further teaches:

wherein the processor generates [a display] based on the generated virtual lane, and performs the control operation to output the generated [display] to a front and [] a vehicle. (The virtual lane lines may indicate an outer boundary of the lanes that cannot be crossed, as well as lines that may be crossed by the moving vehicle. The HUD may also output and overlay the virtual lane lines "full-sized", i.e., based on a width of the lane, on the windshield to align where they should be aligned on the road – See at least Col. 8, ln. 45-50)

Gallagher does not explicitly teach generating a hologram or outputting the generated hologram to a front and a rear of the vehicle. However, Bostick discloses a system and method for holographic communications between vehicles and teaches:

wherein the processor generates a hologram based on the [vehicle data], and performs the control operation to output the generated hologram to a front and a rear of a vehicle.
(a hologram showing the vehicle 1 movements can be projected (i) within vehicle 1 for viewing by Joe such as a hologram positioned near the windshield or dashboard, or (ii) outside of the vehicle 1 for viewing by Joe such as a hologram positioned just above the hood of the car, or (iii) outside of the vehicle 1 such as being projected near the rear, front or to the sides of the vehicle 1 to allow others to view the holographic display. – See at least Col. 7, ln. 23-36)

In summary, Gallagher discloses outputting the virtual lanes on a HUD or similar display device. Gallagher does not explicitly disclose that this display is a holographic display or that it may be displayed to the rear of the vehicle. However, Bostick discloses a system and method for holographic communications between vehicles and teaches that the vehicle travel and environmental information presented to the driver in a HUD may also be provided as a hologram to the exterior of the vehicle, e.g., the front or rear of the vehicle.

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the virtual lane lines for connected vehicles of Gallagher, Takabayashi, and Rubin to provide for the system and method for holographic communications between vehicles, as taught in Bostick, to allow others to view the holographic display, i.e., to allow others to see the missing or difficult-to-see lanes. (At Bostick, Col. 7, ln. 35-36)

Claim(s) 11-15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gallagher in view of Takabayashi.
Regarding claim 11, Gallagher discloses virtual lane lines for connected vehicles and teaches:

A collision avoidance apparatus, comprising: (the controller 101, i.e., a collision avoidance apparatus, can receive information and data from the various vehicle components…to provide or enhance functionality related to adaptive cruise control, automatic parking, parking assist, automatic emergency braking (AEB), etc. – See at least Col. 6, ln. 49-66)

a sensor configured to sense a forward vehicle and a lane of a front road; (The controller 101 may be in communication with various sensors, modules, and vehicle systems both within and remote from the vehicle. The system 100 may include such sensors, such as various cameras, a light detection and ranging (LIDAR) sensor, a radar sensor, an ultrasonic sensor, and other sensors for detecting information about the surrounding of the vehicle, including, for example, other vehicles, lane lines, guard rails, objects in the roadway, buildings, pedestrians, etc. In the example shown in Fig. 1, the system 100 may include a forward LIDAR sensor 103, a forward radar sensor 104, a forward camera 107; - See at least Col. 3, ln. 1-20)

a communicator comprising a transceiver and a processing unit configured to receive global positioning system (GPS) information (The system 100 may also include a global positioning system (GPS) 113 that detects or determines the current position of the vehicle – See at least Col. 4, ln. 33-36) and vehicle specification information from the forward vehicle; (The system 100 may also include a vehicle-to-vehicle or vehicle-to-infrastructure module (e.g. V2X transceiver) to send and receive data from objects proximate to the vehicle – See at least Col. 5, ln. 31-59)

a navigation system comprising a GPS module and a map database configured to provide map information of the front road; and (The vehicle system may include a navigation system that has GPS capabilities with map data – See at least Col. 7, ln.
34-37; The system further includes a navigation display, i.e., provides map information of the road – See at least Col. 4, ln. 55-60)

a processor configured to generate a virtual lane corresponding to the forward vehicle, upon failing to detect the lane of the front road, (Fig. 2 illustrates an example flow chart for executing a virtual lane line application in a connected vehicle that acts as a host vehicle. The application may also include virtual nearby vehicles during conditions when the driver cannot see them, or an AV cannot detect them using non-V2X sensors. For example, the lane lines may be projected when lane lines can't be seen by a driver or identified by traditional non-DSRC sensors (e.g., LIDAR, cameras, radar, etc.). Furthermore, it could help identify vehicles that the driver cannot see if visibility is good (e.g., non-line-of-sight vehicle) – See at least Col. 7, ln. 6-25)

using the forward vehicle's predicted trajectory derived from real-time GPS [] (The system 100 may also include a global positioning system (GPS) 113 that detects and determines a current position of the vehicle – See at least Col. 5, ln. 60-62; Examiner notes that the current position would be real-time GPS data and that the breadcrumb data, i.e., the forward vehicle path/trajectory data, is determined based on, i.e., derived from, a distance from the current vehicle, e.g., within 200 m.)

and vehicle specification information of the forward vehicle, (The V2X data (e.g., breadcrumb data) may also be able to identify where vehicles in the vicinity of the host-vehicle are located, their direction of travel, speed of travel, etc. – See at least Col. 5, ln. 31-54)

and perform a control operation to avoid collision with the forward vehicle based on the generated virtual lane.
(For example, data collected by the in-vehicle camera 103, 109, and the forward camera 107 may be utilized in context with the GPS data and map data to provide or enhance functionality related to adaptive cruise control, automatic parking, parking assist, automatic emergency braking (AEB), etc. – See at least Col. 6, ln. 52-60)

Gallagher does not explicitly teach that the location "bread crumbs" include real-time GPS received directly from the forward vehicle. However, Takabayashi discloses a route prediction system and teaches:

[] using the forward vehicle's predicted trajectory derived from real-time GPS received directly from the forward vehicle and vehicle specification information of the vehicle [] (The observation unit 1 in the route prediction system 100 in FIG. 1 observes an area including the host vehicle and other moving vehicles to measure the positions and speeds of surrounding vehicles and pedestrians, using sensors such as a millimeter wave radar, a laser radar, an optical camera, and an infrared camera, and a communication device or the like that receives GPS positions of surrounding vehicles and pedestrians (S101) – See at least ¶ [0032]; Examiner notes that because the system distinguishes between vehicles and pedestrians it is using vehicle specification information, i.e., the vehicle is a vehicle and not a pedestrian.)

In summary, Gallagher discloses tracking vehicle locations and trajectories using a "bread crumb" strategy. Gallagher further teaches that the system receives V2X communications from the environment to further help track and identify objects in the environment. While GPS-based bread crumb tracking is a technique used in vehicle control, Gallagher does not explicitly teach GPS-based bread crumb tracking. However, Takabayashi discloses a route prediction system and teaches the use of V2X communication to receive GPS data directly from the objects in the environment, e.g., vehicles and pedestrians.
This GPS data is then used to model the objects' trajectories and the environment, and to determine control to avoid collision.

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the virtual lane lines for connected vehicles of Gallagher to provide for the route prediction system, as taught in Takabayashi, to calculate, in a case where plural surrounding vehicles may collide in the future, predicted routes of the plural vehicles without contradiction while reducing calculation load. (At Takabayashi ¶ [0011])

Regarding claim 12, Gallagher further teaches:

wherein the processor generates a virtual vehicle corresponding to the forward vehicle based on the GPS information (GPS with map data is used to determine lane information, e.g., lane markings, number of lanes, etc. – See at least Col. 7, ln. 35-40) and the vehicle specification information upon failing to detect the lane of the front road, and (Fig. 2 illustrates an example flow chart for executing a virtual lane line application in a connected vehicle that acts as a host vehicle. The application may also include virtual nearby vehicles during conditions when the driver cannot see them, or an AV cannot detect them using non-V2X sensors. For example, the lane lines may be projected when lane lines can't be seen by a driver or identified by traditional non-DSRC sensors (e.g., LIDAR, cameras, radar, etc.). Furthermore, it could help identify vehicles that the driver cannot see if visibility is good (e.g., non-line-of-sight vehicle) – See at least Col. 7, ln. 6-25)

generates the virtual lane based on the generated virtual vehicle. (Additionally, a forward-facing camera may not identify a vehicle ahead, but if radar or the V2X transceiver detects the vehicle, i.e., based on the generated virtual vehicle, this may activate the virtual lanes – See at least Col. 10, ln.
62-65)

Regarding claim 13, Gallagher further teaches:

wherein the processor generates the virtual lane based on a width of a lane in which the virtual vehicle is traveling and an entire width of the virtual vehicle. (The virtual lane lines may indicate an outer boundary of the lanes that cannot be crossed, as well as lines that may be crossed by the moving vehicle. The HUD may also output and overlay the virtual lane lines “full-sized”, i.e., based on a width of the lane, on the windshield to align where they should be aligned on the road – See at least Col. 8, ln. 45-50; Examiner notes that the system identifies the entire width of the virtual vehicle – See at least Fig. 3A-J #307)

Regarding claim 14, Gallagher further teaches:

wherein the communicator receives the GPS information (The system 100 may also include a global positioning system (GPS) 113 that detects or determines the current position of the vehicle – See at least Col. 4, ln. 33-36) and the vehicle specification information from each of a plurality of forward vehicles based on presence of the plurality of forward vehicles, (Fig. 2 illustrates an example flow chart for executing a virtual lane line application in a connected vehicle that acts as a host vehicle. The application may also include virtual nearby vehicles during conditions when the driver cannot see them, or an AV cannot detect them using non-V2X sensors – See at least Col. 7, ln. 6-25)

wherein the processor generates a plurality of virtual vehicles corresponding to the plurality of forward vehicles, respectively, and (The virtual line application may also identify objects that may be difficult to see. For example, a first-colored box 303 (e.g., any color such as red) may indicate an object that is traveling in an opposite path (e.g., oncoming path) of the host vehicle…Additionally, the virtual lane line application may identify objects moving in the same path utilizing a dashed box 307.
The dashed box may indicate other vehicles that are driving on the same road. – See at least Col. 9, ln. 16-25)

generates a plurality of virtual lanes corresponding to the plurality of generated virtual vehicles, respectively. (As shown in Fig. 3F, the virtual lane lines may include two outer boundary virtual lines 301 that indicate where the lanes cannot merge or end (e.g., where the road’s shoulder may start). The virtual lane lines may also display a dashed line 302 that represents a separable lane for driving in the same direction or could be used to delineate the lanes driving in an opposite direction – See at least Col. 11, ln. 14-20)

Regarding claim 15, Gallagher further teaches:

wherein the processor determines whether the plurality of virtual lanes are straight lanes. (The virtual lines are a “full-sized” representation of the actual lane lines. As shown in Fig. 3D, when the actual lane lines are straight, so are the virtual lanes that represent the actual lanes, and when the actual lanes curve, so do the virtual lanes. Therefore, the system is determining whether the plurality of virtual lanes are straight.)

Regarding claim 18, Gallagher further teaches:

wherein the processor receives curvature information of the front road from the navigation system, (Such ADAS map information may include detailed lane information, slope information, road curvature data, lane marking characteristics, etc. – See at least Col. , ln. 45-50) and generates the virtual lane in correspondence to the curvature information. (The virtual lines are a “full-sized” representation of the actual lane lines. As shown in Fig. 3B, when the actual lane lines curve, so do the virtual lanes that represent the actual lanes.)

Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Gallagher in view of Takabayashi, as applied to claim 11, and further in view of Mizoguchi.
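The claim 13 mapping above turns on generating a virtual lane from two quantities: the width of the lane the virtual vehicle travels in, and the entire width of the virtual vehicle. As a purely illustrative sketch (not code from any cited reference; the names `VirtualVehicle` and `virtual_lane_bounds` are hypothetical), centering a lane of known width on the virtual vehicle's lateral position might look like:

```python
from dataclasses import dataclass


@dataclass
class VirtualVehicle:
    x: float      # lateral position of the vehicle's center (m)
    width: float  # entire width of the vehicle (m)


def virtual_lane_bounds(vehicle: VirtualVehicle, lane_width: float) -> tuple:
    """Center a virtual lane of the given width on the virtual vehicle.

    Checks that the lane is at least as wide as the vehicle, then
    returns (left_bound, right_bound) as lateral offsets in meters.
    """
    if lane_width < vehicle.width:
        raise ValueError("lane narrower than vehicle")
    half = lane_width / 2.0
    return (vehicle.x - half, vehicle.x + half)
```

For a 1.8 m-wide virtual vehicle centered at lateral offset 0 in a 3.5 m lane, this yields bounds at -1.75 m and +1.75 m, i.e., a "full-sized" lane centered on the vehicle.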
Regarding claim 16, Gallagher does not explicitly teach wherein the processor generates virtual lanes of an entire road by fusing the plurality of virtual lanes, based on the plurality of virtual lanes being the straight lanes. However, Mizoguchi discloses a vehicle traveling control apparatus and teaches:

wherein the processor generates virtual lanes of an entire road by fusing the plurality of virtual lanes, based on the plurality of virtual lanes being the straight lanes. (Thereafter, the external environment recognizer 10 may generate approximate models of right and left lane lines by processing time-series data on the candidate points of the lane lines in a spatial coordinate system. The time-series data may be based on a displacement of the own vehicle per unit time. The external environment recognizer 10 may recognize the lane lines on the basis of the generated approximate models of the lane lines. The approximate models of the lane lines may be generated by connecting straight line components obtained through the Hough transform or approximating into a curve of a quadratic equation, for example – See at least ¶ [0024])

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the virtual lane lines for connected vehicles of Gallagher and Takabayashi to provide for the vehicle traveling control apparatus, as taught in Mizoguchi, to provide a traveling control apparatus that makes it possible to recover a vehicle speed after decelerating in a curve zone without causing a feeling of strangeness to the driver.
(At Mizoguchi ¶ [0018])

Regarding claim 17, Gallagher does not explicitly teach, but Mizoguchi further teaches:

the processor disregards non-straight virtual lanes when some of the plurality of virtual lanes are not the straight lanes, and generates virtual lanes of an entire road by fusing a plurality of virtual lanes except for the disregarded virtual lanes. (Thereafter, the external environment recognizer 10 may generate approximate models of right and left lane lines by processing time-series data on the candidate points of the lane lines in a spatial coordinate system. The time-series data may be based on a displacement of the own vehicle per unit time. The external environment recognizer 10 may recognize the lane lines on the basis of the generated approximate models of the lane lines. The approximate models of the lane lines may be generated by connecting straight line components obtained through the Hough transform or approximating into a curve of a quadratic equation, for example – See at least ¶ [0024]; Examiner notes that the Hough transform is explicitly designed to find straight lines and disregard non-straight lines.)

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the virtual lane lines for connected vehicles of Gallagher and Takabayashi to provide for the vehicle traveling control apparatus, as taught in Mizoguchi, to provide a traveling control apparatus that makes it possible to recover a vehicle speed after decelerating in a curve zone without causing a feeling of strangeness to the driver. (At Mizoguchi ¶ [0018])

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Gallagher in view of Takabayashi, as applied to claim 11, and further in view of Bostick.
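The examiner's note on claim 17 rests on a property of the Hough transform: points lying on a common straight line accumulate votes in the same (rho, theta) parameter bin, while scattered (non-straight) point sets never reach the vote threshold and are effectively disregarded. A minimal, illustrative accumulator (hypothetical parameters; not code from Mizoguchi) demonstrates this:

```python
import math
from collections import defaultdict


def hough_lines(points, rho_step=1.0, theta_steps=180, min_votes=4):
    """Minimal Hough transform: each point votes for every (rho, theta)
    line it could lie on; bins reaching min_votes are reported as lines.
    """
    acc = defaultdict(int)
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            # Normal form of a line: rho = x*cos(theta) + y*sin(theta)
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_step), t)] += 1
    return [bin_ for bin_, votes in acc.items() if votes >= min_votes]
```

Four collinear points (on y = x) all vote for the bin near theta = 135°, rho = 0, so a line is reported; four scattered points spread their votes across bins and nothing reaches the threshold, mirroring how non-straight components are dropped.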
Regarding claim 19, Gallagher further teaches:

wherein the processor generates [a display] based on the generated virtual lane, and performs the control operation to output the generated [display] to a front and [] a vehicle. (The virtual lane lines may indicate an outer boundary of the lanes that cannot be crossed, as well as lines that may be crossed by the moving vehicle. The HUD may also output and overlay the virtual lane lines “full-sized”, i.e., based on a width of the lane, on the windshield to align where they should be aligned on the road – See at least Col. 8, ln. 45-50)

Gallagher does not explicitly teach generating a hologram or outputting the generated hologram to a front and a rear of the vehicle. However, Bostick discloses a system and method for holographic communications between vehicles and teaches:

wherein the processor generates a hologram based on the [vehicle data], and performs the control operation to output the generated hologram to a front and a rear of a vehicle. (a hologram showing the vehicle 1 movements can be projected (i) within vehicle 1 for viewing by Joe, such as a hologram positioned near the windshield or dashboard, or (ii) outside of the vehicle 1 for viewing by Joe, such as a hologram positioned just above the hood of the car, or (iii) outside of the vehicle 1, such as being projected near the rear, front, or to the sides of the vehicle 1 to allow others to view the holographic display. – See at least Col. 7, ln. 23-36)

In summary, Gallagher discloses outputting the virtual lanes on a HUD or similar display device. Gallagher does not explicitly disclose that this display is a holographic display or that it may be displayed to the rear of the vehicle.
However, Bostick discloses a system and method for holographic communications between vehicles and teaches that the vehicle travel and environmental information presented to the driver in a HUD may also be provided as a hologram to the exterior of the vehicle, e.g., the front or rear of the vehicle. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the virtual lane lines for connected vehicles of Gallagher and Takabayashi to provide for the system and method for holographic communications between vehicles, as taught in Bostick, to allow others to view the holographic display, i.e., to allow others to see the missing or difficult-to-see lanes. (At Bostick Col. 7, ln. 35-36)

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHASE L COOLEY, whose telephone number is (303) 297-4355. The examiner can normally be reached Monday-Thursday, 7:00-5:00 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad, can be reached at 571-270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.L.C./
Examiner, Art Unit 3662

/ANISS CHAD/
Supervisory Patent Examiner, Art Unit 3662

1. “GPS breadcrumbs refer to the trail of location points a vehicle leaves behind as it moves, which are recorded by GPS tracking systems. Each ‘breadcrumb’ represents a specific time and location, creating a detailed map of the vehicle’s route.” (https://tobicloud.com/glossary/gps-breadcrumbs/)
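Footnote 1 defines GPS breadcrumbs as timestamped location points forming a trail of the vehicle's route. A small illustrative sketch (hypothetical data structure and a constant-velocity assumption; not code from Gallagher or Takabayashi) shows how such a trail could be extrapolated into the kind of predicted trajectory the rejection discusses:

```python
from dataclasses import dataclass


@dataclass
class Breadcrumb:
    t: float  # timestamp (s)
    x: float  # easting (m), in a hypothetical local map frame
    y: float  # northing (m)


def predict_position(trail, t_future):
    """Extrapolate the forward vehicle's position from its last two
    breadcrumbs, assuming constant velocity between GPS fixes.
    """
    a, b = trail[-2], trail[-1]
    dt = b.t - a.t
    vx, vy = (b.x - a.x) / dt, (b.y - a.y) / dt
    lead = t_future - b.t
    return (b.x + vx * lead, b.y + vy * lead)
```

For a vehicle whose last two breadcrumbs show it moving 10 m/s due east, this predicts it 20 m further east two seconds after the last fix. Real route-prediction systems of the kind Takabayashi describes use far richer models, but the breadcrumb trail is the common input.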

Prosecution Timeline

Aug 09, 2022
Application Filed
Sep 07, 2024
Non-Final Rejection — §102, §103
Dec 09, 2024
Response Filed
Feb 21, 2025
Final Rejection — §102, §103
May 27, 2025
Response after Non-Final Action
Jul 08, 2025
Request for Continued Examination
Jul 15, 2025
Response after Non-Final Action
Jul 25, 2025
Non-Final Rejection — §102, §103
Oct 30, 2025
Response Filed
Nov 28, 2025
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592154
CONTROL DEVICE, MONITORING SYSTEM, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12570125
TRIP INFORMATION CONTROL SCHEME
2y 5m to grant Granted Mar 10, 2026
Patent 12545274
PEER-TO-PEER VEHICULAR PROVISION OF ARTIFICIALLY INTELLIGENT TRAFFIC ANALYSIS
2y 5m to grant Granted Feb 10, 2026
Patent 12545302
SYSTEM, METHOD AND DEVICES FOR AUTOMATING INSPECTION OF BRAKE SYSTEM ON A RAILWAY VEHICLE OR TRAIN
2y 5m to grant Granted Feb 10, 2026
Patent 12539858
APPARATUS AND METHOD FOR DETERMINING CUT-IN OF VEHICLE
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
67%
Grant Probability
88%
With Interview (+20.4%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 173 resolved cases by this examiner. Grant probability derived from career allow rate.
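The projection figures above are largely consistent with the examiner statistics: the 67% grant probability matches the career allow rate of 116 granted out of 173 resolved, and the with-interview figure is roughly that base rate plus the +20.4-point interview lift, within rounding. A quick arithmetic check (the additive-lift interpretation is an assumption, not stated by the page):

```python
# Figures as displayed on the page
granted, resolved = 116, 173
base = 100 * granted / resolved  # career allow rate: ~67.1%, shown as 67%
lift = 20.4                      # interview lift, in percentage points
with_interview = base + lift     # ~87.5%, close to the displayed 88%
```

The half-point gap to the displayed 88% suggests the page rounds the combined figure from slightly different underlying counts, but the additive relationship holds to within one percentage point.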
