Prosecution Insights
Last updated: April 19, 2026
Application No. 18/088,975

APPARATUS AND METHOD FOR CONTROLLING PLATOONING

Non-Final OA §103
Filed
Dec 27, 2022
Examiner
STRYKER, NICHOLAS F
Art Unit
3665
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Hyundai Mobis Co., Ltd.
OA Round
3 (Non-Final)
40%
Grant Probability
At Risk
3-4
OA Rounds
3y 6m
To Grant
67%
With Interview

Examiner Intelligence

Grants only 40% of cases
40%
Career Allow Rate
15 granted / 38 resolved
-12.5% vs TC avg
Strong +28% interview lift
+27.6%
Interview Lift
resolved cases with interview
Typical timeline
3y 6m
Avg Prosecution
40 currently pending
Career history
78
Total Applications
across all art units

Statute-Specific Performance

§101
15.8%
-24.2% vs TC avg
§103
56.9%
+16.9% vs TC avg
§102
14.1%
-25.9% vs TC avg
§112
12.7%
-27.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 38 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/12/2025 has been entered. Claim(s) 1, 4-7, 11, 14, and 19-20 have been amended. Claim(s) 8-10 have been cancelled. Claim(s) 1-7 and 11-20 are pending examination. Applicant has not argued against the 112(f) interpretation of claims 1-2, 4-6, 8, 11, and 12, so that interpretation will stand.

Response to Arguments

Applicant presents the following argument(s) regarding the previous Office action: Applicant asserts that the 35 USC 103 rejection of independent claims 1, 19, and 20 is improper. Applicant alleges that the cited art fails to teach the claim limitations as recited and thus the independent claims should be allowable. Dependent claims should be allowable due to their dependence on allowable subject matter. Applicant's arguments filed 11/12/2025 have been fully considered but they are not persuasive. Regarding applicant's argument A, the examiner respectfully disagrees. Applicant's arguments appear to be directed to claim 1; this response addresses those arguments. Applicant argues against the teaching of the claimed subject matter by Gu, Schuh, Iba, and Yoon.
Regarding applicant’s assertion that Gu fails to teach “reward determination part outputs the feedback signal as any one of positive feedback and negative feedback according to whether a first distance between the host vehicle and the rear vehicle is comprised in a preset first range,” the examiner previously cited Gu, [n0069], [n0072], and [n0078], to teach the sending of the feedback signal after comparing the points of a follower vehicle to a main vehicle. The crux of the teachings of Gu is the sending of a feedback signal. Gu, [n0069] says, “The longitudinal reward: if the driving distance is within a preset distance range, the longitudinal reward is the maximum value; if the driving distance is less than the preset safe vehicle distance, the longitudinal reward is the minimum value.” [n0072] and [n0078] further this teaching. Gu explicitly teaches the idea of altering the “reward” depending on whether the distance between vehicles is outside of a range or in the optimal range. While Gu does not use a “positive” and/or “negative” value, that appears to be a design choice by the applicant. Gu uses a “maximum” and “minimum” reward value. The end result is the same: Gu outputs a feedback signal between vehicles based on the intra-vehicle distance. Applicant appears to be making a distinction without a difference in this argument. While the current application and Gu use different ranges for the reward, there is no difference in the end result. Both systems teach the use of a reward determination device that can output feedback. In light of this the examiner determines that the use of positive and/or negative feedback is a design choice, as the range of values is not important and would amount to a mere rearrangement of parts, see In re Kuhle, 526 F.2d 553, 188 USPQ 7 (CCPA 1975).
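To make the equivalence argument concrete: Gu's maximum/minimum longitudinal reward and the claimed positive/negative feedback can both be expressed as the same thresholding function on inter-vehicle distance. The sketch below is purely illustrative and is not part of the record; the function name, thresholds, and reward values are hypothetical and appear in neither Gu nor the instant application.

```python
def longitudinal_reward(distance: float, safe_min: float, preferred_max: float,
                        r_in_range: float = 1.0, r_out_of_range: float = -1.0) -> float:
    """Illustrative reward in the style of Gu [n0069]: one reward value when the
    inter-vehicle distance falls inside the preset range, the other otherwise.

    Whether r_in_range/r_out_of_range are labeled "maximum"/"minimum" (Gu) or
    "positive"/"negative" (the claims), the output carries the same information.
    """
    if safe_min <= distance <= preferred_max:
        return r_in_range      # in the preset range -> maximum / positive feedback
    return r_out_of_range      # outside the range -> minimum / negative feedback
```

Relabeling the two output values does not change which distances produce which signal, which is the substance of the design-choice rationale above.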
Regarding applicant’s assertion that Yoon does not teach “wherein the first distance is determined based on a reception strength of a wireless signal received from the rear vehicle,” the examiner respectfully disagrees. The cited portion of Yoon teaches a “communication level.” This level can be based on the received signal strength; [0268] states, “communication state of the group can be determined based on at least one of the congestion of the channel used for the communication of the group, the reception sensitivity, the packet error ratio (PER), the received signal strength indication (RSSI).” (Emphasis added). As Yoon further teaches in [0255], “the platooning method according to an embodiment of the present invention may include a step of determining a communication state of a group (S100), and a step of adjusting a distance between vehicles according to the communication state of the group.” [0292], [0289]-[0290], and [0318]-[0321] further this teaching. Clearly Yoon teaches that the system can determine a communication state of a group of platooning vehicles based on the RSSI of the vehicles. It can then adjust intra-vehicle distances and determine that some vehicles fail a distance determination based on that. After considering the applicant’s arguments the examiner is not persuaded; the arguments appear at best to allege that the cited art does not teach the claims, but there is no evidence to support this. As such, claims 1-7 and 11-20 remain rejected under 35 USC 103 for the reasons recited below in the section titled, “Claim Rejections - 35 USC 103.”

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claim(s) 1-2, 4-7, 15, and 19 is/are rejected under 35 U.S.C.
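For context on how a reception strength maps to a distance, the log-distance path-loss model commonly used with RSSI readings is sketched below. This is a generic illustration, not a teaching of Yoon or the instant application, and the reference transmit power and path-loss exponent are assumed values.

```python
def distance_from_rssi(rssi_dbm: float, tx_power_dbm: float = -40.0,
                       path_loss_exponent: float = 2.0) -> float:
    """Estimate distance (meters) from received signal strength using the
    standard log-distance path-loss model:

        RSSI = P_ref - 10 * n * log10(d)

    where P_ref (tx_power_dbm) is the RSSI measured at 1 m and n
    (path_loss_exponent) depends on the propagation environment.
    Both defaults here are assumed, not drawn from any cited reference.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

Under this model a weaker signal (more negative RSSI) yields a larger estimated distance, which is consistent with the general idea of inferring inter-vehicle distance from signal strength.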
103 as being unpatentable over Gu (CN-111201411-B) in view of Schuh (US Pat 10,520,581), Iba (US PG Pub 2023/0073287), and Yoon (US PG Pub 2019/0079540). Regarding claim 1, Gu teaches an apparatus for controlling platooning ([n0083] teaches the use of the device in relation to vehicles following a leader, i.e. platooning), the apparatus comprising: a learning device which performs reinforcement learning based on a feedback signal ([n0018] teaches the system using a reinforcement learning device which provides a feedback signal) and controls driving of the host vehicle based on a result of the reinforcement learning ([n0065] teaches the system able to control the driving of a vehicle through throttle, brake, and directional control); a reward determination part which obtains coordinates of the rear vehicle and generates the feedback signal by comparing the coordinates of the rear vehicle with coordinates of control points for the driving trajectory of the host vehicle ([n0043]-[n0044] teach a vehicle comparing its position to an expected position based on a series of control points); wherein the reward determination part outputs the feedback signal as any one of positive feedback and negative feedback according to whether a first distance between the host vehicle and the rear vehicle is comprised in a preset first range, ([n0069], [n0072], and [n0078] teach the system determining a longitudinal reward based on a driving distance, i.e. the distance between a lead and follower vehicle. The system is taught to maximize the reward when the driving distance falls in the range of a far and safe range. Gu, [n0069] says, “The longitudinal reward: if the driving distance is within a preset distance range, the longitudinal reward is the maximum value; if the driving distance is less than the preset safe vehicle distance, the longitudinal reward is the minimum value.” [n0072] and [n0078] further this teaching.
Gu explicitly teaches the idea of altering the “reward” depending on whether the distance between vehicles is outside of a range or in the optimal range. While Gu does not use a “positive” and/or “negative” value, that appears to be a design choice by the applicant. Gu uses a “maximum” and “minimum” reward value. The end result is the same: Gu outputs a feedback signal between vehicles based on the intra-vehicle distance. Both Gu and the instant application teach the use of a reward determination device that can output feedback. In light of this the examiner determines that the use of positive and/or negative feedback is a design choice, as the range of values is not important and would amount to a mere rearrangement of parts, see In re Kuhle, 526 F.2d 553, 188 USPQ 7 (CCPA 1975).) Gu does not teach video information output from a camera provided in each of a host vehicle and a rear vehicle which are platooning, and such that the rear vehicle can follow a driving trajectory of the host vehicle, wherein when the first distance is not comprised in the preset first range, the learning device controls driving speed of the host vehicle such that the first distance is comprised in the preset first range, and wherein the first distance is determined based on a reception strength of a wireless signal received from the rear vehicle. However, Schuh teaches “video information output from a camera provided in each of a host vehicle and a rear vehicle which are platooning;” (Col. 2, lines 8-16 teach the system using a first sensor to determine information about a lead and following vehicle; this sensor can be a camera. Col. 23, lines 26-34 further this teaching. Col.
26, lines 39-56 teach that the system can be utilized with both rear and forward facing cameras from both a leading and following vehicle) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu and Schuh; and have a reasonable expectation of success. Both relate to the control of vehicles as they travel and attempt to follow a lead vehicle. As Schuh teaches in Col. 1, Background, the use of vehicles in platooning allows for significant fuel savings, but can be dangerous when not done correctly. By using an autonomous system the safety is increased as the vehicles can communicate and even share sensor data. In Col. 1, Summary, Schuh teaches that fusing sensor data collected from the vehicles in a platoon leads to better automatic control of the system. Gu and Schuh fail to teach …controls the driving of the host vehicle…such that the rear vehicle can follow a driving trajectory of the host vehicle; and when the first distance is not comprised in the preset first range, the learning device controls driving speed of the host vehicle such that the first distance is comprised in the preset first range, and wherein the first distance is determined based on a reception strength of a wireless signal received from the rear vehicle.
However, Iba teaches “…controls the driving of the host vehicle…such that the rear vehicle can follow a driving trajectory of the host vehicle.” ([0031] teaches the lead vehicle of a platoon as setting a trajectory based on the way the following vehicle can interact with the world, such as a max speed) and “wherein when the first distance is not comprised in the preset first range, the learning device controls driving speed of the host vehicle such that the first distance is comprised in the preset first range.” ([0063], [0066], and [0071]-[0077] teach the system of a lead vehicle with brakes and drive sources as able to adjust the intervehicle distance in order to maintain the distance between vehicles within a preset range) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu and Schuh with Iba; and have a reasonable expectation of success. All relate to the control of vehicles in relation to a platooning system. Iba teaches in [0004] that there is a known issue that sometimes the lead vehicle of a platoon may enter a situation where its driving is not followable by following vehicles in a platoon. [0005] teaches that the system of Iba works to prevent this from occurring by ensuring that the speeds of a lead vehicle are not excessive and that following vehicles can match the trajectory of the lead vehicle. The combination of Gu, Schuh, and Iba does not teach wherein the first distance is determined based on a reception strength of a wireless signal received from the rear vehicle. However, Yoon teaches “wherein the first distance is determined based on a reception strength of a wireless signal received from the rear vehicle.” ([0302]-[0306] teach the system determining the communication state, or level, of the vehicle in the platoon group, i.e. wireless strength. The state is compared to a reference value to determine the health of the platoon communication.
This level can be based on the received signal strength, [0268] “communication state of the group can be determined based on at least one of the congestion of the channel used for the communication of the group, the reception sensitivity, the packet error ratio (PER), the received signal strength indication (RSSI).” (Emphasis added). As Yoon further teaches in [0255], “the platooning method according to an embodiment of the present invention may include a step of determining a communication state of a group (S100), and a step of adjusting a distance between vehicles according to the communication state of the group.” [0292], [0289]-[0290], and [0318]-[0321] further this teaching.) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, and Iba with Yoon; and have a reasonable expectation of success. All relate to the control of vehicles moving in an environment and possible use of platooning methods. As Yoon teaches in [0309] the use of a communication determination to figure out the state of the platoon allows the platoon to act quickly and efficiently. If the platoon determines that the communication is poor it can quickly determine to break up or take some form of corrective action. Claim 19 is substantially similar and would be rejected for the same rationale as above. Regarding claim 2, Gu teaches the apparatus of claim 1, wherein the reward determination part transmits the coordinates of the control points to the rear vehicle such that the rear vehicle follows the driving trajectory of the host vehicle based on the control points. ([n0044] teaches the sending of control points, i.e. 
coordinates, between a following and followed vehicle) Regarding claim 4, Gu teaches the apparatus of claim 1, wherein when the coordinates of the rear vehicle are outside a driving lane compared to the coordinates of the control points, the reward determination part outputs the feedback signal as the negative feedback. ([n0070] teaches the system including a lateral reward which is analogous to a deviation from the control point in a lateral direction, i.e. outside a driving lane. The reward can be higher or lower based on the distance from the control point. The lateral reward is based on an adjustment value which is defined in [n0060] as a line segment on each side of the control point. [n0053] teaches the following vehicle being in a traveling lane of 1.2 times the width of the lead vehicle and the control points are based in this lane) Regarding claim 5, Gu teaches the apparatus of claim 1, wherein when the coordinates of the rear vehicle are outside a preset hazard distance from the coordinates of the control points, the reward determination part outputs the feedback signal as the negative feedback. ([n0073] teaches the system having a lateral reward determination based on a safety distance, i.e. hazard distance. The system can adjust the reward amount based on the idea of a distance from the control point. The lateral reward is based on an adjustment value which is defined in [n0060] as a line segment on each side of the control point. [n0053] teaches the following vehicle being in a traveling lane of 1.2 times the width of the lead vehicle and the control points are based in this lane) Regarding claim 6, Gu teaches the apparatus of claim 1, wherein when the coordinates of the rear vehicle are inside a driving lane compared to the coordinates of the control points and are inside a preset hazard distance from the coordinates of the control points, the reward determination part outputs the feedback signal as the positive feedback.
([n0070] and [n0073] teach the system having a lateral reward component which can include a maximum reward when the vehicle is within a safety distance and closest to the control point, which is analogous to the driving lane and hazard distance. The lateral reward is based on an adjustment value which is defined in [n0060] as a line segment on each side of the control point. [n0053] teaches the following vehicle being in a traveling lane of 1.2 times the width of the lead vehicle and the control points are based in this lane) Regarding claim 7, Gu teaches the apparatus of claim 1, wherein when the coordinates of the rear vehicle are outside a driving lane compared to the coordinates of the control points or are outside a preset hazard distance from the coordinates of the control points, the learning device controls one of driving direction, the driving speed of the host vehicle and a combination thereof such that the driving trajectory of the host vehicle corresponds to a driving trajectory of the rear vehicle. ([n0029], [n0056], and [n0065] teach the system as able to adjust the vehicle control strategy on the basis of the vehicle trajectory determination. The adjustment of the control strategy can come from adjust the throttle, brake and/or driving direction) Regarding claim 15, Gu teaches the apparatus of claim 1, wherein the learning device controls the driving of the host vehicle through output of a steering control signal, a braking control signal, and an acceleration control signal of the host vehicle. ([n0065] teaches the system able to control the driving of a vehicle through throttle, brake, and directional control) Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gu, Schuh, Iba, and Yoon in view of Oniwa (US PG Pub 2019/0179330). Regarding claim 3, the combination of Gu, Schuh, Iba, and Yoon teaches the apparatus of claim 1. 
The combination of Gu, Schuh, Iba, and Yoon does not teach wherein the control points correspond to points which control a shape of a spline curve corresponding to the driving trajectory of the host vehicle. However, Oniwa teaches “wherein the control points correspond to points which control a shape of a spline curve corresponding to the driving trajectory of the host vehicle.” ([0145] teaches a plurality of vehicles traveling and models the trajectory of said vehicles as a spline curve divided at distinct time points, i.e. control points) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, Iba, and Yoon with Oniwa; and have a reasonable expectation of success. All relate to the control of vehicles on roadways. Use of a spline of control points would allow the vehicle system to maintain a valid control strategy for lane keeping. As [0127] explains, the system can maintain its current plan even in the event that a change of the world state occurs. This would improve master control over the vehicle and prevent further deviations. Claim(s) 11-14, 17, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gu, Schuh, Iba, and Yoon in view of Kim (KR10-2020-0119924). Regarding claim 11, the combination of Gu, Schuh, Iba, and Yoon teaches the apparatus of claim 1. The combination of Gu, Schuh, Iba, and Yoon does not teach wherein the reward determination part outputs the feedback signal by considering whether a separate vehicle which is not platooning behind the host vehicle is recognized.
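As background on the claim 3 limitation, control points "control the shape" of a spline in the sense that moving any one of them changes the resulting curve. The minimal Bézier evaluation below, via de Casteljau's algorithm, is a generic illustration only; it is not drawn from Oniwa or the instant application, and the function name and sample points are hypothetical.

```python
def bezier_point(control_points, t):
    """Evaluate one point on a Bézier curve defined by its control points,
    using de Casteljau's algorithm: repeatedly interpolate between adjacent
    points at parameter t (0 <= t <= 1) until a single point remains."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A trajectory is then the set of points swept out as t runs from 0 to 1;
# perturbing any control point deforms every interior point of the curve.
```

For example, the three control points (0, 0), (2, 2), (4, 0) produce an arc that starts and ends on the x-axis and bulges toward the middle control point, so adjusting that middle point reshapes the whole trajectory.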
However, Kim teaches “wherein the reward determination part outputs the feedback signal by considering whether a separate vehicle which is not platooning behind the host vehicle is recognized.” (Highlight in [Equation 2] teaches the system determining an interloper vehicle, 11, that has been detected behind the lead vehicle of the platoon) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, Iba, and Yoon with Kim; and have a reasonable expectation of success. All relate to the control of vehicles in the environment with regards to possible platooning needs. As Kim teaches in the Background Art there is a need to be able to recognize a vehicle position within a cluster driving, platoon, section of cars. Determining that a separate vehicle has intruded in the platoon allows the system to further determine how to act. Regarding claim 12, Gu teaches the apparatus of claim 11, wherein when the separate vehicle is recognized, the reward determination part outputs the feedback signal as any one of positive feedback and negative feedback according to whether a ratio of the first distance to a second distance between coordinates of the host vehicle and coordinates of the separate vehicle is comprised in a preset second range. ([n0069], [n0072], and [n0078] teach the system determining a longitudinal reward based on a driving distance, i.e. the distance between a lead and follower vehicle. The system is taught to maximize the reward when the driving distance falls within a range. This range is understood to be between two vehicles, the two vehicles could be a leading vehicle and an interloping vehicle) Regarding claim 13, the combination of Gu, Schuh, Iba, and Yoon teaches the apparatus of claim 12. 
The combination of Gu, Schuh, Iba, and Yoon does not teach wherein the second distance is determined based on one of rear video information output from a rear camera provided in the host vehicle, a detection result of radar provided in the host vehicle and a combination thereof. However, Kim teaches “wherein the second distance is determined based on one of rear video information output from a rear camera provided in the host vehicle, a detection result of radar provided in the host vehicle and a combination thereof.” (The section teaching on figure 5, highlighted and boxed, teaches the system determining the distance information between lead and following vehicle based on radar information; this can be combined with camera information and further is understood to be able to determine the distance of the interloping vehicle as shown in Fig. 4) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, Iba, and Yoon with Kim; and have a reasonable expectation of success. All relate to the control of vehicles in the environment with regards to possible platooning needs. As Kim teaches in the Background Art there is a need to be able to recognize a vehicle position within a cluster driving, platoon, section of cars. Determining that a separate vehicle has intruded in the platoon allows the system to further determine how to act. Regarding claim 14, the combination of Gu and Schuh teaches the apparatus of claim 12. The combination of Gu and Schuh does not teach wherein when the ratio of the first distance to the second distance is not comprised in the second range, the learning device controls the driving speed of the host vehicle such that the ratio of the first distance to the second distance is comprised in the preset second range.
However, Iba teaches “wherein when the ratio of the first distance to the second distance is not comprised in the second range, the learning device controls the driving speed of the host vehicle such that the ratio of the first distance to the second distance is comprised in the preset second range.” ([0063], [0066], and [0071]-[0077] teach the system of a lead vehicle with brakes and drive sources as able to adjust the intervehicle distance in order to maintain the distance between vehicles within a preset range) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu and Schuh with Iba; and have a reasonable expectation of success. All relate to the control of vehicles in relation to a platooning system. Iba teaches in [0004] that there is a known issue that sometimes the lead vehicle of a platoon may enter a situation where its driving is not followable by following vehicles in a platoon. [0005] teaches that the system of Iba works to prevent this from occurring by ensuring that the speeds of a lead vehicle are not excessive and that following vehicles can match the trajectory of the lead vehicle. Regarding claim 17, Gu teaches the apparatus of claim 1. Gu does not teach wherein the video information comprises rear video information output from a rear camera of the host vehicle and front video information output from a front camera of the rear vehicle, and the learning device determines mutually overlapping parts of the rear video of the host vehicle and the front video of the rear vehicle based on the rear video information and the front video information, and uses an overlapping degree of the rear video and the front video according to a result of the determination as learning data for the reinforcement learning.
However, Schuh teaches “wherein the video information comprises rear video information output from a rear camera of the host vehicle and front video information output from a front camera of the rear vehicle,” (Col. 2, lines 8-16 teach the system using a first sensor to determine information about a lead and following vehicle; this sensor can be a camera. Col. 23, lines 26-34 further this teaching. Col. 26, lines 39-56 teach that the system can be utilized with both rear and forward facing cameras from both a leading and following vehicle) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu and Schuh; and have a reasonable expectation of success. Both relate to the control of vehicles as they travel and attempt to follow a lead vehicle. As Schuh teaches in Col. 1, Background, the use of vehicles in platooning allows for significant fuel savings, but can be dangerous when not done correctly. By using an autonomous system the safety is increased as the vehicles can communicate and even share sensor data. In Col. 1, Summary, Schuh teaches that fusing sensor data collected from the vehicles in a platoon leads to better automatic control of the system. Gu and Schuh do not teach the learning device determines mutually overlapping parts of the rear video of the host vehicle and the front video of the rear vehicle based on the rear video information and the front video information, and uses an overlapping degree of the rear video and the front video according to a result of the determination as learning data for the reinforcement learning.
However, Kim teaches “the learning device determines mutually overlapping parts of the rear video of the host vehicle and the front video of the rear vehicle based on the rear video information and the front video information, and uses an overlapping degree of the rear video and the front video according to a result of the determination as learning data for the reinforcement learning.” (Highlight in [Equation 2] teaches the system determining an interloper vehicle that has been detected by both the front and rear detectors of a vehicle, this could be a camera as established in the highlight and circle teaching the sensors. The section teaching on figure 5, highlighted and boxed, teaches the system as comparing the front and rear radar information between a lead and following vehicle in a way that can determine the amount of overlap and the degree of reliability with the sensors) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, and Iba with Kim; and have a reasonable expectation of success. All relate to the control of vehicles in the environment with regards to possible platooning needs. As Kim teaches in the Background Art there is a need to be able to recognize a vehicle position within a cluster driving, platoon, section of cars. Determining that a separate vehicle has intruded in the platoon allows the system to further determine how to act. Regarding claim 20, Gu teaches a method for controlling platooning ([n0083] teaches the use of the device in relation to vehicles following a leader, i.e. platooning), the method comprising: determining whether a ratio of a first distance between coordinates of a host vehicle and coordinates of a front vehicle in platooning to a second distance between the coordinates of the host vehicle and coordinates of ([n0069], [n0072], and [n0078] teach the system determining a longitudinal reward based on a driving distance, i.e.
the distance between a lead and follower vehicle. The system is taught to maximize the reward when the driving distance falls within a range. This range is understood to be between two vehicles, the two vehicles could be a leading vehicle and an interloping vehicle) generating a feedback signal according to a result of the determination; ([n0069], [n0072], and [n0078] teach the system determining a longitudinal reward based on a driving distance, i.e. the distance between a lead and follower vehicle. The system is taught to maximize the reward when the driving distance falls within a range. This range is understood to be between two vehicles, the two vehicles could be a leading vehicle and an interloping vehicle) performing reinforcement learning based on the feedback signal ([n0018] teaches the system using a reinforcement learning device which provides a feedback signal) and wherein the method further comprises; outputting the feedback signal as any one of positive feedback and negative feedback according to whether a first distance between the host vehicle and the rear vehicle is comprised in a preset first range, ([n0069], [n0072], and [n0078] teach the system determining a longitudinal reward based on a driving distance, i.e. the distance between a lead and follower vehicle. The system is taught to maximize the reward when the driving distance falls in the range of a far and safe range. Gu, [n0069] says, “The longitudinal reward: if the driving distance is within a preset distance range, the longitudinal reward is the maximum value; if the driving distance is less than the preset safe vehicle distance, the longitudinal reward is the minimum value.” [n0072] and [n0078] further this teaching. Regarding the teachings of Gu it explicitly teaches the idea of altering the “reward” if the distance between vehicles is outside of a range or in the optimal range. 
While Gu does not use a “positive” and/or “negative” value, that appears to be a design choice by the applicant. Gu uses a “maximum” and “minimum” reward value. The end result is the same: Gu outputs a feedback signal between vehicles based on the intra-vehicle distance. Both Gu and the instant application teach the use of a reward determination device that can output feedback. In light of this the examiner determines that the use of positive and/or negative feedback is a design choice, as the range of values is not important and would amount to a mere rearrangement of parts, see In re Kuhle, 526 F.2d 553, 188 USPQ 7 (CCPA 1975).) Gu does not teach a separate vehicle is comprised in a preset range when the separate vehicle other than the platooning front vehicle is recognized from a front of the host vehicle in platooning; video information output from a camera provided in each of the host vehicle and the front vehicle; and controlling driving speed of the host vehicle such that the ratio of the first distance to the second distance is comprised in the preset range based on a result of the reinforcement learning; when the first distance is not comprised in the preset first range, the learning device controls driving speed of the host vehicle such that the first distance is comprised in the preset first range, and wherein the first distance is determined based on a reception strength of a wireless signal received from the rear vehicle. However, Schuh teaches “video information output from a camera provided in each of a host vehicle and a rear vehicle which are platooning;” (Col. 2, lines 8-16 teach the system using a first sensor to determine information about a lead and following vehicle; this sensor can be a camera. Col. 23, lines 26-34 further this teaching. Col.
26, lines 39-56 teach that the system can be utilized with both rear and forward facing cameras from both a leading and following vehicle) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu and Schuh, and have a reasonable expectation of success. Both relate to the control of vehicles as they travel and attempt to follow a lead vehicle. As Schuh teaches in Col. 1 (Background), the use of vehicles in platooning allows for significant fuel savings, but can be dangerous when not done correctly. By using an autonomous system, safety is increased as the vehicles can communicate and even share sensor data. In Col. 1 (Summary), Schuh teaches that fusing sensor data collected from the vehicles in a platoon leads to better automatic control of the system. Gu and Schuh fail to teach a separate vehicle is comprised in a preset range when the separate vehicle other than the platooning front vehicle is recognized from a front of the host vehicle in platooning; and controlling driving speed of the host vehicle such that the ratio of the first distance to the second distance is comprised in the preset range based on a result of the reinforcement learning; when the first distance is not comprised in the preset first range, the learning device controls driving speed of the host vehicle such that the first distance is comprised in the preset first range, and wherein the first distance is determined based on a reception strength of a wireless signal received from the rear vehicle.
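The longitudinal reward attributed to Gu above (maximum value when the driving distance is within a preset range, minimum value when below the preset safe vehicle distance) can be sketched as follows. This is a minimal illustration only: the concrete thresholds, reward values, and the linear fall-off between regions are assumptions, as Gu's exact reward shape is not specified here.

```python
def longitudinal_reward(driving_distance: float,
                        range_min: float = 20.0,
                        range_max: float = 40.0,
                        safe_distance: float = 10.0,
                        r_max: float = 1.0,
                        r_min: float = -1.0) -> float:
    """Sketch of a Gu-style longitudinal reward (all values illustrative).

    - Maximum reward when the driving distance lies within the preset range.
    - Minimum reward when the distance is below the preset safe distance.
    - Linear interpolation/decay elsewhere (an assumed shape).
    """
    if range_min <= driving_distance <= range_max:
        return r_max
    if driving_distance < safe_distance:
        return r_min
    if driving_distance < range_min:
        # Between the safe distance and the near edge of the range:
        # interpolate between minimum and maximum reward (assumed behavior).
        t = (driving_distance - safe_distance) / (range_min - safe_distance)
        return r_min + t * (r_max - r_min)
    # Beyond the far edge of the range: decay toward the minimum (assumed).
    t = min((driving_distance - range_max) / range_max, 1.0)
    return r_max - t * (r_max - r_min)
```

With these illustrative thresholds, a follower 30 m behind the lead receives the maximum reward, while one at 5 m receives the minimum.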
However, Iba teaches "controlling driving speed of the host vehicle such that the ratio of the first distance to the second distance is comprised in the preset range based on a result of the reinforcement learning." ([0031] teaches the lead vehicle of a platoon as setting a trajectory based on the way the following vehicle can interact with the world, such as a max speed) and "when the first distance is not comprised in the preset first range, the learning device controls driving speed of the host vehicle such that the first distance is comprised in the preset first range." ([0063], [0066], and [0071]-[0077] teach the system of a lead vehicle with brakes and drive sources as able to adjust the intervehicle distance in order to maintain the distance between vehicles within a preset range) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu and Schuh with Iba, and have a reasonable expectation of success. All relate to the control of vehicles in relation to a platooning system. Iba teaches in [0004] that there is a known issue that sometimes the lead vehicle of a platoon may enter a situation where its driving is not followable by following vehicles in a platoon. [0005] teaches that the system of Iba works to prevent this from occurring by ensuring that the speeds of a lead vehicle are not excessive and that following vehicles can match the trajectory of the lead vehicle. The combination of Gu, Schuh, and Iba does not teach a separate vehicle is comprised in a preset range when the separate vehicle other than the platooning front vehicle is recognized from a front of the host vehicle in platooning.
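The claimed ratio-based control (keeping the ratio of the first distance to the second distance within a preset range by adjusting the host vehicle's driving speed) can be sketched as a simple proportional adjustment. The function name, gain, ratio range, and sign convention below are illustrative assumptions, not taken from the claims or from Iba.

```python
def adjust_host_speed(current_speed: float,
                      first_distance: float,
                      second_distance: float,
                      ratio_range: tuple = (0.8, 1.2),
                      gain: float = 0.5) -> float:
    """Sketch of speed control driving the distance ratio into a preset range.

    A proportional correction is an assumption; the claim only requires that
    the ratio of the first distance to the second distance end up within the
    preset range.
    """
    ratio = first_distance / second_distance
    lo, hi = ratio_range
    if ratio < lo:
        # First distance too small relative to the second: speed up to open
        # the gap (assumed sign convention).
        return current_speed * (1.0 + gain * (lo - ratio))
    if ratio > hi:
        # Ratio too large: slow down to close the gap.
        return current_speed * (1.0 - gain * (ratio - hi))
    return current_speed  # Ratio already within the preset range.
```

For example, with a host speed of 20 m/s and distances of 4 m and 10 m (ratio 0.4, below the assumed range), the sketch increases speed; with the ratio already in range, the speed is left unchanged.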
However, Kim teaches "a separate vehicle is comprised in a preset range when the separate vehicle other than the platooning front vehicle is recognized from a front of the host vehicle in platooning;" (Highlight in [Equation 2] teaches the system determining an interloper vehicle, 11, that has been detected behind the lead vehicle of the platoon) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, and Iba with Kim, and have a reasonable expectation of success. All relate to the control of vehicles in the environment with regards to possible platooning needs. As Kim teaches in the Background Art, there is a need to be able to recognize a vehicle position within a cluster driving (platoon) section of cars. Determining that a separate vehicle has intruded in the platoon allows the system to further determine how to act. The combination of Gu, Schuh, Iba, and Kim does not teach wherein the first distance is determined based on a reception strength of a wireless signal received from the rear vehicle. However, Yoon teaches "wherein the first distance is determined based on a reception strength of a wireless signal received from the rear vehicle." ([0302]-[0306] teach the system determining the communication state, or level, of the vehicle in the platoon group, i.e. wireless signal strength. The state is compared to a reference value to determine the health of the platoon communication. This level can be based on the received signal strength; [0268]: "communication state of the group can be determined based on at least one of the congestion of the channel used for the communication of the group, the reception sensitivity, the packet error ratio (PER), the received signal strength indication (RSSI)." (Emphasis added).
As Yoon further teaches in [0255], “the platooning method according to an embodiment of the present invention may include a step of determining a communication state of a group (S100), and a step of adjusting a distance between vehicles according to the communication state of the group.” [0292], [0289]-[0290], and [0318]-[0321] further this teaching.) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, Iba, and Kim with Yoon; and have a reasonable expectation of success. All relate to the control of vehicles moving in an environment and possible use of platooning methods. As Yoon teaches in [0309] the use of a communication determination to figure out the state of the platoon allows the platoon to act quickly and efficiently. If the platoon determines that the communication is poor it can quickly determine to break up or take some form of corrective action. Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gu, Schuh, Iba, and Yoon in view of Baba (US PG Pub 2023/0391333). Regarding claim 16, the combination of Gu, Schuh, Iba, and Yoon teaches the apparatus of claim 1. The combination of Gu, Schuh, Iba, and Yoon does not teach wherein when controlling the driving speed of the host vehicle, the learning device considers whether there is a front obstacle located within a predetermined range from a front of the host vehicle. However, Baba teaches “wherein when controlling the driving speed of the host vehicle, the learning device considers whether there is a front obstacle located within a predetermined range from a front of the host vehicle.” ([0068] teaches the host vehicle detecting an obstacle a certain distance ahead of the vehicle. 
[0070] teaches the vehicle being controlled on the basis of the detected forward obstacle) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, Iba, and Yoon with Baba, and have a reasonable expectation of success. All relate to the control of vehicles in an environment with the possible use of platooning. As Baba teaches in [0004], the use of a safety envelope around a vehicle allows the system to most effectively determine what potential actions it may take. The determination of an obstacle ahead of the vehicle may cause it to turn, decelerate, or perform some other form of avoidance maneuver. Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gu, Schuh, Iba, and Yoon in view of Chen (CN-105035085-A). Regarding claim 18, the combination of Gu, Schuh, Iba, and Yoon teaches the apparatus of claim 1. The combination of Gu, Schuh, Iba, and Yoon does not teach an inference neural network device that updates a parameter for a neural network comprised in the learning device, receives the video information based on the updated parameter, and controls the host vehicle such that the rear vehicle can follow the driving trajectory of the host vehicle. However, Chen teaches "an inference neural network device that updates a parameter for a neural network comprised in the learning device, receives the video information based on the updated parameter;" ([0163] teaches an inference neural network that can update parameters based on learning) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, Iba, and Yoon with Chen, and have a reasonable expectation of success. All relate to the control of vehicles in an environment and the possible use of platoon control strategies.
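Yoon, applied above against the independent claims, determines the first distance from the reception strength (RSSI) of a wireless signal received from the rear vehicle. One common way to map RSSI to distance is the log-distance path-loss model; the model choice, function name, and all parameter values below are illustrative assumptions, not taken from Yoon.

```python
def distance_from_rssi(rssi_dbm: float,
                       rssi_at_ref_dbm: float = -40.0,
                       ref_distance_m: float = 1.0,
                       path_loss_exponent: float = 2.0) -> float:
    """Estimate distance from RSSI via the log-distance path-loss model.

    d = d0 * 10 ** ((P0 - RSSI) / (10 * n)), where P0 is the RSSI measured at
    the reference distance d0 and n is the path-loss exponent. The default
    parameters are assumed for illustration; real systems calibrate them to
    the environment.
    """
    exponent = (rssi_at_ref_dbm - rssi_dbm) / (10.0 * path_loss_exponent)
    return ref_distance_m * (10.0 ** exponent)
```

Under these assumed parameters, a reading of -60 dBm maps to roughly 10 m; a weaker signal maps to a larger estimated first distance, which is the relationship Yoon's communication-state check relies on.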
As Chen teaches in [0163], the use of dual neural networks allows the system to maximize its control strategy by having one network related to action and one related to evaluation. The system can recognize wrong or improper control strategies and generate a corrective signal to prevent them from being used in the future. This would ensure the vehicle is always acting as safely as possible. Gu, Schuh, and Chen do not teach controlling the host vehicle such that the rear vehicle can follow the driving trajectory of the host vehicle. However, Iba teaches "controls the host vehicle such that the rear vehicle can follow the driving trajectory of the host vehicle." ([0031] teaches the lead vehicle of a platoon as setting a trajectory based on the way the following vehicle can interact with the world, such as a max speed) It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date, to incorporate the teachings of Gu, Schuh, and Chen with Iba, and have a reasonable expectation of success. All relate to the control of vehicles in relation to a platooning system. Iba teaches in [0004] that there is a known issue that sometimes the lead vehicle of a platoon may enter a situation where its driving is not followable by following vehicles in a platoon. [0005] teaches that the system of Iba works to prevent this from occurring by ensuring that the speeds of a lead vehicle are not excessive and that following vehicles can match the trajectory of the lead vehicle. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kang (US PG Pub 2022/0375354) teaches a method of determining a location for swarm flight using UWB, the method including: computing a reference location from GPS information in a case where the location is measured; sending out a pulling signal, preset according to a two-way ranging format, according to slave ranging scheduling corresponding to each formation, and receiving a pushing signal from a neighboring flight vehicle and performing ranging; computing a relative location in the formation on a master-slave basis from a ranged pull-push relationship using TWR time information, and computing the relative location in the formation on a slave-slave basis using a received signal strength indicator and time of arrival; generating a fingerprint map in a manner that varies with each formation, using all the computed relative locations in the formation on the master-slave basis; and computing the location of the swarm flight vehicle using the generated fingerprint map. Lei (US PG Pub 2023/0195105) teaches a system that includes a plurality of vehicles, at least one first processor in a first vehicle, and at least one second processor in each other of the plurality of vehicles. The first vehicle wirelessly receives remote driving commands, from a remote computing system, instructing control of the first vehicle and executes the remote driving commands to control the first vehicle in accordance with the remote driving commands. The first vehicle wirelessly broadcasts the remote driving commands, including a location of the first vehicle where a given one of the driving commands was executed. The second vehicle wirelessly receives the broadcast remote driving commands, stores the received remote driving commands in sequence, and executes the given one of the driving commands when a location of the second vehicle corresponds to the location of the first vehicle where the given one of the driving commands was executed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS STRYKER whose telephone number is (571)272-4659. The examiner can normally be reached Monday-Friday 7:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached at (571) 272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /N.S./Examiner, Art Unit 3665 /CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665

Prosecution Timeline

Dec 27, 2022: Application Filed
Mar 21, 2025: Non-Final Rejection — §103
Jul 01, 2025: Response Filed
Aug 08, 2025: Final Rejection — §103
Oct 14, 2025: Response after Non-Final Action
Nov 12, 2025: Request for Continued Examination
Nov 18, 2025: Response after Non-Final Action
Feb 13, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12524021: FAULT TOLERANT MOTION PLANNER (granted Jan 13, 2026; 2y 5m to grant)
Patent 12492903: NAVIGATION DEVICE AND METHOD OF MANUFACTURING NAVIGATION DEVICE (granted Dec 09, 2025; 2y 5m to grant)
Patent 12475526: COMPUTING SYSTEM WITH A MAP AUTO-ZOOM MECHANISM AND METHOD OF OPERATION THEREOF (granted Nov 18, 2025; 2y 5m to grant)
Patent 12455576: INFORMATION DISPLAY SYSTEM AND INFORMATION DISPLAY METHOD (granted Oct 28, 2025; 2y 5m to grant)
Patent 12449822: GROUND CLUTTER AVOIDANCE FOR A MOBILE ROBOT (granted Oct 21, 2025; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 67% (+27.6%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
