Prosecution Insights
Last updated: April 19, 2026
Application No. 18/089,857

VIRTUAL TRACK DETECTION SYSTEM AND METHOD

Status: Non-Final OA (§103)
Filed: Dec 28, 2022
Examiner: FABER, DAVID
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Automotive Research & Testing Center
OA Round: 3 (Non-Final)

Grant Probability: 52% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 4y 8m
Grant Probability with Interview: 88%
Examiner Intelligence

Career Allow Rate: 52% of resolved cases granted (274 granted / 531 resolved; -3.4% vs TC avg)
Interview Lift: +36.7% for resolved cases with interview (strong)
Typical Timeline: 4y 8m average prosecution; 41 applications currently pending
Career History: 572 total applications across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 18.0% (-22.0% vs TC avg)

Tech Center averages are estimates • Based on career data from 531 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to the Request for Continued Examination filed on 2 October 2025. This office action is made Non-Final. Claims 1 and 7 have been amended. Claims 1-13 are pending. Claims 1 and 7 are independent claims.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/2/25 has been entered.

Specification

The amendment to the specification filed on 10/2/25 has not been entered for the following reason(s): it fails to comply with 37 CFR 1.121(b)(1)(i), which requires "An instruction, which unambiguously identifies the location, to delete one or more paragraphs of the specification, replace a paragraph with one or more replacement paragraphs, or add one or more paragraphs". The instruction provided by the Applicant is not considered an instruction that unambiguously identifies the location as stated in the MPEP. Applicant's instruction indicates to amend the paragraph beginning on page 1, line 35 of the filed specification. However, there is no line 35 on page 1 of Applicant's specification. Each page, including page 1, of Applicant's specification only goes up to line 25. Furthermore, the paragraph of text that Applicant wishes to amend is not on page 1. Thus, the amendment to the specification does not meet the requirements of 37 CFR 1.121(b)(1) and is not entered.
Because the amendment to the paragraphs of the specification was not entered, the amendment to the abstract is also not entered, since amendments are not entered in part. Therefore, the original abstract filed on 12/28/22 is viewed as the current abstract. The original abstract remains objected to for the following reasons: the abstract is not in narrative form because it repeats the language/wording/phrasing of the independent claims. The abstract should be a summary of the claimed invention that allows the Office and the public to quickly determine, from a cursory inspection, the nature and gist of the technical disclosure; it should not repeat the exact or similar wording used in the independent claims. Correction is required. See MPEP § 608.01(b). Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc. In addition, the form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided.

Drawings

Because the amendment to the paragraphs of the specification was not entered, the drawings remain objected to as failing to comply with 37 CFR 1.84(p)(4) because reference characters "30" and "12" have both been used to designate a vehicle.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: 30. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 6-8 remain rejected under 35 U.S.C. 103 as being unpatentable over Kim (US20150199577, 2015) in view of Yamamoto et al (US20220340202, EFD 2019) and further in view of Hwang et al (US 20220176989, 6/2022).

As per independent claim 1, Kim discloses a system comprising: a plurality of positioning elements, disposed along a curved section of a track pattern of a track, wherein the track pattern is drawn along the track; (0011, 0022, 0027: identifying the location/positioning of guardrails of a curve of a road/track; FIG 2; 0022-0024: discloses a controller receiving the images and recognizing lane information within the images. The images are taken from a camera installed in front of the vehicle while the vehicle is on the road. The lane detector of the vehicle analyzes the images to detect a lane based on shade features (generally, lanes are shown light) or geometric features (location and thickness, etc.) of lanes. Based on the identification of the features representing a lane, it is able to detect lanes.
In addition, the image is taken from a vehicle while on a road and is thus an image of the road. The detected lane is based on recognizing the features (feature points) in the image. Therefore, the identified features making up a lane in the image are features/lanes of the road itself. Thus, the detected lane information is identified lane information on the road. Thus, the lane information on the road is a form of a track pattern "drawn" on the road, since it is implicit that lanes are painted/drawn on the road. The detected lane information is used to identify a curvature in the road. 0046, 0050 disclose using the images to analyze the lane information to calculate a curvature) at least one image capture device, installed in a vehicle and capturing front road images, wherein the front road images include the track pattern of the track; (0011, 0022, 0024: using a camera to take images to identify/detect lane information within the images. The images are taken from a camera installed in front of the vehicle while the vehicle is on the road. The lane detector of the vehicle analyzes the images to detect a lane (lane information) based on shade features (generally, lanes are shown light) or geometric features (location and thickness, etc.) of lanes. Based on the identification of the features representing a lane, it is able to detect lanes (lane information). In addition, the image is taken from a vehicle while on a road and is thus an image of the road. The detected lane is based on recognizing the features (feature points) in the image. Therefore, the identified features making up a lane in the image are features/lanes of the road itself. Thus, the detected lane information is identified lane information on the road.
The lane information on the road is a form of a track pattern of a track) a sensor, installed in the vehicle, and used to detect positions of the positioning elements; and (0011, 0022, 0024-0025, 0032: LIDAR sensor used to detect the locations of the guardrails) at least one processor, disposed in a vehicle system, connected with the at least one image capture device and the sensor, wherein the at least one processor receives the front road images, recognizes the track pattern, and (FIG 2; 0022-0024: discloses a controller receiving the images and recognizing lane information within the images) determines whether a driving path is curved based on the track pattern, wherein if the driving path is curved, the at least one processor works out a curvature of the curved section according to the positions of the positioning elements, calculates a turning speed and a modification angle according to the curvature, and outputs the turning speed and the modification angle to the dynamic control end; then the dynamic control end drives the vehicle according to the turning speed and the modification angle. (0027, 0029, 0032, 0034, 0037-0038: curvature of the curves is determined based on the identified elements; 0039-0041: turning angle and turning speed are calculated based on the identified curvature)

However, the cited art fails to specifically disclose wherein the track pattern is drawn along a center line of the track and a center of a head of the vehicle coincides with a center of the track pattern and determines whether a driving path of the vehicle is straight or curved based on the track pattern, wherein the track pattern serves to indicate a type of a road section or an upcoming driving maneuver of the vehicle. However, Yamamoto discloses a vehicle following a target route (driving path) made up of a collection of spaced markers on a road that includes straight sections (0078, 0087; FIG 10) and curvatures (0084, 0102; FIG 14).
The collection of markers laid out in the road (0087; FIG 10) is considered a form of track pattern, since the collection of the markers forms a route. (0078, 0087: target route 1R represented by the route data matches a laying line of magnetic markers) Yamamoto discloses that the target route (consisting of markers on the road) is used to identify whether a section of the road ahead is either straight or curved. For example, 0084 clearly states "Traveling control unit 11 identifies, for example, a curvature of a curve ahead or the like as a route specification, which is a shape specification of the route of vehicle 5 ahead. Here, the shape of the route ahead means a route shape several meters ahead of vehicle 5. Note that, in place of the one several meters ahead, the route shape may be a route shape at a location several tens of meters ahead or at a location vehicle 5 has reached (zero meter ahead)". In addition, 0092-0094 discloses determining whether the route ahead is straight or curved. Thus, Yamamoto discloses that the identified markers within the route are positioning markers that identify whether the section(s) ahead on the route are either straight or curved. Thus, Yamamoto discloses a track pattern is used to identify whether a curve or straight section of the road is ahead, wherein a curved section or a straight section are types of road sections. Furthermore, the markers are identified by the vehicle and control the vehicle's direction when moving on the road. (0065, 0079, 0087) In addition, the collection of markers laid out in the road, a form of a track pattern as explained, is actually in the center of the road (0087; FIG 10). The target route (claimed "center line of the track") comprises a laying line of the collection of markers, wherein the collection of markers (claimed "track pattern") is centered on the road (claimed "track") (0078, 0087; FIG 10). In other words, the collection of the markers forms a line along the center of the road/track.
Furthermore, FIG 10 and 0087 disclose that the vehicle remains centered while traveling on this target route ("center line of the track") inasmuch as the head of the vehicle is centered along the center of the target route. FIG 14 discloses that the vehicle remains centered on the track pattern while on the curvature portion of the road also. It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the features of Kim with the disclosed features of Yamamoto et al since it would have provided the benefit of a highly-versatile automatic-steering control method and control system that can adjust the traveling route of the vehicle without greatly changing the control specifications. (0016)

However, the cited art fails to specifically disclose wherein if the driving path is straight, the at least one processor works out a linear equation of the driving path and outputs the linear equation to a dynamic control end of the vehicle system; then the dynamic control end drives the vehicle according to the linear equation. However, Hwang et al discloses determining whether the driving path is straight or curved based on the images. (FIG 1A; 0023, 0068: discloses receiving lane images from a camera and identifying the lane information to determine if the lane on the road is straight or curved). Furthermore, FIG. 1B, 5; 0076 disclose using a linear equation (see FIG. 5, y=ax+b) to classify the lane into the type of the straight lane 200-1 when all points of the segment are within the straight model region generated by the start point and the end point (based on the linear equation), wherein the output of the linear equation is sent to an autonomous vehicle control. This outputted information received by the autonomous vehicle control is used to control the motion of the autonomous vehicle traveling the road, a form of control information to drive the vehicle.
(0090-0092) It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the features of Kim with the disclosed features of Hwang et al since it would have provided the benefit of improving the precise-positioning precision and stability by the removal of the under-constrained shape matching error, applying the under-constrained shape classification and the covariance estimation. (0095)

As per dependent claim 2, Kim discloses wherein after receiving the front road images, the at least one processor searches the front road images for a plurality of feature points of the track and recognizes the track pattern according to the feature points of the track. (0022, 0024: detects lane information/markings to identify the pattern. The lane detector of the vehicle analyzes the images to detect a lane (lane information) based on shade features (generally, lanes are shown light) or geometric features (location and thickness, etc.) of lanes. Based on the identification of the features representing a lane, it is able to detect lanes (lane information). Thus, Kim discloses a form of detecting feature points in the images)

As per dependent claim 6, based on the rejection of Claim 1 and the rationale, along with the motivation, incorporated, Yamamoto et al discloses the positioning elements are magnetic positioning elements, and the sensor is a magnetic sensor. (0041: magnetic markers arranged in a lane; 0087: discloses the use of a magnetic sensor in controlling the traveling of a vehicle in the lane)

As per independent claim 7, Claim 7 recites similar limitations as in Claim 1 and is rejected under similar rationale.
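For readers evaluating the rejection, the straight-lane test attributed to Hwang above (classifying a lane as straight when every point of the segment lies within a model region around the line y=ax+b through the segment's start and end points) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not code from the cited reference; the function name and tolerance parameter are hypothetical.

```python
# Illustrative sketch (not from the cited art): straight-vs-curved lane
# classification using a linear model y = a*x + b through the segment's
# endpoints, in the spirit of the test described for Hwang's FIG. 5.
def classify_lane(points, tolerance=0.25):
    """points: (x, y) lane feature points ordered along the lane.
    Returns ("straight", (a, b)) or ("curved", None)."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    if x1 == x0:  # vertical segment: degenerate for y = a*x + b
        return ("curved", None)
    a = (y1 - y0) / (x1 - x0)  # slope through start and end points
    b = y0 - a * x0            # intercept
    # Straight only if all points fall inside the linear model region
    if all(abs(y - (a * x + b)) <= tolerance for x, y in points):
        return ("straight", (a, b))
    return ("curved", None)
```

Points that deviate from the endpoint line by more than the tolerance band (a hypothetical stand-in for the "straight model region") cause the segment to be classified as curved.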
Furthermore, Kim discloses a system comprising: a track (FIG 1); the vehicle runs on a/the track (FIG 1, 0040), wherein the track pattern is drawn along the track; a plurality of positioning elements, disposed along a curved section of a track pattern of a track; (0011, 0022, 0027: identifying the location/positioning of guardrails of a curve of a road/track; FIG 2; 0022-0024: discloses a controller receiving the images and recognizing lane information within the images. The images are taken from a camera installed in front of the vehicle while the vehicle is on the road. The lane detector of the vehicle analyzes the images to detect a lane based on shade features (generally, lanes are shown light) or geometric features (location and thickness, etc.) of lanes. Based on the identification of the features representing a lane, it is able to detect lanes. In addition, the image is taken from a vehicle while on a road and is thus an image of the road. The detected lane is based on recognizing the features (feature points) in the image. Therefore, the identified features making up a lane in the image are features/lanes of the road itself. Thus, the detected lane information is identified lane information on the road. Thus, the lane information on the road is a form of a track pattern "drawn" on the road, since it is implicit that lanes are painted/drawn on the road. The detected lane information is used to identify a curvature in the road. 0046, 0050 disclose using the images to analyze the lane information to calculate a curvature) using at least one image capture device to capture front road images; (0011, 0022, 0024: using a camera to take images to identify/detect the lane information within the images.) using at least one processor that receives the front road images and recognizes the track pattern; (FIG 2; 0022-0024: discloses a controller receiving the images and recognizing lane information within the images.
The images are taken from a camera installed in front of the vehicle while the vehicle is on the road. The lane detector of the vehicle analyzes the images to detect a lane based on shade features (generally, lanes are shown light) or geometric features (location and thickness, etc.) of lanes. Based on the identification of the features representing a lane, it is able to detect lanes. In addition, the image is taken from a vehicle while on a road and is thus an image of the road. The detected lane is based on recognizing the features (feature points) in the image. Therefore, the identified features making up a lane in the image are features/lanes of the road itself. Thus, the detected lane information is identified lane information on the road. Thus, the lane information on the road is a form of a track pattern "drawn" on the road, since it is implicit that lanes are painted/drawn on the road.) using a sensor to detect the positioning elements if the driving path is curved; determining the driving path is curved; and (0011, 0022, 0024-0025, 0032: LIDAR sensor used to detect the locations of the guardrails) calculating a curvature of the curved section according to the positions of the positioning elements, using the curvature to calculate a turning speed and a modification angle, and outputting the turning speed and the modification angle to the dynamic control end to enable the dynamic control end to drive the vehicle according to the turning speed and the modification angle.
(0027, 0029, 0032, 0034, 0037-0038: curvature of the curves is determined based on the identified elements; 0039-0041: turning angle and turning speed are calculated based on the identified curvature)

However, the cited art fails to specifically disclose wherein the track pattern is drawn along a center line of the track and a center of a head of the vehicle coincides with a center of the track pattern and determines whether a driving path of the vehicle is straight or curved according to the track pattern, wherein the track pattern serves to indicate a type of a road section or an upcoming driving maneuver of the vehicle. However, Yamamoto discloses a vehicle following a target route made up of a collection of spaced markers on a road that includes straight sections (0078, 0087; FIG 10) and curvatures (0084, 0102; FIG 14). The collection of markers laid out in the road (0087; FIG 10) is considered a form of track pattern, since the collection of the markers forms a route. (0078, 0087: target route 1R represented by the route data matches a laying line of magnetic markers) Yamamoto discloses that the target route (consisting of markers on the road) is used to identify whether a section of the road ahead is either straight or curved. For example, 0084 clearly states "Traveling control unit 11 identifies, for example, a curvature of a curve ahead or the like as a route specification, which is a shape specification of the route of vehicle 5 ahead. Here, the shape of the route ahead means a route shape several meters ahead of vehicle 5. Note that, in place of the one several meters ahead, the route shape may be a route shape at a location several tens of meters ahead or at a location vehicle 5 has reached (zero meter ahead)". In addition, 0092-0094 discloses determining whether the route ahead is straight or curved.
Thus, Yamamoto discloses that the identified markers within the route are positioning markers that identify whether the section(s) ahead on the route are either straight or curved. Thus, Yamamoto discloses a track pattern is used to identify whether a curve or straight section of the road is ahead, wherein a curved section or a straight section are types of road sections. Furthermore, the markers are identified by the vehicle and control the vehicle's direction when moving on the road. (0065, 0079, 0087) In addition, the collection of markers laid out in the road, a form of a track pattern as explained, is actually in the center of the road (0087; FIG 10). The target route (claimed "center line of the track") comprises a laying line of the collection of markers, wherein the collection of markers (claimed "track pattern") is centered on the road (claimed "track") (0078, 0087; FIG 10). In other words, the collection of the markers forms a line along the center of the road/track. Furthermore, FIG 10 and 0087 disclose that the vehicle remains centered while traveling on this target route ("center line of the track") inasmuch as the head of the vehicle is centered along the center of the target route. FIG 14 discloses that the vehicle remains centered on the track pattern while on the curvature portion of the road also. It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the features of Kim with the disclosed features of Yamamoto et al since it would have provided the benefit of a highly-versatile automatic-steering control method and control system that can adjust the traveling route of the vehicle without greatly changing the control specifications.
(0016) However, the cited art fails to specifically disclose calculating a linear equation of the driving path if the driving path is straight, and outputting the linear equation to a dynamic control end of a vehicle system to enable the dynamic control end to drive the vehicle according to the linear equation. However, Hwang et al discloses determining whether the driving path is straight or curved based on the images. (FIG 1A; 0023, 0068: discloses receiving lane images from a camera and identifying the lane information to determine if the lane on the road is straight or curved). Furthermore, FIG. 1B, 5; 0076 disclose using a linear equation (see FIG. 5, y=ax+b) to classify the lane into the type of the straight lane 200-1 when all points of the segment are within the straight model region generated by the start point and the end point (based on the linear equation), wherein the output of the linear equation is sent to an autonomous vehicle control. This outputted information received by the autonomous vehicle control is used to control the motion of the autonomous vehicle traveling the road, a form of control information to drive the vehicle. (0090-0092) It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the features of Kim with the disclosed features of Hwang et al since it would have provided the benefit of improving the precise-positioning precision and stability by the removal of the under-constrained shape matching error, applying the under-constrained shape classification and the covariance estimation. (0095)

As per dependent claim 8, Claim 8 recites similar limitations as in Claim 2 and is rejected under similar rationale.

Claim(s) 3, 10 remain rejected under 35 U.S.C.
103 as being unpatentable over Kim in view of Yamamoto et al, further in view of Hwang et al, and further in view of Kobach (US20230136214, EFD 10/29/21).

As per dependent claim 3, based on the rejection of Claim 1 and the rationale, along with the motivation, incorporated, Hwang et al discloses next, the at least one processor calculates the linear equation according to the track pattern; then, the processor determines whether the driving path is straight or curved (0070-0071, FIG. 5: FIG. 5 discloses a classification algorithm used to map a set of positions of set points, based on the vehicle, to derive either a linear equation (y=ax+b) or a nonlinear/curved equation (x-a) as shown in FIG. 5. The selected equation is used to determine if the lane is straight or curved as depicted in FIG. 5). However, the cited art fails to specifically disclose wherein the at least one processor takes a center of a head of the vehicle as an origin and lets the center of the head of the vehicle coincide with a center of the track. However, Kobach discloses a center of a head of the vehicle as an origin and lets the center of the head of the vehicle coincide with a center of the track (0037: origin of the coordinate system 304 is located at the center of the vehicle's front bumper at ground level; FIG 3B: discloses the center of the vehicle in the center of the lane). It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the disclosed features of Kobach since it would have provided the benefit of performing highly-accurate and self-adjusting imaging sensor auto-calibration for an in-vehicle ADAS system. (0001)

As per dependent claim 10, Claim 10 recites similar limitations as in Claim 3 and is rejected under similar rationale.

Claim(s) 4, 11, and 13 remain rejected under 35 U.S.C.
103 as being unpatentable over Kim in view of Yamamoto et al, further in view of Hwang et al, and further in view of Kuramochi (US 20230227055, filed 12/16/22, EFD 1/18/22).

As per dependent claim 4, Kim discloses calculating the curvature of the curved section according to the positions of the positioning elements (0026-0029, 0034, 0037-0038); the at least one processor calculates the turning speed of the vehicle according to the curvature of the curved section (0039); the at least one processor calculates the modification angle according to the curvature of the curved section and the turning speed (0039-0041). However, the cited art fails to specifically disclose calculating the curvature of the curved section according to the positions of the positioning elements and an image equation. (Note: It is noted that the term "image equation" is not defined by the claim language and the specification does not provide an explicit definition of what an "image equation" is.) However, Kuramochi discloses calculating the curvature of the curved section according to the positions of the positioning elements and an image equation (0026: to determine the road curvature, the image recognition ECU 13 binarizes the traveling environment information (traveling environment images, 0024) by using a luminance difference to recognize the right and left lane lines, and solves a curve approximation equation using the least squares method to determine the curvature of the right and left lane lines for each predetermined section). It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the disclosed features of Kuramochi since it would have provided the intrinsic advantage of understanding the shape and form of a surface or curve so that the vehicle can properly adjust the steering angle and turning speed accordingly.
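The least-squares curve approximation cited from Kuramochi above (lane-line points recovered from a binarized image, fitted to a curve whose coefficients yield the road curvature) can be sketched as below. This is an illustrative sketch under assumptions, not the reference's implementation; the quadratic model, the function name, and evaluating curvature at the vehicle's position x = 0 are hypothetical choices.

```python
import numpy as np

# Illustrative sketch (assumptions, not the cited art's code): fit a
# quadratic to lane-line points by least squares and read the road
# curvature off the fitted coefficients.
def lane_curvature(xs, ys):
    """Least-squares fit y = c2*x^2 + c1*x + c0 to lane-line points,
    returning curvature kappa = y'' / (1 + y'^2)**1.5 at x = 0."""
    c2, c1, _ = np.polyfit(xs, ys, 2)  # least-squares quadratic fit
    return (2 * c2) / (1 + c1 ** 2) ** 1.5
```

For points sampled near the origin from a circle of radius R = 100 (y ≈ x²/200 for small x), the function returns approximately 1/R = 0.01, i.e., the curvature in units of 1/m.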
As per dependent claim 11, Claim 11 recites similar limitations as in Claim 4 and is rejected under similar rationale.

As per dependent claim 13, Claim 13 recites similar limitations as in Claim 7 and is rejected under similar rationale. Furthermore, Kim discloses the at least one processor recognizing a scenario according to the track pattern (0024-0026, 0039-0041: determining the vehicle is going into a curve and calculating a turning speed and angle for the curve). Furthermore, Kim discloses calculating the curvature of a curved road from information received from the forward-looking sensor, which includes a camera and LIDAR sensor. (0020) With the camera, Kim calculates a curvature on the basis of the track of a detected lane, wherein calculating implies a form of an equation is being used. (0024) In addition, with LIDAR, Kim calculates the curvature of a guardrail by fitting a 3D-curve or curved surface to the detection points of the extracted guardrail section (0029) or fitting 3D-curves Q1 and Q2 or curved surfaces to the depth information (0031-0032), wherein calculating implies a form of an equation is being used. The curvatures determined by the camera and LIDAR are combined (a form of an equation) to form a resultant curvature (0029, 0035). However, the cited art fails to specifically disclose using an image equation to calculate the curvature of the curved section according to the track pattern recognized from the front road images and a position of the positioning element that is triggered firstly. (Note: It is noted that the term "image equation" is not defined by the claim language and the specification does not provide an explicit definition of what an "image equation" is.)
Therefore, the broadest reasonable interpretation is applied for this term. However, based on the rejection of Claim 4 and the rationale, along with the motivation, incorporated herein, Kuramochi discloses using an image equation to calculate the curvature of the curved section according to the track pattern recognized from the front road images and a position of the positioning element that is triggered firstly (0025-0026: Kuramochi discloses determining the road curvature (1/m) of the right and left lane lines defining the traveling path (vehicle traveling path) along which the vehicle M travels; the image recognition ECU 13 binarizes the traveling environment information using a luminance difference to recognize the right and left lane lines, and solves a curve approximation equation using the least squares method to determine the curvature of the right and left lane lines for each predetermined section). Thus, an equation is used based on the image data to calculate the curvature of a curve based on the positioning of the left and right lanes, where these lanes define the traveling path of the vehicle (a form of "according to the track pattern").

Claim(s) 5 and 12 remain rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Yamamoto et al, in further view of Hwang et al, and in further view of Meng et al (US 20210331666, 2021).

As per dependent claim 5, Kim discloses a steering wheel sensor associated with a steering angle detector such that the steering angle of a vehicle is detected on the basis of steering information received from the steering wheel sensor. Thus, it is implicit that a steering wheel is on the vehicle and is set at a particular angle at the time of the detector readings.
However, the cited art fails to disclose a transverse control system; the transverse control system controls an angle of a steering wheel of the vehicle according to the modification angle and controls an accelerator of the vehicle and a brake of the vehicle according to the turning speed; the vehicle drives according to the turning speed and the modification angle. However, Meng et al discloses a system having a steering wheel such that the steering wheel is rotated to a designated direction and angle (0012). In addition, Meng et al discloses the system comprising an accelerator and a brake such that the accelerator and brake are controlled during the turning of the vehicle (angle/speed) (0027, 0029). It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the disclosed features of Meng et al since changing speeds of the vehicle with the accelerator and brake would have provided the benefit of navigating the turn safely.

As per dependent claim 12, Claim 12 recites similar limitations as in Claim 5 and is rejected under similar rationale.

Claim(s) 9 remains rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Yamamoto et al, in further view of Hwang et al, and in further view of Okada (US 20210318690, 2021).

As per dependent claim 9, Claim 9 recites similar limitations as in Claim 8 and is rejected under similar rationale. Furthermore, Kim discloses enclosing a region of interest (ROI) in the front road images (0022, 0024, 0046: detecting lanes from images is a form of enclosing a region of interest for features, in consideration of the shade features (generally, lanes are shown light) or geometric features (location, thickness, etc.) of lanes, since it looks for a particular area/region of a particular "feature"). However, Kim fails to specifically disclose finding out a group of points having peak color values matching color values of the feature points.
However, Okada discloses "the controller 4 extracts, as the feature points, pixels or a pixel group whose brightness value or color makes the pixels or pixel group distinguishable from surrounding pixels or pixel group" (0040). In other words, Okada identifies a set of points from the image and extracts neighboring/surrounding points whose color values are similar. These extracted points having similar color values are collectively viewed as a feature point (a group of pixels). It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the cited art with the disclosed features of Okada since it would have provided the intrinsic advantage of increasing the efficiency of the feature point matching by identifying distinguishable features in an image based on color values.

Response to Arguments

Applicant's arguments filed 10/2/25 have been fully considered but they are not persuasive. In response to Applicant's remarks to the amended Abstract and Drawings on page 10, the Examiner respectfully states these objections remain since the amendment(s) to the specification were not entered for not meeting the requirements of 37 CFR 1.121(b)(1)(i), as fully explained above. In addition, the amendment to the Abstract, if entered, would not overcome the objection to the abstract/specification since the Examiner respectfully states that the non-entered replacement abstract is not written in narrative form; it similarly repeats the language/wording/phrasing(s) of the independent claims. In other words, the Examiner respectfully states the non-entered replacement abstract is merely a combination of a number of the limitations from the independent claims. The Examiner respectfully states that the Applicant did not provide any explanation of how the non-entered replacement Abstract is considered to be in narrative form rather than a slight rewording of the claim limitations from the independent claims.
As stated, the Examiner respectfully states the abstract should be a summary of the claimed invention that allows the Office and the public to quickly determine, from a cursory inspection, the nature and gist of the technical disclosure. The abstract should be a summary of the claimed invention, not a repeat of the exact/similar wording used in the independent claims and/or written like a claim. In addition, the abstract contains over 150 words. Therefore, the objection to the Abstract would still remain if the non-entered replacement Abstract were entered.

On pages 11-14, in regards to independent claims 1 and 7, Applicant argues that the cited art, Kim, Yamamoto, and Hwang, does not disclose the amendment "receives the front road images, recognizes the track pattern, and determines whether a driving path of the vehicle is straight or curved based on the track pattern, wherein the track pattern serves to indicate a type of a road section or an upcoming driving maneuver of the vehicle". Applicant argues "Accordingly, the 'track pattern' recited in amended claims 1 and 7 is NOT a lane line, a shadow, or a geometric feature, but rather a pattern with 'different symbols' or 'different icons' that characterize different road conditions such as straight sections, curves, or entry/exit sections. The special icons are used to inform the vehicle in advance that it is about to enter a curve or a specific section. The vehicle is NOT informed based on computation of curvature or linear equations". Applicant states Kim, Yamamoto, and Hwang do NOT disclose directly determining whether the path is straight or curved based on the "type of pattern (such as a special icon)" since these arts rely on image feature analysis (such as lane shadows or geometric features) to compute curve curvature or path equations in order to control the vehicle. However, the Examiner disagrees.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., is NOT a lane line, a shadow, or a geometric feature, but rather a pattern with "different symbols" or "different icons"; a special icon) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The Examiner respectfully states that the argued limitation "determines whether a driving path of the vehicle is straight or curved based on the track pattern, wherein the track pattern serves to indicate a type of a road section or an upcoming driving maneuver of the vehicle" is silent and/or broad as to a number of elements. The limitation does not recite that the track pattern is a pattern with "different symbols" or "different icons" in any way. The language only states the track pattern comprises positioning elements; it does not limit or define what these elements are. In fact, the language does not prevent the elements from being a lane line, a shadow, or a geometric feature. In addition, the language does not require the positioning elements of the pattern to be "different symbols" or "different icons". Therefore, the broadest reasonable interpretation is applied. Furthermore, the language is silent on how exactly a path is determined to be straight or curved based on the track pattern. The language provides no explanation of how the pattern is analyzed or used to determine if the path is straight or curved. Therefore, the broadest reasonable interpretation is applied. Furthermore, the language additionally only states that the track pattern serves to indicate a type of a road section. The language does not limit or define what a "type of a road section" is. Therefore, the broadest reasonable interpretation is applied.
Furthermore, the Examiner refers the Applicant to MPEP 904.01(b), which states "All subject matter that is the equivalent of the subject matter as defined in the claim, even though specifically different from the definition in the claim, must be considered unless expressly excluded by the claimed subject matter." In other words, while the prior art cited may not explicitly use the same terminology as disclosed in the claim limitations, that does not mean the art does not teach it and cannot be considered to reject Applicant's claimed invention. Thus, the examiner submits that what is taught by the references of the cited art is considered functionally equivalent to that which is claimed, as discussed below.

Therefore, based on the broadest reasonable interpretation of the language of the limitation, Kim teaches the subject matter wherein the at least one processor receives the front road images and recognizes the track pattern (FIG 2; 0022-0024 of Kim discloses a controller receiving the images and recognizing lane information within the images. The images are taken from a camera installed in front of the vehicle while the vehicle is on the road. The lane detector of the vehicle analyzes the images to detect a lane based on shade features (generally, lanes are shown light) or geometric features (location, thickness, etc.) of lanes. Based on identifying the features representing a lane, it is able to detect lanes. In addition, the image is taken from a vehicle while on a road and is thus an image of the road. The detected lane is based on recognizing the features (feature points) in the image. Identified features of the image that make up a lane are therefore features/lanes of the road itself. Thus, the detected lane (information) is identified lane information on the road, and the lane information on the road is a form of a track pattern "drawn" on the road since it is implicit that lanes are painted/drawn on the road.)
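As a minimal sketch of the shade-feature lane detection attributed to Kim above (lane markings generally appear lighter than the surrounding road surface, so a luminance threshold isolates candidate lane pixels), consider the following; the toy grayscale patch, the threshold value, and the helper names are all hypothetical assumptions for illustration, not details from the reference.

```python
# Hypothetical sketch of shade-feature lane detection in the spirit of Kim:
# lane paint is generally lighter than asphalt, so a luminance threshold
# separates candidate lane pixels. Image values and threshold are assumed.

def binarize(image, threshold):
    """Return a 0/1 mask marking pixels at or above the luminance threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def lane_columns(mask):
    """Columns containing any candidate lane pixel (a crude lane trace)."""
    return sorted({c for row in mask for c, v in enumerate(row) if v})

# Toy 4x6 grayscale road patch: two bright vertical stripes as lane paint.
road = [
    [30, 200, 35, 32, 210, 28],
    [31, 205, 30, 33, 200, 29],
    [29, 198, 34, 31, 207, 30],
    [32, 202, 33, 30, 199, 27],
]
mask = binarize(road, threshold=128)
cols = lane_columns(mask)  # the two bright stripes sit in columns 1 and 4
```

A real detector would of course add geometric checks (location, thickness) as Kim describes, rather than relying on luminance alone.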
In addition, Kim discloses determining a driving path is curved based on the track pattern (0027, 0029, 0032, 0034, 0037-0038: curvature of the curves is determined based on the identified elements; 0039-0041: turning angle and turning speed are calculated based on the identified curvature). However, the cited art fails to specifically disclose determining whether a driving path of the vehicle is straight or curved based on the track pattern, wherein the track pattern serves to indicate a type of a road section or an upcoming driving maneuver of the vehicle. However, Yamamoto discloses a vehicle following a target route made up of a collection of spaced markers on a road that includes straight sections (0078, 0087; FIG 10) and curvatures (0084, 0102; FIG 14). The collection of markers laid out in the road (0087; FIG 10) is considered a form of track pattern, since the collection of the markers forms a route (0078, 0087: target route 1R represented by the route data matches a laying line of magnetic markers). Yamamoto discloses that the target route (consisting of markers on the road) is used to identify if a section of the road ahead is either straight or curved. For example, 0084 clearly states "Traveling control unit 11 identifies, for example, a curvature of a curve ahead or the like as a route specification, which is a shape specification of the route of vehicle 5 ahead. Here, the shape of the route ahead means a route shape several meters ahead of vehicle 5. Note that, in place of the one several meters ahead, the route shape may be a route shape at a location several tens of meters ahead or at a location vehicle 5 has reached (zero meters ahead)". In addition, 0092-0094 discloses determining if the route ahead is straight or curved. Thus, Yamamoto discloses the identified markers within the route are positioning markers that identify if the section(s) ahead on the route are either straight or curved.
Thus, Yamamoto discloses a track pattern is used to identify if a curve or a straight section of the road is ahead, wherein a curved section or a straight section is a type of road section. It would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the features of Kim with the disclosed features of Yamamoto et al since it would have provided the benefit of a highly versatile automatic-steering control method and control system that can adjust the traveling route of the vehicle without greatly changing the control specifications (0016). Therefore, the combination of Kim and Yamamoto teaches the amended independent claims.

On page 15, in regards to claims 3 and 10, Applicant merely argues that Claims 3 and 10 are directed to recognizing a specific track pattern and then determining whether the path is straight, and that this is distinct from Kobach, which merely performs lane line detection or geometric feature analysis. However, the Examiner disagrees. After consideration of Applicant's arguments for claim(s) 3 and 10, the Examiner respectfully states Applicant's remarks are not persuasive to overcome the cited rejections and respectfully directs the Applicant to the rejection explained above for the reasons why the claims remain rejected under the same grounds of rejection.

On page 15, in regards to claim 13, Applicant merely argues Kuramochi is silent with respect to the feature "calculate the curvature of the curved section according to the track pattern recognized from the front road images and a position of the positioning element that is triggered firstly". Applicant states Kuramochi performs computation based on overall image features or the contours of a lane line, rather than based on the timing sequence in which positioning elements are triggered. However, the Examiner disagrees.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections. After consideration of Applicant's arguments for claim(s) 13, the Examiner respectfully states Applicant's remarks are not persuasive to overcome the cited rejections and respectfully directs the Applicant to the rejection explained above for the reasons why the claim remains rejected under the same grounds of rejection.

On page 16, in regards to claims 5 and 12, Applicant merely states the subject matter of Claims 5 and 12 discloses that the transverse control system performs integrated coordination control of the steering wheel and vehicle speed, rather than merely performing steering or speed adjustment. Applicant merely states Okada extracts feature points or pixel groups from an image based on brightness or color differences relative to surrounding pixels; that Okada merely focuses on identifying regions that differ significantly from the background; and that Okada is silent with respect to finding out a group of points having peak color values matching color values of the feature points. However, the Examiner disagrees. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections. After consideration of Applicant's arguments for claim(s) 5 and 12, the Examiner respectfully states Applicant's remarks are not persuasive to overcome the cited rejections and respectfully directs the Applicant to the rejection explained above for the reasons why the claims remain rejected under the same grounds of rejection.

Conclusion

If the Applicant chooses to amend the claims in future filings, the Examiner kindly states any new limitation(s) added to the claims must be described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor had possession of the claimed invention, in order to meet the written description requirement of 35 USC 112, first paragraph. To help expedite prosecution, promote compact prosecution, and prevent a possible 112(a)/first paragraph rejection, the Examiner respectfully requests that, for each new limitation added to the claims in a future filing, the Applicant cite in the remarks the location within the specification showing support for that new limitation. In addition, MPEP 2163.04(I)(B) states that a prima facie case under 112(a)/first paragraph may be established if a claim has been added or amended, the support for the added limitation is not apparent, and applicant has not pointed out where the added limitation is supported. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID FABER whose telephone number is (571) 272-2751. The examiner can normally be reached Monday - Thursday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADAM M QUELER/ Supervisory Patent Examiner, Art Unit 2172
/D.F/ Examiner, Art Unit 2172

Prosecution Timeline

Dec 28, 2022
Application Filed
Feb 19, 2025
Non-Final Rejection — §103
May 20, 2025
Response Filed
Jul 01, 2025
Final Rejection — §103
Oct 02, 2025
Request for Continued Examination
Oct 10, 2025
Response after Non-Final Action
Jan 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12571650
APPARATUS, METHOD, AND COMPUTER PROGRAM FOR UPDATING MAP
2y 5m to grant Granted Mar 10, 2026
Patent 12561512
METHODS AND SYSTEMS FOR PROMPTING LARGE LANGUAGE MODEL TO GENERATE FORMATTED OUTPUT
2y 5m to grant Granted Feb 24, 2026
Patent 12541296
FINANCIAL SERVICE PROVIDING METHOD USING VISUALIZED FINANCIAL RELATIONSHIP CONTENT-BASED UI, FINANCIAL SERVICE PROVIDING APPARATUS FOR PERFORMING SAME, AND RECORDING MEDIUM HAVING SAME RECORDED THEREIN
2y 5m to grant Granted Feb 03, 2026
Patent 12522242
MAP EVALUATION APPARATUS
2y 5m to grant Granted Jan 13, 2026
Patent 12497029
VEHICLE AND CONTROL METHOD THEREOF
2y 5m to grant Granted Dec 16, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
52%
Grant Probability
88%
With Interview (+36.7%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 531 resolved cases by this examiner. Grant probability derived from career allow rate.
