Prosecution Insights
Last updated: April 19, 2026
Application No. 18/254,637

DIVIDING LINE RECOGNITION DEVICE

Status: Final Rejection (§102)
Filed: May 26, 2023
Examiner: MILLER, RONDE LEE
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Hitachi Astemo, Ltd.
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (16 granted / 22 resolved), +10.7% vs TC avg — above average
Interview Lift: +37.5% (allow rate in resolved cases with vs. without an interview) — strong
Avg Prosecution: 2y 11m typical timeline; 26 applications currently pending
Total Applications: 48 across all art units (career history)

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 22 resolved cases

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims 1 – 7 have been amended. Claims 1 – 7, all of the claims pending in this application, have been rejected. The amendment filed 9/03/2025 overcomes the objection to the specification and the invoked 112(f) claim interpretations. The controller of claim 1 is understood to correspond to the microcontroller of [0013].

Response to Arguments

Applicant’s arguments regarding the amendments to claims 1 – 7 were filed 9/03/2025. Applicant’s main argument is that HUBERMAN et al. (hereinafter HUBERMAN) does not disclose a second dividing line being an extension of the first dividing line as claimed in claim 1, because HUBERMAN derives its lane geometry information by grouping road marks.

Examiner disagrees and reiterates that Figures 24A and 24B of HUBERMAN disclose points associated with lane markings used in the detection of the first dividing line. HUBERMAN further teaches (using Figure 24C as reference) using these points and predetermined interval spacing to extend the line, therefore generating the second dividing line (an extension of the first, as claimed).

Applicant further argues that the distance calculated between the snail trail and a road polynomial serves “only” to determine whether a leading vehicle is changing lanes, and is not used to generate a third dividing line (using a traveling trajectory of a lead vehicle and the first dividing line). The Examiner disagrees and interprets the projected trajectory of the leading vehicle that is changing lanes in HUBERMAN as the claimed third dividing line, since that projected trajectory is estimated based on the distance (claimed positional relationship) between the detected trajectory of the leading vehicle (claimed travel trajectory of the other vehicle) and the detected road polynomials (claimed first dividing line). Nothing precludes this interpretation, particularly since the claimed dividing line is not further defined beyond being an estimated line (upon which HUBERMAN’s projected trajectory of the leading vehicle that is changing lanes reads). Notably, the subsequently recited constructing step requires only one of the first, second, or third dividing lines.

A possible response would be to elaborate further on how the host vehicle of HUBERMAN could determine that the leading vehicle is likely changing lanes without having generated a lane line to know the lead vehicle was crossing over in the first place, as argued. If Applicant believes that HUBERMAN's projected trajectory of the leading vehicle that is changing lanes differs from the third dividing line of the subject invention, the Examiner recommends amending the claim language to further define the claimed third dividing line in a manner that distinguishes over the above interpretation. The Examiner maintains that the teachings of HUBERMAN do indeed teach the features of the claims, as detailed below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 – 7 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US Publication 2021/0316751 A1 to HUBERMAN et al. (hereinafter HUBERMAN).

Claim 1

Regarding Claim 1, HUBERMAN teaches a dividing line recognition device comprising a controller configured to:

acquire dividing line information around an own vehicle detected by a dividing line detection sensor mounted on the own vehicle (Figure 5C, #550 and Figures 24C - 24D; "FIG. 5C is a flowchart showing an exemplary process 500C for detecting road marks and/or lane geometry information in a set of images, consistent with disclosed embodiments. Processing unit 110 may execute monocular image analysis module 402 to implement process 500C. At step 550, processing unit 110 may detect a set of objects by scanning one or more images. To detect segments of lane markings, lane geometry information, and other pertinent road marks, processing unit 110 may filter the set of objects to exclude those determined to be irrelevant (e.g., minor potholes, small rocks, etc.). At step 552, processing unit 110 may group together the segments detected in step 550 belonging to the same road mark or lane mark.", Paragraph [0190]; "Furthermore, in some embodiments, redundancy and validation of received data may be supplemented based on information received from one or more sensors (e.g., radar, lidar, acoustic sensors, information received from one or more transceivers outside of a vehicle, etc.).", Paragraph [0173]);

acquire target information around the own vehicle detected by a target detection sensor mounted on the own vehicle ("As described in connection with FIG. 6 below, stereo image analysis module 404 may include instructions for detecting a set of features within the first and second sets of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and the like.", Paragraph [0178]);

estimate a state of another vehicle on a basis of the target information ("In one embodiment, navigational response module 408 may store software executable by processing unit 110 to determine a desired navigational response based on data derived from execution of monocular image analysis module 402 and/or stereo image analysis module 404. Such data may include position and speed information associated with nearby vehicles, pedestrians, and road objects, target position information for vehicle 200, and the like.", Paragraph [0180]), where "state" is defined in the specification as quoted: "The other vehicle state estimation unit 104 estimates at least a position and speed of another vehicle as a state of the other vehicle and outputs the estimated state to the processing unit in the subsequent stage.";

recognize a dividing line from the dividing line information and generate the recognized dividing line as a first dividing line ("FIGS. 24A-24D illustrate exemplary point locations that may be detected by vehicle 200 to represent particular lane marks. Similar to the landmarks described above, vehicle 200 may use various image recognition algorithms or software to identify point locations within a captured image. For example, vehicle 200 may recognize a series of edge points, corner points or various other point locations associated with a particular lane mark. FIG. 24A shows a continuous lane mark 2410 that may be detected by vehicle 200. Lane mark 2410 may represent the outside edge of a roadway, represented by a continuous white line. As shown in FIG. 24A, vehicle 200 may be configured to detect a plurality of edge location points 2411 along the lane mark. Location points 2411 may be collected to represent the lane mark at any intervals sufficient to create a mapped lane mark in the sparse map. For example, the lane mark may be represented by one point per meter of the detected edge, one point per every five meters of the detected edge, or at other suitable spacings. In some embodiments, the spacing may be determined by other factors, rather than at set intervals such as, for example, based on points where vehicle 200 has a highest confidence ranking of the location of the detected points. Although FIG. 24A shows edge location points on an interior edge of lane mark 2410, points may be collected on the outside edge of the line or along both edges. Further, while a single line is shown in FIG. 24A, similar edge points may be detected for a double continuous line. For example, points 2411 may be detected along an edge of one or both of the continuous lines.", Paragraph [0356]; "Vehicle 200 may also represent lane marks differently depending on the type or shape of lane mark. FIG. 24B shows an exemplary dashed lane mark 2420 that may be detected by vehicle 200. Rather than identifying edge points, as in FIG. 24A, vehicle may detect a series of corner points 2421 representing corners of the lane dashes to define the full boundary of the dash. While FIG. 24B shows each corner of a given dash marking being located, vehicle 200 may detect or upload a subset of the points shown in the figure. For example, vehicle 200 may detect the leading edge or leading corner of a given dash mark, or may detect the two corner points nearest the interior of the lane. Further, not every dash mark may be captured, for example, vehicle 200 may capture and/or record points representing a sample of dash marks (e.g., every other, every third, every fifth, etc.) or dash marks at a predefined spacing (e.g., every meter, every five meters, every 10 meters, etc.) Corner points may also be detected for similar lane marks, such as markings showing a lane is for an exit ramp, that a particular lane is ending, or other various lane marks that may have detectable corner points. Corner points may also be detected for lane marks consisting of double dashed lines or a combination of continuous and dashed lines.", Paragraph [0357]), where the first dividing line can be a single line, dashed line, double dashed line, etc., as just stated;

estimate a dividing line by extending the first dividing line and generate the estimated dividing line as a second dividing line ("In some embodiments, the points uploaded to the server to generate the mapped lane marks may represent other points besides the detected edge points or corner points. FIG. 24C illustrates a series of points that may represent a centerline of a given lane mark. For example, continuous lane 2410 may be represented by centerline points 2441 along a centerline 2440 of the lane mark. In some embodiments, vehicle 200 may be configured to detect these center points using various image recognition techniques, such as convolutional neural networks (CNN), scale-invariant feature transform (SIFT), histogram of oriented gradients (HOG) features, or other techniques. Alternatively, vehicle 200 may detect other points, such as edge points 2411 shown in FIG. 24A, and may calculate centerline points 2441, for example, by detecting points along each edge and determining a midpoint between the edge points. Similarly, dashed lane mark 2420 may be represented by centerline points 2451 along a centerline 2450 of the lane mark. The centerline points may be located at the edge of a dash, as shown in FIG. 24C, or at various other locations along the centerline. For example, each dash may be represented by a single point in the geometric center of the dash. The points may also be spaced at a predetermined interval along the centerline (e.g., every meter, 5 meters, 10 meters, etc.). The centerline points 2451 may be detected directly by vehicle 200, or may be calculated based on other detected reference points, such as corner points 2421, as shown in FIG. 24B. A centerline may also be used to represent other lane mark types, such as a double line, using similar techniques as above.", Paragraph [0358]), where the points used in the detection of the first dividing line of Figures 24A and 24B are used, at predetermined intervals, to extend the lines (estimated), as seen in Figure 24C;

estimate a dividing line on a basis of a positional relationship between a traveling trajectory of the other vehicle and the first dividing line and generate the estimated dividing line as a third dividing line ("FIG. 5F is a flowchart showing an exemplary process 500F for determining whether a leading vehicle is changing lanes, consistent with the disclosed embodiments. At step 580, processing unit 110 may determine navigation information associated with a leading vehicle (e.g., a vehicle traveling ahead of vehicle 200). For example, processing unit 110 may determine the position, velocity (e.g., direction and speed), and/or acceleration of the leading vehicle, using the techniques described in connection with FIGS. 5A and 5B, above. Processing unit 110 may also determine one or more road polynomials, a look-ahead point (associated with vehicle 200), and/or a snail trail (e.g., a set of points describing a path taken by the leading vehicle), using the techniques described in connection with FIG. 5E, above.", Paragraph [0201]; "At step 582, processing unit 110 may analyze the navigation information determined at step 580. In one embodiment, processing unit 110 may calculate the distance between a snail trail and a road polynomial (e.g., along the trail). If the variance of this distance along the trail exceeds a predetermined threshold (for example, 0.1 to 0.2 meters on a straight road, 0.3 to 0.4 meters on a moderately curvy road, and 0.5 to 0.6 meters on a road with sharp curves), processing unit 110 may determine that the leading vehicle is likely changing lanes. In the case where multiple vehicles are detected traveling ahead of vehicle 200, processing unit 110 may compare the snail trails associated with each vehicle. Based on the comparison, processing unit 110 may determine that a vehicle whose snail trail does not match with the snail trails of the other vehicles is likely changing lanes. Processing unit 110 may additionally compare the curvature of the snail trail (associated with the leading vehicle) with the expected curvature of the road segment in which the leading vehicle is traveling. The expected curvature may be extracted from map data (e.g., data from map database 160), from road polynomials, from other vehicles' snail trails, from prior knowledge about the road, and the like. If the difference in curvature of the snail trail and the expected curvature of the road segment exceeds a predetermined threshold, processing unit 110 may determine that the leading vehicle is likely changing lanes.", Paragraph [0202]), where Examiner reiterates that the distance between the snail trail and road polynomials of the lead vehicle (as stated above) provides essentially an extension of dividing lines 1 and 2 (compare Applicant's Figure 6) that allows a user (in the host vehicle in the prior art) to determine (by use of a threshold) whether the lead vehicle is changing lanes; and

construct an output dividing line to be output as a dividing line using at least one of the first dividing line, the second dividing line, or the third dividing line (Figure 25A; "At step 2624, process 2600E may include analyzing the at least one image to identify the at least one lane mark. Vehicle 200, for example, may use various image recognition techniques or algorithms to identify the lane mark within the image, as described above. For example, lane mark 2510 may be detected through image analysis of image 2500, as shown in FIG. 25A.", Paragraph [0379]; "At step 2625, process 2600B may include determining an actual lateral distance to the at least one lane mark based on analysis of the at least one image. For example, the vehicle may determine a distance 2530, as shown in FIG. 25A, representing the actual distance between the vehicle and lane mark 2510. The camera angle, the speed of the vehicle, the width of the vehicle, the position of the camera relative to the vehicle, or various other factors may be accounted for in determining distance 2530.", Paragraph [0380]), wherein the output lane marking (2510) is the output dividing line that correlates to either the first, second, or third dividing line.

Claim 2

Regarding Claim 2, dependent on claim 1, HUBERMAN teaches the invention as claimed in claim 1.
HUBERMAN further teaches wherein the controller is further configured to divide each of the first dividing line, the second dividing line, and the third dividing line into a plurality of sections and selects one of the first dividing line, the second dividing line, and the third dividing line as a dividing line in each section on a basis of reliability (Figure 5C; "At step 556, processing unit 110 may perform multi-frame analysis by, for example, tracking the detected segments across consecutive image frames and accumulating frame-by-frame data associated with detected segments. As processing unit 110 performs multi-frame analysis, the set of measurements constructed at step 554 may become more reliable and associated with an increasingly higher confidence level. Thus, by performing steps 550, 552, 554, and 556, processing unit 110 may identify road marks appearing within the set of captured images and derive lane geometry information.", Paragraph [0192]).

Claim 3

Regarding Claim 3, dependent on claim 2, HUBERMAN teaches the invention as claimed in claim 2. HUBERMAN further teaches wherein the controller is further configured to add reliability information to each of information on the first dividing line, information on the second dividing line, and information on the third dividing line and selects a dividing line having high reliability as a dividing line in each section among the first dividing line, the second dividing line, and the third dividing line (Rejected as applied to claim 2).

Claim 4

Regarding Claim 4, dependent on claim 1, HUBERMAN teaches the invention as claimed in claim 1. HUBERMAN further teaches wherein the controller is further configured to:

acquire positioning information detected by a positioning sensor mounted on the own vehicle (Figure 1, #130; "For example, the vehicle may use GPS data, sensor data (e.g., from an accelerometer, a speed sensor, a suspension sensor, etc.), and/or other map data to provide information related to its environment while the vehicle is traveling, and the vehicle (as well as other vehicles) may use the information to localize itself on the model.", Paragraph [0107]; "Position sensor 130 may include any type of device suitable for determining a location associated with at least one component of system 100. In some embodiments, position sensor 130 may include a GPS receiver. Such receivers can determine a user position and velocity by processing signals broadcasted by global positioning system satellites. Position information from position sensor 130 may be made available to applications processor 180 and/or image processor 190.", Paragraph [0119]);

estimate a position/posture of the own vehicle on a basis of the positioning information ("The disclosed systems and methods may include other features. For example, the disclosed systems may use local coordinates, rather than global coordinates. For autonomous driving, some systems may present data in world coordinates. For example, longitude and latitude coordinates on the earth surface may be used. In order to use the map for steering, the host vehicle may determine its position and orientation relative to the map. It seems natural to use a GPS device on board, in order to position the vehicle on the map and in order to find the rotation transformation between the body reference frame and the world reference frame (e.g., North, East and Down).", Paragraph [0325]);

accumulate the position/posture of the own vehicle and information on the first dividing line as a history and generate a fourth dividing line obtained by extending the first dividing line rearward of the own vehicle (Figure 11B; "As discussed above, system 100 may provide drive assist functionality that uses a multi-camera system. The multi-camera system may use one or more cameras facing in the forward direction of a vehicle. In other embodiments, the multi-camera system may include one or more cameras facing to the side of a vehicle or to the rear of the vehicle.", Paragraph [0165]; "In some embodiments, one or more of image capture devices 122, 124, and 126 may be configured to acquire image data from an environment in front of vehicle 200, behind vehicle 200, to the sides of vehicle 200, or combinations thereof.", Paragraph [0153]; "FIG. 5A is a flowchart showing an exemplary process 500A for causing one or more navigational responses based on monocular image analysis, consistent with disclosed embodiments. At step 510, processing unit 110 may receive a plurality of images via data interface 128 between processing unit 110 and image acquisition unit 120. For instance, a camera included in image acquisition unit 120 (such as image capture device 122 having field of view 202) may capture a plurality of images of an area forward of vehicle 200 (or to the sides or rear of a vehicle, for example) and transmit them over a data connection (e.g., digital, wired, USB, wireless, Bluetooth, etc.) to processing unit 110. Processing unit 110 may execute monocular image analysis module 402 to analyze the plurality of images at step 520, as described in further detail in connection with FIGS. 5B-5D below. By performing the analysis, processing unit 110 may detect a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, and the like.", Paragraph [0182]; Paragraphs [0248 - 0249]), wherein the processes of detecting dividing lines are also utilized by the rear-facing sensors; and

use the third dividing line and the fourth dividing line to construct, behind the own vehicle, an output dividing line that is a boundary between an own lane on which the own vehicle is traveling and an adjacent lane adjacent to the own lane (Rejected as applied directly above).

Claim 5

Regarding Claim 5, dependent on claim 1, HUBERMAN teaches the invention as claimed in claim 1. HUBERMAN further teaches wherein the controller is further configured to: estimate states of a plurality of other vehicles around the own vehicle, estimate a plurality of third dividing lines on a basis of a positional relationship between the first dividing line and a traveling trajectory of each of the plurality of other vehicles (Rejected as applied to claim 1), and construct an output dividing line to be output as a dividing line using at least one of the first dividing line, the second dividing line, or the plurality of third dividing lines (Rejected as applied to claim 1).

Claim 6

Regarding Claim 6, dependent on claim 5, HUBERMAN teaches the invention as claimed in claim 5. HUBERMAN further teaches wherein in a case where the plurality of third dividing lines are compared with each other and adjacent third sections have a distance equal to or less than a threshold, the controller is further configured to adopt any one of the plurality of third dividing lines or a dividing line obtained by integrating the plurality of third dividing lines, as the third dividing line (Rejected as applied to claim 1).

Claim 7

Regarding Claim 7, dependent on claim 4, HUBERMAN teaches the invention as claimed in claim 4. HUBERMAN further teaches a vehicle control device that determines whether a traveling lane of another vehicle traveling behind the own vehicle is the own lane or the adjacent lane by using information on the output dividing line behind the own vehicle constructed by the dividing line recognition device according to claim 4 (Rejected as applied to claim 4).

Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday, 8:00-4:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RONDE LEE MILLER/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698
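For context on the dispute over the third dividing line, the following is a minimal sketch of the lane-change check described in the HUBERMAN passage ([0202]) quoted above: compute the offsets between a leading vehicle's snail trail and the detected road polynomial, and flag a lane change when the variance of those offsets exceeds a curvature-dependent threshold. This is an illustration only, not HUBERMAN's implementation or Applicant's claimed method; the function name, the NumPy representation of the trail and polynomial, and the use of mid-range threshold values are assumptions.

# Illustrative only: one reading of HUBERMAN [0202] as quoted in the Office Action.
# Assumed representation: the snail trail is an (N, 2) array of (x, y) points and the
# road polynomial gives lateral position y as a function of longitudinal distance x.
import numpy as np

def leading_vehicle_changing_lanes(snail_trail, road_poly_coeffs, road_type="straight"):
    """Return True when the variance of the trail-to-polynomial distance exceeds the
    threshold HUBERMAN gives for the road type (0.1-0.2 m straight, 0.3-0.4 m moderate
    curves, 0.5-0.6 m sharp curves); mid-range values are assumed here."""
    thresholds = {"straight": 0.15, "moderate": 0.35, "sharp": 0.55}
    trail = np.asarray(snail_trail, dtype=float)
    expected = np.polyval(road_poly_coeffs, trail[:, 0])  # road polynomial (mapped to the claimed first dividing line)
    offsets = trail[:, 1] - expected                      # distance between snail trail and polynomial along the trail
    return float(np.var(offsets)) > thresholds[road_type]

On the Examiner's reading, the trajectory projected from these offsets is what is being mapped to the claimed third dividing line, which is why the rejection suggests that any amendment would need to define the third dividing line as something more than an estimated line.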

Prosecution Timeline

May 26, 2023: Application Filed
Jun 13, 2025: Non-Final Rejection — §102
Sep 03, 2025: Response Filed
Nov 06, 2025: Final Rejection — §102
Apr 09, 2026: Request for Continued Examination
Apr 15, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573215
LEARNING APPARATUS, LEARNING METHOD, OBJECT DETECTION APPARATUS, OBJECT DETECTION METHOD, LEARNING SUPPORT SYSTEM AND LEARNING SUPPORT METHOD
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12548114
METHOD FOR CODE-LEVEL SUPER RESOLUTION AND METHOD FOR TRAINING SUPER RESOLUTION MODEL THEREFOR
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12524833
X-RAY DIAGNOSIS APPARATUS, MEDICAL IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12502905
SECURE DOCUMENT AUTHENTICATION
Granted Dec 23, 2025 (2y 5m to grant)
Patent 12505581
ONLINE TRAINING COMPUTER VISION TASK MODELS IN COMPRESSION DOMAIN
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview (+37.5%): 99%
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
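As a sanity check on the figures above, here is a minimal sketch of how they can be reproduced from the stated career data. The dashboard's actual model is not disclosed, so treating the career allow rate as the base grant probability, applying the +37.5% relative interview lift, and capping the displayed value at 99% are assumptions.

# Reproducing the headline figures from the stated career data (assumed model, see note above).
granted, resolved = 16, 22       # "16 granted / 22 resolved"
interview_lift = 0.375           # "+37.5% Interview Lift"

allow_rate = granted / resolved                                # 0.727 -> displayed as 73%
with_interview = min(allow_rate * (1 + interview_lift), 0.99)  # 1.00, capped -> 99%

print(f"Career allow rate: {allow_rate:.0%}")     # 73%
print(f"With interview:    {with_interview:.0%}")  # 99%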
