Prosecution Insights
Last updated: April 19, 2026
Application No. 18/292,862

METHOD FOR PREDICTING TURN POINTS

Non-Final OA (§102, §103)
Filed: Jan 26, 2024
Examiner: BHATNAGAR, ANAND P
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Harman International Industries, Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 94%

Examiner Intelligence

Grants 91% of applications, above average.

Career Allow Rate: 91% (648 granted / 710 resolved; +29.3% vs TC avg)
Interview Lift: +2.3% (minimal), measured across resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 18 applications currently pending
Career History: 728 total applications across all art units
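The headline figures above are simple derived statistics, and can be sanity-checked from the raw counts. The rounding below is an assumption about how the dashboard displays the numbers, not a documented methodology.

```python
# Reproduce the examiner-intelligence headline figures from the raw counts.
# Display rounding to whole percentages is an assumption.
granted, resolved = 648, 710            # career outcomes shown above
allow_rate = 100 * granted / resolved   # career allow rate

interview_lift = 2.3                    # percentage-point lift with interview
with_interview = allow_rate + interview_lift

tc_delta = 29.3                         # reported gap vs Tech Center average
implied_tc_avg = allow_rate - tc_delta  # implied TC average allow rate

print(f"Career allow rate:  {allow_rate:.1f}%")      # 91.3% -> shown as 91%
print(f"With interview:     {with_interview:.1f}%")  # 93.6% -> shown as 94%
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # 62.0%
```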

Statute-Specific Performance

§101: 20.9% (-19.1% vs TC avg)
§103: 26.0% (-14.0% vs TC avg)
§102: 34.2% (-5.8% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 710 resolved cases.
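A quick check on the per-statute deltas: adding each delta back to the examiner's rate recovers the Tech Center average estimate, and here every statute resolves to the same value, which suggests a single reference estimate was used for the comparison.

```python
# Recover the Tech Center average implied by each statute-specific delta:
# examiner rate minus the (negative) delta gives the reference value.
stats = {
    "§101": (20.9, -19.1),
    "§103": (26.0, -14.0),
    "§102": (34.2, -5.8),
    "§112": (7.7, -32.3),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```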

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-8, 13, 16-18, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kopp et al. (U.S. patent pub. 2019/0311298 A1).

Regarding claim 1: Kopp et al. discloses a computer-implemented method for predicting one or more turn points related to a road a vehicle is travelling on (fig. 1 and paragraphs 0037-0038), the one or more turn points indicating locations where the vehicle can change direction (paragraphs 0020, 0037, and 0038; the system monitors road lanes, edges, shoulders, intersections, etc. The intersections are turn points, as are roads/lanes that are curved/bent, i.e. a slight turn. The intersections and road/lane curvature are locations where the vehicle changes direction.), the method comprising: obtaining training images of roads and their environment (paragraphs 0020 and 0024-0026); receiving labels associated with the roads on the training images, each label comprising a training turn marker (paragraphs 0020-0024, 0038, 0039, and 0057-0060); training an artificial neural network on a training dataset to predict one or more turn points, wherein the training dataset comprises the received labels and the obtained training images (paragraphs 0057-0060); recording at least one road image of a road and its environment (paragraphs 0073-0074); and processing the road image by the artificial neural network to predict one or more turn points on the road image (paragraphs 0073-0074).

Regarding claim 2: The method of claim 1, wherein each training turn marker comprises a turn point (paragraphs 0038-0039, wherein the lanes, roadways, signage, paint markings, etc. of the roads are obtained).

Regarding claim 3: The method of claim 1, wherein each training turn marker comprises a turn line indicative of a road border section where a vehicle can change travelling direction (paragraphs 0038-0039, wherein the lanes, roadways, signage, paint markings, etc. of the roads are obtained; if the signs or paint markings on the road indicate a turn, this would be obtained by the system, and this would be a change of travelling direction).

Regarding claim 4: The method of claim 3, further comprising determining, for each turn line, a turn point at the centre of the turn line (paragraphs 0038-0039).

Regarding claim 5: The method of claim 3, wherein a turn line indicates the road border section only if the beginning and the end of the road border section are visible on the training image (paragraphs 0038-0039).
Regarding claim 6: The method of claim 3, wherein a turn line indicates a road border section comprising a section of a road border of a main road where the main road forms a junction, an intersection, or a crossroads to a crossed road (paragraphs 0038-0039; the system detects all critical data needed for safe navigation (see last line of paragraph 0038), which is read as any objects, markers, etc. in the images being acquired and determined for a traveling vehicle); and/or an exit of a main road is blocked by a temporary barrier, a physical separating strip, and/or traffic signage blocking traffic (paragraphs 0038-0039; the system detects all critical data needed for safe navigation, read as any objects, markers, lanes, vehicles, entry/exit points of the roads, etc. in the images being acquired and determined for a traveling vehicle).

Regarding claim 7: The method of claim 5, wherein a turn line does not indicate a road border section of a main road (paragraphs 0038-0039; the system detects all critical data needed for safe navigation (see last line of paragraph 0038), which is read as any objects, markers, etc. in the images being acquired and determined for a traveling vehicle) where one or more of the following apply: the main road is crossed by a crosswalk, the main road is curved without comprising any of a junction, intersection or crossroads, an edge of a road border section is invisible on the training image, and/or an edge of a road is not an edge of a carriageway of the road (paragraphs 0038-0039; same reading as above).

Regarding claim 8: The method of claim 1, wherein the training images include training images with randomly added shadows, colour transformations, horizontal flipping, blurring, random resizing and/or random cropping (paragraphs 0048 and 0059; scaling = resizing and light conditions = shadows).

Regarding claim 13: The method of claim 1, wherein the steps of recording the road image and processing the road image are executed by a computer attached to or comprised in a mobile device (paragraphs 0037-0038).

Regarding claim 16: The method of claim 1, wherein the training dataset further comprises: at least one label indicating a boundary of at least one image segment and/or, for each image segment, a label for a segment type indicating an object represented by the segment; and wherein training and/or processing includes predicting boundaries and/or types of image segments (paragraphs 0037, 0038, 0059 and 0062).

Regarding claim 17: The method of claim 16, wherein predicting boundaries and types of image segments comprises application of online hard example mining (paragraphs 0019-0021, 0037, 0038, and 0062; as stated, the claim is in alternative form, and the examiner reads the binary cross-entropy loss function as being applied for the training and not for the online hard example mining), and/or wherein training comprises applying a binary cross-entropy loss function.

Regarding claim 18: The method of claim 1, further comprising recording the training and/or inference images by a vehicle-mounted camera (paragraphs 0037-0038).

Regarding claim 20: Kopp et al. discloses a system for predicting turn points indicating locations where the vehicle can change direction, the system comprising means for executing the steps of claim 13 (see claim 13).
Note: Regarding claim 20, it is believed that this claim does not fall under 35 U.S.C. 112(f)/sixth paragraph interpretation, since this claim includes the subject matter of claim 13. Claim 13 states "wherein the steps of recording the road image, and processing the road image are executed by a computer attached to or comprised in a mobile device," i.e. hardware performing the steps.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

A.) Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Kopp et al. (U.S. patent pub. 2019/0311298 A1), and further in view of Choe et al. (U.S. patent pub. 2020/0026282 A1).

Regarding claim 9: Kopp et al. does not teach the feature of "the artificial neural network comprises an output layer comprising outputs indicative of heat maps indicative of one or more of turn lines, turn points, and road segments." Choe et al. teaches this feature (Choe et al.; see paragraph 0058). It would have been obvious to one of ordinary skill in the art to combine the teaching of Choe et al. with the system of Kopp et al., since they are in the same field of endeavor. One of ordinary skill in the art would have been motivated to combine them in order to have "a reliable vision-based perception system for ADVs when LIDAR sensors may not be available" (Choe et al.; paragraph 0003).

Regarding claim 10: Neither Kopp et al. nor Choe et al. teaches "for each heat map: applying a step function to set values below a predefined threshold to zero and all other values to one, determining one or more contiguous zones of non-zero values, and for each zone, determining a centre of mass position of the zone as a turn point." This is a well-known feature in the art of image processing; Examiner takes OFFICIAL NOTICE. It would have been obvious to one of ordinary skill in the art to incorporate this well-known feature, and one of ordinary skill in the art would have been motivated to do so in order to have a more accurate road analysis system by using zone and object analysis in different regions of interest.

B.) Claims 11, 12, 14, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kopp et al. (U.S. patent pub. 2019/0311298 A1).

Regarding claim 11: Kopp et al. does not teach "applying a Gaussian filter to the labels of the training dataset." This is a well-known feature in the art of image processing; Examiner takes OFFICIAL NOTICE. It would have been obvious to one of ordinary skill in the art to incorporate this well-known feature, and one of ordinary skill in the art would have been motivated to do so in order to have a more accurate road analysis system.

Regarding claim 12: Kopp et al. does not teach "wherein training the artificial neural network comprises minimizing a mean squared error of the predicted turn points with respect to the training turn markers." This is a well-known feature in the art of image processing; Examiner takes OFFICIAL NOTICE. It would have been obvious to one of ordinary skill in the art to incorporate this well-known feature, and one of ordinary skill in the art would have been motivated to do so in order to have a more accurate road analysis system.

Regarding claim 14: Kopp et al. does not teach "comprising determining a confidence value for the predicted turn points on one or more road images; comparing the confidence value to a predetermined threshold; and including the one or more road images into the training dataset as training images if the confidence value is below the threshold." Examiner takes OFFICIAL NOTICE. It would have been obvious to one of ordinary skill in the art to incorporate this well-known feature, and one of ordinary skill in the art would have been motivated to do so in order to have a more accurate road analysis system.
Regarding claim 15: The method of claim 12, further comprising: displaying the road image and/or other environmental data, superimposed with graphical and/or text output based on the predicted turn points (Kopp et al.; paragraphs 0080-0082, "Some devices 122 show detailed maps on displays outlining the route." This outlining is read as a graphical output.).

Regarding claim 19: Kopp et al. does not teach "comprising recording the training images at a fixed frame rate, and removing each training image if it depicts the same junction as a second training image." The recording speed and deleting of images is a matter of design choice. One of ordinary skill in the art would have been motivated to include these features to make the system more efficient and to speed up the training process by removing redundant images.

Contact Information

4. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANAND BHATNAGAR, whose telephone number is (571) 272-7416. The examiner can normally be reached M-F 7:30am-4:00pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at 571-272-4650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ANAND P BHATNAGAR/
Primary Examiner, Art Unit 2668
January 8, 2026
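The post-processing the examiner takes Official Notice of in claim 10 (step-function thresholding, finding contiguous zones of non-zero values, and taking each zone's centre of mass as a turn point) is indeed standard image processing. A minimal pure-Python sketch follows; the heat-map values, threshold, and 4-connectivity are illustrative assumptions, not taken from the application.

```python
from collections import deque

def turn_points(heatmap, threshold):
    """Claim-10 style post-processing: step function, contiguous zones,
    centre of mass per zone. 4-connectivity is an assumption."""
    rows, cols = len(heatmap), len(heatmap[0])
    # Step function: values below the threshold -> 0, all others -> 1.
    binary = [[1 if v >= threshold else 0 for v in row] for row in heatmap]
    seen = [[False] * cols for _ in range(rows)]
    points = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # BFS over one contiguous zone of non-zero values.
                zone, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    zone.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Centre of mass of the zone is the predicted turn point.
                cy = sum(y for y, _ in zone) / len(zone)
                cx = sum(x for _, x in zone) / len(zone)
                points.append((cy, cx))
    return points

heat = [
    [0.0, 0.1, 0.0, 0.0, 0.0],
    [0.2, 0.9, 0.8, 0.0, 0.0],
    [0.0, 0.7, 0.0, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.0, 0.9],
]
print(turn_points(heat, 0.5))  # two zones -> two turn points
```

In practice the same three steps are typically done with library calls (e.g. `scipy.ndimage.label` plus `scipy.ndimage.center_of_mass`); the explicit version above just mirrors the claim language step by step.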

Prosecution Timeline

Jan 26, 2024: Application Filed
Jan 08, 2026: Non-Final Rejection, §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597282: IMAGE PROCESSING APPARATUS, CONTROL METHOD OF IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12597172: DECODING ATTRIBUTE VALUES IN GEOMETRY-BASED POINT CLOUD COMPRESSION. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12592003: Methods for the compression and decompression of a digital terrain model file; associated compressed and decompressed files and associated computer program product. Granted Mar 31, 2026 (2y 5m to grant).
Patent 12592053: METHOD FOR ADJUSTING A REGION OF INTEREST IN A DYNAMIC IMAGE FOR ADVANCED DRIVER-ASSISTANCE SYSTEM, AND IN-VEHICLE ELECTRONIC DEVICE FOR IMPLEMENTING THE METHOD. Granted Mar 31, 2026 (2y 5m to grant).
Patent 12579716: MRI RECONSTRUCTION BASED ON CONTRASTIVE LEARNING. Granted Mar 17, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91% (94% with interview, +2.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 710 resolved cases by this examiner. Grant probability derived from career allow rate.
