Prosecution Insights
Last updated: April 19, 2026
Application No. 18/958,895

NAVIGATION METHOD AND SYSTEM BASED ON TERRAIN FEATURE

Status: Non-Final OA (§102, §103)
Filed: Nov 25, 2024
Examiner: KIM, ANDREW SANG
Art Unit: 3668
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Hyundai Autoever Corp.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 6m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 83% (146 granted / 175 resolved), above average (+31.4% vs TC avg)
Interview Lift: +3.8% across resolved cases with interview (minimal, roughly +4%)
Typical Timeline: 2y 6m average prosecution; 22 applications currently pending
Career History: 197 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 175 resolved cases.
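The headline figures above are simple derived statistics, and they can be reproduced from the raw counts as a sanity check. A minimal Python sketch (the Tech Center average of 52% used here is back-derived from the reported +31.4% delta, not a sourced figure):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage, rounded to one decimal place."""
    return round(100 * granted / resolved, 1)

def delta_vs_tc(examiner_pct: float, tc_avg_pct: float) -> float:
    """Signed difference between the examiner's rate and the TC average."""
    return round(examiner_pct - tc_avg_pct, 1)

# 146 granted of 175 resolved -> 83.4%, displayed above as 83%
career = allow_rate(146, 175)
# An implied TC average of ~52% reproduces the reported +31.4% delta
tc_delta = delta_vs_tc(career, 52.0)
```

The same `delta_vs_tc` arithmetic reproduces the per-statute deltas (e.g. §103 at 44.9% against an implied TC average of 40.0% gives +4.9%).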

Office Action

Grounds: §102, §103
DETAILED ACTION

Claims 1-20, received on 11/25/2024, are considered in this Office action. Claims 1-20 are pending examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/25/2024 is being considered by the examiner.

Examiner's Note - 35 USC § 101

The additional limitation of outputting navigation information including the determined target object and movement direction, and transmitting precise driving route data including data related to the determined target object and the determined movement direction to the communication terminal, applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, thus integrating the judicial exception into a practical application, as supported by paras. [0006], [0051] and [00123] of the specification, which are reproduced below:

[0006] a navigation method and system that provides an intuitive and precise driving route based on a terrain feature located ahead of a vehicle on an actual driving road.

[0051] "Turn right at the traffic light at an intersection in front of the vehicle" may be output. Through this navigation information 1, the driver may intuitively recognize the need to turn right based on the traffic light. Accordingly, navigation information that may be easily identified may be provided to the driver who cannot accurately identify the map provided by the navigation system, thereby preventing the driver from entering an incorrect route, and allowing the vehicle to drive along the intended driving route.

[00123] In some embodiments, the computing system 1000 as described with reference to FIG. 12 may be configured using one or more physical servers included in a server farm based on cloud technology such as virtual machines.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 4-6, 10-12 and 16-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Golding (US 20170314954 A1).

Regarding claim 1, Golding teaches a navigation method performed by a computing system (Claim 1: “A method for generating navigation directions […] by one or more processors […]”), the method comprising:

acquiring an image of a surrounding located in front of a vehicle using a sensing device (FIG. 2; para. [0024]: “a standard monocular camera mounted on the dashboard or windshield […] faces the road similar to a dashboard camera”; para. [0045]: “At block 106, real-time imagery is collected at the vehicle approximately from the vantage point of the driver”);

determining a target object acting as a terrain feature among a plurality of objects included in the acquired image of the surrounding located in front of the vehicle (FIG. 2; para. [0046]: “The real-time imagery of the scene then is processed at block 108. […] The processing at block 108 can include comparing the captured scene to the pre-stored imagery of the landmarks obtained at block 106. The processing can produce an indication of which of the visual landmarks identified at block 104 can be identified in the captured scene, and thus probably are visible to the driver.”; para. [0048]: “course, if more than the necessary number of visual landmarks (typically one) are determined to be visible for a single maneuver”; paras. [0058]-[0059]: “Example Image Processing Techniques: […] landmark selection system 18 compares the captured real-time imagery to pre-stored images to detect a match or absence of a match”, wherein the selected visual landmarks for navigational guidance indicate a target object acting as a terrain feature among a plurality of objects);

determining a movement direction in which a driver drives along a driving route at a position of the target object (FIG. 2; para. [0010]: “determining a position of the physical object relative to a point on the route, and providing, to the driver, navigation directions describing the route, the navigation directions including a reference to the identified physical object”; para. [0061]: “In this manner, the landmark selection system 18 can describe the position of an identified object relative to static geographic features and generate navigation instructions of the type “turn where the sports car is now turning.””; para. [0004]: “Thus, a system can generate such navigation directions as “in one fourth of a mile, you will see a McDonald's restaurant on your right; make the next right turn onto Maple Street.””; para. [0019]: “For example, the system can modify the instruction “turn left in 200 feet” to “turn left by the red truck.””, wherein generating a navigation direction relative to the landmark indicates determining a movement direction in which a driver drives along a driving route at a position of the target object); and

outputting navigation information including the determined target object and movement direction (FIG. 2; para. [0047]: “At block 110, navigation directions referencing the one or more visible visual landmarks are provided to the driver, whereas the visual landmarks identified at block 104 but not located within the scene captured at block 106 are omitted”; para. [0004]: “Thus, a system can generate such navigation directions as “in one fourth of a mile, you will see a McDonald's restaurant on your right; make the next right turn onto Maple Street.””).

Regarding claim 4, Golding teaches the navigation method of claim 1. Golding further teaches wherein determining the target object includes:

recognizing the plurality of objects included in the image of the surrounding located in front of the vehicle (FIG. 2; FIG. 4; FIG. 5; para. [0044]: “At block 104, indications of landmarks corresponding to prominent physical objects disposed along the route are retrieved”; para. [0061]: “The landmark selection system 18 then places the identified objects within the geographic model 200 of the corresponding area. Moreover, the landmark selection system 18 can determine the spatial orientation of these objects […] these visual landmarks are only candidate visual landmarks for the current navigation sessions, and it can be determined that some or all of these visual landmarks are not visible”; para. [0068]: “At block 306, the landmark selection system 18 can identify objects of certain pre-defined types within the captured scene”);

determining points of each of the recognized plurality of objects based on a type of each of the recognized plurality of objects (para. [0034]: “The visual landmark database 52 can store information regarding prominent geographic entities that can be visible when driving (or bicycling, walking, or otherwise moving along a navigation route) and thus serve as visual landmarks”; para. [0036]: “overall numeric metric for a visual landmark that can be used to assess whether the visual landmark should be referenced in navigation directions at all”; para. [0038]: “the user profile database 54 can store user preferences regarding the types of visual landmarks they prefer to see”; para. [0069]: “At block 308, the landmark selection system 18 can determine which of the detected objects appear prominently within the scene […] The landmark selection system 18 accordingly can assess the prominent of visual landmarks relative to the rest of the scene based on the difference in color, for example.”); and

determining the target object among the plurality of objects, based on the determined points of each of the plurality of objects (para. [0033]: “landmark selection system 18 then can select a subset of these visual landmarks in accordance with the likelihood the driver can actually see the landmarks when driving, and/or dynamically identify visual landmarks that were not previously stored in the visual landmark database 52”; para. [0048]: “necessary number of visual landmarks (typically one) are determined to be visible for a single maneuver, the visual landmarks can be further filtered based on other signals”; para. [0069]: “More particularly, the landmark selection system 18 can determine that the car enclosed by the box 206 is bright red, and that the rest of the scene 60 lacks bright patches of color. The car enclosed by the box 206 thus can be determined to be a potentially useful visual landmark.”).

Regarding claim 5, Golding teaches the navigation method of claim 4. Golding further teaches wherein determining the points of each of the plurality of objects (para. [0006]: “driver with navigation directions using visual landmarks that are likely to be visible at the time when the driver reaches the corresponding geographic location. In one implementation, the system selects visual landmarks from a relatively large and redundant set of previously identified visual landmarks. To make the selection, the system can consider one or more of the time of day”) includes:

identifying a specific time range including a current time among a predetermined plurality of time ranges (para. [0020]: “Further, the system can receive signals indicative of current time, weather conditions, etc. from other sources, such as a weather service, and select landmarks suitable for the current environmental conditions.”; para. [0020]: “The system can assess usefulness at different times and under weather conditions, so that a certain billboard can be marked as not useful during daytime but useful when illuminated at night”, wherein daytime or night indicates the predetermined plurality of time ranges); and

determining the points of each of the plurality of objects, based on object type-specific points data related to the identified specific time range (para. [0034]: “However, the multiple views of the visual landmark can differ according to the time of day, weather conditions, season, etc. The data record can include metadata that specifies these parameters for each image. For example, the data record may include a photograph of a billboard at night when it is illuminated along with a timestamp indicating when the photograph was captured and another photograph of the billboard at daytime from the same vantage point along with the corresponding timestamp.”; para. [0036]: “assess whether the visual landmark should be referenced in navigation directions at all, separate numeric metrics for different times of day”; para. [0053]: “can adjust the metric for a particular time of day”).

Regarding claim 6, Golding teaches the navigation method of claim 4.
Golding further teaches wherein determining the points of each of the plurality of objects includes:

assigning first points to each of the plurality of objects with reference to first points data in which points of each object type are recorded (para. [0036]: “the visual landmark database 52 in one example implementation stores an overall numeric metric for a visual landmark that can be used to assess whether the visual landmark should be referenced in navigation directions at all, separate numeric metrics for different times of day, different weather conditions, etc. and/or separate numeric metrics for different images”);

assigning second points to each of the plurality of objects with reference to second points data in which points of each object type are recorded, wherein the second points data are different from the first points data (para. [0038]: “the user profile database 54 can store user preferences regarding the types of visual landmarks they prefer to see.”); and

determining the points of each of the plurality of objects, based on the first points and the second points assigned to each of the plurality of objects (para. [0038]: “The landmark selection system 18 can use user preferences as at least one of the factors when selecting visual landmarks from among redundant visual landmarks. In some implementations, the user provides an indication that he or she allows the landmark selection system 18 may utilize this data.”, wherein the overall numeric metric and the user preferences are both used to select the landmarks to reference).

Regarding claim 10, Golding teaches a navigation method performed by a computing system (para. [0022]: “A landmark selection system 18 configured to select visual landmarks using real-time imagery and/or time of day, season, weather, conditions, etc. can be implemented in the mobile system 12, the server system 14, or partially in mobile system 12 and partially in the server system 14.”), the method comprising:

acquiring an image of a surrounding located in front of a vehicle using a sensing device (FIG. 2; para. [0024]: “a standard monocular camera mounted on the dashboard or windshield […] faces the road similar to a dashboard camera […] the mobile system 12 in some implementations uses multiple cameras to collected redundant imagery in real time”; para. [0045]: “At block 106, real-time imagery is collected at the vehicle approximately from the vantage point of the driver”);

transmitting the image of the surrounding located in front of the vehicle to an external device (para. [0040]: “The mobile system 12 accordingly can capture photographs and/or video and provide the captured imagery to the server system 14, where the visual landmark selection module executes a video processing pipeline.”);

receiving precise driving route data generated based on the image of the surrounding located in front of the vehicle from the external device (para. [0061]: “In this manner, the landmark selection system 18 can describe the position of an identified object relative to static geographic features and generate navigation instructions of the type “turn where the sports car is now turning.””; para. [0004]: “Thus, a system can generate such navigation directions as “in one fourth of a mile, you will see a McDonald's restaurant on your right; make the next right turn onto Maple Street.””; para. [0019]: “For example, the system can modify the instruction “turn left in 200 feet” to “turn left by the red truck.””; para. [0022]: “A landmark selection system 18 configured to select visual landmarks using real-time imagery and/or time of day, season, weather, conditions, etc. can be implemented in the mobile system 12, the server system 14, or partially in mobile system 12 and partially in the server system 14.”, wherein implementation by the server system indicates that these processes are performed in an external device, and the guidance then displayed on a mobile device indicates receiving precise driving route data); and

outputting navigation information for guiding a driver to drive in a movement direction at an object acting as a terrain feature, based on the precise driving route data (FIG. 2; para. [0047]: “At block 110, navigation directions referencing the one or more visible visual landmarks are provided to the driver, whereas the visual landmarks identified at block 104 but not located within the scene captured at block 106 are omitted”; para. [0004]: “Thus, a system can generate such navigation directions as “in one fourth of a mile, you will see a McDonald's restaurant on your right; make the next right turn onto Maple Street.””).

Regarding claim 11, Golding teaches the navigation method of claim 10. Golding further teaches wherein acquiring the image of the surrounding located in front of the vehicle using the sensing device includes: determining whether a precise navigation-related event has occurred (FIG. 2; FIG. 3; para. [0067]: “vehicle approaches the location of the next maneuver”); and upon determination that the precise navigation-related event has occurred, acquiring the image of the surrounding located in front of the vehicle using the sensing device (para. [0067]: “Next, at block 304, the landmark selection system 18 can receive real-time imagery for a scene, collected at a certain location of the vehicle. Typically but not necessarily, the real-time imagery is collected when the vehicle approaches the location of the next maneuver.”, wherein approaching the location of the next maneuver indicates that the precise navigation-related event has occurred).

Regarding claim 12, Golding teaches the navigation method of claim 11. Golding further teaches wherein determining whether the precise navigation-related event has occurred includes: measuring a position of the vehicle (para. [0008]: “a positioning module configured to determine a current geographic location of the vehicle,”); calculating a remaining distance to a turning point, based on the driving route and the measured position of the vehicle (para. [0032]: “The navigation instructions generator 42 can use the one or more routes generated by the routing engine 40 and generate a sequence of navigation instructions. Examples of navigation instructions include “in 500 feet, turn right on Elm St.” and “continue straight for four miles.””; para. [0042]: “The navigation application accordingly generates the audio message “turn left at the bus stop you will see on your left” when the driver is approximately 200 feet away from the intersection”, wherein “approximately 200 feet away from the intersection” indicates calculating a remaining distance to a turning point); and when the calculated remaining distance is smaller than or equal to a predetermined threshold distance, determining that the precise navigation-related event has occurred (para. [0067]: “vehicle approaches the location of the next maneuver”; para. [0042]: “The navigation application accordingly generates the audio message “turn left at the bus stop you will see on your left” when the driver is approximately 200 feet away from the intersection”, wherein “approximately 200 feet away from the intersection” indicates a predetermined threshold distance).

Regarding claim 16, Golding teaches a system comprising: one or more processors; and a memory configured to store a computer program executed by the one or more processors, wherein the computer program comprises instructions for performing operations (para. [0043]: “The method 100 can be implemented as a set of software instructions stored on a non-transitory computer-readable medium and executable by one or more processors, for example”; para. [0022]: “A landmark selection system 18 configured to select visual landmarks using real-time imagery and/or time of day, season, weather, conditions, etc. can be implemented in the mobile system 12, the server system 14, or partially in mobile system 12 and partially in the server system 14.”, wherein the server system performing the method indicates one or more processors and a memory, and the method can be performed in the vehicle, in the server, or in combination) comprising:

receiving an image of a surrounding located in front of a vehicle from a communication terminal (FIG. 2; para. [0024]: “a standard monocular camera mounted on the dashboard or windshield […] faces the road similar to a dashboard camera […] the mobile system 12 in some implementations uses multiple cameras to collected redundant imagery in real time”; para. [0045]: “At block 106, real-time imagery is collected at the vehicle approximately from the vantage point of the driver”; para. [0040]: “The mobile system 12 accordingly can capture photographs and/or video and provide the captured imagery to the server system 14, where the visual landmark selection module executes a video processing pipeline.”);

determining a target object acting as a terrain feature from among a plurality of objects included in the received image of the surrounding located in front of the vehicle (FIG. 2; para. [0046]: “The real-time imagery of the scene then is processed at block 108. […] The processing at block 108 can include comparing the captured scene to the pre-stored imagery of the landmarks obtained at block 106. The processing can produce an indication of which of the visual landmarks identified at block 104 can be identified in the captured scene, and thus probably are visible to the driver.”; para. [0048]: “course, if more than the necessary number of visual landmarks (typically one) are determined to be visible for a single maneuver”; paras. [0058]-[0059]: “Example Image Processing Techniques: […] landmark selection system 18 compares the captured real-time imagery to pre-stored images to detect a match or absence of a match”, wherein the selected visual landmarks for navigational guidance indicate a target object acting as a terrain feature among a plurality of objects);

determining a movement direction in which a driver can drive along a driving route at a location of the target object (FIG. 2; para. [0010]: “determining a position of the physical object relative to a point on the route, and providing, to the driver, navigation directions describing the route, the navigation directions including a reference to the identified physical object”; para. [0061]: “In this manner, the landmark selection system 18 can describe the position of an identified object relative to static geographic features and generate navigation instructions of the type “turn where the sports car is now turning.””; para. [0004]: “Thus, a system can generate such navigation directions as “in one fourth of a mile, you will see a McDonald's restaurant on your right; make the next right turn onto Maple Street.””; para. [0019]: “For example, the system can modify the instruction “turn left in 200 feet” to “turn left by the red truck.””, wherein generating a navigation direction relative to the landmark indicates determining a movement direction in which a driver drives along a driving route at a position of the target object); and

transmitting precise driving route data including data related to the determined target object and the determined movement direction to the communication terminal (FIG. 1; FIG. 2; para. [0047]: “At block 110, navigation directions referencing the one or more visible visual landmarks are provided to the driver, whereas the visual landmarks identified at block 104 but not located within the scene captured at block 106 are omitted”; para. [0004]: “Thus, a system can generate such navigation directions as “in one fourth of a mile, you will see a McDonald's restaurant on your right; make the next right turn onto Maple Street.””, wherein the navigation instructions being displayed indicates that these instructions were processed and transmitted from the server).

Regarding claim 17, it recites the system performing claim limitations similar to those of method claim 4, and therefore is rejected on the same basis. Regarding claim 18, it recites the system performing claim limitations similar to those of method claim 5, and therefore is rejected on the same basis. Regarding claim 19, it recites the system performing claim limitations similar to those of method claim 6, and therefore is rejected on the same basis.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 2 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Golding, in view of WANG (US 20230213351 A1). Regarding claim 2, Golding teaches the navigation method of claim 1. Golding further teaches wherein acquiring the image of the surrounding located in front of the vehicle includes (FIG. 2): measuring a position of the vehicle (para. [0008]: “a positioning module configured to determine a current geographic location of the vehicle,”); calculating a residual distance to a turning point based on the driving route and the measured position of the vehicle (para. [0032]: “The navigation instructions generator 42 can use the one or more routes generated by the routing engine 40 and generate a sequence of navigation instructions. 
Examples of navigation instructions include “in 500 feet, turn right on Elm St.” and “continue straight for four miles.””; para. [0042]: “The navigation application accordingly generates the audio message “turn left at the bus stop you will see on your left” when the driver is approximately 200 feet away from the intersection”, wherein “approximately 200 feet away from the intersection” indicates calculating a residual distance to a turning point); and when the calculated residual distance is smaller than or equal to a predetermined threshold distance, generating message (FIG. 2; para. [0042]: “The navigation application accordingly generates the audio message “turn left at the bus stop you will see on your left” when the driver is approximately 200 feet away from the intersection”, wherein “approximately 200 feet away from the intersection” indicates the calculated residual distance is smaller than or equal to a predetermined threshold distance), but fails to specifically teach acquiring the image of the surrounding located in front of the vehicle using the sensing device when the calculated residual distance is smaller than or equal to a predetermined threshold distance. However, in the same field of endeavor, WANG teaches acquiring the image of the surrounding located in front of the vehicle using the sensing device when the calculated residual distance is smaller than or equal to a predetermined threshold distance (FIG. 6; para. [0058]: “(h) When the vehicle approaches an action intersection, the processing module 15 starts to detect the real-time visual landmark image collected by camera 11 in real-time”; para. 
[0072]: “When the user approaches the action point notified by the navigation engine, the processing module will find the corresponding visual anchor by comparing the features of the visual anchor with the features of the sign/landmark image in the video”, wherein approach indicate calculated residual distance is smaller than or equal to a predetermined threshold distance). Golding and WANG are both considered to be analogous to the claimed invention because they are in the same field providing navigational guidance based on objects in the vicinity of the driver. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Golding to incorporate the teachings of WANG and acquire images as the driver approaches the action point. Doing so would aid the driver by providing guidance on the real-time scene, thus being simple and easy to understand (WANG, para. [0005]). Regarding claim 14, it recites a navigation method with claim limitations similar to those performed in the navigation method of claim 2, and therefore is rejected on the same basis. Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Golding, in view of WANG, and further in view of Faaborg (US20160116297A1). Regarding claim 3, The navigation method of claim 1. Golding further teaches wherein acquiring the image of the surrounding located in front of the vehicle includes (FIG. 2 106: Obtain real-time imagery): generating message when approaching an intersection (FIG. 2; para. 
[0042]: “The navigation application accordingly generates the audio message “turn left at the bus stop you will see on your left” when the driver is approximately 200 feet away from the intersection”), but fails to specifically teach measuring a position and a speed of the vehicle; calculating a remaining time until reaching a turning point based on the driving route, the measured position and speed of the vehicle; and when the calculated remaining time is smaller than or equal to a predetermined threshold time, acquiring the image of the surrounding located in front of the vehicle using the sensing device. However, in the same field of endeavor, WANG teaches when the calculated residual distance is smaller than or equal to a predetermined threshold distance, acquiring the image of the surrounding located in front of the vehicle using the sensing device (FIG. 6; para. [0058]: “(h) When the vehicle approaches an action intersection, the processing module 15 starts to detect the real-time visual landmark image collected by camera 11 in real-time”; para. [0072]: “When the user approaches the action point notified by the navigation engine, the processing module will find the corresponding visual anchor by comparing the features of the visual anchor with the features of the sign/landmark image in the video”, wherein approach indicate calculated residual distance is smaller than or equal to a predetermined threshold distance). Golding and WANG are both considered to be analogous to the claimed invention because they are in the same field providing navigational guidance based on objects in the vicinity of the driver. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Golding to incorporate the teachings of WANG and acquire images as the driver approaches the action point. 
Doing so would aid the driver by providing guidance on the real-time scene, thus being simple and easy to understand (WANG, para. [0005]). Golding in view of WANG fails to specifically teach measuring a position and a speed of the vehicle; calculating a remaining time until reaching a turning point based on the driving route, the measured position and speed of the vehicle; and when the calculated remaining time is smaller than or equal to a predetermined threshold time. However, in the same field of endeavor, Faaborg teaches measuring a position and a speed of the vehicle (para. [0067]: “As another example, in some implementations, at (306) the navigational device 210 can identify both the current position and speed of the device or device user.”); calculating a remaining time until reaching a turning point based on the driving route, the measured position and speed of the vehicle (para. [0067]: “both the current position and speed of the device or device user. Based on such information, the device can determine at (306) which of the sequence of navigational maneuvers the user is expected to reach within a threshold amount of time.”); and when the calculated remaining time is smaller than or equal to a predetermined threshold time (FIG. 1; para. [0070]: “At (308) a plurality of indicators respectively representing the upcoming maneuvers determined at (306) can be displayed on a user interface.”; Claim 22: “the sequence of indicators provided in the user interface represent only the navigational maneuvers that the user is expected to reach within the threshold amount of time”). Faaborg is considered to be analogous to the claimed invention because it is in the same field, providing navigational guidance to the driver.
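The Faaborg mapping rests on a remaining time derived from measured distance and speed, gated against a threshold. A hedged sketch of that logic (the names and the 20 s default are assumptions for illustration, not values from the reference):

```python
import math

def remaining_time_s(distance_m, speed_mps):
    """Remaining time to the turning point from measured distance and speed."""
    if speed_mps <= 0.0:
        return math.inf  # a stationary vehicle never reaches the gate
    return distance_m / speed_mps

def should_acquire_image(distance_m, speed_mps, threshold_s=20.0):
    # Trigger the sensing device once the estimated remaining time
    # drops to or below the predetermined threshold time.
    return remaining_time_s(distance_m, speed_mps) <= threshold_s
```

Note how the time gate and the distance gate coincide whenever speed is fixed: a 20 s threshold at 10 m/s is exactly a 200 m threshold, which is the substitution rationale the rejection relies on.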
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have substituted the remaining distance threshold of Golding in view of WANG with a remaining time threshold of Faaborg, as both parameters are related by speed and used to express how close the user is to the destination/waypoint (Faaborg; para. [0029]: “representative of distance (e.g. physical distance, travel time, current expected travel time, etc.)”; para. [0073]: “As another example, the distance between each pair of navigational maneuvers can be a current expected travel time”). Regarding claim 15, it recites a navigation method with claim limitations similar to those performed in the navigation method of claim 3, and therefore is rejected on the same basis. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Golding, in view of Schpok (US20180112993A1). Regarding claim 7, Golding teaches the navigation method of claim 6, but fails to specifically teach wherein determining the points of each of the plurality of objects includes: applying a first weight to the first points; applying a second weight to the second points; summing the first points to which the first weight has been applied and the second points to which the second weight has been applied; and determining the points of each object based on the summing result. However, Schpok teaches wherein determining the points of each of the plurality of objects includes (FIG. 3; para. [0047]: “Next, at blocks 154-158, various numeric metrics for the candidate navigation landmark can be determined”): applying a first weight to the first points (para. [0054]: “The metrics determined at blocks 154, 156 and 158 can be weighed in any suitable manner to generate an overall score.”); applying a second weight to the second points (para.
[0054]: “The metrics determined at blocks 154, 156 and 158 can be weighed in any suitable manner to generate an overall score.”); summing the first points to which the first weight has been applied and the second points to which the second weight has been applied (para. [0054]: “The metrics determined at blocks 154, 156 and 158 can be weighed in any suitable manner to generate an overall score.”, wherein overall score indicates summing); and determining the points of each object based on the summing result (FIG. 3; para. [0054]: “At block 160, one or several landmarks can be selected for use with initial navigation instructions in view of the overall score or, if desired, only one or two of the metrics determined at blocks 154, 156 and 158.”). Golding and Schpok are both considered to be analogous to the claimed invention because they are in the same field, providing navigational guidance based on objects in the vicinity of the driver. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Golding to incorporate the teachings of Schpok and select landmarks based on an overall score generated from various weighted metrics. Doing so aids the driver by providing landmarks based on an overall metric that accounts for observability, prominence, and uniqueness of navigation landmarks (Schpok, para. [0019]). Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Golding, in view of KIRIT (EP 1367368 A1). Regarding claim 8, Golding teaches the navigation method of claim 1. Golding further teaches wherein determining the target object includes determining coordinates on a map where the target object is located (para. [0010]: “determining a position of the physical object relative to a point on the route, and providing, to the driver, navigation directions describing the route”; para.
[0034]: “For each visual landmark, the visual landmark database 52 can store one or several photographs, geographic coordinates, a textual description, remarks submitted by users, and numeric metrics indicative of usefulness of the visual landmark and/or of a particular image of the visual landmark”), wherein determining the movement direction includes determining a turn angle at which the driver can drive along the driving route at the determined coordinates as the movement direction (FIG. 3; para. [0054]: “After block 164, the flow proceeds to block 166, where the next maneuver is selected.”; para. [0032]: “The navigation instructions generator 42 can use the one or more routes generated by the routing engine 40 and generate a sequence of navigation instructions. Examples of navigation instructions include “in 500 feet, turn right on Elm St.” and “continue straight for four miles.””, wherein “turn right” indicates determining a turn), but fails to specifically teach a turn angle. However, KIRIT teaches determining the movement direction includes determining a turn angle at which the driver can drive along the driving route (para. [0006]: “facilitate determining the turn angle required to proceed through an intersection, the geographic database used by a navigation system can include data that indicate the bearing of a road segment at each of its endpoints. The bearing of a road segment at each of its endpoints indicates the angle made by the road segment at that endpoint with a predetermined direction (e.g., north)”; para. [0015]: “The bearing data stored in a geographic database can facilitate determining the turn angle required to proceed through an intersection.”; para. [0002]: “As an example, the guidance may indicate that the driver should keep to the left and make a sharp left turn at the upcoming intersection”; para.
[0039]: “The combination of the primary bearing and secondary bearing data can be used when a road segment changes direction, e.g., "Turn slight right to go left onto street B." The secondary bearing can also be used when providing routing guidance at roundabouts or special traffic figures”). Golding and KIRIT are both considered to be analogous to the claimed invention because they are in the same field, providing navigational guidance to the driver. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Golding to incorporate the teachings of KIRIT and obtain turn angles. Doing so would improve navigational guidance by calculating the turn angle based on how the road curvature is perceived by the driver as he/she approaches the intersection (KIRIT, para. [0046]) and providing appropriate guidance. Regarding claim 20, it recites the system performing claim limitations similar to those of the method claim 8, and therefore is rejected on the same basis. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Golding, in view of Mayster (US20200349368A1). Regarding claim 9, Golding teaches the navigation method of claim 1. Golding further teaches wherein determining the target object includes determining a plurality of objects (FIG. 2; FIG. 5; para. [0048]: “Of course, if more than the necessary number of visual landmarks (typically one) are determined to be visible for a single maneuver, the visual landmarks”; para. [0071]: “At block 312, the landmark selection system 18 can include in the navigation directions a reference to the one or more prominent objects identified at block 306.”, wherein one or more prominent objects indicate a plurality of objects), wherein the navigation information further includes the multiple objects (para.
[0047]: “At block 110, navigation directions referencing the one or more visible visual landmarks are provided to the driver,”; para. [0071]: “At block 312, the landmark selection system 18 can include in the navigation directions a reference to the one or more prominent objects identified at block 306.”), but fails to specifically teach an auxiliary object located between the target object and the vehicle and the navigation information guides the driver to pass by the auxiliary object, and then, drive in the movement direction at the target object. However, Mayster teaches an auxiliary object located between the target object and the vehicle and the navigation information guides the driver to pass by the auxiliary object, and then, drive in the movement direction at the target object (para. [0063]: “By way of example, larger, more easily spotted visual guides, such as landmarks, buildings, or various other features of the urban landscape (parks, billboards, lamp posts, etc.) can be used to provide better orientation to the driver and tell him/her when to make the turn. The result is a more verbose set of instructions, such as “Turn left on 15th St., after a large billboard on your right” or “Begin slowing down near the tall brown building on your right, and prepare to turn right before the next such tall building,” assuming of course that in the latter example these two buildings stand out from the rest.”, wherein slowing down near the tall brown building on your right indicates passing by the auxiliary object and prepare to turn right before the next such tall building indicates driving in the movement direction at the target object, thus indicating an auxiliary object located between the target object and the vehicle). Golding and Mayster are both considered to be analogous to the claimed invention because they are in the same field, providing navigational guidance based on objects in the vicinity of the driver.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Golding to incorporate the teachings of Mayster and provide navigational guidance comprising a landmark between the user and another landmark. Doing so would aid in guiding the driver to their destination by augmenting instructions with visual clues that are expected or known to provide good orientation on their route (Mayster, para. [0062]). Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Golding, in view of Faaborg. Regarding claim 13, Golding teaches the navigation method of claim 11. Golding further teaches generating a message when approaching an intersection (FIG. 2; para. [0042]: “The navigation application accordingly generates the audio message “turn left at the bus stop you will see on your left” when the driver is approximately 200 feet away from the intersection”), but fails to specifically teach measuring a position of the vehicle; calculating a remaining distance to a turning point, based on the driving route of the vehicle and the measured position of the vehicle; and when the calculated remaining distance is smaller than or equal to a predetermined threshold distance, acquiring the image of the surrounding located in front of the vehicle, using the sensing device. However, in the same field of endeavor, Faaborg teaches measuring a position and a speed of the vehicle (para. [0067]: “As another example, in some implementations, at (306) the navigational device 210 can identify both the current position and speed of the device or device user.”); calculating a remaining time until reaching a turning point, based on the driving route, the measured position and speed of the vehicle (para. [0067]: “both the current position and speed of the device or device user.
Based on such information, the device can determine at (306) which of the sequence of navigational maneuvers the user is expected to reach within a threshold amount of time.”, wherein navigational maneuvers comprise a turning point); and when the calculated remaining time is smaller than or equal to a predetermined threshold time, determining that the precise navigation-related event has occurred (FIG. 1; para. [0070]: “At (308) a plurality of indicators respectively representing the upcoming maneuvers determined at (306) can be displayed on a user interface.”; Claim 22: “the sequence of indicators provided in the user interface represent only the navigational maneuvers that the user is expected to reach within the threshold amount of time”). Golding and Faaborg are considered to be analogous to the claimed invention because they are in the same field, providing navigational guidance to the driver. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have substituted the remaining distance threshold of Golding with a remaining time threshold of Faaborg, as both parameters are related by speed and used to express how close the user is to the destination/waypoint (Faaborg; para. [0029]: “representative of distance (e.g. physical distance, travel time, current expected travel time, etc.)”; para. [0073]: “As another example, the distance between each pair of navigational maneuvers can be a current expected travel time”). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Nakamura (US20130261969A1) teaches a navigation process that guides to the navigation point after omitting a landmark corresponding to the characteristic object from the at least one landmark extracted through the extracting process if the determination process determines that the characteristic object cannot be recognized.
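The weighted point-summing mapped to Schpok in the claim 7 rejection amounts to a weighted linear score followed by a highest-score selection. A minimal sketch (the names, the two-metric restriction, and the 0.7/0.3 weights are assumptions for illustration, not values from the reference):

```python
def overall_score(first_points, second_points, first_weight=0.7, second_weight=0.3):
    # Apply a weight to each point value, then sum into an overall score.
    return first_weight * first_points + second_weight * second_points

def select_target_object(candidates, **weights):
    """candidates: mapping of object name -> (first_points, second_points).
    Returns the object whose weighted overall score is highest."""
    return max(candidates, key=lambda name: overall_score(*candidates[name], **weights))
```

With weights of 0.7 and 0.3, an object scoring (9, 4) beats one scoring (6, 8), illustrating how the weighting lets one metric (e.g., prominence) dominate the selection.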
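The turn-angle determination mapped to KIRIT in the claim 8 rejection reduces to the difference between the inbound and outbound segment bearings at the intersection. A sketch under the common clockwise-from-north convention (the function name and the sign/normalization choice are assumptions):

```python
def turn_angle_deg(inbound_bearing_deg, outbound_bearing_deg):
    """Signed turn angle from segment bearings measured clockwise from north.
    Positive values indicate a right turn, negative a left turn,
    normalized to the half-open interval (-180, 180]."""
    angle = (outbound_bearing_deg - inbound_bearing_deg) % 360.0
    return angle - 360.0 if angle > 180.0 else angle
```

For example, heading north (bearing 0) onto an eastbound segment (bearing 90) yields +90, a right turn, while an outbound bearing of 270 yields -90, a left turn; the normalization keeps a wrap-around such as 350 to 10 degrees at a slight +20 rather than -340.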
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW S KIM whose telephone number is (571)272-7356. The examiner can normally be reached Mon - Fri 8AM - 5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James J Lee can be reached on (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREW SANG KIM/Examiner, Art Unit 3668

Prosecution Timeline

Nov 25, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594949
NOTIFICATION DEVICE, NOTIFICATION METHOD, AND NONTRANSITORY RECORDING MEDIUM PROVIDED WITH COMPUTER PROGRAM FOR NOTIFICATION DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12594940
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12589725
VEHICLE AND CONTROL METHOD FOR DETERMINING AN EMERGENCY SITUATION
2y 5m to grant Granted Mar 31, 2026
Patent 12583487
APPARATUS FOR CONTROLLING AUTONOMOUS DRIVING AND METHOD THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12565331
FIRE DETECTION SYSTEM AND METHOD FOR MONITORING AN AIRCRAFT COMPARTMENT AND SUPPORTING A COCKPIT CREW WITH TAKING REMEDIAL ACTION IN CASE OF A FIRE ALARM
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
87%
With Interview (+3.8%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 175 resolved cases by this examiner. Grant probability derived from career allow rate.
