Prosecution Insights
Last updated: April 19, 2026
Application No. 18/623,854

METHOD AND ELECTRONIC DEVICE FOR VOICE-BASED NAVIGATION

Final Rejection (§103, §112)
Filed: Apr 01, 2024
Examiner: PARK, KYLE S
Art Unit: 3666
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 66% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 66%, above average (92 granted / 140 resolved; +13.7% vs TC avg)
Interview Lift: +31.6% in resolved cases with interview (strong)
Avg Prosecution: 2y 9m typical; 30 applications currently pending
Total Applications: 170 career total, across all art units
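The headline allowance figure follows directly from the raw counts above. A minimal sketch recomputing it; the nearest-whole-percent display rounding is an assumption about how the dashboard formats the number:

```python
# Recompute the dashboard's headline allowance rate from the raw counts
# shown above; rounding to a whole percent is an assumed display convention.
granted, resolved = 92, 140
career_allow_rate = granted / resolved                 # ~0.657
print(f"Career Allow Rate: {career_allow_rate:.0%}")   # Career Allow Rate: 66%
```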

Statute-Specific Performance

§101: 25.7% (-14.3% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 25.1% (-14.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 140 resolved cases.

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

This Final action is in response to the applicant’s amendment/response of December 11, 2025. Claims 5 and 15 have been canceled. Claims 1-4, 6-14, and 16-20 are pending and have been considered as follows.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on October 9, 2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant’s arguments/amendments with respect to the objection to the claims have been fully considered and are persuasive. Therefore, the objection to the claims as presented in the Office Action of September 12, 2025 has been withdrawn. However, a new objection to the claims is presented below based on the amendments to the claims presented in the Amendment of December 11, 2025.

Applicant’s arguments/amendments with respect to the interpretation of claims under 35 USC §112(f) have been fully considered and are persuasive. Therefore, the interpretation of claims under 35 USC §112(f) has been withdrawn.

Applicant’s arguments/amendments with respect to the rejection of claims under 35 USC §112(b) have been fully considered and are persuasive. Therefore, the rejection of claims under 35 USC §112(b) as presented in the Office Action of September 12, 2025 has been withdrawn. However, a new rejection of claims under 35 USC §112(b) is presented below based on the amendments to the claims presented in the Amendment of December 11, 2025.

Applicant’s arguments/amendments with respect to the rejection of claims under 35 USC § 101 have been fully considered and are persuasive. Therefore, the rejection of claims under 35 USC § 101 has been withdrawn.
Applicant’s arguments/amendments with respect to the rejection of claims under 35 USC § 102 have been fully considered and are persuasive. Therefore, the rejection of claims under 35 USC § 102 has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Moore, SEIDLER, and CHRISTENSEN.

Claim Objections

Claims 1, 3, 4, 8, 14, and 18 are objected to because of the following informalities:

Claim 1, line 10, “electronic device” should read “the electronic device”.
Claim 3, lines 8-9, “one or more ranking parameters” should read “the one or more ranking parameters”.
Claim 4, line 8, “one or more visible areas” should read “the one or more visible areas”.
Claim 8, lines 3-4, “information associated with each candidate PoI” should read “the information associated with each candidate PoI”.
Claim 14, line 7, “one or more visible areas” should read “the one or more visible areas”.
Claim 18, lines 3-4, “information associated with each candidate PoI” should read “the information associated with each candidate PoI”.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-4, 6-14, and 16-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As to claim 1, the limitation “the input” at lines 18-19 is unclear. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the Examiner is interpreting the limitation to be “an input”. Further, the limitation “the information associated with each candidate PoI” at lines 20-21 is unclear. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the Examiner is interpreting the limitation to be “information associated with each candidate PoI”.

As to claim 11, the limitation “the input” at line 27 is unclear. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the Examiner is interpreting the limitation to be “an input”. Further, the limitation “the information associated with each candidate PoI” at lines 29-30 is unclear. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the Examiner is interpreting the limitation to be “information associated with each candidate PoI”.

As to claim 12, the limitation “the one or more sensors” at line 4 is unclear. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the Examiner is interpreting the limitation to be “one or more sensors”.

Claims 2-4, 6-10, 13, 14, and 16-20 are rejected as being dependent upon a rejected claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 6-8, 11-14, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Moore et al., US 2016/0033280 A1, hereinafter referred to as Moore, in view of SEIDLER, EP 3722750 A2, hereinafter referred to as SEIDLER, and further in view of CHRISTENSEN, US 2014/0025287 A1, hereinafter referred to as CHRISTENSEN.

As to claim 1, Moore teaches a method performed by an electronic device configured to be used by a user for voice-based navigation, the method comprising: receiving, from the user, a command for the voice-based navigation (see at least paragraphs 48 and 52 regarding the user may give a voice command, “Take me to building X in Y campus.” The microphone 131 may be a microphone or other device capable of receiving sounds, such as voice activation/commands or other voice actions from the user, and may be integrated with or external to the earpiece 100. The microphone 131 may also provide input as part of the sensor array 120, Moore); determining a field of view (FoV) of the user using one or more sensors associated with the electronic device (see at least Abstract regarding the earpiece includes a processor connected to the IMU, the GPS unit and the at least one camera.
The processor can recognize an object in the surrounding environment by analyzing the image data based on the stored object data and at least one of the inertial measurement data or the location data. See also at least paragraphs 40-42 regarding the pair of stereo cameras 121a may face forward, in front of a user, to establish a field of view (FOV). The pair of stereo cameras 121a may have, for example, an FOV of around 90 degrees. The pair of stereo cameras 121a provides 3D information such as depth in front of the user, Moore); determining one or more points of interest (PoIs) with respect to the determined FoV of the user (see at least paragraphs 46-47 regarding the map data may contain points of interest to the user, and as the user walks, the stereo cameras 121a and/or cameras 121 may recognize additional points of interest and update the map data as they enter into the field of view of the camera 121. See also at least paragraphs 92-99 regarding recognizing points of interest and other features, such as stairs, empty seats or buildings. For example, the object recognition may be utilized to determine an empty seat without presence of a person. A seat can be recognized as a collection of category objects that make up an empty chair. For example, a seat can be recognized as a substantially horizontally positioned surface positioned on 4 legs recognized by straight vertical lines with a back rest positioned on the surface (which is detected as a collection of primitive shapes that make up a seat). The components of the seat and the relative positioning of the components can be compared to stored objects in the database to recognize the seat, Moore); generating a response for the voice-based navigation based on at least one of the one or more PoIs and the determined FoV of the user (see at least paragraphs 40-43 regarding the cameras 121 can detect moving objects in the user's periphery. 
The stereo cameras 121a and/or the cameras 121 continuously recognize objects in the environment. Working in conjunction with the other sensors in the sensor array 120, the earpiece 100 provides the user with guidance and navigation commands by way of audio and haptic feedback. See also at least paragraphs 155-169 regarding in block 616, the processor 111 determines a desired destination based on the determined desirable action or event. For example, the earpiece 100 may direct the user to an empty seat, or may remember the user's specific seat in order to navigate the user away and subsequently return to the same seat. Other points of interest may be potential hazards, descriptions of surrounding structures, alternate routes, and other locations. Additional data and points of interest can be downloaded and/or uploaded to mobile devices and other devices, social networks, or the cloud, through Bluetooth or other wireless networks. … In block 619, the processor 111 determines a path over which the user can travel. The output data from block 615 may be conveyed to the user using various outputs of the interface array 130. Multimode feedback is provided to the user to guide the user on the path. This feedback is also provided to guide the user towards the desired destination/object and is presented via a combination of speech, vibration, mechanical feedback, electrical stimulation, display, etc., Moore); and forwarding, via a speaker of electronic device, the generated response to the user (see at least paragraphs 51 and 166-170 regarding the output data from block 615 may be conveyed to the user using various outputs of the interface array 130. Multimode feedback is provided to the user to guide the user on the path. 
This feedback is also provided to guide the user towards the desired destination/object and is presented via a combination of speech, vibration, mechanical feedback, electrical stimulation, display, etc., Moore), wherein the generated response comprises at least one of spoken directions, instructions, or guidance to assist the user in navigating to a desired destination or interacting with the one or more determined PoIs (see at least paragraphs 166-170 regarding the output data from block 615 may be conveyed to the user using various outputs of the interface array 130. Multimode feedback is provided to the user to guide the user on the path. This feedback is also provided to guide the user towards the desired destination/object and is presented via a combination of speech, vibration, mechanical feedback, electrical stimulation, display, etc. The user may give a voice command, “Take me to building X in Y campus.” The intelligent earpiece 100 may then download or retrieve from memory a relevant map, or may navigate based on perceived images from the camera 121. As the user follows the navigation commands from the intelligent earpiece 100, the user may walk by a coffee shop in the morning, and the intelligent earpiece 100 would recognize the coffee shop, Moore).

Moore does not explicitly teach generating a customized digital elevation model (DEM) after receiving a location of the user; or determining one or more visible areas and one or more non-visible areas associated with the customized DEM generated after receiving the location of the user as the input, based on the determined FoV of the user.

However, SEIDLER teaches generating a customized digital elevation model (DEM) after receiving a location of the user (see at least paragraphs 15-19 regarding using information associated with locations of one or more observers, for example, obtained from observer data 110, to determine the FOV of the observers relative to a target area.
Identifying one or more observers, including the locations of the observers, using observer data 110. Observer data 110 may include information associated with the identity and/or locations of one or more observers obtained from known sources, such as surveillance feeds, satellite imaging, ground teams, or other intelligence gathering resources. Terrain/elevation data 112 may be obtained from a geographic information system (GIS). The information from GIS may include digital maps of a target area that include various terrain features (e.g., streams, lakes, rivers, ponds, forests, and other natural land formations), man-made features (e.g., highways, roads, streets, buildings, and other types of structures), ground conditions, including unpassable areas, and elevation data for every position (e.g., measured in meters above sea-level (MASL)). See also at least paragraphs 38-39 regarding using an elevation value for each cell (e.g., grid square) of a digital elevation model to determine visibility to or from a particular location (i.e., another cell or grid square)); and determining one or more visible areas and one or more non-visible areas associated with the customized DEM generated after receiving the location of the user as the input, based on the determined FoV of the user (see at least paragraphs 7-10 regarding causing the processor to: obtain a location of one or more observers within a target area, the target area comprised of a plurality of grid squares; apply elevation data to the plurality of grid squares within the target area; determine a field-of-view (FOV) for each of the one or more observers, the FOV based on the elevation data, wherein the FOV includes one or more of the plurality of grid squares that are visible from the location of the observer. 
See also at least paragraphs 15-19 regarding FOV determination module 102 uses the information from observer data 110 and terrain/elevation data 112 to determine the FOV for each of the one or more observers within the target area. See also at least paragraphs 53-55 regarding an example embodiment of using elevation data to determine a FOV from an observer location is illustrated. In this embodiment, an observer 600 has a first height H1 above the ground and is located a distance D away from a target object 610. For example, target object 610 may be subject vehicle 120, described above. Target object 610 has a second height H2 above the ground at its location. The elevation data is applied to the terrain within the target area that includes observer 600 and target object 610. The slope of at least one portion of the terrain is higher than the slope of target object 610. As a result, the portion of the terrain in front of observer 600 extending to a visibility distance Dvis can be seen from the observer’s location. However, because of a high point 602 in the terrain, the portion of the terrain past high point 602, illustrated as an obscured distance DOBS, is not visible to observer 600. As a result, target object 610 located second height H2 above the ground is within obscured distance DOBS and is therefore not within the FOV of observer 600). 
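The grid-based visibility determination described for SEIDLER (an observer at height H1, a target at height H2, and a terrain high point that obscures everything beyond it) reduces to a standard line-of-sight test over DEM cells. A minimal one-dimensional sketch of that idea; the function name, parameters, and the linear-interpolation sight line are illustrative assumptions, not SEIDLER’s disclosed implementation:

```python
def visible(elev, obs_idx, tgt_idx, obs_h=1.7, tgt_h=0.0):
    """Return True if the target cell can be seen from the observer cell.

    elev: terrain elevation per grid cell along the sight line.
    A cell blocks the view when its terrain rises above the straight
    line from the observer's eye to the target point (the "high point"
    situation in the SEIDLER passage above).
    """
    eye = elev[obs_idx] + obs_h          # observer eye elevation (H1)
    top = elev[tgt_idx] + tgt_h          # target top elevation (H2)
    dist = tgt_idx - obs_idx
    for i in range(obs_idx + 1, tgt_idx):
        # Elevation of the sight line at cell i (linear interpolation)
        line = eye + (top - eye) * (i - obs_idx) / dist
        if elev[i] > line:
            return False                 # terrain high point obscures the target
    return True
```

On flat terrain the target is visible; raising one intermediate cell above the sight line places the target in the obscured region, mirroring the DOBS discussion above.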
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of SEIDLER which teaches generating a customized digital elevation model (DEM) after receiving a location of the user; and determining one or more visible areas and one or more non-visible areas associated with the customized DEM generated after receiving the location of the user as the input, based on the determined FoV of the user with the system of Moore as both systems are directed to a system and method for providing the navigation guidance based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of generating a customized digital elevation model (DEM) after receiving a location of the user; and determining one or more visible areas and one or more non-visible areas associated with the customized DEM generated after receiving the location of the user as the input, based on the determined FoV of the user and would have predictably applied it to improve the system of Moore.

Moore, as modified by SEIDLER, does not explicitly teach determining a list of candidate PoIs and the information associated with each candidate PoI based on at least one of the generated customized DEM, the determined one or more visible areas, and the determined one or more non-visible areas; determining a priority of each candidate PoI from the list of candidate PoIs, based on one or more ranking parameters; or determining the one or more PoIs based on the determined priority.
However, CHRISTENSEN teaches determining a list of candidate PoIs and the information associated with each candidate PoI based on at least one of the generated customized DEM, the determined one or more visible areas, and the determined one or more non-visible areas (see at least paragraphs 181-207 regarding the user has requested the personal navigation system 10 to provide information on a POI in the viewing direction of the user, and possibly, the user has specified the types of POIs to be considered, e.g. historical sites. In response to the user request, the personal navigation system has identified the historical POI 1 and POI 2 to reside within the field of view and inside the first distance threshold. The system has further determined POI 1 to be closest to the current centre of the field of view. If the user looks upward to have a look at the top of POI 3, e.g. a tower or a high building behind POI 1 in the viewing field, the processor is configured for, in the event that POI 1 positioned closest to the centre of the field of view of the user within the first distance threshold obstructs the view of a lower part of POI 3; POI 3 however having a height that is larger than the height of POI 1, select POI 3, provided that the determined head pitch is larger than a predetermined pitch threshold, e.g. 15°. The processor may be configured for determining directions towards each of POI 1, POI 2, and POI 3 with relation to the determined geographical position and head yaw of the user, selecting pairs of filters with Head-Related Transfer Functions corresponding to the determined directions, and controlling the sound generator for sequentially outputting audio signals with spoken information on each of POI 1, POI 2, and POI 3 in sequence through the respective selected pairs of filters so that the user hears spoken information on each of POI 1, POI 2, and POI 3 from the respective directions towards the respective POIs.
During or after the narrated presentation, the user may request the personal navigation system to guide the user to a selected geographical position, such as a new site with one or more POIs along a guided tour. The processor will then determine a direction towards a selected geographical destination and guide the user towards that geographical destination as previously described); determining a priority of each candidate PoI from the list of candidate PoIs, based on one or more ranking parameters (see at least FIG. 6 and paragraphs 181-207 regarding the personal navigation system 10 may provide the option that the user can select more than one POI within the user's field of view to be presented to the user by the system, and the user may specify the maximum number of POIs to be presented. If this option is selected in FIG. 6, the processor controls the sound generator to output spoken information on POI 1, POI 2, and POI 3 sequentially, e.g. in the order of proximity to the user. The personal navigation system 10 may transmit the current position of the system to the remote server and request information on nearby POIs, preferably of one or more selected categories, and preferably sequenced in accordance with a selected rule of priority, such as proximity, popularity, user ratings, professional ratings, cost of entrance, opening hours with relation to actual time, etc. A maximum number of POIs may also be specified); and determining the one or more PoIs based on the determined priority (see at least FIG. 6 and paragraphs 181-207 regarding in response to the user request, the personal navigation system has identified the historical POI 1 and POI 2 to reside within the field of view and inside the first distance threshold. The system has further determined POI 1 to be closest to the current centre of the field of view, and therefore emits sound with spoken information on POI 1 to the ears of the user.
The personal navigation system 10 may transmit the current position of the system to the remote server and request information on nearby POIs, preferably of one or more selected categories, and preferably sequenced in accordance with a selected rule of priority, such as proximity, popularity, user ratings, professional ratings, cost of entrance, opening hours with relation to actual time, etc. A maximum number of POIs may also be specified. The server searches for matching POI(s) and transmits the matching record(s), including audio file(s), to the personal navigation system that sequentially presents spoken information on the matching POIs with the hearing instrument. The smart phone 200 may further be configured to request navigation tasks to be performed by a remote navigation enabled server whereby the smart phone communicates position data of the current position, e.g. current longitude, latitude; or, the received satellite signals, and position data of a destination, e.g. longitude, latitude; or street address, etc., to the navigation enabled server that performs the requested navigation tasks and transmits resulting data to the smart phone for presentation to the user).
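CHRISTENSEN’s “rule of priority” (proximity, popularity, ratings, etc.) combined with a user-specified maximum number of POIs amounts to a keyed sort plus truncation. A hypothetical sketch under that reading; the PoI fields and rule names here are illustrative, not CHRISTENSEN’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class PoI:
    name: str
    distance_m: float   # proximity to the user
    rating: float       # e.g. an aggregated user rating

def rank_pois(candidates, rule="proximity", max_pois=3):
    """Sequence candidate PoIs by a selected rule of priority and
    truncate to the user-specified maximum number of POIs."""
    keys = {
        "proximity": lambda p: p.distance_m,  # nearest first
        "rating": lambda p: -p.rating,        # best rated first
    }
    return [p.name for p in sorted(candidates, key=keys[rule])[:max_pois]]
```

With three candidates, the default rule returns them nearest-first (matching the “order of proximity” example above); switching the rule reorders them by rating instead.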
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of CHRISTENSEN which teaches determining a list of candidate PoIs and the information associated with each candidate PoI based on at least one of the generated customized DEM, the determined one or more visible areas, and the determined one or more non-visible areas; determining a priority of each candidate PoI from the list of candidate PoIs, based on one or more ranking parameters; and determining the one or more PoIs based on the determined priority with the system of Moore, as modified by SEIDLER, as both systems are directed to a system and method for providing the navigation guidance based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of determining a list of candidate PoIs and the information associated with each candidate PoI based on at least one of the generated customized DEM, the determined one or more visible areas, and the determined one or more non-visible areas; determining a priority of each candidate PoI from the list of candidate PoIs, based on one or more ranking parameters; and determining the one or more PoIs based on the determined priority and would have predictably applied it to improve the system of Moore as modified by SEIDLER.

As to claim 2, Moore, as modified by SEIDLER, does not explicitly teach determining at least one of a roll angle, a pitch angle, and a yaw angle associated with a head orientation of the user, using the one or more sensors; determining a vertical FoV and a horizontal FoV of the user based on the at least one of the roll angle, the pitch angle, and the yaw angle; or determining the FoV of the user based on the determined vertical FoV and the determined horizontal FoV.
However, CHRISTENSEN teaches determining at least one of a roll angle, a pitch angle, and a yaw angle associated with a head orientation of the user, using the one or more sensors (see at least FIGS. 2-3 and paragraphs 134-139 regarding head yaw 150 is the angle between the current x-axis' projection x' 132 onto a horizontal plane 160 at the location of the user, and a horizontal reference direction 170, such as Magnetic North or True North. Head pitch 180 is the angle between the current x-axis 130 and the horizontal plane 160. Head roll 190 is the angle between the y-axis and the horizontal plane. See also at least paragraphs 147-155 regarding the hearing device 12 has an inertial measurement unit 50 positioned for determining head yaw, head pitch, and head roll, when the user wears the hearing device 12 in its intended operational position on the user's head); determining a vertical FoV and a horizontal FoV of the user based on the at least one of the roll angle, the pitch angle, and the yaw angle (see at least FIGS. 2-3 and paragraphs 134-139 regarding the orientation of the head of the user is defined as the orientation of a head reference coordinate system with relation to a reference coordinate system with a vertical axis and two horizontal axes at the current location of the user. Head yaw 150 is the angle between the current x-axis' projection x' 132 onto a horizontal plane 160 at the location of the user, and a horizontal reference direction 170, such as Magnetic North or True North. Head pitch 180 is the angle between the current x-axis 130 and the horizontal plane 160. Head roll 190 is the angle between the y-axis and the horizontal plane. See also at least paragraphs 147-155 regarding the hearing device 12 has an inertial measurement unit 50 positioned for determining head yaw, head pitch, and head roll, when the user wears the hearing device 12 in its intended operational position on the user's head. See also at least FIG. 
6 and paragraphs 181-193 regarding personal navigation system has identified the historical POI 1 and POI 2 to reside within the field of view and inside the first distance threshold. The system has further determined POI 1 to be closest to the current centre of the field of view, and therefore emits sound with spoken information on POI 1 to the ears of the user. Preferably, POIs higher than a predetermined height threshold and with a distance to the user that is larger than a predetermined second distance threshold that is larger than the first distance threshold cannot be selected. In this way, the larger viewing range of tall POIs is taken into account, and the user can control the personal navigation system to select a high POI, e.g. a tower, a high rise building, etc., located behind another POI by looking up at the higher POI. Still, tall POIs located outside the larger viewing range of the user cannot be selected. Thus, if the user looks upward to have a look at the top of POI 3, e.g. a tower or a high building behind POI 1 in the viewing field, the processor is configured for, in the event that POI 1 positioned closest to the centre of the field of view of the user within the first distance threshold obstructs the view of a lower part of POI 3; POI 3 however having a height that is larger than the height of POI 1, select POI 3, provided that the determined head pitch is larger than a predetermined pitch threshold, e.g. 15°); and determining the FoV of the user based on the determined vertical FoV and the determined horizontal FoV (see at least FIGS. 2-3 and paragraphs 134-139. See also at least paragraphs 147-155. See also at least FIG. 6 and paragraphs 181-193 regarding personal navigation system has identified the historical POI 1 and POI 2 to reside within the field of view and inside the first distance threshold.
The system has further determined POI 1 to be closest to the current centre of the field of view, and therefore emits sound with spoken information on POI 1 to the ears of the user. Preferably, POIs higher than a predetermined height threshold and with a distance to the user that is larger than a predetermined second distance threshold that is larger than the first distance threshold cannot be selected. In this way, the larger viewing range of tall POIs is taken into account, and the user can control the personal navigation system to select a high POI, e.g. a tower, a high rise building, etc., located behind another POI by looking up at the higher POI. Still, tall POIs located outside the larger viewing range of the user cannot be selected. Thus, if the user looks upward to have a look at the top of POI 3, e.g. a tower or a high building behind POI 1 in the viewing field, the processor is configured for, in the event that POI 1 positioned closest to the centre of the field of view of the user within the first distance threshold obstructs the view of a lower part of POI 3; POI 3 however having a height that is larger than the height of POI 1, select POI 3, provided that the determined head pitch is larger than a predetermined pitch threshold, e.g. 15°).
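The head-orientation logic described for CHRISTENSEN (yaw measured against a horizontal reference direction to place the horizontal field of view, and a pitch threshold, e.g. 15 degrees, to select a taller PoI obscured by a nearer one) can be sketched as follows. The 90-degree FoV width and the function names are illustrative assumptions, not values from the reference:

```python
def in_horizontal_fov(head_yaw_deg, poi_bearing_deg, fov_deg=90.0):
    """True if the bearing to a PoI lies within a horizontal FoV
    centred on the head yaw (both measured from, e.g., True North)."""
    # Signed smallest angle between bearing and yaw, wrapped to [-180, 180)
    diff = (poi_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

def select_taller_poi(head_pitch_deg, pitch_threshold_deg=15.0):
    """CHRISTENSEN-style rule: a taller PoI behind a nearer one is
    selected only when head pitch exceeds the predetermined threshold."""
    return head_pitch_deg > pitch_threshold_deg
```

The wraparound in `in_horizontal_fov` matters when the user faces near North: a head yaw of 350° and a PoI bearing of 10° differ by only 20°, so the PoI stays inside a 90° FoV.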
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of CHRISTENSEN which teaches determining at least one of a roll angle, a pitch angle, and a yaw angle associated with a head orientation of the user, using the one or more sensors; determining a vertical FoV and a horizontal FoV of the user based on the at least one of the roll angle, the pitch angle, and the yaw angle; and determining the FoV of the user based on the determined vertical FoV and the determined horizontal FoV with the system of Moore, as modified by SEIDLER, as both systems are directed to a system and method for providing the navigation guidance based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of determining at least one of a roll angle, a pitch angle, and a yaw angle associated with a head orientation of the user, using the one or more sensors; determining a vertical FoV and a horizontal FoV of the user based on the at least one of the roll angle, the pitch angle, and the yaw angle; and determining the FoV of the user based on the determined vertical FoV and the determined horizontal FoV and would have predictably applied it to improve the system of Moore as modified by SEIDLER. 
As to claim 3, Moore, as modified by SEIDLER, does not explicitly teach wherein the information associated with each candidate PoI comprises at least one of a candidate object identity, a candidate object shape, a candidate object type, a candidate object name, a visible area information, a distance information, an angle information, and a location information, and wherein determining the priority of each candidate PoI from the list of candidate PoIs, based on one or more ranking parameters, comprises: determining a description of each candidate PoI from the list of candidate PoIs, based on at least one of a spatial context information associated with each candidate PoI, a category information associated with each candidate PoI, and a review information associated with each candidate PoI; or determining the priority of each candidate PoI from the list of candidate PoIs, based on the determined description and the one or more ranking parameters. However, CHRISTENSEN teaches wherein the information associated with each candidate PoI comprises at least one of a candidate object identity, a candidate object shape, a candidate object type, a candidate object name, a visible area information, a distance information, an angle information, and a location information (see at least FIG. 6 and paragraphs 181-207 regarding the first distance threshold may be dependent on the geographical position of the user. The personal navigation system 10 may provide the option that the user can select more than one POI within the user's field of view to be presented to the user by the system, and the user may specify the maximum number of POIs to be presented. Information on the relative positions of POI 1, POI 2, and POI 3 with relation to each other may be added by the processor, such as referring to the central POI, the POI immediately to the left of the central POI, etc.
In this way, the user is provided with spatial knowledge about the surroundings and the need to visually consult a display of the surroundings is minimized, making it easy and convenient for the user to navigate to geographical locations the user desires to see or visit. The illustrated personal navigation system 10 is equipped with a wireless antenna, transmitter, and receiver for communicating over a GSM mobile telephone network through an Internet gateway with a remote server on the Internet accommodating a database with information on POIs, including audio files with spoken information on some or all of the POIs); determining a description of each candidate PoI from the list of candidate PoIs, based on at least one of a spatial context information associated with each candidate PoI, a category information associated with each candidate PoI, and a review information associated with each candidate PoI (see at least FIG. 6 and paragraphs 181-207 regarding the first distance threshold may be dependent on the geographical position of the user. The personal navigation system 10 may provide the option that the user can select more than one POI within the user's field of view to be presented to the user by the system, and the user may specify the maximum number of POIs to be presented. Information on the relative positions of POI 1, POI 2, and POI 3 with relation to each other may be added by the processor, such as referring to the central POI, the POI immediately to the left of the central POI, etc. The illustrated personal navigation system 10 is equipped with a wireless antenna, transmitter, and receiver for communicating over a GSM mobile telephone network through an Internet gateway with a remote server on the Internet accommodating a database with information on POIs, including audio files with spoken information on some or all of the POIs.
The personal navigation system 10 may transmit the current position of the system to the remote server and request information on nearby POIs, preferably of one or more selected categories, and preferably sequenced in accordance with a selected rule of priority, such as proximity, popularity, user ratings, professional ratings, cost of entrance, opening hours with relation to actual time, etc. A maximum number of POIs may also be specified); and determining the priority of each candidate PoI from the list of candidate PoIs, based on the determined description and the one or more ranking parameters (see at least FIG. 6 and paragraphs 181-207 regarding the personal navigation system 10 may provide the option that the user can select more than one POI within the user's field of view to be presented to the user by the system, and the user may specify the maximum number of POIs to be presented. If this option is selected in FIG. 6, the processor controls the sound generator to output spoken information on POI 1, POI 2, and POI 3 sequentially, e.g. in the order of proximity to the user. The personal navigation system 10 may transmit the current position of the system to the remote server and request information on nearby POIs, preferably of one or more selected categories, and preferably sequenced in accordance with a selected rule of priority, such as proximity, popularity, user ratings, professional ratings, cost of entrance, opening hours with relation to actual time, etc. A maximum number of POIs may also be specified).
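The "selected rule of priority" with a user-specified maximum can be pictured as a simple sort-and-cap step. The sketch below is illustrative only; the dictionary keys, field names, and defaults are hypothetical, not taken from any reference:

```python
def rank_pois(pois, rule="proximity", max_pois=3):
    """Sequence nearby POIs by a selected rule of priority and cap the
    result at a user-specified maximum number of POIs."""
    keys = {
        "proximity": lambda p: p["distance_m"],    # nearest first
        "popularity": lambda p: -p["popularity"],  # most popular first
        "user_rating": lambda p: -p["user_rating"],  # best rated first
    }
    return sorted(pois, key=keys[rule])[:max_pois]
```

Other rules the reference mentions (professional ratings, cost of entrance, opening hours relative to actual time) would slot into the same table.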
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of CHRISTENSEN which teaches wherein the information associated with each candidate PoI comprises at least one of a candidate object identity, a candidate object shape, a candidate object type, a candidate object name, a visible area information, a distance information, an angle information, and a location information, and wherein determining the priority of each candidate PoI from the list of candidate PoIs, based on one or more ranking parameters, comprises: determining a description of each candidate PoI from the list of candidate PoIs, based on at least one of a spatial context information associated with each candidate PoI, a category information associated with each candidate PoI, and a review information associated with each candidate PoI; and determining the priority of each candidate PoI from the list of candidate PoIs, based on the determined description and the one or more ranking parameters with the system of Moore, as modified by SEIDLER, as both systems are directed to a system and method for providing the navigation guidance based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the information associated with each candidate PoI comprises at least one of a candidate object identity, a candidate object shape, a candidate object type, a candidate object name, a visible area information, a distance information, an angle information, and a location information; determining a description of each candidate PoI from the list of candidate PoIs, based on at least one of a spatial context information associated with each candidate PoI, a category information associated with each candidate PoI, and a review information associated with each candidate PoI; and determining the priority of each candidate PoI from the list of candidate PoIs, based on the determined
description and the one or more ranking parameters and would have predictably applied it to improve the system of Moore as modified by SEIDLER.
As to claim 4, Moore, as modified by SEIDLER, does not explicitly teach receiving the list of candidate PoIs, wherein a destination point is associated with at least one candidate PoI from the list of candidate PoIs; or performing at least one of: ranking, based on the angle information, the list of candidate PoIs in a case that the destination point belongs to one or more visible areas; ranking, based on the angle information, the list of candidate PoIs in cases that: the destination point does not belong to the one or more visible areas, the destination point does not belong to a front side of the user, and the destination point belongs to the one or more visible areas based on identifying that the user turns towards a destination angle; ranking the list of candidate PoIs based on the distance information, the visible area information, and the angle information in a case that the destination point does not belong to the one or more visible areas and the destination point belongs to the front side of the user; and ranking the list of candidate PoIs based on the distance information, the visible area information, and the angle information in cases that: the destination point does not belong to the one or more visible areas, the destination point does not belong to the front side of the user, and the destination point does not belong to the one or more visible areas based on identifying that the user turns towards the destination angle.
However, CHRISTENSEN teaches receiving the list of candidate PoIs, wherein a destination point is associated with at least one candidate PoI from the list of candidate PoIs (see at least paragraphs 181-207 regarding the processor may be configured for determining directions towards each of POI 1, POI 2, and POI 3 with relation to the determined geographical position and head yaw of the user, selecting pairs of filters with Head-Related Transfer Functions corresponding to the determined directions, and controlling the sound generator for sequentially outputting audio signals with spoken information on each of POI 1, POI 2, and POI 3 in sequence through the respective selected pairs of filters so that the user hears spoken information on each of POI 1, POI 2, and POI 3 from the respective directions towards the respective POIs. During or after the narrated presentation, the user may request the personal navigation system to guide the user to a selected geographical position, such as a new site with one or more POIs along a guided tour. 
The processor will then determine a direction towards a selected geographical destination and guide the user towards that geographical destination as previously described); and performing at least one of: ranking, based on the angle information, the list of candidate PoIs in a case that the destination point belongs to one or more visible areas; ranking, based on the angle information, the list of candidate PoIs in cases that: the destination point does not belong to the one or more visible areas, the destination point does not belong to a front side of the user, and the destination point belongs to the one or more visible areas based on identifying that the user turns towards a destination angle; ranking the list of candidate PoIs based on the distance information, the visible area information, and the angle information in a case that the destination point does not belong to the one or more visible areas and the destination point belongs to the front side of the user; and ranking the list of candidate PoIs based on the distance information, the visible area information, and the angle information in cases that: the destination point does not belong to the one or more visible areas, the destination point does not belong to the front side of the user, and the destination point does not belong to the one or more visible areas based on identifying that the user turns towards the destination angle (see at least FIGS. 6-10 and paragraphs 181-207 regarding during or after the narrated presentation, the user may request the personal navigation system to guide the user to a selected geographical position, such as a new site with one or more POIs along a guided tour. The processor will then determine a direction towards a selected geographical destination and guide the user towards that geographical destination as previously described. 
The personal navigation system 10 may transmit the current position of the system to the remote server and request information on nearby POIs, preferably of one or more selected categories, and preferably sequenced in accordance with a selected rule of priority, such as proximity, popularity, user ratings, professional ratings, cost of entrance, opening hours with relation to actual time, etc. A maximum number of POIs may also be specified. In the example of FIG. 7, the user has taken the metro (metro station indicated by an arrow) to the town square "King's New Square" in Copenhagen. The user has walked from the metro station to "King's New Square" and is presently looking at the Royal Danish Theatre located south of the square as indicated in FIG. 7. The user has requested information on POIs within sight and made available on the Internet by Wikipedia. The available POIs at "King's New Square" are indicated by capital letters W in square frames. The inner dashed circle centred at the user indicates the first distance threshold, and the outer dashed circle indicates the second distance threshold. Short text introductions to the available POIs are acquired from Wikipedia by the personal navigation system and converted into speech by the text-to-speech converter of the smart phone. The user is looking at the Royal Danish Theatre, whereby the Royal Danish Theatre is located right at the centre of the field of view of the user, and in response the processor of the personal navigation system selects the corresponding record provided by Wikipedia, converts the text into speech with the text-to-speech converter, and controls the sound generator to output the speech to the loudspeakers of the hearing instrument 12).
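Claim 4's four ranking cases reduce to a small decision table over three conditions. The sketch below mirrors the claim language only; the function name and boolean inputs are illustrative, not drawn from any reference:

```python
def choose_ranking(dest_visible, dest_in_front, visible_after_turn):
    """Return which parameters the list of candidate PoIs is ranked by,
    per the four cases recited in claim 4."""
    if dest_visible:
        return ("angle",)  # case 1: destination in a visible area
    if not dest_in_front and visible_after_turn:
        return ("angle",)  # case 2: becomes visible when the user turns
    if dest_in_front:
        # case 3: not visible but in front of the user
        return ("distance", "visible_area", "angle")
    # case 4: not visible, not in front, and not visible after turning
    return ("distance", "visible_area", "angle")
```

Cases 1 and 2 rank by angle information alone; cases 3 and 4 rank by distance, visible-area, and angle information together.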
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of CHRISTENSEN which teaches receiving the list of candidate PoIs, wherein a destination point is associated with at least one candidate PoI from the list of candidate PoIs; and performing at least one of: ranking, based on the angle information, the list of candidate PoIs in a case that the destination point belongs to one or more visible areas; ranking, based on the angle information, the list of candidate PoIs in cases that: the destination point does not belong to the one or more visible areas, the destination point does not belong to a front side of the user, and the destination point belongs to the one or more visible areas based on identifying that the user turns towards a destination angle; ranking the list of candidate PoIs based on the distance information, the visible area information, and the angle information in a case that the destination point does not belong to the one or more visible areas and the destination point belongs to the front side of the user; and ranking the list of candidate PoIs based on the distance information, the visible area information, and the angle information in cases that: the destination point does not belong to the one or more visible areas, the destination point does not belong to the front side of the user, and the destination point does not belong to the one or more visible areas based on identifying that the user turns towards the destination angle with the system of Moore, as modified by SEIDLER, as both systems are directed to a system and method for providing the navigation guidance based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of receiving the list of candidate PoIs, wherein a destination point is associated with at least one candidate PoI from the list of candidate PoIs; and performing at least one of: ranking, based 
on the angle information, the list of candidate PoIs in a case that the destination point belongs to one or more visible areas; ranking, based on the angle information, the list of candidate PoIs in cases that: the destination point does not belong to the one or more visible areas, the destination point does not belong to a front side of the user, and the destination point belongs to the one or more visible areas based on identifying that the user turns towards a destination angle; ranking the list of candidate PoIs based on the distance information, the visible area information, and the angle information in a case that the destination point does not belong to the one or more visible areas and the destination point belongs to the front side of the user; and ranking the list of candidate PoIs based on the distance information, the visible area information, and the angle information in cases that: the destination point does not belong to the one or more visible areas, the destination point does not belong to the front side of the user, and the destination point does not belong to the one or more visible areas based on identifying that the user turns towards the destination angle and would have predictably applied it to improve the system of Moore as modified by SEIDLER.
As to claim 6, Moore does not explicitly teach acquiring elevation data, associated with the location and the FoV of the user; applying at least one image processing mechanism to enhance the acquired elevation data; combining the enhanced elevation data with relevant data associated with the location of the user, wherein the relevant data is associated with predefined information of a map associated with the location; or generating the customized DEM based on a combination of the enhanced elevation data and the relevant data, wherein the generated customized DEM comprises a low-level elevation information and characteristics of physical features.
However, SEIDLER teaches acquiring elevation data, associated with the location and the FoV of the user (see at least paragraph 7 regarding obtaining a location of one or more observers within a target area; applying elevation data to a plurality of grid squares within the target area; based on the elevation data, determining a field-of-view (FOV) for each of the one or more observers, wherein the FOV includes one or more of the plurality of grid squares that are visible from the location of the observer. See also at least paragraphs 15-19. See also at least paragraphs 37-39); applying at least one image processing mechanism to enhance the acquired elevation data (see at least paragraphs 15-19 regarding terrain/elevation data 112 may be obtained from a geographic information system (GIS). See also at least paragraphs 37-39); combining the enhanced elevation data with relevant data associated with the location of the user, wherein the relevant data is associated with predefined information of a map associated with the location (see at least paragraph 7 regarding obtaining a location of one or more observers within a target area; applying elevation data to a plurality of grid squares within the target area; based on the elevation data, determining a field-of-view (FOV) for each of the one or more observers, wherein the FOV includes one or more of the plurality of grid squares that are visible from the location of the observer. See also at least paragraphs 15-19. See also at least paragraphs 37-39 regarding uses an elevation value for each cell (e.g., grid square) of a digital elevation model to determine visibility to or from a particular location (i.e., another cell or grid square). 
For example, to determine whether a target object (e.g., a subject vehicle) of at least a specific height can be seen from an observer having a known height from the ground, two steps are performed); and generating the customized DEM based on a combination of the enhanced elevation data and the relevant data, wherein the generated customized DEM comprises a low-level elevation information and characteristics of physical features (see at least paragraphs 15-19 regarding terrain/elevation data 112 may be obtained from a geographic information system (GIS). The information from GIS may include digital maps of a target area that include various terrain features (e.g., streams, lakes, rivers, ponds, forests, and other natural land formations), man-made features (e.g., highways, roads, streets, buildings, and other types of structures), ground conditions, including unpassable areas, and elevation data for every position (e.g., measured in meters above sea-level (MASL)). See also at least paragraphs 37-39 regarding a typical viewshed analysis uses an elevation value for each cell (e.g., grid square) of a digital elevation model to determine visibility to or from a particular location (i.e., another cell or grid square)). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of SEIDLER which teaches acquiring elevation data, associated with the location and the FoV of the user; applying at least one image processing mechanism to enhance the acquired elevation data; combining the enhanced elevation data with relevant data associated with the location of the user, wherein the relevant data is associated with predefined information of a map associated with the location; and generating the customized DEM based on a combination of the enhanced elevation data and the relevant data, wherein the generated customized DEM comprises a low-level elevation information and characteristics of physical features with the system of Moore as both systems are directed to a system and method for providing the navigation guidance based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of acquiring elevation data, associated with the location and the FoV of the user; applying at least one image processing mechanism to enhance the acquired elevation data; combining the enhanced elevation data with relevant data associated with the location of the user, wherein the relevant data is associated with predefined information of a map associated with the location; and generating the customized DEM based on a combination of the enhanced elevation data and the relevant data, wherein the generated customized DEM comprises a low-level elevation information and characteristics of physical features and would have predictably applied it to improve the system of Moore. 
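SEIDLER's grid-square viewshed idea, in which a high point in the terrain obscures everything behind it, can be sketched along a single ray of elevation cells. This is a minimal 1-D illustration under assumed inputs; the reference's analysis runs over a full 2-D digital elevation model:

```python
def visible_cells(elevations, observer_height=1.7):
    """1-D viewshed along a ray of grid-cell elevations: a cell is visible
    when the line-of-sight slope from the observer's eye to that cell is at
    least as steep as every slope encountered before it; a high point in
    between obscures the cells behind it."""
    eye = elevations[0] + observer_height
    visible = [True]  # the observer's own cell
    max_slope = float("-inf")
    for d, z in enumerate(elevations[1:], start=1):
        slope = (z - eye) / d
        visible.append(slope >= max_slope)
        max_slope = max(max_slope, slope)
    return visible
```

A ridge cell (like SEIDLER's high point 602) stays visible itself while hiding the cells past it, matching the obscured-distance behavior the rejection quotes.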
As to claim 7, Moore does not explicitly teach wherein the one or more visible areas and the one or more non-visible areas are determined by using the generated customized DEM and user-information, wherein the user-information comprises at least one of the location of the user, a height, a roll angle, a pitch angle, a yaw angle, a vertical FoV, and a horizontal FoV. However, such matter is taught by SEIDLER (see at least paragraphs 7-10 regarding causing the processor to: obtain a location of one or more observers within a target area, the target area comprised of a plurality of grid squares; apply elevation data to the plurality of grid squares within the target area; determine a field-of-view (FOV) for each of the one or more observers, the FOV based on the elevation data, wherein the FOV includes one or more of the plurality of grid squares that are visible from the location of the observer. See also at least paragraphs 15-19 regarding FOV determination module 102 uses the information from observer data 110 and terrain/elevation data 112 to determine the FOV for each of the one or more observers within the target area. See also at least paragraphs 37-39. See also at least paragraphs 53-55 regarding an example embodiment of using elevation data to determine a FOV from an observer location is illustrated. In this embodiment, an observer 600 has a first height H1 above the ground and is located a distance D away from a target object 610. For example, target object 610 may be subject vehicle 120, described above. Target object 610 has a second height H2 above the ground at its location. The elevation data is applied to the terrain within the target area that includes observer 600 and target object 610. The slope of at least one portion of the terrain is higher than the slope of target object 610. As a result, the portion of the terrain in front of observer 600 extending to a visibility distance Dvis can be seen from the observer’s location. 
However, because of a high point 602 in the terrain, the portion of the terrain past high point 602, illustrated as an obscured distance DOBS, is not visible to observer 600. As a result, target object 610, located at second height H2 above the ground, is within obscured distance DOBS and is therefore not within the FOV of observer 600). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of SEIDLER which teaches wherein the one or more visible areas and the one or more non-visible areas are determined by using the generated customized DEM and user-information, wherein the user-information comprises at least one of the location of the user, a height, a roll angle, a pitch angle, a yaw angle, a vertical FoV, and a horizontal FoV with the system of Moore as both systems are directed to a system and method for providing the navigation guidance based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of having wherein the one or more visible areas and the one or more non-visible areas are determined by using the generated customized DEM and user-information, wherein the user-information comprises at least one of the location of the user, a height, a roll angle, a pitch angle, a yaw angle, a vertical FoV, and a horizontal FoV and would have predictably applied it to improve the system of Moore.
As to claim 8, Moore, as modified by SEIDLER, does not explicitly teach generating one or more structured navigation instruction data for information associated with each candidate PoI; generating one or more navigation instruction texts from the generated one or more structured navigation instruction data; or converting the generated one or more navigation instruction texts into navigation instruction audio to generate the response for the voice-based navigation.
However, CHRISTENSEN teaches generating one or more structured navigation instruction data for information associated with each candidate PoI (see at least paragraphs 22-26 regarding the user may also request the personal navigation system to guide the user to a selected geographical position, such as the next interesting location on a guided tour. Thus, preferably the processor is also configured for determining a direction towards a selected geographical destination with relation to the determined geographical position and head yaw of the user, and controlling the sound generator to output audio signals guiding the user); generating one or more navigation instruction texts from the generated one or more structured navigation instruction data (see at least paragraphs 22-26 regarding the user may also request the personal navigation system to guide the user to a selected geographical position, such as the next interesting location on a guided tour. Thus, preferably the processor is also configured for determining a direction towards a selected geographical destination with relation to the determined geographical position and head yaw of the user, and controlling the sound generator to output audio signals guiding the user. See also at least paragraphs 181-207); and converting the generated one or more navigation instruction texts into navigation instruction audio to generate the response for the voice-based navigation (see at least paragraphs 22-26 regarding the user may also request the personal navigation system to guide the user to a selected geographical position, such as the next interesting location on a guided tour. 
Thus, preferably the processor is also configured for determining a direction towards a selected geographical destination with relation to the determined geographical position and head yaw of the user, and controlling the sound generator to output audio signals guiding the user, and selecting a pair of filters with a Head-Related Transfer Function corresponding to the determined direction towards the selected geographical destination so that the user perceives to hear sound arriving from a sound source located in the determined direction. See also at least paragraphs 181-207 regarding short text introductions to the available POIs are acquired from Wikipedia by the personal navigation system and converted into speech by the text-to-speech converter of the smart phone. The user is looking at the Royal Danish Theatre whereby the Royal Danish Theatre is located right at the centre of the field of view of the user and in response the processor of the personal navigation system selects the corresponding record provided by Wikipedia, converts the text into speech with the text-to-speech converter, and controls the sound generator to output the speech to the loudspeakers of the hearing instrument 12).
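The claimed pipeline of structured instruction data, then instruction text, then audio can be illustrated by its middle step. The record fields and wording below are hypothetical placeholders, not from the claims or references:

```python
def instruction_text(instr):
    """Render one structured navigation-instruction record into a sentence
    that a text-to-speech converter could then turn into instruction audio."""
    return (f"{instr['poi_name']} is {instr['distance_m']} metres "
            f"{instr['direction']} of you.")
```

A final step would hand the text to a platform text-to-speech converter, as the reference's smart phone does with the Wikipedia introductions.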
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of CHRISTENSEN which teaches generating one or more structured navigation instruction data for information associated with each candidate PoI; generating one or more navigation instruction texts from the generated one or more structured navigation instruction data; and converting the generated one or more navigation instruction texts into navigation instruction audio to generate the response for the voice-based navigation with the system of Moore, as modified by SEIDLER, as both systems are directed to a system and method for providing the navigation guidance based on the sensor data, and one of ordinary skill in the art would have recognized the established utility of generating one or more structured navigation instruction data for information associated with each candidate PoI; generating one or more navigation instruction texts from the generated one or more structured navigation instruction data; and converting the generated one or more navigation instruction texts into navigation instruction audio to generate the response for the voice-based navigation and would have predictably applied it to improve the system of Moore as modified by SEIDLER.
As to claim 11, Examiner notes claim 11 recites similar limitations to claim 1 and is rejected under the same rationale.
As to claim 12, Examiner notes claim 12 recites similar limitations to claim 2 and is rejected under the same rationale.
As to claim 13, Examiner notes claim 13 recites similar limitations to claim 3 and is rejected under the same rationale.
As to claim 14, Examiner notes claim 14 recites similar limitations to claim 4 and is rejected under the same rationale.
As to claim 16, Examiner notes claim 16 recites similar limitations to claim 6 and is rejected under the same rationale.
As to claim 17, Examiner notes claim 17 recites similar limitations to claim 7 and is rejected under the same rationale.
As to claim 18, Examiner notes claim 18 recites similar limitations to claim 8 and is rejected under the same rationale.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Moore et al., US 2016/0033280 A1, hereinafter referred to as Moore, in view of SEIDLER, EP 3722750 A2, hereinafter referred to as SEIDLER, in view of CHRISTENSEN, US 2014/0025287 A1, hereinafter referred to as CHRISTENSEN, and further in view of Holsinger, US 2011/0184639 A1, hereinafter referred to as Holsinger.
As to claim 9, Moore, as modified by SEIDLER and CHRISTENSEN, does not explicitly teach performing a part of speech (POS) tagging for the information associated with each candidate PoI, wherein the POS tagging indicates at least one of a noun information, a verb information, and an adjective information; determining a list of words from the POS tagging; or generating the one or more structured navigation instruction data based on the determined list of words. However, Holsinger teaches performing a part of speech (POS) tagging for the information associated with each candidate PoI (see at least paragraphs 41-46. See also at least paragraphs 56-65 regarding the TTS function 132 identifies the parts-of-speech components of the preferred name, including noun, adjectives, prepositions, articles and so on at step 704.
In one embodiment, the TTS function 132 parses the preferred name in its native language into the parts-of-speech components using the Brill Tagger algorithm for tagging parts-of-speech, using the CYK (Cocke-Younger-Kasami) algorithm, or some other method known to one skilled in the art), wherein the POS tagging indicates at least one of a noun information, a verb information, and an adjective information (see at least paragraphs 56-65, in which the output of the algorithm identifies the parts-of-speech components of the preferred name. For the preferred name of "black skyscraper", the Brill Tagger algorithm identifies parts-of-speech components of noun of "skyscraper" and adjective "black"; for the preferred name of "big green lake", the output of the parsing algorithm is noun of "lake" and adjectives of "big" and "green"; for the preferred name of "pink building with a fountain", the output of the parsing algorithm is noun of "building", adjective of "pink", preposition of "with", article of "a", and noun of "fountain"); determining a list of words from the POS tagging (see at least paragraphs 56-65, in which the output of the algorithm identifies the parts-of-speech components of the preferred name. For the preferred name of "black skyscraper", the Brill Tagger algorithm identifies parts-of-speech components of noun of "skyscraper" and adjective "black"; for the preferred name of "big green lake", the output of the parsing algorithm is noun of "lake" and adjectives of "big" and "green"; for the preferred name of "pink building with a fountain", the output of the parsing algorithm is noun of "building", adjective of "pink", preposition of "with", article of "a", and noun of "fountain". The TTS function 132 identifies the parts-of-speech components of the preferred name by obtaining the parts-of-speech components of the preferred name directly from the geographic database 116.
In this embodiment, the preferred name is provided as identified parts of speech including noun, adjective, preposition, article and so on in the preferred name data record 602. For example, the preferred name of "big green lake" is provided in the data record as noun of "lake" and adjectives or descriptors of "big" and "green". For the preferred name of "black skyscraper", the preferred name record 602 includes data that identifies the noun of "skyscraper" and adjective "black".); and generating the one or more structured navigation instruction data based on the determined list of words (see at least paragraphs 56-65, in which the TTS function 132 converts the parts-of-speech components of the preferred name in the native language into target language text. The TTS function 132 applies transformational and grammar rules of the target language to create the preferred name in the target language text. The noun and adjective components of the preferred name of the native language are translated into the target language. For example, the noun "skyscraper" in English is translated into "Wolkenkratzer" in German, and the adjective "black" in English is translated into "schwarze" in German. The translated parts-of-speech components are arranged into text following the grammar rules of the target language; for example, the preferred name in the target language text is "schwarze Wolkenkratzer". The TTS function 132 allows the navigation system 100 to provide enhanced guidance messages using environmental cues and features readily visible in multiple languages without having to include different language versions of the preferred name in the geographic database 116. This feature is useful to tourists who do not understand the native language. By providing enhanced guidance messages in the user's language with reference to features visible in the user's environment, the user will be able to better follow the route with less confusion.
Although the above TTS function 132 typically provides the preferred name in the target language as speech, in another embodiment, the preferred name in the target language may be provided as text on the display of the navigation system. See also at least Claim 1).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Holsinger, which teaches performing a part of speech (POS) tagging for the information associated with each candidate PoI, wherein the POS tagging indicates at least one of a noun information, a verb information, and an adjective information; determining a list of words from the POS tagging; and generating the one or more structured navigation instruction data based on the determined list of words, with the system of Moore, as modified by SEIDLER and CHRISTENSEN. Both systems are directed to a system and method for operating a navigation system to provide navigation guidance, and one of ordinary skill in the art would have recognized the established utility of these features and would have predictably applied them to improve the system of Moore as modified by SEIDLER and CHRISTENSEN.

As to claim 19, Examiner notes claim 19 recites similar limitations to claim 9 and is rejected under the same rationale.

Claim(s) 10 and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Moore et al., US 2016/0033280 A1, hereinafter referred to as Moore, in view of SEIDLER, EP 3722750 A2, hereinafter referred to as SEIDLER, in view of CHRISTENSEN, US 2014/0025287 A1, hereinafter referred to as CHRISTENSEN, in view of Holsinger, US 2011/0184639 A1, hereinafter referred to as Holsinger, and further in view of HUANG et al., CN 116127046 A, hereinafter referred to as HUANG.

As to claim 10, Moore, as modified by SEIDLER and CHRISTENSEN, does not explicitly teach applying grammar and language rules on the one or more determined templates to generate the one or more navigation instruction texts. However, such matter is taught by Holsinger (see at least paragraphs 56-65, in which the TTS function 132 applies transformational and grammar rules of the target language to create the preferred name in the target language text. The noun and adjective components of the preferred name of the native language are translated into the target language. For example, the noun "skyscraper" in English is translated into "Wolkenkratzer" in German, and the adjective "black" in English is translated into "schwarze" in German. The translated parts-of-speech components are arranged into text following the grammar rules of the target language; for example, the preferred name in the target language text is "schwarze Wolkenkratzer". The TTS function 132 matches the target language text of the preferred name to phonemes for the target language. In one embodiment, the phonemes for the target language are stored in a database associated with the navigation system 100. For example, each word in the target language text is matched to phonemes for German stored in a database.
In one embodiment, the TTS function 132 provides a SAMPA (Speech Assessment Methods Phonetic Alphabet) representation of the preferred name in the target language, providing a computer-readable phonetic script in the target language which may be used to provide the guidance message with the preferred name in the target language through the speaker of the user interface. The TTS function 132 allows the navigation system 100 to provide enhanced guidance messages using environmental cues and features readily visible in multiple languages without having to include different language versions of the preferred name in the geographic database 116. See also at least Claim 1 regarding providing the guidance message including the target language text).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of Holsinger, which teaches applying grammar and language rules on the one or more determined templates to generate the one or more navigation instruction texts, with the system of Moore, as modified by SEIDLER and CHRISTENSEN. Both systems are directed to a system and method for operating a navigation system to provide navigation guidance, and one of ordinary skill in the art would have recognized the established utility of this feature and would have predictably applied it to improve the system of Moore as modified by SEIDLER and CHRISTENSEN.
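Holsinger's tag-translate-arrange flow described above can be sketched as follows. This is a minimal illustration only: the dictionary lookup stands in for a real tagger such as the Brill Tagger, the lexicons and function names are assumptions, and the single adjectives-before-noun rule stands in for a full grammar of the target language.

```python
# Toy lexicons; a real system would use a trained POS tagger (e.g., Brill)
# and a full translation dictionary. All names here are illustrative.
POS_LEXICON = {"black": "ADJ", "big": "ADJ", "green": "ADJ",
               "skyscraper": "NOUN", "lake": "NOUN"}
EN_TO_DE = {"black": "schwarze", "skyscraper": "Wolkenkratzer"}

def tag_preferred_name(name):
    """Tag each word of a PoI's preferred name with its part of speech."""
    return [(w, POS_LEXICON.get(w.lower(), "UNK")) for w in name.split()]

def render_in_german(tagged):
    """Translate the tagged components and arrange them under a simple
    German grammar rule: adjectives precede the (capitalized) noun."""
    adjs = [EN_TO_DE[w] for w, t in tagged if t == "ADJ"]
    nouns = [EN_TO_DE[w] for w, t in tagged if t == "NOUN"]
    return " ".join(adjs + nouns)

tagged = tag_preferred_name("black skyscraper")  # [("black", "ADJ"), ("skyscraper", "NOUN")]
text = render_in_german(tagged)                  # "schwarze Wolkenkratzer"
```

The tagged word list corresponds to the "list of words" of claim 9, and the rendered text to the grammar-rule application of claim 10.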
Moore, as modified by SEIDLER, CHRISTENSEN, and Holsinger, does not explicitly teach determining one or more templates, from a pre-defined set of templates, for the generated one or more structured navigation instruction data; determining relevant data elements from the generated one or more structured navigation instruction data as per a requirement of the one or more determined templates; or inserting the determined relevant data elements into the one or more determined templates.

However, HUANG teaches determining one or more templates, from a pre-defined set of templates, for the generated one or more structured navigation instruction data (see at least paragraphs 48-59 regarding constructing a second training set based on the user preference ranking between the same user input text and different candidate outputs and the preset template set); determining relevant data elements from the generated one or more structured navigation instruction data as per a requirement of the one or more determined templates (see at least paragraphs 48-59, in which the preset template can be specifically expressed as: "I want to find [String] - find this place for you, located at No. X, X Street, the detailed information of the business is as follows: GetInfo(POIName)", "Navigate to [String] - FindPOI(POIName), located at No. X, X Street, there are N routes to here, under the current traffic conditions, the fastest route is as follows: Navi(POIName)"; that is, the preset template is used to indicate what style of text fragment corresponds to what kind of output text containing interface call instructions, where "String" is used to indicate that the corresponding part can be filled with any string); and inserting the determined relevant data elements into the one or more determined templates (see at least paragraphs 48-59 regarding the same preset templates. That is, first, for each user input text, a sample pair is constructed between the user input text and each candidate output, and these sample pairs are sorted according to the user preference ranking of the candidate output in each sample pair; then, the preset templates (the preset template set contains multiple preset templates) that record the correspondence between the input text and the corresponding interface call instructions are combined to jointly construct the second training set, so that by using the second training set constructed in this way, the model can learn which results are more in line with the user's actual needs).
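The preset-template mechanism HUANG is cited for, selecting a template that matches the input and inserting the relevant data elements into its slots, might be sketched as below. The patterns and template strings loosely follow HUANG's "Navigate to [String]" example; all identifiers are illustrative, not HUANG's actual implementation.

```python
import re

# Hypothetical preset-template set: each entry pairs a pattern over the user
# input with an output template whose slot is filled with the matched PoI name.
PRESET_TEMPLATES = [
    (re.compile(r"navigate to (?P<poi>.+)", re.IGNORECASE),
     "FindPOI({poi}); the fastest route is as follows: Navi({poi})"),
    (re.compile(r"i want to find (?P<poi>.+)", re.IGNORECASE),
     "Found this place for you; details: GetInfo({poi})"),
]

def fill_template(user_text):
    """Select the first matching preset template and insert the relevant
    data element (here, the PoI name) into its slot."""
    for pattern, template in PRESET_TEMPLATES:
        match = pattern.match(user_text)
        if match:
            return template.format(poi=match.group("poi"))
    return None  # no preset template applies

result = fill_template("Navigate to Central Station")
```

The `[String]` placeholder of HUANG's templates corresponds to the named capture group, and the interface-call strings (`FindPOI`, `Navi`, `GetInfo`) are carried through from the selected template.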
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the system of HUANG, which teaches determining one or more templates, from a pre-defined set of templates, for the generated one or more structured navigation instruction data; determining relevant data elements from the generated one or more structured navigation instruction data as per a requirement of the one or more determined templates; and inserting the determined relevant data elements into the one or more determined templates, with the system of Moore, as modified by SEIDLER, CHRISTENSEN, and Holsinger. Both systems are directed to a system and method for operating a navigation system to provide navigation guidance, and one of ordinary skill in the art would have recognized the established utility of these features and would have predictably applied them to improve the system of Moore as modified by SEIDLER, CHRISTENSEN, and Holsinger.

As to claim 20, Examiner notes claim 20 recites similar limitations to claim 10 and is rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Vianello (US 20230143198 A1), regarding a system for determining a view parameter for the location; and Musabji et al. (US 20120059720 A1), regarding a system for collecting images and providing navigation features using the images.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE S. PARK whose telephone number is (571)272-3151. The examiner can normally be reached Mon-Thurs 9:00AM-5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne M ANTONUCCI can be reached at (313)446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.S.P./Examiner, Art Unit 3666 /ANNE MARIE ANTONUCCI/Supervisory Patent Examiner, Art Unit 3666

Prosecution Timeline

Apr 01, 2024
Application Filed
Sep 05, 2025
Non-Final Rejection — §103, §112
Dec 11, 2025
Response Filed
Mar 27, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600384
MODEL HYPERPARAMETER ADJUSTMENT USING VEHICLE DRIVING CONTEXT CLASSIFICATION
2y 5m to grant Granted Apr 14, 2026
Patent 12596367
METHOD FOR THE SEMI-AUTOMATED GUIDANCE OF A MOTOR VEHICLE
2y 5m to grant Granted Apr 07, 2026
Patent 12594886
Vehicle and Control Method Thereof
2y 5m to grant Granted Apr 07, 2026
Patent 12576874
DRIVER SCORING SYSTEM AND METHOD USING OPTIMUM PATH DEVIATION
2y 5m to grant Granted Mar 17, 2026
Patent 12565194
PARKING ASSISTANCE APPARATUS AND PARKING ASSISTANCE METHOD
2y 5m to grant Granted Mar 03, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
97%
With Interview (+31.6%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
