DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is a Non-Final rejection on the merits of this application. Claims 1-20 are currently pending, as discussed below.
The Examiner notes that the rejections below are based on the broadest reasonable interpretation of the claim language. Applicant is kindly invited to consider each reference as a whole. References are to be interpreted as by one of ordinary skill in the art rather than as by a novice. See MPEP 2141. Therefore, the relevant inquiry when interpreting a reference is not what the reference expressly discloses on its face but what the reference would teach or suggest to one of ordinary skill in the art.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 04/29/2024 is being considered by the examiner.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “120” has been used to designate both “External Server” and “User computing device” in Fig. 1A. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1 – YES
Claim 1 is directed to a computer-implemented method; claim 9 is directed to a system; and claim 17 is directed to a non-transitory computer-readable medium. Therefore, claims 1, 9, and 17 each fall within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Analogous claims 9 and 17 are rejected for the same reasons as representative claim 1, as discussed herein. Claim 1 recites:
A computer-implemented method for progressively updating a navigation route, the method comprising:
receiving, by one or more processors from a user, an initial input that includes a coarse location as a first destination;
determining, by the one or more processors, an initial route including a first set of navigation instructions to the first destination;
initiating a navigation session and providing, by the one or more processors, the initial route to the user to allow the user to follow the first set of navigation instructions to the first destination;
during the navigation session, determining, by the one or more processors, a second destination that is a precise location and is different from the first destination;
determining, by the one or more processors, an updated route including a second set of navigation instructions from a current location of the user on the initial route to the second destination;
updating, by the one or more processors, a portion of the initial route to include the updated route; and
providing, by the one or more processors, the updated portion of the initial route to the user.
The examiner submits that the foregoing bolded limitations constitute a “mental process” and/or a “certain method of organizing human activity” because, under their broadest reasonable interpretation, they cover performance in the human mind and/or methods of organizing human activity associated with interactions between people. The claimed process is analogous to a passenger telling a driver, “drive toward a shopping district; I will let you know the exact store once we are nearby,” where the driver initially navigates toward the general area and, upon receiving further input (e.g., “go to the Target on 5th street”), adjusts the route and continues. For example, the “receiving...coarse location as a first destination” limitation recites a mental process and/or human activity equivalent to a person (e.g., a driver) receiving a general direction (e.g., from a passenger). The “determining…initial route…” limitation recites a mental process in which a human mentally or manually determines a route using a map or with the help of pen and paper. The “initiating a navigation…” step illustrates a method of organizing human activity because providing directions is a conventional interpersonal interaction (analogous to one person giving directions to another). The “during the navigation, determining…a second destination…” limitation illustrates a mental process and/or method of organizing human activity, e.g., a passenger changing his or her mind during a trip or providing a specific instruction to reach a certain location, where refining a destination is a human decision-making process. The “determining…an updated route…” limitation illustrates a mental process, as manually updating a route based on a current location and/or new input is a well-known human process. The “updating…a portion of the initial route…” limitation illustrates a mental process, as merging old and new directions is a human decision-making process.
Lastly, the “providing…the updated portion…” limitation illustrates a method of organizing human activity, as sharing updated directions is a human communication activity traditionally performed without computers.
The Examiner would also note MPEP 2106.04(a)(2)(III): the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas - the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. Here, the claimed determinations are a form of evaluation and judgment based on observation (e.g., of the user's input and location).
Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
A computer-implemented method for progressively updating a navigation route, the method comprising:
receiving, by one or more processors from a user, an initial input that includes a coarse location as a first destination;
determining, by the one or more processors, an initial route including a first set of navigation instructions to the first destination;
initiating a navigation session and providing, by the one or more processors, the initial route to the user to allow the user to follow the first set of navigation instructions to the first destination;
during the navigation session, determining, by the one or more processors, a second destination that is a precise location and is different from the first destination;
determining, by the one or more processors, an updated route including a second set of navigation instructions from a current location of the user on the initial route to the second destination;
updating, by the one or more processors, a portion of the initial route to include the updated route; and
providing, by the one or more processors, the updated portion of the initial route to the user.
For the following reason(s), the examiner submits that the above identified limitations do not integrate the above-noted abstract idea into a practical application.
The additional limitation of “one or more processors” merely describes how to generally “apply” the otherwise abstract idea in a generic or general-purpose computer environment, where the processor is recited as a generic processor performing a generic computer function. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component and merely automates the steps. The computer merely automates what humans have long done manually (i.e., providing and adjusting directions). As such, the claims amount to nothing more than applying an abstract idea using conventional computer technology.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is no more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, regarding the additional limitation of “one or more processors,” the examiner submits that the processor is recited at a high level of generality (i.e., as a generic computer component performing a generic calculation) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept. Likewise, as discussed above, the remaining additional limitations amount to insignificant extra-solution activity.
As explained, the additional elements are recited at a high level of generality to simply implement the abstract idea and are not themselves being technologically improved. See, e.g., MPEP § 2106.05; Alice Corp. v. CLS Bank Int’l, 573 U.S. 208, 223 (2014) (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention”); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016) (selecting information for collection, analysis, and display constitutes insignificant extra-solution activity); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1243-44, 120 USPQ2d 1844, 1855-57 (Fed. Cir. 2016) (generating a second menu from a first menu and sending the second menu to another location was performed by generic computer components). Hence, the claims are not patent eligible.
Dependent Claims
Dependent claims 2-8, 10-16 and 18-20 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or additional elements (i.e., further mental processes and/or methods of organizing human activity) that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-8, 10-16 and 18-20 are not patent eligible under the same rationale as provided in the rejection of claim 1.
As such, claims 1-20 are rejected under 35 U.S.C. § 101 as being drawn to an abstract idea without significantly more, and thus are ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3, 5-11, and 13-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kennewick et al. (US 2012/0101809 A1 hereinafter Kennewick) in view of Qi et al. (US 2023/0349708 A1 hereinafter Qi).
Regarding claim 1 (similarly claims 9 and 17), Kennewick teaches A computer-implemented method for progressively updating a navigation route (see at least [0008-0013]: when a voice destination entry includes a partial destination, a final destination may be successively refined over one or more subsequent voice destination entries), the method comprising:
receiving, by one or more processors from a user, an initial input that includes a coarse location as a first destination; (see at least Fig. 5 [0008-0110]: A user may generally approximate a destination, which may result in a route being calculated along a preferred route to the approximated destination. A user may provide partial destination inputs where the partial destination may be neighborhood, city region, or various other ways.)
determining, by the one or more processors, an initial route including a first set of navigation instructions to the first destination; (see at least Fig. 5 [0008-0110]: Alternatively, when at least one of the possible destinations can be identified unambiguously, while meeting the minimal confidence level, a positive indication may result in processing proceeding to another decisional operation 560 , which controls how a route will be calculated to the identified destination (e.g., a highest ranking entry in the weighted N-best list of destinations). The system may calculate a preliminary route to the identified partial destination.)
initiating a navigation session and providing, by the one or more processors, the initial route to the user to allow the user to follow the first set of navigation instructions to the first destination; (see at least Fig. 5 [0008-0110]: The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. The processing operations 510 - 570 may be iteratively repeated until the final destination can be identified through successive refinement using one or more multi-modal voice destination entries. for example, the route may be completed by extrapolating the route to the partial destination into a complete route to the final destination. Further, it will be apparent that voice destination entries may be successively refined into a final destination en route (e.g., the final destination may be successively refined as a user proceeds along a preliminary route), in advance (e.g., the user may choose to drive upon a complete route to a final destination being identified), or in other ways.)
during the navigation session, determining, by the one or more processors, a second destination that is a precise location and is different from the first destination; (see at least Fig. 5 [0008-0110]: The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. The processing operations 510 - 570 may be iteratively repeated until the final destination can be identified through successive refinement using one or more multi-modal voice destination entries. for example, the route may be completed by extrapolating the route to the partial destination into a complete route to the final destination. Further, it will be apparent that voice destination entries may be successively refined into a final destination en route (e.g., the final destination may be successively refined as a user proceeds along a preliminary route), in advance (e.g., the user may choose to drive upon a complete route to a final destination being identified), or in other ways.)
determining, by the one or more processors, an updated route including a second set of navigation instructions from a current location of the user on the initial route to the second destination; (see at least Fig. 5 [0008-0110]: The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. The processing operations 510 - 570 may be iteratively repeated until the final destination can be identified through successive refinement using one or more multi-modal voice destination entries. for example, the route may be completed by extrapolating the route to the partial destination into a complete route to the final destination. Further, it will be apparent that voice destination entries may be successively refined into a final destination en route (e.g., the final destination may be successively refined as a user proceeds along a preliminary route), in advance (e.g., the user may choose to drive upon a complete route to a final destination being identified), or in other ways.)
updating, by the one or more processors, a portion of the initial route to include the updated route; (see at least Fig. 5 [0008-0110]: The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. The processing operations 510 - 570 may be iteratively repeated until the final destination can be identified through successive refinement using one or more multi-modal voice destination entries. For example, the route may be completed by extrapolating the route to the partial destination into a complete route to the final destination. The calculated route may be dynamically adjusted or rerouted based on subsequent inputs, generated inferences, dynamic data, or various other sources of information (e.g., inferences may be generated, which may result in dynamic routing, in response to dynamic data relating to traffic conditions, detours, events, weather conditions, or user preferences, or other factors)); and
providing, by the one or more processors, the updated portion of the initial route to the user. (see at least Fig. 5 [0008-0110]: The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. The processing operations 510 - 570 may be iteratively repeated until the final destination can be identified through successive refinement using one or more multi-modal voice destination entries. for example, the route may be completed by extrapolating the route to the partial destination into a complete route to the final destination.)
It may be alleged that Kennewick does not explicitly teach initiating a navigation session and providing, by the one or more processors, the initial route to the user to allow the user to follow the first set of navigation instructions to the first destination.
Qi is directed to a navigation system that successively refines a final destination while en route. Qi teaches initiating a navigation session and providing, by the one or more processors, the initial route to the user to allow the user to follow the first set of navigation instructions to the first destination (see at least Figs. 2A-15 [0062-0251]: S103: The navigation system displays the initial navigation region and the initial navigation route. S104: The navigation system starts navigation based on the initial navigation route.).
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Kennewick’s navigation system and method to incorporate the technique of initiating a navigation session and providing, by the one or more processors, the initial route to the user to allow the user to follow the first set of navigation instructions to the first destination, as taught by Qi, with a reasonable expectation of success, in order to help reduce the quantity of human-machine interactions, reduce the time occupied by navigation before departure, and improve user experience (Qi [0006]).
Regarding claim 2 (similarly claims 10 and 18), the combination of Kennewick in view of Qi teaches The computer-implemented method of claim 1 (similarly claims 9 and 17), further comprising:
Kennewick further teaches determining, by the one or more processors, a clarification location along the initial route where (i) the user receives no prompt for a navigation instruction from the first set of navigation instructions and (ii) a current navigation instruction from the first set of navigation instructions corresponding to the clarification location is configured to lead the user to the second destination; (see at least Fig. 5 [0008-0110]: when at least one of the possible destinations can be identified unambiguously, while meeting the minimal confidence level, a positive indication may result in processing proceeding to another decisional operation 560 , which controls how a route will be calculated to the identified destination (e.g., a highest ranking entry in the weighted N-best list of destinations). The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. Thereafter, subsequent processing returns to operation 510 , where the voice user interface may await an additional voice destination input that refines the partial destination. Further, in various implementations, one or more system prompts may be generated to request the additional voice destination inputs (e.g., as the user approaches the partial destination and additional information will be needed to provide further routing, or as the user approaches a point in the preliminary route where distinct routes may be taken to different topographical subtiles or points within the partial destination, etc.).)
upon reaching the clarification location along the initial route, prompting, by the one or more processors, the user for clarification regarding the second destination; (see at least Fig. 5 [0008-0110]: when at least one of the possible destinations can be identified unambiguously, while meeting the minimal confidence level, a positive indication may result in processing proceeding to another decisional operation 560 , which controls how a route will be calculated to the identified destination (e.g., a highest ranking entry in the weighted N-best list of destinations). The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. Thereafter, subsequent processing returns to operation 510 , where the voice user interface may await an additional voice destination input that refines the partial destination. Further, in various implementations, one or more system prompts may be generated to request the additional voice destination inputs (e.g., as the user approaches the partial destination and additional information will be needed to provide further routing, or as the user approaches a point in the preliminary route where distinct routes may be taken to different topographical subtiles or points within the partial destination, etc.).); and
receiving, from the user, a clarification input that verifies the second destination. (see at least Fig. 5 [0008-0110]: when at least one of the possible destinations can be identified unambiguously, while meeting the minimal confidence level, a positive indication may result in processing proceeding to another decisional operation 560 , which controls how a route will be calculated to the identified destination (e.g., a highest ranking entry in the weighted N-best list of destinations). The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. Thereafter, subsequent processing returns to operation 510 , where the voice user interface may await an additional voice destination input that refines the partial destination. Further, in various implementations, one or more system prompts may be generated to request the additional voice destination inputs (e.g., as the user approaches the partial destination and additional information will be needed to provide further routing, or as the user approaches a point in the preliminary route where distinct routes may be taken to different topographical subtiles or points within the partial destination, etc.).)
Regarding claim 3 (similarly claims 11 and 19), the combination of Kennewick in view of Qi teaches The computer-implemented method of claim 1 (similarly claims 9 and 17), further comprising:
Kennewick further teaches parsing, by the one or more processors, the initial input of the user to determine a candidate second destination of a plurality of second destinations; (see at least Fig. 5 [0008-0110]: a user may provide a full or partial destination input using free form natural language, for example, including voice commands and/or multi-modal commands (e.g., an input may include an utterance of “I'm going here,” coupled with a touched point on a display). The full or partial destination may be specified in various ways, including by specific address, place name, person's name, business name, neighborhood, city, region, or various other ways (e.g., a voice destination entry may be provided in an exploratory manner, such as when a user wants to visit a museum, but has yet to decide which one to visit). The voice destination input may be parsed or otherwise analyzed using one or more dynamically adaptable recognition grammars, for example, as described above in reference to FIG. 3. For example, recognition grammars may be loaded, generated, extended, pruned, or otherwise adapted based on various factors, including a proximity to a user's point of presence (e.g., as the user moves from one area to another, the recognition grammar may be optimized based on a current location, a direction of travel, temporal constraints, etc.), a contextual history (e.g., as the user interacts with the voice user interface, the grammar may adapt based on dictionaries, keywords, concepts, or other information associated with other contexts, domains, devices, applications, etc.), or other factors, as will be apparent. 
As such, an operation 520 may include generating one or more interpretations of the voice destination input, which may be analyzed using various data sources in order to generate an N-best list of possible destinations (e.g., a navigation agent may query a directory, a voice search engine, or other components to identify one or more destinations that at least partially match criteria contained in the voice destination input).); and
upon reaching a clarification location along the initial route, prompting, by the one or more processors, the user for clarification regarding the candidate second destination. (see at least Fig. 5 [0008-0110]: when at least one of the possible destinations can be identified unambiguously, while meeting the minimal confidence level, a positive indication may result in processing proceeding to another decisional operation 560, which controls how a route will be calculated to the identified destination (e.g., a highest ranking entry in the weighted N-best list of destinations). The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. Thereafter, subsequent processing returns to operation 510, where the voice user interface may await an additional voice destination input that refines the partial destination. Further, in various implementations, one or more system prompts may be generated to request the additional voice destination inputs (e.g., as the user approaches the partial destination and additional information will be needed to provide further routing, or as the user approaches a point in the preliminary route where distinct routes may be taken to different topographical subtiles or points within the partial destination, etc.).)
Regarding claim 5 (similarly claim 13), the combination of Kennewick in view of Qi teaches The computer-implemented method of claim 3 (similarly claim 11), further comprising:
Kennewick further teaches analyzing, by the one or more processors, each of the plurality of second destinations based on contextual indicators from the initial input; (see at least Fig. 5 [0008-0110]: The voice destination input may be parsed or otherwise analyzed using one or more dynamically adaptable recognition grammars, for example, as described above in reference to FIG. 3. For example, recognition grammars may be loaded, generated, extended, pruned, or otherwise adapted based on various factors, including a proximity to a user's point of presence (e.g., as the user moves from one area to another, the recognition grammar may be optimized based on a current location, a direction of travel, temporal constraints, etc.), a contextual history (e.g., as the user interacts with the voice user interface, the grammar may adapt based on dictionaries, keywords, concepts, or other information associated with other contexts, domains, devices, applications, etc.), or other factors, as will be apparent. As such, an operation 520 may include generating one or more interpretations of the voice destination input, which may be analyzed using various data sources in order to generate an N-best list of possible destinations (e.g., a navigation agent may query a directory, a voice search engine, or other components to identify one or more destinations that at least partially match criteria contained in the voice destination input).)
calculating, by the one or more processors, a likelihood value for each of the plurality of second destinations based on the contextual indicators; (see at least Fig. 5 [0008-0110]: The generated list of possible destinations may be post-processed in an operation 530 in order to assign weights or ranks to one or more of the entries in the N-best list. The post-processing may include analyzing the destination list generated in operation 520 according to various factors in order to determine a most likely intended destination from a full or partial voice destination input. For example, post-processing operation 530 may rank or weigh possible destinations according to shared knowledge about the user, domain-specific knowledge, dialogue history, or other factors. Furthermore, the post-processing operation 530 may analyze the full or partial destination input in order to identify an address to which a route can be calculated, for example, by resolving a closest address that makes “sense” relative to the input destination. For example, a user may specify a partial destination that identifies a broad and approximated area (e.g., “Take me to Massachusetts”), and depending on a user's current location, direction of travel, preferences, or other information, post-processing operation 530 may select an address that makes sense for calculating a route (e.g., an address in Cape Cod may be selected for a user having relatives that live in Cape Cod, whereas an address in Boston may be selected for a user who may be traveling to various popular sightseeing areas, etc.).)
ranking, by the one or more processors, the plurality of second destinations based on the likelihood value for each of the plurality of second destinations; (see at least Fig. 5 [0008-0110]: The generated list of possible destinations may be post-processed in an operation 530 in order to assign weights or ranks to one or more of the entries in the N-best list. The post-processing may include analyzing the destination list generated in operation 520 according to various factors in order to determine a most likely intended destination from a full or partial voice destination input. For example, post-processing operation 530 may rank or weigh possible destinations according to shared knowledge about the user, domain-specific knowledge, dialogue history, or other factors. Furthermore, the post-processing operation 530 may analyze the full or partial destination input in order to identify an address to which a route can be calculated, for example, by resolving a closest address that makes “sense” relative to the input destination. For example, a user may specify a partial destination that identifies a broad and approximated area (e.g., “Take me to Massachusetts”), and depending on a user's current location, direction of travel, preferences, or other information, post-processing operation 530 may select an address that makes sense for calculating a route (e.g., an address in Cape Cod may be selected for a user having relatives that live in Cape Cod, whereas an address in Boston may be selected for a user who may be traveling to various popular sightseeing areas, etc.).) and
providing, by the one or more processors, the plurality of second destinations to the user in a ranked list based on the ranking. (see at least Fig. 5 [0008-0110]: The generated list of possible destinations may be post-processed in an operation 530 in order to assign weights or ranks to one or more of the entries in the N-best list. The post-processing may include analyzing the destination list generated in operation 520 according to various factors in order to determine a most likely intended destination from a full or partial voice destination input. For example, post-processing operation 530 may rank or weigh possible destinations according to shared knowledge about the user, domain-specific knowledge, dialogue history, or other factors. Furthermore, the post-processing operation 530 may analyze the full or partial destination input in order to identify an address to which a route can be calculated, for example, by resolving a closest address that makes “sense” relative to the input destination. For example, a user may specify a partial destination that identifies a broad and approximated area (e.g., “Take me to Massachusetts”), and depending on a user's current location, direction of travel, preferences, or other information, post-processing operation 530 may select an address that makes sense for calculating a route (e.g., an address in Cape Cod may be selected for a user having relatives that live in Cape Cod, whereas an address in Boston may be selected for a user who may be traveling to various popular sightseeing areas, etc.).)
Regarding claim 6 (similarly claim 14), the combination of Kennewick in view of Qi teaches The computer-implemented method of claim 1 (similarly claim 9), further comprising:
Kennewick further teaches receiving, from the user, a refinement trigger configured to initiate prompting the user for clarification regarding the second destination; (see at least Fig. 5 [0008-0110]: For example, in an illustrative conversation between a user and the voice user interface, the user may utter a voice-based input of “Take me to Seattle.” Based on a current location of the user (e.g., as determined using one or more navigation sensors, radio frequency identifiers, local or remote map databases, etc.), the navigation application may select an address in Seattle that provides a route from the current location to Seattle (e.g., a central point in Seattle may be selected for long distance travel, whereas an address in northern Seattle may be selected for a current location being close to north Seattle). Further, when the requested destination does not specify a final destination (e.g., as in the above example), the user may successively refine the final destination in subsequent requests, and the post-processing may continue to select additional addresses based on the current location, a current route, or other factors. For instance, continuing the above-provided example, a subsequent input of “Take me to Pike Street” may result in the post-processing analyzing the current route to determine an appropriate address on Pike Street, and possibly recalculating the route, as necessary. Thus, the current route may have the user driving north-bound on Interstate-5, such that an address may be selected on Pike Street on a same side as north-bound lanes of Interstate-5 (e.g., preserving current routing information).)
responsive to receiving clarification from the user regarding the second destination, determining, by the one or more processors, the updated route including the second set of navigation instructions from the current location to the second destination; (see at least Fig. 5 [0008-0110]: The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. The processing operations 510-570 may be iteratively repeated until the final destination can be identified through successive refinement using one or more multi-modal voice destination entries. For example, the route may be completed by extrapolating the route to the partial destination into a complete route to the final destination. Further, it will be apparent that voice destination entries may be successively refined into a final destination en route (e.g., the final destination may be successively refined as a user proceeds along a preliminary route), in advance (e.g., the user may choose to drive upon a complete route to a final destination being identified), or in other ways.)
updating, by the one or more processors, the portion of the initial route to include the updated route; (see at least Fig. 5 [0008-0110]: The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. The processing operations 510-570 may be iteratively repeated until the final destination can be identified through successive refinement using one or more multi-modal voice destination entries. For example, the route may be completed by extrapolating the route to the partial destination into a complete route to the final destination. Further, it will be apparent that voice destination entries may be successively refined into a final destination en route (e.g., the final destination may be successively refined as a user proceeds along a preliminary route), in advance (e.g., the user may choose to drive upon a complete route to a final destination being identified), or in other ways.) and
displaying, by the one or more processors, the updated portion of the initial route to the user. (see at least Fig. 5 [0008-0110]: The decisional operation 560 may determine whether the identified destination provides a full or otherwise complete destination. For example, through successive refinement, a partial voice destination entry may result in an identifiable, yet incomplete, destination. As such, an operation 570 may calculate a preliminary route to the identified partial destination. The processing operations 510-570 may be iteratively repeated until the final destination can be identified through successive refinement using one or more multi-modal voice destination entries. For example, the route may be completed by extrapolating the route to the partial destination into a complete route to the final destination. Further, it will be apparent that voice destination entries may be successively refined into a final destination en route (e.g., the final destination may be successively refined as a user proceeds along a preliminary route), in advance (e.g., the user may choose to drive upon a complete route to a final destination being identified), or in other ways.)
Regarding claim 7 (similarly claim 15), the combination of Kennewick in view of Qi teaches The computer-implemented method of claim 1 (similarly claim 9), further comprising:
Kennewick further teaches determining, by the one or more processors, a latest point along the initial route where the initial route is configured to lead the user to the second destination; (see at least Fig. 5 [0008-0110]: one or more system prompts may be generated to request the additional voice destination inputs (e.g., as the user approaches the partial destination and additional information will be needed to provide further routing, or as the user approaches a point in the preliminary route where distinct routes may be taken to different topographical subtiles or points within the partial destination, etc.).) and
It may be alleged that Kennewick does not explicitly teach determining, by the one or more processors, a first location along the initial route where the user is likely to experience a smallest number of distractions.
Qi is directed to a navigation system that successively refines a final destination while en route. Qi teaches determining, by the one or more processors, a first location along the initial route where the user is likely to experience a smallest number of distractions. (see at least Fig. 2A-15 [0062-0251]: S105: The navigation system outputs prompt information for prompting the user to confirm or enter a navigation destination to provide a precise navigation route for the user. The navigation system outputs the prompt information when determining that the vehicle is currently in a safe driving state (e.g., when the vehicle encounters road conditions such as a red light or congestion that keep the vehicle in a low-speed state for a long period of time or stop it from moving forward, or when the vehicle is in an autonomous mode) or when determining that a distance between the vehicle and the initial navigation region is less than or equal to a threshold.)
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Kennewick’s navigation system and method to incorporate Qi’s technique of prompting the user to clarify a second destination only when it is safe and contextually appropriate to do so, with a reasonable expectation of success, in order to provide dynamic, data-driven directions or routing to a destination while prompting the user for clarification with the least distraction. Doing so would improve safety by minimizing driver distraction, allowing drivers to keep their eyes on the road and hands on the wheel.
Regarding claim 8 (similarly claim 16), the combination of Kennewick in view of Qi teaches The computer-implemented method of claim 1 (similarly claim 9), further comprising:
Kennewick further teaches wherein the initial input, the first set of navigation instructions, and the second set of navigation instructions include verbal communication. (see at least Fig. 5 [0008-0110]: The navigation application available within architecture 200 may resolve natural language, voice-based requests relating to navigation (e.g., calculating routes, identifying locations, displaying maps, etc.). The navigation application can provide a user with interactive, data-driven directions to a destination or waypoint, wherein the user can specify the destination or waypoint using free-form natural language (e.g., the user can identify full or partial destinations, including a specific address, a general vicinity, a city, a name or type of a place, a name or type of a business, a name of a person, etc.). The multi-modal prompt may further include system-generated speech, for example, stating, “I found several possible Washingtons, did you mean one of these or something else?” As a result, processing then proceeds back to operation 510, in which the user can disambiguate the intended destination through another input (e.g., by touching one of the displayed destinations when the display device includes a touch-screen, or by verbalizing one of the destinations, or by providing additional information indicating that none of the displayed destinations were intended).)
Claim(s