Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 5, 7-8, 10-11, 14-18, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. 20240210194 by Sharifi et al. (hereafter Sharifi).
Claim 1:
Sharifi discloses
“obtaining a map search request transmitted by a terminal, the map search request comprising map search content, wherein the map search content comprises natural language content;” [obtaining a map search request transmitted by a terminal (fig. 4 402, user input; 410, user input), the map search request (fig. 4 402, user input; 410, user input) comprising map search content (0090, speech input; 0095, speech input), wherein the map search content (0090, speech input; 0095, speech input) comprises natural language content (0090, speech; 0095, speech)]
“inputting the map search content into a generative language model, to obtain search requirement information, wherein the search requirement information is configured to represent a search requirement of the map search content;” [inputting the map search content (0090, speech input; 0095, speech input) into a generative language model (0096, executing machine learning model), to obtain search requirement information (0096, filter; fig. 4 412), wherein the search requirement information (0096, filter; fig. 4 412) is configured to represent a search requirement of the map search content (0096, eliminating…the routes in the set of routes with a respective relevance score that does not satisfy a relevance threshold based on a natural language transcription of the subsequent speech input)]
“querying a preset map database for a search result corresponding to the search requirement information; and” [querying a preset map database (0090, searching the destination database) for a search result (fig. 4 412, set of navigation search results) corresponding to the search requirement information (0096, filter the set of navigation results)]
“feeding back the search result to the terminal to trigger the terminal to present the search result on a map interface.” [feeding back the search result to the terminal to trigger the terminal to present the search result on a map interface (0100, providing at a user interface, the one or more refined navigation search results for viewing by the user)]
Claim 2:
Sharifi discloses
“The method according to claim 1, wherein the search requirement information comprises classification tag information and requirement description information, the classification tag information comprises at least one classification tag corresponding to the search requirement, the requirement description information is configured for performing natural language description on the search requirement, and the querying the preset map database for the search result matching the search requirement information comprises: querying the preset map database for at least one candidate search result based on the classification tag information; and determining, based on the requirement description information, the search result matching the search requirement information in the at least one candidate search result.” [wherein the search requirement information (0096, filter; fig. 4 412) comprises classification tag information (fig. 4 412, based on the subsequent user input) and requirement description information (0096, natural language description of the subsequent speech input), the classification tag information (fig. 4 412, based on the subsequent user input) comprises at least one classification tag (0096, subsequent speech input including refined search query) corresponding to the search requirement (0096, filter; fig. 4 412), the requirement description information is configured for performing natural language description (0096, natural language description of the subsequent speech input) on the search requirement (0096, filter; fig. 4 412), and the querying the preset map database (0090, searching the destination database) for the search result (fig. 4 412, set of navigation search results) matching the search requirement information (0096, filter the set of navigation results) comprises: querying the preset map database (0090, searching the destination database) for at least one candidate search result (fig. 4 412, set of navigation search results) based on the classification tag information (0096, subsequent speech input including refined search query); and determining, based on the requirement description information (0096, natural language description of the subsequent speech input), the search result (fig. 4 418, refined navigation search results) matching the search requirement information (0096, filter the set of navigation results) in the at least one candidate search result (fig. 4 412, set of navigation search results)]
Claim 5:
Sharifi discloses
“The method according to claim 2, wherein the inputting the map search content into the generative language model, to obtain the search requirement information comprises: inputting the map search content into the generative language model, to obtain the search requirement information inputting the map search content and first prompt information to the generative language model, to obtain the classification tag information; and inputting the map search content and second prompt information to the generative language model, to obtain the requirement description information.” [wherein the inputting the map search content (0095, speech input) into the generative language model (0096, executing machine learning model), to obtain the search requirement information (0096, filter; fig. 4 412) comprises: inputting the map search content (0095, speech input) into the generative language model (0096, executing machine learning model), to obtain the search requirement information inputting the map search content (0096, eliminating…the routes in the set of routes with a respective relevance score that does not satisfy a relevance threshold based on a natural language transcription of the subsequent speech input; routes that satisfy are search requirement information inputting map search content) and first prompt information (fig. 4 416) to the generative language model (0096, executing machine learning model), to obtain the classification tag information (0096, subsequent speech input including refined search query); and inputting the map search content (0095, speech input) and second prompt information (fig. 4 408) to the generative language model (0096, executing machine learning model), to obtain the requirement description information (0096, natural language description of the subsequent speech input)]
Claim 7:
Sharifi discloses
“The method according to claim 5, wherein the map search request comprises a real-time location, and the inputting the map search content into the generative language model in response to the map search request, to obtain the classification tag information and the requirement description information comprises:” [wherein the map search request (0056, navigate to nearby hiking spots) comprises a real-time location (0025, current location), and the inputting the map search content (0090, speech input; 0095, speech input) into the generative language model (fig. 1a 109a/b) in response to the map search request (0056, navigate to nearby hiking spots), to obtain the classification tag information (fig. 4 412, based on the subsequent user input) and the requirement description information (0096, natural language description of the subsequent speech input) comprises]
“inputting the map search content, the first prompt information, and real-time location prompt information to the generative language model, to obtain the classification tag information, wherein the real-time location prompt information is configured for prompting the real-time location of the map search request; and” [inputting the map search content (0090, speech input; 0095, speech input), the first prompt information (fig. 4 416), and real-time location prompt information (0064, user’s current location) to the generative language model (fig. 1a 109a/b), to obtain the classification tag information (0096, subsequent speech input including refined search query), wherein the real-time location prompt information is configured for prompting the real-time location of the map search request (0064, anchoring the search on the user’s current location)]
“inputting the map search content, the second prompt information, and the real-time location prompt information to the generative language model, to obtain the requirement description information.” [inputting the map search content (0090, speech input; 0095, speech input), the second prompt information (fig. 4 408), and the real-time location prompt information (0064, user’s current location) to the generative language model (fig. 1a 109a/b), to obtain the requirement description information (0096, natural language description of the subsequent speech input)]
Claim 8:
Sharifi discloses
“The method according to claim 5, comprising: inputting the search result, the map search content, and third prompt information to the generative language model, to obtain an interaction question, wherein the interaction question is related to the search result; and feeding back the interaction question to the terminal, to trigger the terminal to present the interaction question on the map interface.” [inputting the search result (fig. 4 406, navigation search result), the map search content (fig. 4 402, user input), and third prompt information (fig. 4 408, feedback loop to prompt another audio request) to the generative language model (fig. 1a 109a/b), to obtain an interaction question (fig. 4 408, audio request), wherein the interaction question (fig. 4 408, audio request) is related to the search result (fig. 4 408, navigation search result); and feeding back the interaction question to the terminal, to trigger the terminal to present the interaction question on the map interface (fig. 4 408, provide an audio request for the user to refine the set of navigation search results)]
Claims 10-11 and 14-16:
Claims 10-11 and 14-16 recite similar limitations as those of claims 1-2, 5, 7, and 8, except that claims 10-11 and 14-16 are directed to an apparatus instead of a method. Claims 10-11 and 14-16 are rejected under similar rationale as that of claims 1-2, 5, 7, and 8. Sharifi further discloses a memory and processor in at least figure 1a.
Claims 17-18 and 20:
Claims 17-18 and 20 recite similar limitations as those of claims 1-2 and 5, except that claims 17-18 and 20 are directed to a non-transitory computer-readable storage medium instead of a method. Claims 17-18 and 20 are rejected under similar rationale as that of claims 1-2 and 5. Sharifi further discloses a non-transitory computer-readable storage medium and processor in at least figure 1a.
Allowable Subject Matter
Claims 3-4, 6, 9, 12-13, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL PHAM whose telephone number is (571) 272-3924. The examiner can normally be reached M-F, 11:00 am - 7:30 pm Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley can be reached at 571-272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL PHAM/ Primary Examiner, Art Unit 2153