Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1–10, 14, and 21–29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) and the claims do not recite additional elements that integrate the exception into a practical application or amount to significantly more than the exception itself.
Representative Claim
Independent Claim 1 is representative of the claimed invention. Claims 14 and 21 are addressed separately below.
Claim 1 recites, in relevant part:
receiving a mapping query from a user device;
automatically processing the mapping query to generate an intermediate prompt for input into a generative artificial intelligence (AI) system;
automatically inputting the intermediate prompt into the generative AI to generate intermediate output, wherein the intermediate output comprises live information;
automatically integrating the intermediate output and the mapping query into a final live street view;
and automatically generating the final live street view for output on a device.
Alice/Mayo Step 2A — Prong One
Judicial Exception
Claim 1 recites an abstract idea.
Specifically, the claim is directed to collecting, analyzing, and outputting data, which falls within the “mental processes” grouping of abstract ideas identified in the 2019 Revised Patent Subject Matter Eligibility Guidance, consistent with the recognition that collecting, analyzing, and displaying information is abstract.
The claimed steps involve:
receiving a query,
generating an intermediate prompt,
generating output using a generative AI model, and
outputting a visual result (“final live street view”).
These steps constitute the analysis and transformation of information followed by presentation of the results, without reciting a particular technological mechanism for performing those functions.
The recitation of a “generative artificial intelligence system” does not alter this conclusion. As claimed, the AI is used as a generic information-processing tool to generate content in response to an input prompt, which falls squarely within abstract data processing.
Accordingly, Claim 1 recites an abstract idea.
Alice/Mayo Step 2A — Prong Two
Integration Into a Practical Application
Claim 1 does not integrate the abstract idea into a practical application.
Although Claim 1 references a mapping context (e.g., “street view,” “live information”), the claim is drafted at a result-oriented functional level and does not recite a specific technological improvement to computer functionality or mapping technology.
In particular, Claim 1 does not recite:
a specific mechanism for acquiring, validating, or synchronizing “live information”;
any geospatial registration or coordinate-space alignment of generated content;
any defined map layer, tile structure, or rendering pipeline;
any latency, bandwidth, caching, or real-time update constraints; or
any non-conventional interaction between the generative AI and the mapping system.
Instead, the claim merely requires that information be generated and presented as a “final live street view,” which constitutes using generic computing to present updated information.
While the specification describes more detailed mapping architectures and content-generation workflows, those technical details are not recited in Claim 1 and therefore cannot supply integration into a practical application.
Accordingly, the abstract idea is not meaningfully integrated into a practical application.
Alice/Mayo Step 2B
Inventive Concept
Claim 1 does not recite an inventive concept.
The additional elements beyond the abstract idea consist of:
generic computing components (user device, AI system, output device), and
generic functional automation (“automatically processing,” “automatically generating,” “automatically integrating”).
These elements merely instruct the abstract idea to be implemented using conventional computing technology and a generative AI model, without reciting any non-conventional or non-generic technological improvement.
The use of a generative AI model does not, by itself, supply an inventive concept. As explained in USPTO guidance, the eligibility inquiry focuses on what the claim is directed to, not on whether an emerging technology label is invoked.
Accordingly, Claim 1 does not amount to significantly more than the abstract idea itself.
Dependent Claims 2–10
Claims 2–10 depend from Claim 1 and therefore incorporate the same abstract idea.
Claim 2
Claim 2 recites determining whether a query includes a word or phrase requesting live information and accessing a source of live information. This limitation constitutes information classification and retrieval, which remains abstract and does not provide a technological improvement.
Claims 5–6
Claims 5 and 6 recite accessing an image, identifying an object, modeling a change in a condition of the object over time, and determining object size based on a shadow.
Although these claims reference image analysis, the limitations are recited at a high level of functional abstraction and do not specify any particular computer-vision technique, model architecture, or rendering mechanism. As claimed, these steps amount to abstract data analysis without a recited technological improvement.
Claim 9
Claim 9 recites determining a route and appending enhanced information in real time while the device moves. The claim does not specify how real-time performance is achieved (e.g., via caching strategies, bandwidth-adaptive rendering, or update scheduling), and therefore remains abstract.
Accordingly, Claims 2–10 do not overcome the § 101 deficiency.
Independent Claim 14
Claim 14 recites:
receiving a mapping query and user preference;
accessing a map from a mapping application;
generating AI prompts and content; and
generating a “map layer update” and modifying the map based on the update.
Claim 14 is likewise directed to the abstract idea of generating and presenting information.
The recited “map layer update” does not integrate the abstract idea into a practical application because map layers are conventional and fundamental data structures in mapping and graphics systems. Merely generating AI-based content and inserting it into a map layer—without claiming a non-conventional update mechanism, data structure, or rendering technique—constitutes routine data storage and presentation.
Accordingly, Claim 14 does not recite additional elements sufficient to transform the abstract idea into patent-eligible subject matter.
Independent Claim 21 (System Claim)
Claim 21 recites a system with circuitry configured to perform substantially the same steps as Claim 1.
Reciting the abstract idea in system form using “circuitry configured to” language does not alter the eligibility analysis. Claim 21 therefore fails under § 101 for the same reasons as Claim 1.
Claims 22–29, which depend from Claim 21, likewise fail to add additional elements that integrate the abstract idea into a practical application or provide an inventive concept.
Conclusion
Claims 1–10, 14, and 21–29 are rejected under 35 U.S.C. § 101 as being directed to an abstract idea and failing to recite additional elements sufficient to integrate the abstract idea into a practical application or to amount to significantly more than the abstract idea itself.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1–4, 7–10, 14, 21–24, and 27–29 are rejected under 35 U.S.C. 103 as being unpatentable over Unnikrishnan (US 20140053077 A1), in view of Socher (US 20240020538 A1), and in view of Jones (US 20130132375 A1).
Regarding Claim 1
Disclosure by Unnikrishnan
Unnikrishnan teaches:
A method
See at least: “Methods and systems for improved integration of an overhead representation (e.g., a map) with a street view representation …” (Abstract).
Rationale: Unnikrishnan expressly describes “methods and systems,” supporting A method.
comprising: receiving a mapping query from a user device;
See at least: “In one embodiment, a client 115 executing a browser 120 connects to the map server 105 … to access a map and/or to make changes to features in the map …” ([0026]); “The front end module … receives user input information from the clients 115 that includes information about user inputs that search, navigate, or edit the map and street view.” ([0025]).
Rationale: Unnikrishnan’s “client 115” (a user device) provides user input information that “search[es], navigate[s], or edit[s] the map and street view,” which constitutes receiving a mapping query from a user device.
and automatically generating the final live street view for output on a device.
See at least: “The street view module 131 accesses the images in the street view database 111 to generate a street view for display in a street view window.” ([0024]); “When a user of the client 115 adjusts the interactive control or markers, the street view module 131 receives the changes and automatically updates the street view to reflect the appropriate changes.” ([0024]); “The front end module outputs the user interface … to the client device … for display … and … receives user input information … which is relayed … for updating the maps [and] street views ….” ([0025]).
Rationale: Unnikrishnan teaches automatically generating a street view for display and outputting the user interface to a client device, including automatically updating the street view in response to user interactions. Under the broadest reasonable interpretation, “live” reads on dynamically updated information retrieved contemporaneously with the request (e.g., via periodic or view-triggered queries), and does not require sensor-streamed imagery. Thus, Unnikrishnan establishes automatically generating the final street view for output on a device, with the “live” aspect addressed by the combined teachings below.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly teach:
automatically processing the mapping query to generate an intermediate prompt for input into a generative artificial intelligence (AI) system;
automatically inputting the intermediate prompt into the generative AI to generate intermediate output, wherein the intermediate output comprises live information;
automatically integrating the intermediate output and the mapping query into a final live street view;
Disclosure by Socher
Socher provides teachings for the following:
automatically processing the mapping query to generate an intermediate prompt
See at least: “The NL preprocessing module … concatenate[s] input information … into an input sequence of tokens, and generate[s] one or more predicted text queries.” ([0072]); “The LLM interface submodule … prepare[s] prompts based on inputs processed by NL preprocessing submodule ….” ([0056]).
Rationale: Socher discloses that an NL preprocessing module “generate[s] one or more predicted text queries” from received input, and that an LLM interface “prepares prompts based on inputs processed by the NL preprocessing submodule”. These disclosures meet automatically processing the mapping query to generate an intermediate prompt.
for input into a generative artificial intelligence (AI) system;
See at least: “The text generation module 230 may further include … an LLM interface submodule 234 …. The LLM interface submodule 234 … may interface with LLMs external to text generation module 230 ….” ([0056]).
Rationale: Socher discloses a text generation module including an “LLM interface submodule” that interfaces with one or more large language models. These LLMs are generative AI systems.
automatically inputting the intermediate prompt into the generative AI to generate intermediate output,
See at least: “The LLM interface submodule … may prepare prompts … and process results received from external LLMs ….” ([0056]); “The generation submodule … may use one or more LLMs to generate text-based output ….” ([0056]).
Rationale: Socher discloses that the LLM interface prepares prompts, sends them to LLMs, and processes results received from the LLMs, and that a generation submodule uses one or more LLMs to generate text-based output. These disclosures meet automatically inputting the intermediate prompt into the generative AI to generate intermediate output.
wherein the intermediate output comprises live information;
See at least: “The search submodule … transmits the customized queries to the corresponding APIs … and receives search results ….” ([0056]); “The generation submodule … generate[s] text-based output … based off of received and processed search results ….” ([0056]); “Other context … may include information about world events … searches that have been conducted around the same time ….” ([0070]).
Rationale: Socher teaches transmitting customized queries to external APIs and generating output based on received and processed search results. Because the generated output includes information derived from API search results obtained responsive to the query at runtime, the intermediate output comprises live information under the broadest reasonable interpretation.
Motivation to Combine Unnikrishnan and Socher
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan and Socher before them, to modify Unnikrishnan’s client/server map and street-view interface—which receives user mapping input and automatically updates a street view—by incorporating Socher’s known prompt-based generative AI pipeline that prepares prompts, retrieves real-time search results, and generates output via LLMs. This represents a predictable use of known query-processing and content-generation techniques to enhance a query-driven user interface, yielding the expected benefit of presenting up-to-date information responsive to a user’s mapping query.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan and Socher
After combining Unnikrishnan and Socher, the following is not explicitly disclosed:
automatically integrating the intermediate output and the mapping query into a final live street view;
Disclosure by Jones
Jones provides teachings for:
automatically integrating the intermediate output and the mapping query into a final live street view;
See at least: “A time-based network link fetches placemark files when triggered … periodically queried for data. That data can then be populated onto a map.” ([0035]); “A view-dependent network link makes a search query when triggered by the motion of the view specification. This technique essentially makes a data layer out of the query.” ([0035]); “The renderer can also dynamically generate the geospatial search region defined by the instantaneous view specification ….” ([0190]); “receiving results of the view-dependent search … rendering the results into contents of a virtual data layer … and displaying that virtual data layer.” ([0006]).
Rationale: Jones teaches automatically retrieving information in response to a query using time-based and view-dependent network links and rendering the retrieved results into a displayed geospatial view as a virtual data layer. Specifically, Jones discloses periodically querying external servers or triggering a query based on view motion, receiving results, and populating those results into the currently displayed view as an overlay associated with the query context. A PHOSITA would have understood this “virtual data layer” technique as a general mechanism for integrating query-responsive information into a displayed geospatial view, independent of the specific view type. Accordingly, it would have been obvious to apply Jones’s known query-to-view integration technique to Unnikrishnan’s street-view display, to present Socher’s generated, query-responsive output as an overlay or augmentation associated with the mapping query, thereby integrating the intermediate output and the mapping query into the final street-level view.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to combine these references. Unnikrishnan supplies a query-driven, automatically updated street-view UI. Socher supplies a generative AI pipeline that prepares prompts, retrieves real-time information, and generates output via LLMs. Jones supplies a known technique for automatically binding query-triggered retrieval to view presentation by rendering results as a virtual data layer tied to the current view. Their combination represents the predictable assembly of known components to achieve the expected result of presenting dynamically updated, query-responsive information within a street-level view, consistent with KSR.
Regarding Claim 2
The combination of Unnikrishnan, Socher, and Jones teaches the method of Claim 1, from which Claim 2 depends.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly teach:
wherein the automatically processing the mapping query
to generate the intermediate prompt
includes determining whether the mapping query includes a word or phrase
intended to request live or updated information;
and based at least in part on determining the mapping query includes the word or phrase intended to request the live or updated information,
accessing a source of the live information regarding the subject.
Disclosure by Socher
Socher teaches:
wherein the automatically processing the mapping query
See at least: “upon receiving input 122, the text generation server 110 may determine whether a real-time search is needed …” ([0028]).
Rationale: Socher expressly discloses system-side processing performed “upon receiving input,” which constitutes automatically processing the mapping query.
to generate the intermediate prompt
See at least: “the text generation server 110 … may determine what to generate as a search query …” ([0028]); “the text generation server 110 may then convert the search query into customized search queries 111 a-n …” ([0033]).
Rationale: Socher expressly teaches generating system-derived query strings from received input, which constitute the claimed intermediate prompt.
includes determining whether the mapping query includes a word or phrase
See at least: “may determine what to generate as a search query …” ([0032]); “the generative AI system may generate a text output based on real-time search results that reflect most-up-to-date information according to a user query.” ([0025]).
Rationale: Socher teaches determining, according to the user query, whether a real-time search is needed to provide most up-to-date information. A PHOSITA would have understood that one conventional, routine way to implement such query-intent detection is to analyze the query text for terms indicating recency (e.g., “current,” “latest,” “now,” “today”) or similar phrases requesting updated information. Therefore, Socher at least suggests and renders obvious the claimed determination of whether the mapping query includes a word or phrase intended to request live or updated information.
intended to request live or updated information;
See at least: “may determine whether a real-time search is needed …” ([0028]); “real-time search results that reflect most-up-to-date information according to a user query.” ([0025]).
Rationale: Socher expressly ties the determination of whether to perform a real-time search to whether the user query requests “most-up-to-date information,” satisfying intended to request live or updated information.
and based at least in part on determining the mapping query includes the word or phrase intended to request the live or updated information,
See at least: “upon receiving input 122, the text generation server 110 may determine whether a real-time search is needed …” ([0028]).
Rationale: Socher conditions execution of the real-time search pathway on a determination made according to the user query. That determination is therefore based at least in part on the content of the mapping query, including whether it contains language indicative of a request for updated information.
accessing a source of the live information regarding the subject.
See at least: “the text generation server 110 may then convert the search query into customized search queries 111 a-n …” ([0033]); “The search submodule … transmits the customized queries to the corresponding APIs … and receives search results ….” ([0056]).
Rationale: Socher expressly discloses accessing external sources via APIs and receiving real-time search results, which constitutes accessing a source of the live information regarding the subject.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to modify the method of Unnikrishnan to arrive at the method of Claim 2. Unnikrishnan discloses a query-driven map and street-view interface that updates a street-level view in response to user input. Socher discloses automatically evaluating a received query—according to the user query—to determine whether real-time information is requested and, when so determined, accessing external sources via APIs to obtain up-to-date information. Jones discloses an automated technique for associating query-triggered retrieval with view presentation by periodically querying external sources and populating returned data into the displayed view as a query-dependent data layer. A person of ordinary skill in the art would have been motivated to incorporate Socher’s query-based real-time information determination into Unnikrishnan’s street-view method and to apply Jones’s established query-to-view integration technique to present the retrieved information within the street-level display, as these modifications involve the application of known techniques to a known system and would have been expected to operate according to their established functions.
Regarding Claim 3
The combination of Unnikrishnan, Socher, and Jones teaches the method of Claim 1, from which Claim 3 depends.
Disclosure by Unnikrishnan
Unnikrishnan teaches:
comprising: receiving the mapping query
See at least: “a client 115 executing a browser 120 connects to the map server 105 … to access a map and/or to make changes to features in the map …” ([0026]); “The front end module … receives user input information from the clients 115 …” ([0025]).
Rationale: Unnikrishnan expressly discloses receiving user input corresponding to a mapping query.
including a request
See at least: “receives user input information … that includes information about user inputs that search, navigate, or edit the map and street view.” ([0025]).
Rationale: The received user input constitutes a request submitted to the mapping system.
for a live street view
See at least: “The street view module 131 … generate[s] a street view for display in a street view window.” ([0024]); “the street view module 131 … automatically updates the street view to reflect the appropriate changes.” ([0024]).
Rationale: Unnikrishnan teaches generating a street view for display and automatically updating the street view in response to user interaction. Thus, Unnikrishnan establishes a user query requesting a street view, with any “live” or up-to-date aspect addressed by the combined teachings relied upon for Claim 1.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan and Socher
Unnikrishnan and Socher do not explicitly teach:
at an address.
Disclosure by Jones
Jones renders obvious:
at an address.
See at least: “In general, when the user enters a search query … it is put into a request and sent to the GIS server system … [which] responds with the appropriate data ….” ([0047]); “A view-dependent network link makes a search query when triggered by the motion of the view specification ….” ([0035]).
Rationale: Jones teaches that user-entered geographic search queries are packaged into requests and transmitted to a GIS server for resolution to geographic data. A PHOSITA would have understood that a conventional and routine type of geographic search query is an address string (e.g., street number and street name), and that using an address as the location identifier is a predictable implementation choice for the taught GIS query mechanism. Therefore, Jones at least suggests and renders obvious a mapping query requesting a live street view at an address.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Unnikrishnan, Socher, and Jones. Unnikrishnan teaches a query-driven map and street-view interface, Socher teaches processing a received query to determine how to obtain up-to-date information, and Jones teaches transmitting user-entered geographic queries—such as address-based queries—to a GIS server and associating the retrieved information with a displayed geospatial view. Applying Jones’s known location-query mechanism within the Unnikrishnan/Socher system to specify the location for which a street view is requested represents the predictable use of a known technique to improve a known mapping and street-view system and would have been expected to operate according to its established function.
Regarding Claim 4
The combination of Unnikrishnan, Socher, and Jones teaches the method of Claim 3, from which Claim 4 depends.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly teach:
wherein the automatically processing the mapping query to generate the intermediate prompt includes accessing a source of the live information regarding the address.
Disclosure by Socher
Socher provides teachings for:
wherein the automatically processing the mapping query to generate the intermediate prompt includes accessing a source of the live information regarding the address.
See at least: “the text generation server 110 may then convert the search query into customized search queries 111 a-n ….” ([0033]); “The search submodule … transmits the customized queries to the corresponding APIs … and receives search results ….” ([0056]); “the generative AI system may generate a text output based on real-time search results that reflect most-up-to-date information according to a user query.” ([0021]).
Rationale: Socher expressly teaches that, during automatic processing of a received query to generate intermediate query text (“customized search queries”), the system accesses external sources by transmitting those queries to APIs and receiving search results. This constitutes accessing a source of the live information as part of generating the intermediate prompt. Because Claim 4 depends from Claim 3 and the mapping query includes an address, a PHOSITA would have understood the customized queries and resulting retrieval to be directed to the subject of the mapping query, i.e., the addressed location, such that the accessed live information is regarding the address.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Unnikrishnan, Socher, and Jones to arrive at the method of Claim 4. Unnikrishnan teaches a query-driven map and street-view interface, Jones teaches associating user-specified location queries with view presentation, and Socher teaches automatically processing a received query by converting it into customized queries and accessing external sources via APIs to obtain real-time information. A PHOSITA would have been motivated to incorporate Socher’s source-accessing query processing into the Unnikrishnan/Jones workflow so that, when processing a mapping query requesting a street view at a specified address, the processing step includes accessing a source of live information regarding that address, as a predictable application of known techniques operating according to their established functions.
Regarding Claim 7
The combination of Unnikrishnan, Socher, and Jones teaches the method of Claim 2, from which Claim 7 depends.
Disclosure by Unnikrishnan
Unnikrishnan teaches:
receiving a request for a map at a location,
See at least: “a client 115 executing a browser 120 connects to the map server 105 … to access a map” ([0026]); “The front end module … receives user input information … that search, navigate, or edit the map” ([0025]).
Rationale: Unnikrishnan teaches receiving input to access and navigate a map, including updating interactive controls and markers. A PHOSITA would have understood that navigation and marker-based map interactions specify a geographic area of interest (i.e., a location) within the map interface. Thus, Unnikrishnan at least suggests and renders obvious receiving a request for a map at a location.
wherein the subject includes the map and the location,
See at least: “…includes information about…inputs that search, navigate, or edit the map and street view…for updating the maps, street views, interactive controls, and markers.” ([0025]).
Rationale: Unnikrishnan teaches receiving inputs that search/navigate/edit the map and update markers/controls, which a PHOSITA would understand as inputs directed to both (i) the map presentation and (ii) the geographic focus of the current view/marker. Therefore, the subject includes the map and the location.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly teach:
wherein the automatically processing the mapping query to generate the intermediate prompt includes accessing a source of the live information regarding the location.
Disclosure by Socher
Socher teaches:
wherein the automatically processing the mapping query to generate the intermediate prompt includes accessing a source of the live information regarding the location.
See at least: “the text generation server 110 may determine whether a real-time search is needed” ([0028]); “customized search queries 111 a-n are sent to respective data sources 103 a-n through respective APIs” ([0033]); “the generative AI system may generate a text output based on real-time search results that reflect most-up-to-date information according to a user query” ([0025]).
Rationale: Socher expressly discloses that during automatic query processing, the system determines whether live information is required and accesses external sources via APIs to obtain real-time information according to the user query. When applied to Unnikrishnan’s location-based mapping query, the accessed live information is regarding the location, as recited.
Disclosure by Jones
Jones further supports integration of the accessed live information with the map view:
See at least: “A view-dependent network link makes a search query when triggered by the motion of the view specification. This technique essentially makes a data layer out of the query” ([0035]); “data can then be populated onto a map” ([0035]).
Rationale: Jones teaches a query-triggered mechanism for binding externally retrieved, up-to-date information to a map view at a location, reinforcing that accessed live information is tied to the queried location and presented within the mapping interface.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to modify Unnikrishnan’s location-based map request handling to incorporate Socher’s automatic determination of whether live information is needed and corresponding access to external sources via APIs, and to apply Jones’s established query-to-map integration mechanism to associate that retrieved live information with the requested map location. This represents a predictable use of known query processing and information-retrieval techniques to provide up-to-date, location-specific information within a map interface, yielding the expected result recited in Claim 7.
Regarding Claim 8,
The combination of Unnikrishnan, Socher, and Jones establishes the method of Claim 2, which is the basis for Claim 8.
Disclosure by Unnikrishnan
Unnikrishnan teaches:
receiving a request for a place of business at a location,
See at least: “The front end module 132 also receives user input information … that includes information about user inputs that search, navigate, or edit the map and street view.” ([0025]); “Both visual markers 410 … represent the location of the business ‘Haircut Salon.’” ([0039])
Rationale: Unnikrishnan expressly teaches receiving user input for search/navigate operations in a map/street-view interface, and further teaches a concrete place of business (“Haircut Salon”) having a location represented by linked markers; thus the received user input encompasses a request directed to a place of business at a location.
wherein the subject includes the place of business and the location,
See at least: “Both visual markers 410 … represent the location of the business ‘Haircut Salon.’” ([0039])
Rationale: Unnikrishnan ties the place of business (“Haircut Salon”) to its geographic location via the markers representing that business’s location. Thus, the subject of the request includes both the place of business and the location.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly teach:
wherein the automatically processing the mapping query to generate the intermediate prompt includes accessing a source of the live information regarding the business and/or the location.
Disclosure by Socher
Socher provides teachings for:
wherein the automatically processing the mapping query to generate the intermediate prompt includes accessing a source of the live information regarding the business and/or the location.
See at least: “upon receiving input 122, the text generation server 110 may determine whether a real-time search is needed …” ([0028]); “the text generation server 110 may then convert the search query into customized search queries … [which] are sent to respective data sources … through respective APIs … [and] return query results ….” ([0033]); “the generative AI system may generate a text output based on real-time search results that reflect most-up-to-date information according to a user query.” ([0025])
Rationale: Socher expressly teaches that, during query processing, the system determines whether to perform a real-time search, generates queries, and accesses data sources via APIs to obtain real-time search results; thus the processing “includes accessing a source of the live information.” When the received mapping query’s subject is a business and/or location, the accessed live information is directed to the business and/or the location according to the user query.
Motivation to Combine Unnikrishnan and Socher
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan and Socher before them, to incorporate Socher’s disclosed real-time source-accessing query processing via APIs into Unnikrishnan’s disclosed map/street-view system that receives user input to search/navigate and that presents a business at a location, because both references address responding to user queries with automatically obtained information, and the modification predictably yields the expected result of obtaining live information responsive to a query whose subject is the business and/or location.
Disclosure by Jones
Jones further reinforces:
receiving a request for a place of business at a location, wherein the subject includes the place of business and the location,
See at least: “The GUI … includes layer control … [for] data points of geographic interest (e.g., points of interest) … example … layers … (e.g., Lodging, Dining … Coffee Shops …).” ([0083]); “ … The Home Store, Site #3 …” ([0137])
Rationale: Jones expressly teaches requesting/displaying points of interest including concrete places of business (e.g., Dining, Coffee Shops) and naming a business as a Placemark (“The Home Store, Site #3”), which inherently couples the place of business with a geographic location (i.e., a “location” in the mapping module). Thus Jones supplies the explicit “place of business at a location” request/subject framing for the mapping environment, without duplicating Socher’s separate “accessing a source of the live information” teaching.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to implement Unnikrishnan’s integrated map/street-view interface for a business at a location using Jones’s known points-of-interest/placemark constructs for representing and requesting places of business at locations, and to further incorporate Socher’s disclosed processing that accesses live information sources via APIs responsive to the user query, because these teachings are in the same mapping/search UI field and their combination predictably yields the expected result: upon receiving a request whose subject includes a place of business and a location, the system’s processing accesses a source of live information directed to the business and/or location.
Regarding Claim 9,
The combination of Unnikrishnan, Socher, and Jones establishes the method of Claim 1, which is the basis for Claim 9.
Disclosure by Unnikrishnan
Unnikrishnan teaches:
and appending the final live street view
See at least: “the street view module 131 … generate a street view for display in a street view window ….” ([0024]); “automatically updates the street view ….” ([0024])
Rationale: Unnikrishnan expressly teaches generating and automatically updating a street view for display, supporting the final live street view foundation into which additional information can be appended.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly teach:
receiving a request for driving instructions from a current location to a destination address;
determining a route corresponding to the driving instructions from the current location to the destination address;
pre-processing enhanced information for at least one location, point of interest, or data point along the route;
and appending the final live street view to include the pre-processed enhanced information in real time while the device moves along the route.
Disclosure by Socher
Socher provides teachings for:
pre-processing enhanced information
See at least: “determine whether a real-time search is needed …” ([0024]); “convert the search query into customized search queries … [which are] sent … through respective APIs … [and] return query results …” ([0033]); “generate a text output based on real-time search results that reflect most-up-to-date information according to a user query.” ([0021])
Rationale: Socher teaches automatically processing a received query by generating customized queries, accessing external sources via APIs, and producing output based on the returned real-time results. This constitutes pre-processing information used to enhance what is presented to the user (i.e., generating enhanced information from live sources before display).
in real time
See at least: “determine whether a real-time search is needed ….” ([0024]); “generate a text output based on real-time search results that reflect most-up-to-date information according to a user query.” ([0021])
Rationale: Socher expressly ties output generation to “real-time search results” reflecting “most-up-to-date information,” supporting the in real time enhancement aspect when applied to a mapping/navigation request.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan and Socher
After combining Unnikrishnan and Socher, the following is not explicitly disclosed:
receiving a request for driving instructions from a current location to a destination address;
determining a route corresponding to the driving instructions from the current location to the destination address;
pre-processing enhanced information for at least one location, point of interest, or data point along the route;
and appending the final live street view to include the pre-processed enhanced information in real time while the device moves along the route.
Disclosure by Jones
Jones provides teachings for:
receiving a request for driving instructions
See at least: “Directions mode provides a route between a current location and a target location …” ([0080]); “The user can also request driving or walking directions …” ([0090])
Rationale: Jones expressly teaches receiving a user request for directions in a “Directions mode,” satisfying receiving a request for driving instructions.
from a current location
See at least: “Directions mode provides a route between a current location and a target location ….” ([0080])
Rationale: Jones expressly recites “current location,” satisfying from a current location.
to a destination address
See at least: “Directions mode provides a route between a current location and a target location ….” ([0080])
Rationale: Jones expressly teaches routing to a “target location.” A destination address is a conventional form of specifying such a destination location for directions, and using an address as the destination identifier would have been an obvious, routine implementation choice for the taught “target location” in a directions request.
determining a route corresponding to the driving instructions from the current location to the destination address;
See at least: “Directions mode provides a route between a current location and a target location ….” ([0080])
Rationale: Jones expressly provides “a route between” the recited endpoints, satisfying determining a route corresponding to the driving instructions between the current location and the destination.
pre-processing enhanced information for at least one location, point of interest, or data point along the route;
See at least: “Directions mode can incorporate intermediate waypoints … and points of interest along the route ….” ([0080]); “In local search mode, the map can show points of interest near the current location …” ([0080])
Rationale: Jones expressly teaches incorporating and presenting “points of interest along the route,” which constitutes obtaining/processing route-associated POI content for use in presentation, meeting pre-processing enhanced information for at least one … point of interest … along the route.
and appending the final live street view to include the pre-processed enhanced information
See at least: “This technique essentially makes a data layer out of the query.” ([0180]); “That data can then be populated onto a map.” ([0180])
Rationale: Jones teaches forming a query-driven “data layer” whose contents are populated into a displayed geospatial view and refreshed periodically or based on motion. Unnikrishnan provides the street-view window that is automatically updated for display. A PHOSITA would have predictably applied Jones’s query-to-view data-layer technique to append/overlay route-associated enhanced information within the Unnikrishnan street-view interface during navigation.
in real time
See at least: “A time-based network link fetches placemark files when triggered to do so by the passage of time ….” ([0178]); “periodically queried for data ….” ([0178]); “Refresh Interval specifies a time interval for periodic view refresh.” ([0180])
Rationale: Jones expressly teaches periodic querying/refreshing based on time intervals, supporting updating presented information in real time (i.e., dynamically during use, via periodic refresh).
while the device moves along the route.
See at least: “A view-dependent network link makes a search query when triggered by the motion of the view specification.” ([0180])
Rationale: Jones expressly teaches triggering a query based on “motion,” which corresponds to updating/querying as the user/device progresses (moves) during navigation along the route. A PHOSITA would have understood that, in a navigation/directions mode, the view specification is routinely updated based on the device’s changing position along the route; thus motion-triggered updates correspond to updates while the device moves along the route.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to combine these references. Jones expressly teaches a Directions mode that generates a route between a “current location” and a destination (“target location”) and identifies points of interest along that route. Unnikrishnan teaches a street-view interface that generates a street view for display and automatically updates that view in response to user interaction. Socher teaches retrieving and generating real-time, most-up-to-date information based on user queries. Applying Jones’s route computation and query-driven, refreshable data-layer mechanism to Unnikrishnan’s automatically updating street-view interface, while sourcing route-associated enhanced information from Socher’s real-time retrieval and generation pipeline, represents a predictable integration of known techniques. The combination yields the expected result of appending enhanced, route-associated information to a live street-view presentation as the device moves along the route.
Regarding Claim 10,
The combination of Unnikrishnan, Socher, and Jones establishes the method of Claim 1, which is the basis for Claim 10.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly teach:
wherein the intermediate output includes, in addition to the live information, and at least one of a map, map data, route mapping information, mapping application output, a photo from a tax record, a photo from an online source, a street view, a past location of a device, a current location of the device, a predicted future location of the device, live traffic information, live weather information, information about a live event, a user-provided image, user-generated information, a user selection, a user search query, or a user selection of a zoom level of a map.
It should be noted that although Unnikrishnan does not disclose generative-AI-derived live information or externally sourced live or predictive data, it does disclose a map, map data, mapping application output, a street view, and user selections used to update the displayed street view.
Disclosure by Socher
Socher teaches:
the intermediate output includes, in addition to the live information
See at least: "The generative AI system may generate a text output based on real-time search results that reflect most-up-to-date information according to a user query." ([0025])
Rationale: Socher teaches generating output using “real-time search results” that reflect “most-up-to-date information.” A PHOSITA would have understood the generated output to include (i.e., convey) live/up-to-date information derived from those real-time results.
and at least one of a map, map data, route mapping information, mapping application output, a photo from a tax record, a photo from an online source, a street view, a past location of a device, a current location of the device, a predicted future location of the device, live traffic information, live weather information, information about a live event, a user-provided image, user-generated information, a user selection, a user search query, or a user selection of a zoom level of a map.
a photo from an online source
See at least: "the generation server 110 may further insert one or more images retrieved from a webpage following a search result link into the NL output 125 as illustration." ([0044])
Rationale: An image retrieved from a webpage is explicitly a photo from an online source.
information about a live event
See at least: "when the input 122 inquires 'what's up with the latest tour of BTS?'" ([0049])
Rationale: The latest tour of a musical group is explicitly information about a live event.
live weather information
See at least: "relevant search apps may further be incorporated, such as to provide a visual depiction of relevant data in addition to the text response (e.g., to show weather information, stock charts, and/or the like)." ([0080])
Rationale: The system is configured to show weather information, which is a type of real-time or live data.
a user-provided image
See at least: "input 122 may comprise two images and a text question" ([0038])
Rationale: The input from the user can comprise images, which are user-provided image[s].
user-generated information
See at least: "User context 404 may include inputs representative of the user, including a user ID, user preferences, user click logs, or other information collected or provided by the user" ([0070])
Rationale: Information collected or provided by the user is user-generated information.
a user selection
See at least: "user past activities approving or disapproving a search result from a specific data source." ([0114])
Rationale: The act of approving or disapproving a search result is a user selection.
a user search query
See at least: "input 122 may include a user NL input 126 such as a user question" ([0026]) and "natural language user input 402 may include a word, multiple words, a sentence, or any other type of search query provided by a user" ([0070])
Rationale: A user NL input or natural language user input that is a search query is a user search query.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to incorporate Socher’s expressly disclosed real-time search–based generation of live information and associated output content into Unnikrishnan’s map–street-view presentation framework, and to employ Jones’s known mechanisms for integrating query-dependent information into mapping application outputs, because the combination represents a predictable use of complementary techniques to enrich a mapping/street-view interface with live or dynamically retrieved information and related output elements, as recited in Claim 10.
Regarding Claim 14,
Unnikrishnan teaches:
A method
See at least: “Methods and systems for improved integration of an overhead representation...” (Abstract).
Rationale: Unnikrishnan expressly discloses a method.
comprising: receiving a mapping query from a user device;
See at least: “The front end module ... receives user input information from the clients 115 that includes information about user inputs that search, navigate, or edit the map...” ([0025]); “a client 115 executing a browser 120 connects to the map server 105 ... to access a map...” ([0026]).
Rationale: Unnikrishnan teaches receiving user inputs to search or navigate the map (mapping query) from a client (user device).
accessing a map
See at least: “The map module 130 accesses the map data stored in the map data database 110 to generate a map.” ([0021]).
Rationale: Unnikrishnan teaches accessing a map.
from a mapping application
See at least: “client 115 executing a browser 120... The browser 120 is capable of displaying a map... Alternatively, the maps can be accessed by a standalone program separate from the browser 120, such as an ANDROID or IPHONE application that is designed for accessing maps.” ([0027]).
Rationale: Unnikrishnan discloses accessing the map via a browser or standalone application designed for maps (mapping application).
based at least on the mapping query;
See at least: “receives user input information ... which is relayed to the map server 105 ... for updating the maps ...” ([0025]).
Rationale: Unnikrishnan teaches that the accessing and updating of the map are based at least on the user input (mapping query).
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly teach:
receiving a user preference;
automatically generating, utilizing a generative artificial intelligence (AI) model, a prompt based at least in part on the mapping query, the user preference, and the map from the mapping application;
automatically generating, utilizing a generative AI content generator, content based at least in part on the mapping query, the user preference, and the map from the mapping application;
automatically generating a map layer update in a format suitable for the map from the mapping application based at least in part on the generated content, wherein the map layer update comprises live information,
and automatically modifying the map accessed from the mapping application based at least in part on the generated map layer update comprising the live information.
Disclosure by Socher
Socher provides teachings for the following:
receiving a user preference;
See at least: “The generative AI system may also take into account user context when generating conversational responses to user queries. In some embodiments, user context 404 may include any combination of user profile information (e.g., user ID, user gender, user age, user location, zip code, device information, mobile application usage information, and/or the like), user configured preferences or dislikes of one or more data sources ….” ([0113]).
Rationale: Socher teaches obtaining/using user context including user configured preferences, which, under BRI, encompasses receiving user preference information as system input.
automatically generating, utilizing a generative artificial intelligence (AI) model, a prompt
See at least: “The text generation server 110 may... communicate with a number of external LLMs 116 a-n...” ([0047]); “The LLM interface submodule 234 ... prepare prompts based on inputs...” ([0056]).
Rationale: Socher expressly teaches automatically preparing prompts via an interface module for transmission to external LLMs (Large Language Models). These prompts are generated for use with (i.e., utilizing) a generative artificial intelligence (AI) model.
based at least in part on the mapping query, the user preference
See at least: “concatenate input information such as natural language user input 402, user context 404... into an input sequence of tokens...” ([0072]); “The LLM interface submodule 234 ... prepare prompts based on inputs processed by NL preprocessing submodule...” ([0056]).
Rationale: Socher teaches generating the prompt/tokens based at least in part on the user input (mapping query) and user context (which includes user preference).
automatically generating, utilizing a generative AI content generator, content
See at least: “utilize one or more NLP models 115 to... generate text based on the search results...” ([0034]); “generation submodule 233 may generate text-based output... based off of received and processed search results... generate a summary... generative text...” ([0085]).
Rationale: Socher teaches automatically generating text or summaries (content) using LLMs or a generation submodule (generative AI content generator).
based at least in part on the mapping query, the user preference
See at least: “generate text based on the search results and any parameters specified in input 122...” ([0034]); “input 122 ... may include a user NL input 126 ... user parameters, user context...” ([0026]).
Rationale: The generated content is based at least in part on the input (mapping query) and parameters/context (user preference).
wherein the map layer update comprises live information
See at least: “generate a text output based on real-time search results that reflect most-up-to-date information...” ([0025]); “real-time search based on the user input...” ([0021]).
Rationale: Socher teaches generating content derived from real-time search results (live information).
Motivation to Combine Unnikrishnan and Socher
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan and Socher before them, to integrate Socher's generative AI processing pipeline into Unnikrishnan's mapping system. Unnikrishnan provides the user interface for map navigation, while Socher provides a sophisticated method for processing user queries and preferences to generate up-to-date, AI-driven responses. A PHOSITA would have been motivated to combine these to allow the mapping application to process natural language queries regarding map features using Socher's LLMs, thereby providing enhanced, personalized, and real-time answers (content) directly within the mapping context.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan and Socher
After combining the teachings of Unnikrishnan and Socher, the following are not explicitly disclosed:
based at least in part on the map from the mapping application;
automatically generating a map layer update in a format suitable for the map from the mapping application based at least in part on the generated content...
and automatically modifying the map accessed from the mapping application based at least in part on the generated map layer update comprising the live information.
Disclosure by Jones
Jones provides teachings for the following remaining missing elements:
based at least in part on the map from the mapping application
See at least: “The bounding-box of the current view is appended to the URL...” ([0146]); “The <ViewFormat> element can be used to select what view information the mapping module 130 sends to the server... BBOX=[bboxWest]... [lookatLon],[lookatLat]...” ([0158]).
Rationale: Jones teaches extracting map-view information (e.g., BBOX / view coordinates) from the mapping application’s current displayed map and sending it for query construction and retrieval. Under BRI, this map-view information constitutes map-derived context from the mapping application usable as the claimed ‘map from the mapping application’ (i.e., the portion/region of the map currently displayed).
automatically generating a map layer update based at least in part on the generated content
See at least: “This technique essentially makes a data layer out of the query.” ([0035]); “rendering the results into contents of virtual data layer...” ([0006]); “returned to the user in visual form as ... text, and/or other annotations...” ([0203]).
Rationale: Jones expressly teaches automatically generating a map layer update (virtual data layer) by rendering “results” (which can include text and annotations) into the layer. Socher teaches automatically generating content (text output) from real-time results. Therefore, it would have been obvious to a PHOSITA to use Jones’s known “results→layer” mechanism to render Socher’s generated content (or the real-time results incorporated into that content) into a layer update.
in a format suitable for the map from the mapping application
See at least: “KML... is a hierarchical XML-based grammar and file format for... displaying geographic features... KML controls elements that appear in the 3D viewer...” ([0116]).
Rationale: Jones teaches generating the layer in KML, which is a format suitable for the map application to render.
wherein the map layer update comprises live information
See at least: “generate a text output based on real-time search results that reflect most-up-to-date information…” ([0025]) (Socher); “rendering the results into contents of virtual data layer…” ([0006]) (Jones).
Rationale: Socher teaches generating output that reflects most-up-to-date information based on real-time search results. Rendering that output into Jones’s virtual data layer yields a layer update that comprises (i.e., includes/conveys) live information.
and automatically modifying the map accessed from the mapping application based at least in part on the generated map layer update comprising the live information.
See at least: “displaying that virtual data layer.” ([0006]); “That data can then be populated onto a map.” ([0035]); “periodically queried for data... [which] can then be populated onto a map.” ([0035]).
Rationale: Jones teaches automatically modifying the map by populating or displaying the data layer (map layer update) which contains the queried results (live information).
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to combine the references to implement the claimed method. Unnikrishnan provides a mapping application for presenting map information, Socher teaches generating prompts and AI-based content using contextual inputs such as user input and user preferences, and Jones teaches extracting map-view information (e.g., view specification, bounding box, or coordinates) from a mapping application to define the geographic context of a user’s current view. A person of ordinary skill in the art would have found it obvious to incorporate such map-derived context as part of the inputs used by Socher so that the generated prompts and content correspond to the geographic area currently displayed in the mapping application. Further, Socher teaches generating content derived from real-time information, and Jones teaches rendering returned information, including text and annotations, into a virtual data layer (e.g., a KML layer) for display on a map. A person of ordinary skill in the art would have found it obvious to apply Jones’s known data-layer rendering techniques to the AI-generated content produced by Socher, thereby generating a map layer update comprising live information and modifying the map displayed by Unnikrishnan’s mapping application.
Regarding Claim 21,
Disclosure by Unnikrishnan
Unnikrishnan discloses:
A system comprising:
See at least: “Methods and systems for improved integration of an overhead representation...” (Abstract); “The system 100 includes a map server 105...” ([0021]).
Rationale: Unnikrishnan explicitly discloses a system.
a device;
See at least: “a client 115 executing a browser 120... The client 115 may be... a mobile device...” ([0026]).
Rationale: Unnikrishnan discloses a client device. Under BRI, the client 115 is also the “user device” that provides the user input and receives the output UI, and thus corresponds to the claimed device.
and circuitry configured to:
See at least: “implemented as systems comprising ... one or more processors coupled to the one or more memories ... wherein the one or more processors are operable to perform steps...” ([0057]).
Rationale: Unnikrishnan discloses processors (circuitry) configured to execute the described methods, including performing the recited operations (e.g., receiving user input, generating street view output, updating the interface).
receive a mapping query from a user device;
See at least: “The front end module ... receives user input information from the clients 115 that includes information about user inputs that search, navigate, or edit the map...” ([0025]).
Rationale: Unnikrishnan discloses circuitry configured to receive user inputs that search the map (mapping query) from a client (user device).
and automatically generate the final live street view for output on the device.
See at least: “The street view module 131 accesses the images... to generate a street view for display...” ([0024]); “automatically updates the street view to reflect the appropriate changes.” ([0024]); “outputs the user interface... to the client device...” ([0025]).
Rationale: Unnikrishnan discloses automatically generating a street view that is automatically updated (live) for output on the device.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly disclose the following:
a generative artificial intelligence system;
automatically process the mapping query to generate an intermediate prompt for input into a generative artificial intelligence (AI) system;
automatically input the intermediate prompt into the generative AI system to generate intermediate output,
wherein the intermediate output comprises live information;
automatically integrate the intermediate output and the mapping query into a final live street view;
Disclosure by Socher
Socher discloses:
a generative artificial intelligence system;
See at least: “The generative AI system may generate a text output...” ([0025]); “systems and methods for a customized generative AI platform...” (Abstract).
Rationale: Socher explicitly discloses a generative artificial intelligence system.
automatically process the mapping query to generate an intermediate prompt for input into a generative artificial intelligence (AI) system;
See at least: “The LLM interface submodule 234 ... prepare prompts based on inputs processed by NL preprocessing submodule...” ([0056]); “generate one or more predicted text queries.” ([0072]).
Rationale: Socher teaches automatically processing inputs (mapping query) to generate prompts (intermediate prompt) for input into LLMs (generative AI system). Under BRI, a “mapping query” is a type of user query that specifies a geographic subject/location; Socher’s natural-language user input used to drive real-time search and LLM prompting encompasses such location-directed queries.
automatically input the intermediate prompt into the generative AI system to generate intermediate output,
See at least: “communicate with a number of external LLMs... transmit the prompt... to the external LLMs...” ([0047]); “receive the results... from the external LLMs.” ([0056]); “generate text-based output...” ([0056]).
Rationale: Socher teaches inputting the prompt into the AI system (LLMs) to generate results/text (intermediate output).
wherein the intermediate output comprises live information;
See at least: “generate a text output based on real-time search results that reflect most-up-to-date information...” ([0025]).
Rationale: Socher teaches that the generated output is based on real-time search results (live information).
Motivation to Combine Unnikrishnan and Socher
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan and Socher before them, to integrate Socher's generative AI system into Unnikrishnan's mapping system. Unnikrishnan provides the map/street view interface, and Socher provides the capability to generate real-time, AI-driven information. It would have been obvious to a PHOSITA to incorporate Socher’s prompt-generation and LLM real-time search pipeline into Unnikrishnan’s map/street-view system, so that location-directed user input received at the client is automatically processed into an intermediate prompt, submitted to the generative AI system, and used to obtain up-to-date information for presentation in the mapping/street-view interface, because this is the predictable use of known query-processing and real-time retrieval techniques in a known mapping application.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan and Socher
After combining Unnikrishnan and Socher, the following is not explicitly disclosed:
automatically integrate the intermediate output and the mapping query into a final live street view;
Disclosure by Jones
Jones renders obvious:
automatically integrate the intermediate output and the mapping query into a final live street view;
See at least: “rendering the results into contents of virtual data layer... and displaying that virtual data layer.” ([0006]); “This technique essentially makes a data layer out of the query.” ([0035]); “Using the KML file, the client can display the... features... overlaid on the map.” ([0116]).
Rationale: Jones teaches integrating query results into a geographic viewer by rendering results into a virtual data layer and displaying the layer as an overlay (e.g., KML). Unnikrishnan teaches outputting an automatically updated street-view window to the client device. Therefore, it would have been obvious to a PHOSITA to apply Jones’s known results-to-overlay mechanism to the Unnikrishnan street-view window to present Socher’s AI-generated live information in association with the user’s mapping query, thereby automatically integrating the intermediate output and the mapping query into the final live street view.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to combine the references. Unnikrishnan provides the system with a live street view. Socher provides the Generative AI system that produces live intermediate output from prompts. Jones provides the "virtual data layer" mechanism to integrate dynamic external data into a geographic viewer. It would have been obvious to a PHOSITA to incorporate Socher’s prompt-generation and real-time retrieval pipeline into Unnikrishnan’s mapping/street-view system to produce live intermediate output responsive to a location-directed query, and to apply Jones’s known results-to-data-layer rendering and overlay technique to integrate that output with the displayed geographic view (including Unnikrishnan’s street-view window). This is the predictable use of known techniques (real-time query processing and data-layer overlays) in a known mapping UI to present up-to-date information in association with the user’s mapping query.
Regarding Claim 22,
The combination of Unnikrishnan, Socher, and Jones establishes the system of Claim 21, which is the basis for Claim 22.
Disclosure by Unnikrishnan
Unnikrishnan discloses:
wherein the circuitry configured to automatically process the mapping query
See at least: “implemented as systems comprising ... one or more processors ... wherein the one or more processors are operable to perform steps...” ([0057]).
Rationale: Unnikrishnan discloses the hardware circuitry (processors) performing the system steps.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly disclose:
to generate the intermediate prompt
is configured to determine whether the mapping query includes a word or phrase intended to request live or updated information;
and based at least in part on determining the mapping query includes the word or phrase intended to request the live or updated information, access a source of the live information regarding the subject.
Disclosure by Socher
Socher discloses:
is configured to determine whether the mapping query includes a word or phrase intended to request live or updated information;
See at least: “upon receiving input 122, the text generation server 110 may determine whether a real-time search is needed...” ([0028]); “generate a text output based on real-time search results that reflect most-up-to-date information according to a user query.” ([0025]); “natural language user input 402 may include a word, multiple words...” ([0070]).
Rationale: Socher defines the user query as including ‘a word, multiple words’ ([0070]) and performs the determination of whether a real-time search is needed according to that user query ([0028]); under BRI, the determination is therefore based at least in part on the words/phrases present in the received query. The claimed ‘mapping query’ is a type of user query directed to a geographic subject/location; thus Socher’s query-driven real-time determination applies to the mapping query.
and based at least in part on determining the mapping query includes the word or phrase intended to request the live or updated information,
See at least: “upon receiving input 122, the text generation server 110 may determine whether a real-time search is needed …” ([0028]).
Rationale: Socher conditions execution of the real-time search pathway on a determination made according to the user query. That determination is therefore based at least in part on the content of the mapping query, including whether it contains language indicative of a request for updated information.
access a source of the live information regarding the subject.
See at least: “the text generation server 110 may then convert the search query into customized search queries 111 a-n …” ([0033]); “The search submodule … transmits the customized queries to the corresponding APIs … and receives search results ….” ([0056]).
Rationale: Socher expressly discloses accessing external sources via APIs and receiving real-time search results, which constitutes accessing a source of the live information regarding the subject.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to modify the system of Claim 21 to include Socher's query analysis logic. Unnikrishnan's system receives user input; Socher teaches a specific logic for processing such input to differentiate between requests needing static knowledge versus those needing "most-up-to-date" data. Incorporating this logic would predictably enable the mapping system to efficiently allocate resources by only accessing live external sources when the user's specific words or phrases indicate a need for current information.
Regarding Claim 23,
The combination of Unnikrishnan, Socher, and Jones establishes the system of Claim 21, which is the basis for Claim 23.
Disclosure by Unnikrishnan
Unnikrishnan discloses:
wherein the circuitry configured to: receive the mapping query
See at least: “a client 115 executing a browser 120 connects to the map server 105 … to access a map and/or to make changes to features in the map …” ([0026]); “The front end module … receives user input information from the clients 115 …” ([0025]).
Rationale: Unnikrishnan expressly discloses receiving user input corresponding to a mapping query.
is configured to receive a request
See at least: “receives user input information … that includes information about user inputs that search, navigate, or edit the map and street view.” ([0025]).
Rationale: The received user input constitutes a request submitted to the mapping system.
for a live street view
See at least: “The street view module 131 … generate[s] a street view for display in a street view window.” ([0024]); “the street view module 131 … automatically updates the street view to reflect the appropriate changes.” ([0024]).
Rationale: Unnikrishnan expressly discloses generating and automatically updating a street view, satisfying “for a live street view” under the adopted construction.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan and Socher
Unnikrishnan and Socher do not explicitly disclose:
at an address.
Disclosure by Jones
Jones renders obvious:
at an address.
See at least: “In general, when the user enters a search query … it is put into a request and sent to the GIS server system … [which] responds with the appropriate data ….” ([0047]); “A view-dependent network link makes a search query when triggered by the motion of the view specification ….” ([0035]).
Rationale: Jones teaches that user-entered geographic search queries are packaged into requests and transmitted to a GIS server for resolution to geographic data. A PHOSITA would have understood that an address string (e.g., street number and street name) is a conventional and routine form of geographic search query, and that using an address as the location identifier is a predictable implementation choice for the taught GIS query mechanism. Jones therefore at least suggests, and renders obvious, receiving a request for a live street view at an address.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Unnikrishnan, Socher, and Jones. Unnikrishnan teaches a query-driven map and street-view interface, Socher teaches processing a received query to determine how to obtain up-to-date information, and Jones teaches transmitting user-entered geographic queries—such as address-based queries—to a GIS server and associating the retrieved information with a displayed geospatial view. Applying Jones’s known location-query mechanism within the Unnikrishnan/Socher system to specify the location for which a street view is requested represents the predictable use of a known technique to improve a known mapping and street-view system and would have been expected to operate according to its established function.
Regarding Claim 24,
The combination of Unnikrishnan, Socher, and Jones establishes the system of Claim 23, which is the basis for Claim 24.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly disclose:
wherein the circuitry configured to automatically process the mapping query to generate the intermediate prompt is configured to access a source of the live information regarding the address.
Disclosure by Socher
Socher provides teachings for:
wherein the circuitry configured to automatically process the mapping query to generate the intermediate prompt is configured to access a source of the live information regarding the address.
See at least: “the text generation server 110 may then convert the search query into customized search queries 111 a-n ….” ([0033]); “The search submodule … transmits the customized queries to the corresponding APIs … and receives search results ….” ([0056]); “the generative AI system may generate a text output based on real-time search results that reflect most-up-to-date information according to a user query.” ([0021]).
Rationale: Socher teaches that upon receiving input, the system automatically processes the query by converting the search query into customized search queries ([0033]) and then transmitting the customized queries to corresponding APIs and receiving search results ([0056]), where the results are real-time search results reflecting “most-up-to-date information” according to the user query ([0021]). Under BRI, this disclosed automatic processing pipeline—i.e., forming the intermediate query text (the intermediate prompt) and using that intermediate query text to perform the API-based retrieval—constitutes the circuitry being configured to access a source of live information during the processing that generates the intermediate prompt. Because Claim 24 depends from Claim 23 (request at an address), the accessed live information is regarding the address.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to arrive at the system of Claim 24. As established for Claim 21 and carried forward through Claim 23, Unnikrishnan discloses a query-driven map and street-view interface that receives user input and generates an automatically updated street view, Jones discloses an established mechanism for associating user location queries with view presentation, and Socher discloses automatic query processing that includes converting a query into customized queries and accessing external sources by transmitting those queries to APIs and receiving real-time search results. A PHOSITA would have been motivated to incorporate Socher’s source-accessing query processing into the Unnikrishnan/Jones mapping query workflow so that, when processing a mapping query requesting a live street view at an address, the processing step includes accessing a source of live information regarding that address, as recited in Claim 24. This combination represents the predictable application of known techniques according to their established functions.
Regarding Claim 27,
The combination of Unnikrishnan, Socher, and Jones establishes the system of Claim 22, which is the basis for Claim 27.
Disclosure by Unnikrishnan
Unnikrishnan discloses:
wherein the circuitry is configured to: receive a request for a map at a location,
See at least: “a client 115 executing a browser 120 connects to the map server 105 … to access a map” ([0026]); “The front end module … receives user input information … that search, navigate, or edit the map” ([0025]).
Rationale: Unnikrishnan expressly teaches receiving user input for accessing/navigating a map, i.e., a request for a map.
wherein the subject includes the map and the location,
See at least: “…includes information about…inputs that search, navigate, or edit the map and street view…for updating the maps, street views, interactive controls, and markers.” ([0025]).
Rationale: Unnikrishnan teaches that mapping queries relate simultaneously to map content and specific locations, satisfying that the subject includes the map and the location.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly disclose:
wherein the circuitry configured to automatically process the mapping query to generate the intermediate prompt is configured to access a source of the live information regarding the location.
Disclosure by Socher
Socher discloses:
wherein the circuitry configured to automatically process the mapping query to generate the intermediate prompt is configured to access a source of the live information regarding the location.
See at least: “the text generation server 110 may determine whether a real-time search is needed” ([0028]); “customized search queries 111 a-n are sent to respective data sources 103 a-n through respective APIs” ([0033]); “the generative AI system may generate a text output based on real-time search results that reflect most-up-to-date information according to a user query” ([0025]).
Rationale: Socher expressly discloses that during automatic query processing, the system determines whether live information is required and accesses external sources via APIs to obtain real-time information according to the user query. When applied to Unnikrishnan’s location-based mapping query, the accessed live information is regarding the location, as recited.
Disclosure by Jones
Jones further supports integration of the accessed live information with the map view:
receive a request for a map at a location
See at least: “The bounding-box of the current view is appended to the URL …” ([0146]); “The <ViewFormat> element … BBOX=[bboxWest] …” ([0158]).
Rationale: Jones expressly teaches associating a map request/query with a current view specification (e.g., BBOX coordinates), which defines the location for which map data is requested.
wherein the subject includes the map and the location
See at least: “A view-dependent network link makes a search query when triggered by the motion of the view specification. This technique essentially makes a data layer out of the query” ([0035]); “data can then be populated onto a map” ([0035]).
Rationale: Jones ties the query/request to the map view (map context) and the view specification (location context); thus, the subject includes the map and the location.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to modify Unnikrishnan’s location-based map request handling to incorporate Socher’s automatic determination of whether live information is needed and corresponding access to external sources via APIs, and to apply Jones’s established query-to-map integration mechanism to associate that retrieved live information with the requested map location. This represents a predictable use of known query processing and information-retrieval techniques to provide up-to-date, location-specific information within a map interface, yielding the expected resulting system recited in Claim 27.
Regarding Claim 28,
The combination of Unnikrishnan, Socher, and Jones establishes the system of Claim 22, which is the basis for Claim 28.
Disclosure by Unnikrishnan
Unnikrishnan discloses:
wherein the circuitry is configured to: receive a request for a place of business at a location,
See at least: “The front end module 132 also receives user input information … that includes information about user inputs that search, navigate, or edit the map and street view.” ([0025]); “Both visual markers 410 … represent the location of the business ‘Haircut Salon.’” ([0039]).
Rationale: Unnikrishnan expressly discloses receiving user input for search/navigate operations in a map/street-view interface, and further teaches a concrete place of business (“Haircut Salon”) having a location represented by linked markers; thus the received user input encompasses a request directed to a place of business at a location.
wherein the subject includes the place of business and the location,
See at least: “Both visual markers 410 … represent the location of the business ‘Haircut Salon.’” ([0039]).
Rationale: Unnikrishnan’s disclosure ties the place of business (“Haircut Salon”) to its location via the markers; thus the request’s subject includes both the place of business and the location.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly disclose:
and the circuitry configured to automatically process the mapping query to generate the intermediate prompt is configured to access a source of the live information regarding the business and/or the location.
Disclosure by Socher
Socher discloses:
and the circuitry configured to automatically process the mapping query to generate the intermediate prompt is configured to access a source of the live information regarding the business and/or the location.
See at least: “upon receiving input 122, the text generation server 110 may determine whether a real-time search is needed …” ([0028]); “the text generation server 110 may then convert the search query into customized search queries … [which] are sent to respective data sources … through respective APIs … [and] return query results ….” ([0033]); “the generative AI system may generate a text output based on real-time search results that reflect most-up-to-date information according to a user query.” ([0025]).
Rationale: Socher expressly discloses that, during query processing, the system determines whether to perform a real-time search, generates queries, and accesses data sources via APIs to obtain real-time search results; thus the processing “includes accessing a source of the live information.” When the received mapping query’s subject is a business and/or location, the accessed live information is regarding the business and/or the location “according to a user query.”
Motivation to Combine Unnikrishnan and Socher
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan and Socher before them, to incorporate Socher’s disclosed real-time source-accessing query processing via APIs into Unnikrishnan’s disclosed map/street-view system that receives user input to search/navigate and that presents a business at a location, because both references address responding to user queries with automatically obtained information, and the modification predictably yields the expected result of obtaining live information responsive to a query whose subject is the business and/or location.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan and Socher
After combining Unnikrishnan and Socher, the following portion is not explicitly disclosed:
receive a request for a place of business at a location, wherein the subject includes the place of business and the location
Disclosure by Jones
Jones discloses:
receive a request for a place of business at a location, wherein the subject includes the place of business and the location,
See at least: “The GUI … includes layer control … [for] data points of geographic interest (e.g., points of interest) … example … layers … (e.g., Lodging, Dining … Coffee Shops …).” ([0083]); “ … The Home Store, Site #3 …” ([0137]).
Rationale: Jones expressly teaches requesting/displaying points of interest including concrete places of business (e.g., Dining, Coffee Shops) and naming a business as a Placemark (“The Home Store, Site #3”), which inherently couples the place of business with a geographic location (i.e., a “location” in the mapping module). Thus Jones supplies the explicit “place of business at a location” request/subject framing for the mapping environment, without duplicating Socher’s separate “accessing a source of the live information” teaching.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to implement Unnikrishnan’s integrated map/street-view interface for a business at a location using Jones’s known points-of-interest/placemark constructs for representing and requesting places of business at locations, and to further incorporate Socher’s disclosed processing that accesses live information sources via APIs responsive to the user query, because these teachings are in the same mapping/search UI field and their combination predictably yields the expected result: upon receiving a request whose subject includes a place of business and a location, the system’s processing accesses a source of live information regarding the business and/or location.
Regarding Claim 29,
The combination of Unnikrishnan, Socher, and Jones establishes the system of Claim 21, which is the basis for Claim 29.
Disclosure by Unnikrishnan
Unnikrishnan discloses:
and append the final live street view
See at least: “the street view module 131 … generate a street view for display in a street view window ….” ([0024]); “automatically updates the street view ….” ([0024])
Rationale: Unnikrishnan teaches generating a street view in a street-view window and automatically updating it, establishing the “final live street view” presentation environment into which additional information may be added.
Claim Limitations Not Explicitly Disclosed by Unnikrishnan
Unnikrishnan does not explicitly disclose:
wherein the circuitry is configured to: receive a request for driving instructions from a current location to a destination address;
determine a route corresponding to the driving instructions from the current location to the destination address;
pre-process enhanced information for at least one location, point of interest, or data point along the route;
and append the final live street view to include the pre-processed enhanced information in real time while the device moves along the route.
Disclosure by Socher
Socher provides teachings for:
pre-process enhanced information
See at least: “determine whether a real-time search is needed …” ([0028]); “convert the search query into customized search queries …” ([0033]); “transmits the customized queries … through respective APIs … and receives search results …” ([0056]); “generate a text output based on real-time search results …” ([0025]).
Rationale: Socher teaches automatically processing an input query by (i) forming customized query strings, (ii) accessing external sources via APIs to obtain results, and (iii) generating output content from those results. This constitutes pre-processing enhanced information (i.e., preparing retrieved/generated informational content prior to rendering in the UI).
in real time
See at least: “determine whether a real-time search is needed ….” ([0028]); “generate a text output based on real-time search results that reflect most-up-to-date information according to a user query.” ([0021]).
Rationale: Socher expressly ties generation to “real-time search results,” supporting the “in real time” aspect of the enhanced information.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan and Socher
After combining Unnikrishnan and Socher, the following is not explicitly disclosed:
wherein the circuitry is configured to: receive a request for driving instructions from a current location to a destination address;
determine a route corresponding to the driving instructions from the current location to the destination address;
pre-process enhanced information for at least one location, point of interest, or data point along the route;
and append the final live street view to include the pre-processed enhanced information in real time while the device moves along the route.
Disclosure by Jones
Jones discloses:
wherein the circuitry is configured to: receive a request for driving instructions
See at least: “Directions mode provides a route between a current location and a target location …” ([0080]); “The user can also request driving or walking directions …” ([0090]).
Rationale: Jones expressly teaches receiving a user request for directions in a “Directions mode,” satisfying receiving a request for driving instructions.
from a current location
See at least: “Directions mode provides a route between a current location and a target location ….” ([0080])
Rationale: Jones expressly recites “current location,” satisfying from a current location.
to a destination address
See at least: “Directions mode provides a route between a current location and a target location ….” ([0080])
Rationale: Jones expressly teaches routing to a “target location.” An address is a conventional way of specifying such a destination for directions, and using an address as the destination identifier would have been an obvious, routine implementation choice for the taught “target location” in a directions request.
determine a route corresponding to the driving instructions from the current location to the destination address;
See at least: “Directions mode provides a route between a current location and a target location ….” ([0080])
Rationale: Jones expressly provides “a route between” the recited endpoints, satisfying determining a route corresponding to the driving instructions from the current location to the destination address.
pre-process enhanced information for at least one location, point of interest, or data point along the route;
See at least: “Directions mode can incorporate intermediate waypoints … and points of interest along the route ….” ([0080]); “In local search mode, the map can show points of interest near the current location …” ([0080])
Rationale: Jones expressly teaches incorporating and presenting “points of interest along the route,” which constitutes obtaining/processing route-associated POI content for use in presentation, meeting pre-processing enhanced information for at least one … point of interest … along the route.
and append the final live street view to include the pre-processed enhanced information
See at least: “This technique essentially makes a data layer out of the query.” ([0180]); “That data can then be populated onto a map.” ([0180])
Rationale: Jones expressly teaches forming a query-driven “data layer” and populating returned data into the displayed view, which corresponds to adding (i.e., appending) enhanced, query-driven information into the displayed mapping/street-view presentation established by Unnikrishnan.
in real time
See at least: “A time-based network link fetches placemark files when triggered to do so by the passage of time ….” ([0178]); “periodically queried for data ….” ([0178]); “Refresh Interval specifies a time interval for periodic view refresh.” ([0180])
Rationale: Jones expressly teaches periodic querying/refreshing based on time intervals, supporting updating presented information in real time (i.e., dynamically during use, via periodic refresh).
while the device moves along the route.
See at least: “A view-dependent network link makes a search query when triggered by the motion of the view specification.” ([0180])
Rationale: Jones expressly teaches triggering a query based on “motion,” which corresponds to updating/querying as the user/device progresses (moves) during navigation along the route.
Motivation to Combine Unnikrishnan, Socher, and Jones
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, and Jones before them, to combine these references. Jones expressly teaches a Directions mode that generates a route between a “current location” and a destination (“target location”) and identifies points of interest along that route. Unnikrishnan teaches a street-view interface that generates a street view for display and automatically updates that view in response to user interaction. Socher teaches retrieving and generating real-time, most-up-to-date information based on user queries. Applying Jones’s route computation and query-driven, refreshable data-layer mechanism to Unnikrishnan’s automatically updating street-view interface, while sourcing route-associated enhanced information from Socher’s real-time retrieval and generation pipeline, represents a predictable integration of known techniques. The combination yields the expected result of appending enhanced, route-associated information to a live street-view presentation as the device moves along the route.
Claims 5 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Unnikrishnan (US 20140053077 A1) in view of Socher (US 20240020538 A1), further in view of Jones (US 20130132375 A1), and further in view of Filip (US 9471834 B1).
Regarding Claim 5,
The combination of Unnikrishnan, Socher, and Jones establishes the method of Claim 3, which is the basis for Claim 5.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan, Socher, and Jones
Unnikrishnan, Socher, and Jones do not explicitly disclose a method:
comprising: accessing an image at the address;
determining a date and/or a time of the image at the address;
identifying an object in the image at the address;
modeling a change in a condition of the object
from the date and/or the time of the image at the address
to a current date and/or a current time;
and appending the final live street view
to include the modeled change in the condition of the object
from the date and/or the time of the image at the address
to a current date and/or a current time.
Disclosure by Filip
Filip provides teachings for a method:
comprising: accessing an image at the address;
See at least: “In block 702, a request for map information is received … information associated with a street address …” (FIG. 7, Block 702); “In block 704, a first image is provided of a geographical location corresponding to the map data …” (FIG. 7, Block 704)
Rationale: Filip teaches receiving a request associated with a “street address” and providing a “first image” of the corresponding location, which teaches accessing an image at the address.
determining a date and/or a time of the image at the address;
See at least: “the determination may further be based on information such as the date the first image was taken …” (FIG. 7, Block 708; Col. 8, ll. 1-2); “indicating a date, time, location, or other information regarding when and where the … image was taken ….” (Col. 6, ll. 64-66)
Rationale: Filip expressly teaches using “the date the first image was taken” and further discloses image metadata including “date, time,” which teaches determining a date and/or a time of the image at the address.
identifying an object in the image at the address;
See at least: “information relating to a status of an object in the first image is received …” (FIG. 7, Block 706)
Rationale: Filip expressly recites an “object in the first image,” which teaches identifying an object in the image at the address (the “first image” being the address-associated image of FIG. 7, Block 704).
modeling a change in a condition of the object from the date and/or the time of the image at the address to a current date and/or a current time;
See at least: “information relating to a status of an object in the first image … indicating that the object is out of date …” (FIG. 7, Block 706); “In block 708, it is determined whether the first image is to be updated … based on … the date the first image was taken …” (FIG. 7, Block 708); “the new image 610 includes updated objects …” (Col. 7, ll. 9-10); “the new image may be magnified, reduced, shifted, translated, interpolated, enhanced, or otherwise processed. The processed new image may then be used to update the original image ….” (Col. 7, ll. 28-31)
Rationale: Filip teaches (i) determining an object is “out of date” and using “the date the first image was taken” as part of the update determination, and (ii) obtaining a “new image” that includes “updated objects.” Filip further teaches computational operations (e.g., “magnified,” “shifted,” “translated,” “interpolated”) performed to align/insert updated imagery relative to the original image. These teachings support modeling a change in a condition of the object from the date and/or the time of the image at the address to a current date and/or a current time, where the “new image” with “updated objects” represents the later/current condition relative to the earlier “date the first image was taken,” and the disclosed processing operations constitute a concrete computational mechanism for representing that change in the resulting view.
and appending the final live street view to include the modeled change in the condition of the object from the date and/or the time of the image at the address to a current date and/or a current time.
See at least: “In block 712, the first image is updated with the updated image …” (FIG. 7, Block 712); “portions of the first image including the objects identified as out of date may be updated with corresponding portions of the updated image.” (FIG. 7, Block 712)
Rationale: Filip teaches updating the first image by updating portions corresponding to “objects identified as out of date” with “corresponding portions” of an “updated image.” When applied within the already-established Claim 3 street-view framework (Unnikrishnan + Socher + Jones), updating the image content presented in the street-view output teaches appending the final live street view to include the modeled change in the condition of the object from the date and/or the time of the image at the address to a current date and/or a current time, because the displayed street-view output is modified to include the updated depiction corresponding to the later/current condition.
Motivation to Combine Unnikrishnan, Socher, Jones, and Filip
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, Jones, and Filip before them, to modify the established Unnikrishnan + Socher + Jones method of Claim 3 by incorporating Filip’s disclosed address-associated image updating technique (including use of “the date the first image was taken,” determination that an “object … is out of date,” receipt of a “new image” including “updated objects,” and updating portions of the first image with corresponding portions of the updated image), because Filip is directed to maintaining accurate, up-to-date location imagery associated with a “street address,” and incorporating that known image-update technique into the established street-view presentation framework is a predictable use of known techniques according to their established functions to yield the expected result of presenting a street-level view that reflects updated object conditions over time, as recited in Claim 5.
Regarding Claim 25,
The combination of Unnikrishnan, Socher, and Jones establishes the system of Claim 23, which is the basis for Claim 25.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan, Socher, and Jones
Unnikrishnan, Socher, and Jones do not explicitly disclose:
wherein the circuitry is configured to:
access an image at the address;
determine a date and/or a time of the image at the address;
identify an object in the image at the address;
model a change in a condition of the object from the date and/or the time of the image at the address to a current date and/or a current time;
and append the final live street view to include the modeled change in the condition of the object from the date and/or the time of the image at the address to a current date and/or a current time.
Disclosure by Filip
Filip provides teachings for a system:
wherein the circuitry is configured to: access an image at the address;
See at least: “In block 702, a request for map information is received … information associated with a street address …” (FIG. 7, Block 702); “In block 704, a first image is provided of a geographical location corresponding to the map data …” (FIG. 7, Block 704)
Rationale: Filip teaches receiving a request associated with a “street address” and providing a “first image” of the corresponding location, which teaches accessing an image at the address.
determine a date and/or a time of the image at the address;
See at least: “the determination may further be based on information such as the date the first image was taken …” (FIG. 7, Block 708; Col. 8, ll. 1-2); “indicating a date, time, location, or other information regarding when and where the … image was taken ….” (Col. 6, ll. 64-66)
Rationale: Filip expressly teaches using “the date the first image was taken” and further discloses image metadata including “date, time,” which teaches determining a date and/or a time of the image at the address.
identify an object in the image at the address;
See at least: “portions of the first image including the objects identified as out of date may be updated with corresponding portions of the updated image” (FIG. 7, Block 712)
Rationale: Filip teaches that objects in the first image are “identified as out of date” and that the system updates “portions of the first image including” those objects with corresponding portions of an updated image (FIG. 7, Block 712). Identifying which portions of the image include the object, for purposes of replacing those portions, at least requires the system to identify (i.e., locate/associate) the object in the image.
model a change in a condition of the object from the date and/or the time of the image at the address to a current date and/or a current time;
See at least: “information relating to a status of an object in the first image … indicating that the object is out of date …” (FIG. 7, Block 706); “In block 708, it is determined whether the first image is to be updated … based on … the date the first image was taken …” (FIG. 7, Block 708); “the new image 610 includes updated objects …” (Col. 7, ll. 9-10); “the new image may be magnified, reduced, shifted, translated, interpolated, enhanced, or otherwise processed. The processed new image may then be used to update the original image ….” (Col. 7, ll. 28-31)
Rationale: Filip teaches using date/time information for the first image and determining whether that image/object is out of date, then obtaining an updated image depiction (“new image … includes updated objects”) and computationally processing it (e.g., translate/interpolate/enhance) to update the original image. Under BRI, constructing and inserting the updated depiction into the displayed view is a computational modeling/representation of the object’s changed condition from the earlier image date/time to a later (i.e., updated/current) time.
and append the final live street view to include the modeled change in the condition of the object from the date and/or the time of the image at the address to a current date and/or a current time.
See at least: “In block 712, the first image is updated with the updated image …” (FIG. 7, Block 712); “portions of the first image including the objects identified as out of date may be updated with corresponding portions of the updated image.” (FIG. 7, Block 712)
Rationale: Filip teaches updating the displayed image by replacing portions corresponding to out-of-date objects with corresponding portions of an updated image (FIG. 7, Block 712). When incorporated into the already-established street-view output of Claim 23 (Unnikrishnan/Socher/Jones), this teaches appending/modifying the final live street view to include the updated depiction that represents the object’s changed condition over time.
Motivation to Combine Unnikrishnan, Socher, Jones, and Filip
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, Jones, and Filip before them, to modify the established Unnikrishnan + Socher + Jones system of Claim 23 by incorporating Filip’s disclosed address-associated image updating technique (including use of “the date the first image was taken,” determination that an “object … is out of date,” receipt of a “new image” including “updated objects,” and updating portions of the first image with corresponding portions of the updated image), because Filip is directed to maintaining accurate, up-to-date location imagery associated with a “street address,” and incorporating that known image-update technique into the established street-view presentation framework is a predictable use of known techniques according to their established functions to yield the expected result of presenting a street-level view whose imagery is updated to reflect changed object conditions relative to an earlier image date/time, as recited in Claim 25.
Claims 6 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Unnikrishnan (US 20140053077 A1) in view of Socher (US 20240020538 A1), further in view of Jones (US 20130132375 A1), further in view of Filip (US 9471834 B1), and further in view of Stein (US 20070154068 A1).
Regarding Claim 6,
The combination of Unnikrishnan, Socher, Jones, and Filip establishes the method of Claim 5, which is the basis for Claim 6.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan, Socher, Jones, and Filip
Unnikrishnan, Socher, Jones, and Filip do not explicitly disclose a method:
comprising: identifying a shadow cast by the object,
and determining a size of the object
based at least in part on the shadow.
Disclosure by Stein
Stein teaches a method:
comprising: identifying a shadow cast by the object,
See at least: “shadow 23, as cast by vehicle 11 on road surface 20 …” ([0040])
Rationale: Stein expressly recites “shadow … as cast by [the] vehicle …,” which teaches identifying a shadow cast by the object (the “vehicle 11” being the object casting the shadow).
and determining a size of the object based at least in part on the shadow.
See at least: “The measurements of the dimension are preferably performed by: …” ([0019]); “The height of the lower edge is determined based on … an image of a shadow on a road surface …” ([0019])
Rationale: Stein expressly teaches “measurements of the dimension,” which teaches determining a size of the object, and further teaches that such determination is “determined based on … an image of a shadow,” which teaches based at least in part on the shadow.
Motivation to Combine Unnikrishnan, Socher, Jones, Filip, and Stein
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, Jones, Filip, and Stein before them, to arrive at the method of Claim 6. Unnikrishnan, Socher, and Jones collectively establish a query-driven, dynamically updated street-view presentation. Filip teaches identifying objects in address-associated imagery and updating the view to reflect changes in object condition over time. Stein teaches identifying a shadow cast by an object and determining object size based at least in part on the shadow. A person of ordinary skill in the art would have been motivated to incorporate Stein’s shadow-based size determination into Filip’s object-analysis workflow, as shadow-based measurement is a known and predictable computer-vision technique for extracting physical dimensions from imagery. Integrating this technique into the established street-view update pipeline yields the expected result of determining object size as part of updating the street-level view, without altering the fundamental operation of the system.
Regarding Claim 26,
The combination of Unnikrishnan, Socher, Jones, and Filip establishes the system of Claim 25, which is the basis for Claim 26.
Claim Limitations Not Explicitly Disclosed by the Combination of Unnikrishnan, Socher, Jones, and Filip
Unnikrishnan, Socher, Jones, and Filip do not explicitly disclose a system:
wherein the circuitry is configured to: identify a shadow cast by the object,
and determine a size of the object based at least in part on the shadow.
Disclosure by Stein
Stein discloses a system:
wherein the circuitry is configured to: identify a shadow cast by the object,
See at least: “shadow 23, as cast by vehicle 11 on road surface 20 …” ([0040])
Rationale: Stein expressly recites “shadow … as cast by [the] vehicle …,” which discloses a system identifying a shadow cast by the object (the “vehicle 11” being the object casting the shadow).
and determine a size of the object based at least in part on the shadow.
See at least: “The measurements of the dimension are preferably performed by: …” ([0019]); “The height of the lower edge is determined based on … an image of a shadow on a road surface …” ([0019])
Rationale: Stein expressly teaches “measurements of the dimension,” which discloses a system capable of determining a size of the object, and further teaches that such determination is “determined based on … an image of a shadow,” which teaches based at least in part on the shadow.
Motivation to Combine Unnikrishnan, Socher, Jones, Filip, and Stein
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having Unnikrishnan, Socher, Jones, Filip, and Stein before them, to arrive at the system of Claim 26. Unnikrishnan, Socher, and Jones collectively establish a query-driven, dynamically updated street-view presentation. Filip teaches identifying objects in address-associated imagery and updating the view to reflect changes in object condition over time. Stein teaches identifying a shadow cast by an object and determining object size based at least in part on the shadow. A person of ordinary skill in the art would have been motivated to incorporate Stein’s shadow-based size determination into Filip’s object-analysis workflow, as shadow-based measurement is a known and predictable computer-vision technique for extracting physical dimensions from imagery. Integrating this technique into the established street-view update pipeline yields the expected result of determining object size as part of updating the street-level view, without altering the fundamental operation of the system.
Response to Arguments
Applicant's arguments filed 10/06/2025 have been fully considered.
Response to 101 Arguments:
Applicant’s arguments have been fully considered but are not persuasive. The rejection of Claims 1–10, 14, and 21–29 under 35 U.S.C. § 101 is maintained.
Response to Applicant’s Step 2A (Prong One) arguments
Applicant argues the claims “do not recite an abstract idea” because they involve generative AI, live data, and multi-stage computation that cannot practically be performed in the human mind. This argument is not persuasive. The rejection is not premised on the “mental processes” grouping, but on the data analysis grouping (i.e., collecting, analyzing/manipulating, and outputting information). Claim 1 recites receiving a mapping query, generating an intermediate prompt, generating intermediate output (including “live information”) using a generative AI, and generating/outputting a “final live street view.” These limitations describe information processing and presentation of results, which is an abstract idea under the 2019 PEG regardless of whether it is performed by a generative AI system or requires substantial computation.
Response to Applicant’s Step 2A (Prong Two) “practical application” arguments
Applicant asserts the claims provide a specific technological solution and “simulate live reality,” solving deficiencies of conventional mapping (e.g., outdated imagery). However, the claims are drafted at a result-oriented functional level and do not recite how such improvements are achieved. For example, the claims do not require any specific geospatial registration technique, tile/layer update protocol, rendering pipeline, latency/bandwidth constraint, caching/prefetch mechanism, validation of live data, or other non-conventional mapping/computer mechanism that would integrate the abstract idea into a practical application. Merely stating the output is a “final live street view” and that the intermediate output “comprises live information” does not supply a technological implementation; it merely describes the intended use and desired result of the abstract idea.
With respect to Claim 14, Applicant relies on “map layer update” as a technical feature. This argument is not persuasive because “map layers” and updating them with information are conventional in mapping/graphics systems. Claim 14 does not recite a particular map-layer data structure, update format, conflict-resolution rule, or rendering technique that would constitute a non-conventional improvement to computer or mapping technology. The claim broadly covers generating content and modifying a map based on that content, which remains information generation and presentation.
Response to Applicant’s Step 2B “significantly more” arguments
Applicant’s assertions of a “novel, non-obvious AI architecture,” “automated prompt engineering,” “integration of non-visual live data,” “temporal condition change modeling,” “shadow analysis,” and “network bandwidth optimization” are largely not commensurate with the claim scope. The independent claims do not recite the specific technical mechanisms Applicant describes (e.g., particular prompt schemas, verification constraints, specific CV algorithms, video encoding efficiency techniques, motion parallax adjustments, bandwidth-adaptive streaming, or real-time scheduling). Where the dependent claims add image/object/shadow concepts (e.g., Claims 5–6 and 25–26), they are recited at a high level of abstraction (e.g., “modeling a change,” “determining a size… based on a shadow”) without a particular technical technique or computer improvement, and therefore still amount to abstract data analysis and result presentation.
Further, Applicant’s reliance on “non-obviousness” does not establish eligibility under 101. Eligibility turns on whether the claims are directed to a judicial exception and whether the additional elements integrate the exception into a practical application or add significantly more—not on whether the subject matter may be new.
System claims
Claim 21 and its dependents recite substantially the same abstract workflow in system form (“circuitry configured to…”). Merely presenting the abstract idea as “circuitry” does not change the eligibility analysis.
Examiner 101 Conclusion
Accordingly, Claims 1–10, 14, and 21–29 remain directed to an abstract idea (collecting, analyzing/manipulating, and outputting information) and fail to recite additional elements that integrate the abstract idea into a practical application or amount to significantly more. The 101 rejection is therefore maintained.
Examiner Response to Applicant’s 103 Arguments
Arguments Directed to Withdrawn References
Applicant’s arguments directed to the rejection of Claims 1–29 over Unnikrishnan in view of Flynn, LeBeau, Alpert, Byun, and Kim have been fully considered. However, that rejection has been withdrawn. The claims are presently rejected under 35 U.S.C. § 103 over Unnikrishnan in view of Socher and Jones, and further in view of Filip and Stein, as set forth above. Accordingly, Applicant’s arguments directed to the alleged deficiencies of Flynn, LeBeau, Alpert, Byun, and Kim do not traverse the present grounds of rejection and are therefore not persuasive.
Arguments Regarding “Generative AI” and “Intermediate Prompt”
(Claims 1, 14, 21): Applicant argues that the prior art fails to teach automatically processing a mapping query to generate an intermediate prompt for input into a generative artificial intelligence system. This argument is not persuasive. As explained above, Socher discloses automatic processing of received user input via a natural-language preprocessing module and an LLM interface that generates system-derived query or prompt text and inputs that text into one or more large language models to obtain output. Under the broadest reasonable interpretation, such system-generated query text corresponds to the claimed “intermediate prompt,” and the LLMs constitute a generative artificial intelligence system. Applicant’s argument does not address these teachings of Socher and therefore does not overcome the rejection.
Arguments Regarding “Live Information” and “Integration”
Applicant argues that the prior art fails to teach generating intermediate output comprising “live information” and integrating such output into a street view. This argument is not persuasive. Socher expressly teaches accessing external data sources via APIs to obtain “real-time” or “most up-to-date” information according to a user query, which corresponds to the claimed “live information.” Jones teaches automatically integrating query-driven, dynamically retrieved information into a geospatial viewer using virtual data layers or overlays. Unnikrishnan teaches generating and automatically updating a street-view display. Taken together, the applied references teach or render obvious integrating live, query-responsive information into a street-view presentation, as recited in the claims. Applicant has not identified any claim limitation that is not addressed by this combination.
Arguments Alleging Teaching Away, Incompatibility, or Improper Combination
Applicant asserts, either explicitly or implicitly, that the cited references are incompatible, teach away from one another, or would not be combined absent hindsight. These arguments are not persuasive. None of the applied references criticizes, discredits, or discourages the use of the techniques disclosed by the other references. To the contrary, Unnikrishnan, Socher, and Jones are all directed to processing user queries and presenting dynamically updated information in a mapping or geospatial interface. The combination involves applying known query-processing and information-integration techniques to a known mapping system, which constitutes a predictable use of prior art elements according to their established functions. Applicant has not identified any technical incompatibility that would have prevented a person of ordinary skill in the art from combining the references as proposed.
Arguments Alleging Hindsight Reconstruction
Applicant further contends that the rejection relies on impermissible hindsight. This argument is not persuasive. The motivation to combine is explicitly supported by the references themselves and by the ordinary expectations of a person of ordinary skill in the art. The applied references address related problems in the same technical field—namely, responding to user queries with dynamically updated, location-based information—and their combination yields no unexpected result. The rejection does not rely on Applicant’s disclosure for guidance but instead on the express teachings of the cited prior art.
Arguments Regarding “Modeling a Change in a Condition of an Object”
Applicant argues that the prior art fails to teach “modeling a change” in an object’s condition over time. This argument is not persuasive in view of Filip. Filip teaches determining whether an object depicted in a first image is “out of date” based on temporal information, obtaining updated imagery including updated objects, and computationally processing and inserting the updated depiction into the displayed image. Under the broadest reasonable interpretation, this temporal comparison and computational update constitute modeling a change in the condition of the object from an earlier date/time to a current date/time. Applicant’s arguments do not rebut these teachings.
Arguments Regarding “Shadow Identification” and “Size Determination”
Applicant argues that the prior art fails to teach identifying a shadow cast by an object and determining object size based at least in part on the shadow. This argument is not persuasive in view of Stein, which expressly teaches identifying a shadow cast by an object and using that shadow to determine object dimensions. Stein therefore teaches the claimed limitation as recited.
Arguments Alleging Non-Analogous Art
To the extent Applicant contends that any of the applied references constitute non-analogous art, this argument is not persuasive. Each applied reference is directed to systems for processing visual, spatial, or query-driven information in a computing environment and is reasonably pertinent to the problem addressed by the claims. A person of ordinary skill in the art would have looked to such references when seeking to enhance a mapping or street-view system with dynamically updated information.
Conclusion Regarding the Rejections Under 35 U.S.C. § 103
For the foregoing reasons, Applicant's arguments have been fully considered but are not persuasive. The applied references—Unnikrishnan, Socher, Jones, Filip, and Stein—teach or render obvious the amended claim limitations. Accordingly, Claims 1–10, 14, and 21–29 remain prima facie obvious for the reasons set forth in this Office Action.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Lee (US 11003865 B1): Lee teaches dynamically updating a virtual or map-based representation using newly obtained information associated with real-world locations, including replacing outdated visual or object data with current data. Lee’s disclosure of updating location-linked visual models in response to newly available information aligns with the claimed generation and integration of live or updated content into a displayed street-level or map-based view.
Skidmore (US 20190325662 A1): Skidmore is relevant because it teaches modifying and updating virtual models that represent real-world environments based on newly received information, including altering attributes or representations of objects at specific locations. Skidmore addresses the technical problem of maintaining accuracy in virtual representations as real-world conditions change, which corresponds to the claimed updating and enhancement of displayed views using current, location-specific information.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWABUSAYO ADEBANJO AWORUNSE whose telephone number is (571)272-4311. The examiner can normally be reached M - F (8:30AM - 5PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jelani Smith, can be reached at (571) 270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUWABUSAYO ADEBANJO AWORUNSE/Examiner, Art Unit 3662
/JELANI A SMITH/Supervisory Patent Examiner, Art Unit 3662