DETAILED ACTION
Claims 1-15 are pending in the present application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of British patent application number GB2118547.5 filed on 12/20/2021 has been received and made of record.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 08/12/2024 and 06/18/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Objections
Claims 2 and 12 are objected to because of the following informalities: “generating Geographic Information System, GIS,” should be “generating Geographic Information System (GIS)”. Appropriate correction is required.
Claim 3 is objected to because of the following informalities: “The method of claim 3” should be “The method of claim 2”. Appropriate correction is required.
Claims 10-13 are objected to because of the following informalities: “visualisation” should be “visualization” (American English). Appropriate correction is required.
Claim 13 is objected to because of the following informalities: “a virtual reality, VR,” should be “a virtual reality (VR)”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Regarding claim 3, the phrase "such as" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 6-7, 10-11, and 13-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. PGPub 2015/0109338 to McKinnon et al.
Regarding claim 1, McKinnon et al. teach a method for processing data performed by one or more processors (abstract, par 0028), the method comprising:
receiving input data that comprises a plurality of elements (par 0015, “an AR management engine that is configured to obtain an initial map of an area of interest from the area data within the area database”, par 0032-0033, “to capture image or other sensor data (e.g., orientation data, position data, etc.) that indicates that an object is viewable by a user of the device, the system can cause the device to instantiate some or all of the content objects based on an association between the viewable object(s) and the content object(s) (e.g., based on at least one of object recognition, orientation, location, etc.)”), each element comprising a plurality of attributes (par 0032-0033, “Where the device is configured or programmed to capture image or other sensor data (e.g., orientation data, position data, etc.) that indicates that an object is viewable by a user of the device, the system can cause the device to instantiate some or all of the content objects based on an association between the viewable object(s) and the content object(s) (e.g., based on at least one of object recognition, orientation, location, etc.)”, par 0038, “One should appreciate that although the area of interest corresponds to a physical location, within the disclosed system the area of interest comprises a data structure that includes attributes and values that digital describe the area of interest. Thus, the area of interest can be considered a digital model or object of the area of interest in a form processable by the disclosed computing devices”, par 0088, “Descriptor database 405 could comprise, among other things, object image data, descriptors associated with the image data, and information relating to the device capturing the object image data (e.g., translation data (heaving, swaying, surging), rotation data (e.g., pitching, yawing, rolling), distance, clarity, brightness, location, etc.). While this example is generally related to associating a descriptor with an object generally (without consideration for a specific portion of the object or the device capturing data related to the object, etc.), it should be appreciated that obtaining information related to the data capturing device at the time of capture could be valuable in providing an increased level of accuracy as an object captured from one position, view or angle could be associated with AR content objects that are different from the same object when captured from a different position, view or angle”); and
generating geometric data based on the input data by determining locations in a space for elements of the plurality of elements based on attributes of the elements (par 0050-0051, “To generate the initial map 218, the mapping module of map generation engine 202 can employ a "structure from motion" module capable of generating a 3D map of the geometry depicted in images and thus construct a 3D model of the area of interest. To create a 2D blueprint or floor plan, the map generation engine 202 can "flatten" the constructed 3D model. During the flattening, the map generation engine 202 can label certain geometric features of interest within the 3D model (e.g., doors, windows, multi-level spaces or structures, overpasses and/or underpasses in a building, etc.) via classifiers trained offline in advance of the flattening process. These classifiers can be mapped to corresponding geometric features of interest via a recognition of these features in the 3D model and/or the image data used to generate the 3D model using image recognition techniques”, par 0056, “Based on an applicable initial map 118A (applicable to a selected area of interest) and optional ancillary area data (e.g., image, video, audio, sensor, signal or other data, etc.), the AR management engine 130 can derive a set of views of interest 132 related to the area of interest”, par 0072, “To determine what part of the area of interest (reflected in the initial map 118A) will constitute a view of interest 132, the AR management engine 130 analyzes the distribution (e.g., density, layout, etc.) of recognized or recognizable objects within the initial map 118A, including the recognized objects from the perspective of possible point of view origins. The analysis can correspond to a cluster analysis of recognized objects within a particular spatial relationship of one another, and also to possible point-of-view origins. The point-of-view origins correspond to various points within the area of interest from which a user will view a view of interest 132 or part of a view of interest 132. Thus, the location, size and shape of a view of interest can be determined based on having a certain amount (minimum or maximum) of recognized objects within the view of interest, a certain density of recognized objects, a certain layout, etc.”) and generating geometries in the space for the elements at the respective determined locations (par 0040, “This can involve overlaying the content on real-world imagery (preferably in real-time) via the computing device, such that the user of the computing device sees a combination of the real-world imagery with the AR content seamlessly”, par 0069, “the AR management engine 130 can associate at least some of the recognized objects within the area of interest with AR content types or categories. These recognized objects can be considered to be potential "attachment points" for AR content. These attachment points can be identified as potential objects to which AR content objects can be associated within the area of interest to varying levels of specificity or granularity. In other words, the "type" of AR content object identified as applicable to the attachment point can be of a variety of levels of generality or granularity. Certain attachment points can be theme- or topic-independent, merely identified as suitable object to which content can be attached or associated. 
Examples of these types of attachment points can be recognized billboards, large sections of wall, plants, floor patterns, signage, logos, structural supports, etc.”, par 0081, “Once the views of interest 132 have been derived, and AR content objects have been generated, AR management engine 130 could obtain a set of AR content objects 134 (e.g., from the AR content database 120 via network 135) related to the derived set of views of interest 132. It should be appreciated that the set of AR content objects 134 could be obtained in any suitable manner, including for example, based on a search query of AR content database 120 (e.g., a search for AR content objects 134 in database 120 that are associated with one or more descriptors that are associated with one or more views of interest 132, etc.), based on a characteristic of the initial map 118A (e.g., dimensions, layout, an indication of the type of area, etc.), based on a user selection, recommendation or request (e.g., by an advertiser, merchant, etc.), or based on a context of an intended use of a user (e.g., based on what activities a user wishes to capture (e.g., shopping, educational, sightseeing, directing, traveling, gaming, etc.)”, par 0093-0096, “FIG. 5 is a schematic of an AR management engine 530 generating area tile maps 538 and 538T. AR Management engine 530 comprises, among other things, an initial map 518A, view(s) of interest 532 (e.g., point of view origin data, field of interest data, etc.), descriptors associated with view(s) of interest 533, and AR content object(s) 534. Based on the aforementioned area data and optional other data (e.g., signal data, etc.), AR management engine 430 establishes AR experience clusters A, B, C and D within the initial map 518A as a function of the set of AR content objects 534 and the views of interest 532. In the example shown in FIG. 5, Cluster A comprises a first point of view origin (e.g., a coordinate) having a first field of interest leading to view A …. Cluster C comprises the point of view origin having the field of interest leading to view W; and Cluster D comprises the point of view origins having fields of interest leading to views X and Y. Each of clusters B, C and D could include point of view origin(s) having corresponding fields of interest and views including objects of interest"), wherein determining locations in the space for elements based on attributes of the elements comprises performing clustering on the elements based on attributes of the elements (par 0072, “To determine what part of the area of interest (reflected in the initial map 118A) will constitute a view of interest 132, the AR management engine 130 analyzes the distribution (e.g., density, layout, etc.) of recognized or recognizable objects within the initial map 118A, including the recognized objects from the perspective of possible point of view origins. The analysis can correspond to a cluster analysis of recognized objects within a particular spatial relationship of one another, and also to possible point-of-view origins. The point-of-view origins correspond to various points within the area of interest from which a user will view a view of interest 132 or part of a view of interest 132. Thus, the location, size and shape of a view of interest can be determined based on having a certain amount (minimum or maximum) of recognized objects within the view of interest, a certain density of recognized objects, a certain layout, etc. 
For example, the system could assign a point in space for each recognizable object. The point in space might be the centroid of all the image descriptors associated with the recognized object as represented in 3-space. The system can then use clusters of centroids to measure density. In embodiments, the point-of-view origin can correspond to the point of origin of the area data such as image keyframe data was captured during the initial map-making process”, par 0083, “One should appreciate that a cluster could be established based on any suitable parameter(s), which could be established manually by one or more users, or automatically by a system of the inventive subject matter.”).
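Examiner's note (illustrative only; not relied upon for the rejection): the attribute-driven, cluster-based placement described above can be pictured with a short Python sketch. The attribute values, the cluster count, and the use of scikit-learn below are hypothetical assumptions for illustration, not McKinnon's disclosed implementation.

    # Hypothetical sketch: derive 2D locations for elements by clustering
    # their attribute vectors, so similar elements land near one another
    # (cf. McKinnon par 0072: cluster analysis; centroids measure density).
    import numpy as np
    from sklearn.cluster import KMeans

    # Each row is one element; columns are numeric attributes.
    attributes = np.array([
        [0.2, 1.0, 3.5],
        [0.3, 0.9, 3.4],
        [5.1, 0.1, 0.7],
        [5.0, 0.2, 0.8],
    ])
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(attributes)

    # Place each element near its cluster, offset by its distance from the
    # cluster centroid in attribute space.
    locations = []
    for vec, label in zip(attributes, kmeans.labels_):
        offset = np.linalg.norm(vec - kmeans.cluster_centers_[label])
        locations.append((label * 10.0 + offset, offset))  # (x, y)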
Regarding claim 6, McKinnon et al. teach all the limitations of claim 1, and further teach wherein determining locations in the space for the elements comprises determining two-dimensional or three-dimensional locations in the space for the elements, wherein determining locations in the space for the elements comprises performing a Voronoi tessellation of the space and determining locations for the elements based on the Voronoi tessellation (par 0019, “Based on the AR experience clusters or information related thereto, the AR management engine could generate a tile map comprising tessellated tiles (e.g., regular or non-regular (e.g., semi-regular, aperiodic, etc.), Voronoi tessellation, penrose tessellation, K-means cluster, etc.) that cover at least a portion of the area of interest. Some or all of the tiles could advantageously be individually bound to a subset of the obtained AR content objects, which can comprise overlapping or completely distinct subsets”, par 0085, “Based on the established AR experience clusters 136, the AR management engine 130 could generate an area tile map 138 of the area of interest. The tile map 138 could comprise a plurality of tessellated tiles covering the area of interest or portion(s) thereof. Depending on the parameters used to establish the AR experience clusters 136, the area tile map 138 could comprise a regular tessellation, a semi-regular tessellation, an aperiodic tessellation, a Voronoi tessellation, a Penrose tessellation, or any other suitable tessellation. The concepts of establishing experience clusters and generating tile maps are discussed in further detail below with FIGS. 5 and 6”, par 0095, “The area tile maps could comprise a plurality of tessellated tiles covering at least some of the area of interest (e.g., a portion of the Aria.RTM. Hotel and Casino, etc.), and one or more of the tiles could be bound to a subset of the AR content objects 534. In the example of FIG. 4, the tessellation comprises an aperiodic tessellation, and is based at least in part on the density of AR content objects associated with each point of view origin”).
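Examiner's note (illustrative only; not relied upon for the rejection): a minimal sketch of Voronoi-based placement follows, assuming SciPy and hypothetical seed points; McKinnon's tessellation of the area of interest is described at par 0019 and 0085.

    # Hypothetical sketch: tessellate the plane with a Voronoi diagram over
    # seed points and use bounded cells as candidate element locations.
    import numpy as np
    from scipy.spatial import Voronoi

    seeds = np.array([[0, 0], [4, 1], [2, 5], [6, 6], [3, 3]], dtype=float)
    vor = Voronoi(seeds)

    # vor.regions lists the vertex indices bounding each cell; -1 marks an
    # unbounded region. The centroid of a bounded cell is one candidate.
    for region in vor.regions:
        if region and -1 not in region:
            cell = vor.vertices[region]
            print("candidate location:", cell.mean(axis=0))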
Regarding claim 7, McKinnon et al. teach all the limitations of claim 1, and further teach wherein performing clustering on the elements based on attributes of the elements comprises performing centroid-based clustering and/or hierarchical clustering (par 0072, “The analysis can correspond to a cluster analysis of recognized objects within a particular spatial relationship of one another, and also to possible point-of-view origins. The point-of-view origins correspond to various points within the area of interest from which a user will view a view of interest 132 or part of a view of interest 132. Thus, the location, size and shape of a view of interest can be determined based on having a certain amount (minimum or maximum) of recognized objects within the view of interest, a certain density of recognized objects, a certain layout, etc. For example, the system could assign a point in space for each recognizable object. The point in space might be the centroid of all the image descriptors associated with the recognized object as represented in 3-space. The system can then use clusters of centroids to measure density”, par 0094, “any suitable algorithm(s) or method(s) of clustering can be utilized to establish experience clusters, including for example, centroid-based clustering (e.g., k-means clustering, etc.), hierarchical clustering, distribution-based clustering, density-based clustering, or any other suitable algorithms or methods”).
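Examiner's note (illustrative only; not relied upon for the rejection): the hierarchical alternative recited in claim 7 can be sketched with SciPy's agglomerative clustering; the attribute vectors below are hypothetical.

    # Hypothetical sketch: hierarchical (agglomerative) clustering of
    # elements by attribute vectors (cf. McKinnon par 0094).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    attributes = np.array([
        [0.2, 1.0], [0.3, 0.9],   # two similar elements
        [5.1, 0.1], [5.0, 0.2],   # two elements far from the first pair
    ])
    tree = linkage(attributes, method="ward")           # build cluster tree
    labels = fcluster(tree, t=2, criterion="maxclust")  # cut into 2 groups
    print(labels)  # e.g., [1 1 2 2]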
Regarding claim 10, McKinnon et al. teach all the limitations of claim 1, and further teach generating a visualisation representation based on the input data for display using a display device (par 0020, “The AR management engine could further configure a device (e.g., a mobile device, kiosk, tablet, cell phone, laptop, watch, vehicle, server, computer, etc.) to obtain at least a portion of the subset based on the tile map (e.g., based on the device's location in relation to the tiles of a tile map, etc.), and present at least a portion of the AR content objects on a display of the device (e.g., instantiate the object, etc.)”, par 0040, “AR content objects 134 can be data objects including content that is to be presented via a suitable computing device (e.g., smartphone, AR goggles, tablet, etc.) to generate an augmented-reality or mixed-reality environment. This can involve overlaying the content on real-world imagery (preferably in real-time) via the computing device, such that the user of the computing device sees a combination of the real-world imagery with the AR content seamlessly. Contemplated AR content objects can include a virtual object, chroma key content, digital image, digital video, audio data, application, script, promotion, advertisements, games, workflows, kinesthetic, tactile, lesson plan, etc. AR content objects can include graphic sprites and animations, can range from an HTML window and anything contained therein to 3D sprites rendered either in scripted animation or for an interactive game experience”, par 0074, “the field of interest can be considered to be a potential field of view of a user (i.e., the user's visible area as seen through a display device on a smartphone or other computing device that displays a live video feed, via AR goggles or glasses, etc.) that would cause the user to see a particular view within a larger view of interest 132 at any given time”).
Regarding claim 11, McKinnon et al. teach all the limitations of claim 10, and further teach wherein generating a visualisation representation based on the input data comprises generating a visualisation representation of the generated geometric data that comprises a plurality of generated geometries that are each associated with a respective one of the elements of the input data (par 0050-0051, “the mapping module of map generation engine 202 can employ a "structure from motion" module capable of generating a 3D map of the geometry depicted in images and thus construct a 3D model of the area of interest … During the flattening, the map generation engine 202 can label certain geometric features of interest within the 3D model (e.g., doors, windows, multi-level spaces or structures, overpasses and/or underpasses in a building, etc.) via classifiers trained offline in advance of the flattening process. These classifiers can be mapped to corresponding geometric features of interest via a recognition of these features in the 3D model and/or the image data used to generate the 3D model using image recognition techniques”, par 0100-0101, “When a user navigating the real world area of interest gets close enough to a portion represented by Tile A (e.g., within 50 feet, within 25 feet, within 10 feet, within two feet, within 1 foot, etc. of any portion of tile A), it is contemplated that the user's device could be auto-populated with the 7 AR content objects bound to view of interest W. When the user scans view W.sub.1 with a device having a sensor (e.g., camera, etc.), it is contemplated that a system of the inventive subject matter could utilize object recognition techniques to recognize objects of interest within view W.sub.1 and instantiate one or more of the AR content objects associated with the objects of interest … Viewed from another perspective, a user device in an area of interest could obtain and store AR content objects associated with one or more tiles corresponding to the area of interest. For example, it is contemplated that any time a user device is within 5 feet of a location corresponding with a tile or an area map, the user device will store AR content objects associated with that tile”).
Regarding claim 13, McKinnon et al. teach all the limitations of claim 10, and further teach displaying the generated visualisation representation using a display device, wherein the generated visualisation representation is a virtual reality, VR, visualisation and the display device is a VR display device (par 0020, “The AR management engine could further configure a device (e.g., a mobile device, kiosk, tablet, cell phone, laptop, watch, vehicle, server, computer, etc.) to obtain at least a portion of the subset based on the tile map (e.g., based on the device's location in relation to the tiles of a tile map, etc.), and present at least a portion of the AR content objects on a display of the device (e.g., instantiate the object, etc.).”, par 0040, “AR content objects 134 can be data objects including content that is to be presented via a suitable computing device (e.g., smartphone, AR goggles, tablet, etc.) to generate an augmented-reality or mixed-reality environment. This can involve overlaying the content on real-world imagery (preferably in real-time) via the computing device, such that the user of the computing device sees a combination of the real-world imagery with the AR content seamlessly”, par 0074, “the field of interest can be considered to be a potential field of view of a user (i.e., the user's visible area as seen through a display device on a smartphone or other computing device that displays a live video feed, via AR goggles or glasses, etc.) that would cause the user to see a particular view within a larger view of interest 132 at any given time”).
Regarding claim 14, McKinnon et al. teach a computer system comprising one or more processors (abstract, par 0028). The remaining limitations of the claim are similar in scope to those of claim 1 and are rejected under the same rationale.
Regarding claim 15, McKinnon et al. teach a non-transitory computer-readable storage medium having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform the method (par 0028). The remaining limitations of the claim are similar in scope to those of claim 1 and are rejected under the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2015/0109338 to McKinnon et al. in view of U.S. PGPub 2003/0187819 to Gutierrez et al.
Regarding claim 2, McKinnon et al. teach all the limitations of claim 1, but do not explicitly teach generating Geographic Information System, GIS, data by encoding the generated geometric data in a GIS format and performing one or more GIS processing operations on the generated GIS data.
In a related endeavor, Gutierrez et al. teach generating Geographic Information System, GIS, data by encoding the generated geometric data in a GIS format and performing one or more GIS processing operations on the generated GIS data (par 0010-0011, “a three-dimensional (3D) volumetric geo-spatial querying system which overcomes the two-dimensional limitations of the prior art and includes a 3D GIS which can include a database of geo-spatial data configured to store geo-spatial data using, not two-dimensional, but three-dimensional coordinates. The GIS further can include at least one database operation configured to process a database query against geo-spatial data stored in the database … the 3D GIS can include a geo-spatial data encoder configured to encode the geo-spatial data prior to storing the geo-spatial data in the database. In particular, in one aspect of the present invention, the encoder can be a helical hyperspatial code encoder. In another aspect of the present invention, the encoder can include an oct-tree encoder”, par 0021, “The GIS VolPrint, itself, can be processed into a suitable encoded format within the encoder 120 en route to the geo-spatial database 130. Once stored in the geospatial database 130, the collection of GIS VolPrints in the geo-spatial database 130 can be managed by a GIS database engine 140. Importantly, database operations and methods 150 can be defined for the storage, retrieval and indexing of the VolPrints stored in the geo-spatial database 130. The operations and methods 150 can include not only rudimentary access operations such as matching, adding and deleting, but also VolPrint specific operations such as distance, etc.”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon et al. to include generating Geographic Information System, GIS, data by encoding the generated geometric data in a GIS format and performing one or more GIS processing operations on the generated GIS data, as taught by Gutierrez et al., in order to create and visualize 3D structures using GIS technologies directed to the three-dimensional visualization and analysis of geographic data, thereby providing the advantages of a three-dimensional (3D) map.
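Examiner's note (illustrative only; not relied upon for the rejection): the claimed combination of GIS encoding plus a GIS processing operation can be sketched as follows. The GeoJSON format, the Shapely library, and the sample coordinates are assumptions chosen for illustration; they are not Gutierrez's helical hyperspatial or oct-tree encoders.

    # Hypothetical sketch: encode generated point geometries in a GIS
    # format (GeoJSON) and run a simple GIS operation (distance) on them.
    import json
    from shapely.geometry import Point, mapping

    elements = [Point(1.0, 2.0), Point(4.0, 6.0)]

    gis_data = {
        "type": "FeatureCollection",
        "features": [
            {"type": "Feature", "geometry": mapping(p), "properties": {}}
            for p in elements
        ],
    }
    encoded = json.dumps(gis_data)  # GIS-format encoding of the geometry

    # One GIS processing operation on the encoded data: spatial distance.
    print(elements[0].distance(elements[1]))  # 5.0 (a 3-4-5 triangle)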
Regarding claim 12, McKinnon et al. teach all the limitations of claim 10, but do not explicitly teach generating Geographic Information System, GIS, data by encoding the generated geometric data in a GIS format, wherein generating a visualisation representation based on the input data comprises generating a visualisation representation of the GIS data, wherein generating the visualisation representation comprises three-dimensional rendering for display using a display device.
In a related endeavor, Gutierrez et al. teach generating Geographic Information System, GIS, data by encoding the generated geometric data in a GIS format, wherein generating a visualisation representation based on the input data comprises generating a visualisation representation of the GIS data, wherein generating the visualisation representation comprises three-dimensional rendering for display using a display device (par 0005-0007, “GIS technology includes computer systems configured to assemble, store, manipulate and display geographically referenced information … some GIS technologies can integrate scene generation systems for the 3D visualization of data”, par 0010-0011, “a three-dimensional (3D) volumetric geo-spatial querying system which overcomes the two-dimensional limitations of the prior art and includes a 3D GIS which can include a database of geo-spatial data configured to store geo-spatial data using, not two-dimensional, but three-dimensional coordinates. The GIS further can include at least one database operation configured to process a database query against geo-spatial data stored in the database … the 3D GIS can include a geo-spatial data encoder configured to encode the geo-spatial data prior to storing the geo-spatial data in the database. In particular, in one aspect of the present invention, the encoder can be a helical hyperspatial code encoder. In another aspect of the present invention, the encoder can include an oct-tree encoder”, par 0021, “The GIS VolPrint, itself, can be processed into a suitable encoded format within the encoder 120 en route to the geo-spatial database 130. Once stored in the geospatial database 130, the collection of GIS VolPrints in the geo-spatial database 130 can be managed by a GIS database engine 140. Importantly, database operations and methods 150 can be defined for the storage, retrieval and indexing of the VolPrints stored in the geo-spatial database 130. The operations and methods 150 can include not only rudimentary access operations such as matching, adding and deleting, but also VolPrint specific operations such as distance, etc.”, par 0024-0025, “Query Results 270 can be provided by the GIS 250. For instance, a 3D route can be computed and visualized according not only to latitudinal and longitudinal components, but also in respect to changes in altitude. Thus, an optimal route can be selected between two points in a geo-spatially accurate 3D model, taking into account, for instance, undesirable changes in road grade”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon et al. to include generating Geographic Information System, GIS, data by encoding the generated geometric data in a GIS format, wherein generating a visualisation representation based on the input data comprises generating a visualisation representation of the GIS data, wherein generating the visualisation representation comprises three-dimensional rendering for display using a display device, as taught by Gutierrez et al., in order to create and visualize 3D structures using GIS technologies directed to the three-dimensional visualization and analysis of geographic data, thereby providing the advantages of a three-dimensional (3D) map.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2015/0109338 to McKinnon et al. in view of U.S. PGPub 2003/0187819 to Gutierrez et al., further in view of U.S. PGPub 2016/0294970 to Robinson.
Regarding claim 3, McKinnon et al. as modified by Gutierrez et al. teach all the limitations of claim 2, but are silent regarding wherein the one or more GIS processing operations comprise one or more of: a GIS data compression operation such as a GIS geometry simplification; performing a spatial analysis on the GIS data such as determining a distance or area within the GIS data; and generating a plurality of map tiles based on the GIS data and providing one or more map tiles of the plurality of map tiles to a client for display by the client.
In a related endeavor, Robinson teaches wherein the one or more GIS processing operations comprise one or more of: a GIS data compression operation such as a GIS geometry simplification; performing a spatial analysis on the GIS data such as determining a distance or area within the GIS data; and generating a plurality of map tiles based on the GIS data and providing one or more map tiles of the plurality of map tiles to a client for display by the client (par 0002-0005, “A GIS can arrange millions of tiles images in a mosaic to create an illusion of a large seamless image, each tile containing an image that is, for example, 256×256 pixels … The GIS includes a GIS map client that can generate and transmit a request to a remote spatial server (e.g., a map server), which determines which tile images need to be retrieved and supplied to the GIS client to respond to the particular request. The GIS client downloads the tile images from the map server and renders the map by positioning the tile images on a page. Tile images may be stored in random access memory in the process of rendering the requested map”, par 0007-0008, “the cache in non-volatile local memory is populated on demand, in that the tile images are supplied by the GIS server without extensive processing at the server level, and it is the client that determines which tile images are to be cached to reproduce particular components, for instance, of frequently-accessed maps or map sections … The system includes a computer-readable storage device comprising instructions that cause a processor to perform operations for retrieving one or more tile images (e.g., map tiles) from a spatial server and storing the tile images to a non-volatile local memory”, par 0046, par 0073, “The standard identifier can be a predetermined identifier that is used to identify a specific image stored in the spatial server 12. For example, a standard identifier can correspond to a specific tile image or group of tile images located on the spatial server 12 or previously stored by the GIS client 16”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon et al. as modified by Gutierrez et al. to include wherein the one or more GIS processing operations comprise one or more of: a GIS data compression operation such as a GIS geometry simplification; performing a spatial analysis on the GIS data such as determining a distance or area within the GIS data; and generating a plurality of map tiles based on the GIS data and providing one or more map tiles of the plurality of map tiles to a client for display by the client, as taught by Robinson, in order to generate map extracts by retrieving the necessary tile images and associated data from the server, thereby greatly reducing the retrieval and processing burden on the client.
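Examiner's note (illustrative only; not relied upon for the rejection): the tile-serving arrangement Robinson describes can be pictured with the common Web Mercator ("slippy map") tiling scheme; the formula and coordinates below are a standard illustration, not Robinson's specific server implementation.

    # Hypothetical sketch: compute which 256x256 "slippy map" tile a GIS
    # coordinate falls in, so a server can supply that tile to a client
    # (cf. Robinson par 0002: mosaics of tile images).
    import math

    def lonlat_to_tile(lon_deg, lat_deg, zoom):
        n = 2 ** zoom
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi)
                / 2.0 * n)
        return x, y

    # A client viewing London at zoom 12 would request roughly this tile:
    print(lonlat_to_tile(-0.1276, 51.5072, 12))  # (2046, 1362)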
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2015/0109338 to McKinnon et al. in view of U.S. PGPub 2003/0187819 to Gutierrez et al., further in view of U.S. PGPub 2011/0276592 to Gautama et al.
Regarding claim 4, McKinnon et al. as modified by Gutierrez et al. teach all the limitations of claim 2, but are silent regarding wherein encoding the generated geometric data in a GIS format comprises encoding the generated geometric data in a vector-based GIS format and/or in a raster-based GIS format.
In a related endeavor, Gautama et al. teach wherein encoding the generated geometric data in a GIS format comprises encoding the generated geometric data in a vector-based GIS format and/or in a raster-based GIS format (par 0064-0066, par 0075-0076, “processing of the raster-based information allowing deriving geo-information such as for example a new geometry like an adjusted position of a center line of a road or an identification of the presence of a new road is illustrated. Such a new geo-information may be based on a plurality of collected tracks, collected from different location aware devices. The underlying raster structure of the representation can naturally impose a spatial clustering of tracks. This can preempt the neighborhood search process which would need to be performed when using a vector representation. Neighborhood search is a processor intensive process in vector GIS and can become a limiting factor if the number of traces and/or sample density is significant. Exploiting the inherent spatial clustering of a raster based representation may allow new geo-information such as for example a new geometry to be derived”, par 0082, “new geo-information such as for example a new geometry is generated for the geographic information system (GIS) based on the raster-based representation, e.g. matrix space-time representation, of collected data of a plurality of location aware devices, such as for example global positioning systems.”, par 0092, “the vector-based location data can be preprocessed into a raster-based data and advantageously as a raster-based index. The local raster-based indices can be grouped into encoded and/or compressed packets to further improve the communication. In this way, such a dataflow could be used for example to transform a GPS-location in a mobile device into a more compact index … The indexing can be done in a hierarchical manner, where the local processor can create a combined index indicating a specific geographical zone (e.g. a school) with a more detailed raster index within this zone. In another example, the index can be based on raw cell phone data instead of location data”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon et al. as modified by Gutierrez et al. to include wherein encoding the generated geometric data in a GIS format comprises encoding the generated geometric data in a vector-based GIS format and/or in a raster-based GIS format, as taught by Gautama et al., in order to insert information from the vector-based data into a raster-based data structure so that geo-information can be derived from the raster-based data structure in an efficient manner.
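Examiner's note (illustrative only; not relied upon for the rejection): the vector/raster distinction can be sketched as follows; the grid size, extent, and WKT-style strings are hypothetical.

    # Hypothetical sketch: the same generated points encoded two ways --
    # vector records (exact coordinates) and a raster occupancy grid
    # (points binned into cells), cf. Gautama par 0075-0076.
    import numpy as np

    points = [(1.2, 3.7), (1.3, 3.6), (8.9, 0.4)]

    # Vector-based encoding: one geometry record per point.
    vector_layer = ["POINT (%.1f %.1f)" % (x, y) for x, y in points]

    # Raster-based encoding: a 10x10 grid over a 10x10 unit extent.
    raster = np.zeros((10, 10), dtype=np.uint16)
    for x, y in points:
        raster[int(y), int(x)] += 1  # row = y cell, column = x cell

    print(vector_layer[0])  # POINT (1.2 3.7)
    print(raster[3, 1])     # 2 -- the raster inherently groups nearby points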
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2015/0109338 to McKinnon et al. in view of U.S. PGPub 2021/0104025 to Holub.
Regarding claim 9, McKinnon et al. teach all the limitations of claim 1, but are silent regarding performing a geometric distortion on each of the generated geometries based on at least a subset of the attributes of the element associated with the generated geometry, wherein performing the geometric distortion comprises performing an affine transform on each of the generated geometries based on the at least a subset of the attributes of the element associated with the generated geometry.
In a related endeavor, Holub teaches performing a geometric distortion on each of the generated geometries based on at least a subset of the attributes of the element associated with the generated geometry, wherein performing the geometric distortion comprises performing an affine transform on each of the generated geometries based on the at least a subset of the attributes of the element associated with the generated geometry (par 0011-0021, “The affine part of this matrix corresponds to parameters: a.sub.11, a.sub.12, a.sub.21, a.sub.22, and the purely perspective part of the matrix correspond to parameters: a.sub.31, a.sub.32. The translation part corresponds to parameters a.sub.13 and a.sub.23. Recovery of the affine parameters may approximate a perspective distortion, but this approximation is not always sufficient and some amount of correction for the perspective part is sometimes necessary. In one approach, a direct least squares method is used to recover affine parameters and additional corrections are applied to correct the rest of the parameters (pure perspective and translation)”, par 0072-0075, “the least squares method determines the best fit affine transform of the original reference signal to the reference signal detected in the suspect image. The reference signal is comprised of a set of components at coordinates (u, v). The least squares method has, for each component, corresponding coordinates (u′, v′) in the suspect image provided by the coordinate update process of block 24”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify McKinnon et al. to include performing a geometric distortion on each of the generated geometries based on at least a subset of the attributes of the element associated with the generated geometry, wherein performing the geometric distortion comprises performing an affine transform on each of the generated geometries based on the at least a subset of the attributes of the element associated with the generated geometry, as taught by Holub, to provide an effective way to estimate geometric transform parameters with improved efficiency and accuracy while making efficient use of limited processing resources in mobile devices.
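Examiner's note (illustrative only; not relied upon for the rejection): an attribute-driven affine distortion of a generated geometry can be sketched as below; the attribute names and the mapping from attributes to matrix entries are hypothetical assumptions, not Holub's parameter-recovery method.

    # Hypothetical sketch: distort a generated geometry with an affine
    # transform whose entries are derived from element attributes
    # (cf. Holub par 0011: affine parameters a11, a12, a21, a22 plus
    # a translation part).
    import numpy as np

    square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

    def affine_distort(vertices, attr):
        A = np.array([[1.0 + attr["stretch"], attr["shear"]],
                      [0.0,                   1.0]])
        t = np.array([attr["shift_x"], 0.0])
        return vertices @ A.T + t  # scale/shear, then translate

    print(affine_distort(square, {"stretch": 0.5, "shear": 0.2, "shift_x": 3.0}))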
Allowable Subject Matter
Claims 5 and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 5, including "wherein generating the geometric data from the input data is a deterministic process, wherein determining locations in the space for the elements based on attributes of the elements comprises, for each element, determining a location in space based on at least a subset of the attributes of the element, and wherein determining a location in space for an element based on at least a subset of the attributes of the element is a reversible process such that the at least a subset of the attributes of the element are determinable based on the determined location of the element".
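Examiner's note (illustrative only; not bearing on allowability): the reversibility recited in claim 5 can be pictured with a toy mapping in which two attributes, suitably scaled, serve directly as coordinates; this is a hypothetical illustration of a reversible placement, not the applicant's disclosed method.

    # Hypothetical illustration of a deterministic, reversible
    # attribute-to-location mapping: the attributes are exactly
    # recoverable from the determined location.
    def attributes_to_location(a1, a2, scale=10.0):
        return (a1 * scale, a2 * scale)

    def location_to_attributes(x, y, scale=10.0):
        return (x / scale, y / scale)

    loc = attributes_to_location(0.3, 0.7)   # (3.0, 7.0)
    print(location_to_attributes(*loc))      # (0.3, 0.7) -- recovered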
The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claim 8, including "wherein generating geometries in the space for the elements at the respective determined locations comprises, for each element, generating a geometry at the determined location based on at least a subset of the attributes of the element, wherein, for each element, generating a geometry at the determined location based on at least a subset of the attributes of the element is a reversible process such that the at least a subset of the attributes of the element are determinable based on the geometry generated for the element, wherein generating geometries in the space for the elements at the respective determined locations comprises, for each element, selecting a shape from among a plurality of different shapes based on at least a subset of the attributes of the element, wherein the plurality of shapes comprises a plurality of different two-dimensional shapes and each selected shape is a two-dimensional shape, the method further comprising generating a three-dimensional shape for each element based on the selected two-dimensional shape, wherein generating the three-dimensional shape comprises, for each element, determining a thickness of the two-dimensional shape in a third dimension based on at least a subset of the attributes of the element".
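Examiner's note (illustrative only; not bearing on allowability): the shape selection, attribute-driven extrusion, and reversibility recited in claim 8 can likewise be pictured with a toy encoding; the shape list and record layout below are hypothetical.

    # Hypothetical illustration of claim 8's geometry generation: pick a
    # 2D shape from one attribute, extrude it with a thickness from
    # another -- each step reversible by construction.
    SHAPES = ["circle", "square", "triangle"]

    def generate_geometry(shape_idx, size, thickness):
        return {"shape": SHAPES[shape_idx], "size": size,
                "thickness": thickness}

    def recover_attributes(geom):
        return (SHAPES.index(geom["shape"]), geom["size"], geom["thickness"])

    g = generate_geometry(1, 2.5, 0.4)
    print(recover_attributes(g))  # (1, 2.5, 0.4) -- attributes recovered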
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge whose telephone number is (571)272-5556. The examiner can normally be reached from 8:00 to 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JIN GE
Examiner
Art Unit 2619
/JIN GE/ Primary Examiner, Art Unit 2619