Prosecution Insights
Last updated: April 19, 2026
Application No. 17/897,154

Automated Analysis Of Visual Data Of Images To Determine The Images' Acquisition Locations On Building Floor Plans

Non-Final OA (§101, §102, §103, §112)

Filed: Aug 27, 2022
Examiner: WHITE, JAY MICHAEL
Art Unit: 2188
Tech Center: 2100 — Computer Architecture & Software
Assignee: Mftb Holdco Inc.
OA Round: 1 (Non-Final)

Grant Probability: 12% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Grants only 12% of cases

Career Allow Rate: 12% (1 granted / 8 resolved; -42.5% vs TC avg)
Interview Lift: +100.0% (strong lift; based on resolved cases with vs. without interview)
Avg Prosecution: 3y 3m typical timeline; 34 applications currently pending
Career History: 42 total applications across all art units

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 30.3% (-9.7% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 8 resolved cases.
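The figures above are simple ratios over a small sample. As a rough illustration of how dashboard metrics of this kind are typically computed (the Tech Center baseline and the interview split below are assumed values chosen only so the arithmetic reproduces the displayed numbers; they are not data from this report):

```python
# Illustrative sketch only: one plausible way figures like those above could be derived
# from raw counts. The baseline and interview split are hypothetical inputs.

def pct(numerator: float, denominator: float) -> float:
    """Percentage with a guard for empty samples."""
    return 100.0 * numerator / denominator if denominator else 0.0

granted, resolved = 1, 8                               # "1 granted / 8 resolved"
career_allow_rate = pct(granted, resolved)             # 12.5%, shown rounded as 12%

tc_avg_allow_rate = 55.0                               # hypothetical Tech Center baseline
delta_vs_tc = career_allow_rate - tc_avg_allow_rate    # about -42.5 percentage points

# "Interview lift" read as relative improvement in allowance among resolved cases
# that had an examiner interview versus those that did not (hypothetical split).
allow_rate_with_interview = 0.25
allow_rate_without_interview = 0.125
interview_lift = 100.0 * (allow_rate_with_interview / allow_rate_without_interview - 1.0)

print(f"allow rate {career_allow_rate:.1f}%, delta vs TC {delta_vs_tc:+.1f} pts, "
      f"interview lift {interview_lift:+.1f}%")
```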

Office Action

§101 §102 §103 §112
DETAILED ACTION

This action is responsive to the claims filed on August 27, 2022. Claims 1-29 are under examination. Claim 18 is rejected under 35 USC 112(b) as being indefinite. Claims 1-29 are rejected under 35 USC 101 as ineligible. Claims 1-10, 12-14, and 16-29 are rejected under 35 USC 102 as anticipated by Colburn. Claim 11 is rejected under 35 USC 103 over Colburn and Christopher. Claim 15 is rejected under 35 USC 103 over Colburn and Masson.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement(s) (IDS(s)) submitted on August 27, 2022 are/were filed prior to this action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 18 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Where applicant acts as his or her own lexicographer to specifically define a term of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999). The term “second latent space features” in claim 18 is used by the claim to mean “visual information,” while the accepted meaning is “hidden variable that is not visible.” The term is indefinite because the specification does not clearly redefine the term. For purposes of examination, the term “second latent space features” will be interpreted to mean “objects visible in the room.”

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Subject Matter Eligibility

Claims 1-29 are rejected under 35 U.S.C. 101 for being directed to a judicial exception without significantly more. Please note that claims 1-29 are similar to the recitations of Patent Eligibility Example 47, Claim 2, which was found ineligible, because the claims ingest information to produce more information without sufficient application or inventiveness. These claims are also similar to the ones in Electric Power Group, which also involve analyzing information to output information that is not integrated into a practical application and does not constitute an inventive concept.
Step 1

Claims 1-20 are directed to a process. Claims 21-29 are directed to a machine.

Independent Claims

Step 2A, Prong 1

Independent claims 1, 5, 21, and 26 recite a mental process.

Claim 1

Claim 1 recites: generating […] building location description information for the house, including: (Mental Evaluation, Mental Process – Generating building location description information for a house can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) generating a two-dimensional point cloud having a plurality of points that represents structure of the house by sampling structural locations of the house shown on the rasterized two-dimensional floor plan, including associating information with each point that includes a two-dimensional location of that point on the two-dimensional floor plan and includes normal direction information for a group of adjacent points for that point and includes semantic information for that point about any locations of the doors and windows and inter-wall borders corresponding to that point; (Mental Evaluation, Mental Process – Generating a 2D map of a house with descriptions and scale can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) determining […] first latent space features associated with points of the two-dimensional point cloud; and (Mental Evaluation, Mental Process – Determining latent space features of points on a map can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) generating building location circular descriptors for a plurality of building locations in a specified grid pattern through the multiple rooms of the house, including, for each of the building locations, determining angular directions from the building location in 360 horizontal degrees to at least some points of the point cloud, and encoding, in one of the building location circular descriptors associated with the building location, information about some of the first latent space features that are associated with the at least some points; (Mental Evaluation, Mental Process – Generating relative 360 descriptors of a room on a map can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) generating […] an image circular descriptor for a panorama image that is taken in one of the multiple rooms and has 360 horizontal degrees of visual information, including determining second latent space features associated with visual data of the panorama image by supplying the panorama image to a [brain] and wherein the image circular descriptor encodes information identifying specified directions within the visual data to the second latent space features; (Mental Evaluation, Mental Process – Generating relative 360 descriptors for a panorama image taken at a position on a map with relative position of features within the panoramic view can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) comparing […] the image circular descriptor to the building location circular descriptors to determine one of the building location circular descriptors whose encoded information best matches the encoded information of the image circular descriptor; (Mental Evaluation, Mental Process – Comparing descriptors can be practically performed in the mind or with aid of pen, paper, and/or a calculator.)
associating […] based on the comparing, the panorama image with a determined position on the two-dimensional floor plan, wherein the determined position includes the building location in the one room associated with the determined one building location circular descriptor and further includes orientation information to correlate the determined angular directions for that building location to the identified specified directions for the panorama image; and (Mental Evaluation, Mental Process – Associating an image with the location of a map where the image was taken, while accounting for orientation of elements relative to the perspective of the image capture, can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) using […] the determined position of the panorama image on the two-dimensional floor plan of the house for navigation of at least the one room of the house. (Mental Evaluation, Mental Process – Correlating an image with a position on a map to determine position and trajectory can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) Claim 1 recites mental processes and, hence, under MPEP 2106.04(a)(2)(III), an abstract idea. Claim 1 recites an abstract idea.

Claim 5

Claim 5 recites: generating […] an image circular descriptor for a panorama image that is captured in a room of the building and that includes visual information about at least some walls of the room, wherein the image circular descriptor has second angular information about second latent space features identified from the visual information of the panorama image at specified directions by a second trained neural network; (Mental Evaluation, Mental Process – Generating relative 360 descriptors for a panorama image taken at a position on a map with relative position of features within the panoramic view can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) comparing […] the image circular descriptor to the building location circular descriptors to determine one of the building location circular descriptors that is in the room and has first angular information best matching the second angular information of the image circular descriptor; (Mental Evaluation, Mental Process – Comparing descriptors to make a determination can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) associating, […] based on the comparing, the panorama image with a determined position and orientation in the room, the determined position based on the building location with which the determined one building location circular descriptor is associated, and the determined orientation identifying at least one direction from that building location corresponding to a specified part of the visible information in the panorama image; and (Mental Evaluation, Mental Process – Correlating an image with a position on a map to determine position, orientation, and trajectory can be practically performed in the mind or with aid of pen, paper, and/or a calculator.)

Claim 21

Claim 21 recites: comparing […] the image circular descriptor to the building location circular descriptors to determine one of the building location circular descriptors that has angular information best matching the information included in the image circular descriptor; (Mental Evaluation, Mental Process – Comparing descriptors to determine matching angular information can be practically performed in the mind or with aid of pen, paper, and/or a calculator.)
associating […] the image with a determined position for the building that is based on the associated building location for the determined one building location circular descriptor; and (Mental Evaluation, Mental Process – Correlating an image with a position on a map to determine position, orientation, and trajectory can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) providing […] information for the image about the determined position for the building. (Mental Evaluation, Mental Process – Providing information about a position on a map can be practically performed in the mind or with the aid of pen, paper, and/or a calculator.) Claim 26 Claim 26 recites: generating an additional circular descriptor for information recorded at a recording location in the area, wherein the additional circular descriptor includes information identifying features associated with at least some of the structural elements that are identifiable from the recorded information at specified directions from the recording location; (Mental Evaluation, Mental Process – Generating 360 description information about a position on a map can be practically performed in the mind or with the aid of pen, paper, and/or a calculator.) comparing the additional circular descriptor to the building location circular descriptors to determine one of the building location circular descriptors that has angular information best matching the information included in the additional circular descriptor; (Mental Evaluation, Mental Process – Comparing descriptors to determine matching angular information can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) associating, based on the comparing, the recorded information with a position in the area that is determined for the recording location based on the building location associated with the determined one building location circular descriptor; and (Mental Evaluation, Mental Process – Correlating an image with a position on a map to determine position, orientation, and trajectory can be practically performed in the mind or with aid of pen, paper, and/or a calculator.) providing information about the determined position in the area for the recorded information. (Mental Evaluation, Mental Process – Providing information about a position on a map can be practically performed in the mind or with the aid of pen, paper, and/or a calculator.) Claims 1, 5, 21, and 26 recite a mental process. Claims 1, 5, 21, and 26 recite an abstract idea. Step 2A, Prong 2 The claims fail to recite additional limitations that integrate the abstract idea into a practical application. 
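As an illustrative aside for readers parsing the claim language analyzed above: the independent claims describe building per-degree circular descriptors around candidate floor-plan locations and matching a panorama's circular descriptor against them over all rotations to recover both a location and an orientation. A minimal sketch of that concept is below, with random placeholder vectors standing in for the claimed neural-network "latent space features"; it is an editorial illustration only, not the applicant's implementation.

```python
# Sketch of circular-descriptor matching as the independent claims describe it.
# Feature vectors are random stand-ins for the claimed "latent space features".
import numpy as np

N_DEGREES, FEAT_DIM = 360, 8  # one feature vector per horizontal degree

def match(image_desc: np.ndarray, location_descs: dict) -> tuple:
    """Return (grid location, orientation offset in degrees, dissimilarity) of best match."""
    best = (None, 0, np.inf)
    for loc, loc_desc in location_descs.items():
        for shift in range(N_DEGREES):                 # candidate orientations
            rotated = np.roll(loc_desc, shift, axis=0)
            dist = np.linalg.norm(rotated - image_desc)
            if dist < best[2]:
                best = (loc, shift, dist)
    return best

rng = np.random.default_rng(0)
# 3x3 grid of candidate building locations, each with a 360-degree descriptor.
grid = {(x, y): rng.normal(size=(N_DEGREES, FEAT_DIM)) for x in range(3) for y in range(3)}

true_loc, true_shift = (1, 2), 40
image_desc = np.roll(grid[true_loc], true_shift, axis=0) \
    + 0.01 * rng.normal(size=(N_DEGREES, FEAT_DIM))    # noisy, rotated copy
print(match(image_desc, grid))                          # recovers location (1, 2), shift 40
```

The brute-force scan over all 360 shifts is shown only for clarity; the dependent claims discuss other search and comparison strategies, such as the grid search of claim 11 and the circular earth mover's distance of claim 15.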
The additional limitations: GENERIC COMPUTING ELEMENTS Claim 1 - { […] computer implemented […] […] by [the] one or more computing devices […] […] a first trained neural network […] […] a second trained neural network […] } Claim 5 – { […] computer implemented […] […] by the computing device […] […] a first trained neural network […] […] a second trained neural network […] } Claim 21 – { A non-transitory computer-readable medium having stored contents that cause one or more computing devices to perform automated operations including at least: […] […] by the one or more computing devices […] } Claim 26 – { A system comprising: one or more hardware processors of one or more computing devices; and one or more memories with stored instructions that, when executed by at least one of the one or more hardware processors, cause at least one of the one or more computing devices to perform automated operations including at least: […] } These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2. MERE DATA GATHERING/WURC Claim 1 - { obtaining […] for a house with multiple rooms, a rasterized two-dimensional floor plan of the house that has associated semantic information about locations of doors and windows and inter-wall borders of the multiple rooms; presenting […] information that includes the two- dimensional floor plan of the building and shows the room with a visual indication identifying at least the determined position for the panorama image, to cause use of the presented information for navigating the building. } Claim 5 – { obtaining, […] for a building, building location description information including a plurality of building location circular descriptors for a plurality of building locations in the building, wherein each building location circular descriptor is associated with one of the building locations and has first angular information about first latent space features identified for structural elements of the building at specified angular directions from the associated building location, wherein the first latent space features are identified by a first trained neural network using a two-dimensional floor plan of the building; } Claim 21 – { obtaining […] for an image captured in an area associated with a building and including visual information about at least some structural elements of the building, an image circular descriptor for the image that includes information identifying features associated with the at least some structural elements at specified directions within the visual information; obtaining […] building location circular descriptors each associated with a building location and including angular information about features associated with points of structural elements of the building at specified angular directions from the associated building location; } Claim 26 – { obtaining description information for an area of a building that includes building location circular descriptors for a plurality of building locations in the area, wherein each building location circular descriptor is associated with one of the building locations and has angular information about features associated with structural elements of the building at specified angular directions from the associated building location; } These obtaining and presenting steps are mere data gathering or selecting a particular data source or type of data to be manipulated, which are insignificant 
extra-solution activity similar to the MPEP 2106.05(g) examples: “e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent.” “iv. Obtaining information about transactions using the Internet to verify credit card transactions” “v. Consulting and updating an activity log” “vi. Determining the level of a biomarker in blood” “iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display.” The obtaining and presenting steps are insignificant extra-solution activity, and, under MPEP 2106.05(g), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2. Should it be found that the using and providing steps of claims 1, 21, and 26 are not elements of the abstract idea, the steps are mere apply it steps similar to the MPEP 2106.05(f) example: “vi. A method of assigning hair designs to balance head shape with a final step of using a tool (scissors) to cut the hair, In re Brown, 645 Fed. App'x 1014, 1017 (Fed. Cir. 2016).” Therefore, under MPEP 2106.05(f), the steps fail to integrate the abstract idea into a practical application at Step 2A, Prong 2. Should it be found that representative metrics of the real-world quantities that the parameters represent in the claims are not elements of the abstract idea or insignificant extra-solution activity, these elements merely limit the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2. Claims 1, 5, 21, and 26 fail to provide additional limitations that integrate the abstract ideas into a practical application. Claims 1, 5, 21, and 26 are directed to the abstract idea. Step 2B The claims fail to recite additional limitations that combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept. The additional limitations: GENERIC COMPUTING ELEMENTS Claim 1 - { […] computer implemented […] […] by [the] one or more computing devices […] […] a first trained neural network […] […] a second trained neural network […] } Claim 5 – { […] computer implemented […] […] by the computing device […] […] a first trained neural network […] […] a second trained neural network […] } Claim 21 – { A non-transitory computer-readable medium having stored contents that cause one or more computing devices to perform automated operations including at least: […] […] by the one or more computing devices […] } Claim 26 – { A system comprising: one or more hardware processors of one or more computing devices; and one or more memories with stored instructions that, when executed by at least one of the one or more hardware processors, cause at least one of the one or more computing devices to perform automated operations including at least: […] } These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept. at Step 2B. 
MERE DATA GATHERING/WURC Claim 1 - { obtaining […] for a house with multiple rooms, a rasterized two-dimensional floor plan of the house that has associated semantic information about locations of doors and windows and inter-wall borders of the multiple rooms; presenting […] information that includes the two- dimensional floor plan of the building and shows the room with a visual indication identifying at least the determined position for the panorama image, to cause use of the presented information for navigating the building. } Claim 5 – { obtaining, […] for a building, building location description information including a plurality of building location circular descriptors for a plurality of building locations in the building, wherein each building location circular descriptor is associated with one of the building locations and has first angular information about first latent space features identified for structural elements of the building at specified angular directions from the associated building location, wherein the first latent space features are identified by a first trained neural network using a two-dimensional floor plan of the building; } Claim 21 – { obtaining […] for an image captured in an area associated with a building and including visual information about at least some structural elements of the building, an image circular descriptor for the image that includes information identifying features associated with the at least some structural elements at specified directions within the visual information; obtaining […] building location circular descriptors each associated with a building location and including angular information about features associated with points of structural elements of the building at specified angular directions from the associated building location; } Claim 26 – { obtaining description information for an area of a building that includes building location circular descriptors for a plurality of building locations in the area, wherein each building location circular descriptor is associated with one of the building locations and has angular information about features associated with structural elements of the building at specified angular directions from the associated building location; } The obtaining and presenting steps are well-understood, routine, and conventional (WURC) activity similar to the MPEP 2106.05(d) examples: “i. Receiving or transmitting data over a network” “iii. Electronic recordkeeping” “iv. Storing and retrieving information in memory” “v. Electronically scanning or extracting data from a physical document” “i. Determining the level of a biomarker in blood by any means” “iv. Presenting offers and gathering statistics,” “vi. Arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price.” The obtaining and presenting steps are WURC and, as previously demonstrated, insignificant extra-solution activity, and, under MPEP 2106.05(d) and 2106.05(g), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B. Should it be found that the using and providing steps of claims 1, 21, and 26 are not elements of the abstract idea, the steps are mere apply it steps similar to the MPEP 2106.05(f) example: “vi. A method of assigning hair designs to balance head shape with a final step of using a tool (scissors) to cut the hair, In re Brown, 645 Fed. App'x 1014, 1017 (Fed. Cir. 
2016).” Therefore, under MPEP 2106.05(f), the steps fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B. Should it be found that representative metrics of the real-world quantities that the parameters represent in the claims are not elements of the abstract idea or insignificant extra-solution activity, these elements merely limit the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B. Claims 1, 5, 21, and 26 fail to provide additional limitations that combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B. Claims 1, 5, 21, and 26 are ineligible.

Dependent Claims

The dependent claims are also ineligible for the following reasons. Generic computing elements (MPEP 2106.05(f)) and the real-world representative parameters in data (MPEP 2106.05(h)) already treated in the independent claims will not be treated again here. Further, any real-world representative parameters in data (MPEP 2106.05(h)) introduced in the dependent claims fail to confer eligibility under MPEP 2106.05(h).

Claim 2

wherein the generating of the building location circular descriptors further includes: obtaining a first enumerated group of ranges of incident angles, obtaining a second enumerated group of ranges of distances, and (This fails to confer eligibility for the same reasons as the obtaining steps of the independent claims.) performing the encoding for each of the building location circular descriptors of the information about some of the first latent space features by, (Mental Evaluation, Mental Process – Encoding is an activity that can be performed in the mind or with the aid of pen, paper, and/or a calculator.) for each of the at least some points for the building location of that building location circular descriptor, encoding information in that building location circular descriptor for one of the 360 horizontal degrees from that building location to that point that includes one of the ranges of incident angles from the first enumerated group and one of the ranges of distances from the second enumerated group. (Mental Evaluation, Mental Process – Encoding is an activity that can be performed in the mind or with the aid of pen, paper, and/or a calculator.) These encoding steps are abstract elements of the abstract idea and fail to provide an additional element. Claim 2 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 2 is ineligible.

Claim 3

further comprising using […] the two-dimensional floor plan to further control navigation activities by an autonomous vehicle, (Mental Evaluation, Mental Process – A person can make a map and use the map to control navigation of an autonomous vehicle (e.g., via manual controls available as an alternative for the autonomous vehicle). This provides no additional limitations to confer eligibility on its own.) including providing the two-dimensional floor plan for use by the autonomous vehicle in moving between the multiple rooms of the house.
(This is mere data transfer/gathering and is insignificant extra-solution activity and WURC and fails to confer eligibility for the same reasons as the obtaining and presenting steps of the independent claims.) Claim 3 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 3 is ineligible.

Claim 4

wherein the using of the determined position further includes displaying, by the one or more computing devices, the two-dimensional floor plan showing the multiple rooms and including one or more visual indications on the displayed two-dimensional floor plan of the determined position and the orientation information for the panorama image in the one room. (This fails to confer eligibility for the same reasons as the presenting step of the independent claim.) Claim 4 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 4 is ineligible.

Claim 6

wherein the presenting of the floor plan further includes visually indicating the determined orientation, and wherein the method further comprises presenting […] in response to a user selection of the visual indication on the presented floor plan, at least a portion of the panorama image corresponding to the determined orientation. (This fails to confer eligibility for the same reasons as the obtaining and presenting steps of claim 1, as it involves mere presentation of information and data transfer/storage.) Claim 6 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 6 is ineligible.

Claim 7

wherein the visual information of the panorama image includes 360 horizontal degrees of visual coverage from an acquisition location of the panorama image, wherein the image circular descriptor includes, for each of the 360 horizontal degrees of visual coverage from the acquisition location, information about at least some of the second latent space features associated with any structural elements of the room that are visible in a direction from the acquisition location corresponding to the horizontal degree of visual coverage, and wherein each of the building location circular descriptors includes, for each of 360 horizontal degrees from the building location associated with the building location circular descriptor, information about at least some of the first latent space features associated with any structural elements of a surrounding room that are visible in a direction from that building location corresponding to the horizontal degree of visual coverage. This stored data merely limits the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fails to confer eligibility. Claim 7 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 7 is ineligible.

Claim 8

wherein the structural elements of the building include at least one door, at least one window, and at least one inter-wall border, and (This merely limits the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fails to confer eligibility.)
wherein the obtaining of the building location description information includes (See the rejection of the obtaining step in the independent claim) generating the building location circular descriptors, including generating from the two-dimensional floor plan a two- dimensional point cloud having a plurality of points, including associating information with each of the points that includes two-dimensional location information for the point and normal direction information for the point and semantic information about any structural elements associated with the point, and including analyzing the points and the associated information to generate the first latent space features, wherein each of the points is associated with at least one of the first latent space features. (Mental Evaluation, Mental Process – Generating descriptors, associating information, analyzing map points, and generating latent space features is practically performable in the mind or with aid of pen, paper, and/or a calculator. This is an abstract idea and fails to recite additional limitations to confer eligibility.) Claim 8 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 8 is ineligible. Claim 9 further comprising determining the one building location circular descriptor having angular information best matching the information included in the image circular descriptor by performing the generating and the comparing without using any depth information acquired from any depth sensor about a depth from the acquisition location to any surrounding elements of the room. (Mental Evaluation, Mental Process – Using available information to compare and determine a best match is practically performable in the mind or with the aid of pen, paper, and/or a calculator. This is an abstract idea with no additional limitations.) Claim 9 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 9 is ineligible. Claim 10 further comprising selecting the plurality of building locations in the building by specifying a grid of building locations covering floors of at least some rooms of multiple rooms of the building. (Mental Evaluation, Mental Process – Selecting by specifying things can be practically performed in the mind or with the aid of pen, paper, and/or a calculator. This is an abstract idea with no additional limitations.) Claim 10 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 10 is ineligible. Claim 11 wherein the comparing of the image circular descriptor to the building location circular descriptors includes performing a nearest-neighbor search of the building locations of the grid, including identifying the determined one building location circular descriptor by repeatedly moving from at least one current building location in the grid to at least one neighbor building location in the grid if the at least one neighbor building location has a smaller dissimilarity with the image circular descriptor than does the at least one current building location. (Mental Evaluation, Mental Process; Mathematical Calculation, Mathematical Concept – Conducting a nearest neighbor analysis can be practically performed in the mind or with the aid of pen, paper, and/or a calculator. It is also a mathematical operation, which is a mathematical concept. This is an abstract idea with no additional limitations.) 
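As an aside on the limitation just quoted: the claim 11 search amounts to a greedy, hill-climbing walk over the grid of candidate locations, moving to a neighbor whenever it is less dissimilar to the image descriptor. A minimal illustrative sketch follows; the grid layout and dissimilarity function are placeholders, not the applicant's.

```python
# Sketch of the grid search recited in claim 11: repeatedly move to a neighboring grid
# location whenever it is less dissimilar to the image descriptor than the current one.
from typing import Callable, Dict, List, Tuple

Loc = Tuple[int, int]

def neighbors(loc: Loc, grid: Dict[Loc, object]) -> List[Loc]:
    """8-connected grid neighbors that actually exist in the grid."""
    x, y = loc
    cand = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    return [c for c in cand if c in grid]

def grid_nearest_neighbor(start: Loc,
                          grid: Dict[Loc, object],
                          dissimilarity: Callable[[object], float]) -> Loc:
    """Greedy descent over the grid; stops at a local minimum of the dissimilarity."""
    current = start
    while True:
        best = min(neighbors(current, grid) + [current],
                   key=lambda loc: dissimilarity(grid[loc]))
        if best == current:
            return current
        current = best

# Toy usage: descriptors are plain numbers, dissimilarity is distance to a target value.
grid = {(x, y): x + 10 * y for x in range(5) for y in range(5)}
print(grid_nearest_neighbor((0, 0), grid, dissimilarity=lambda d: abs(d - 23)))  # -> (3, 2)
```

Such greedy descent stops at a local minimum; whether that coincides with the globally best match depends on how the dissimilarity varies across the grid.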
Claim 11 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 11 is ineligible. Claim 12 wherein the comparing of the image circular descriptor to the building location circular descriptors further includes: (See the claim from which this depends for the rejection of the comparing step.) analyzing the visual information to identify, for a characteristic of a specified type, at least one of the 360 horizontal degrees of visual coverage from the acquisition location for which the characteristic is present; for each of at least some of the building location circular descriptors, comparing the image circular descriptor to the building location circular descriptor by: identifying one or more of the 360 horizontal degrees from the building location associated with the building location circular descriptor at which the characteristic is present; and synchronizing locations of each of the identified at least one of the 360 horizontal degrees of visual coverage from the acquisition location to locations of each of the identified one or more 360 horizontal degrees from the building location to determine if, relative to the synchronized locations, information at other horizontal degrees of coverage in the image circular descriptor matches information at other horizontal degrees of coverage in the building location circular descriptor; and selecting one of the at least some building location circular descriptors as the determined one building location circular descriptor based on the selected one building location circular descriptor having an identified synchronized location for which the information at the other horizontal degrees of coverage in the building location circular descriptor best matches the information at the other horizontal degrees of coverage in the image circular descriptor, and using the identified synchronized location to determine the orientation in the room for the panorama image. (Mental Evaluation, Mental Process - All of these steps are practically performable in the mind or with the aid of pen, paper, and/or a calculator. This is an abstract idea with no additional limitations.) Claim 12 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 12 is ineligible. Claim 13 wherein the characteristic of the specified type is one of a visible wall being orthogonal to a line along an identified horizontal degree of visual coverage, or a specified type of wall element being visible at the identified horizontal degree of visual coverage. This merely limits the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fails to confer eligibility. Claim 13 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 13 is ineligible. Claim 14 wherein the comparing of the image circular descriptor to the building location circular descriptors includes, for each of at least some of the building location circular descriptors, determining a probability that the image circular descriptor and the building location circular descriptor are a match by differing less than a specified threshold, and selecting one of the at least some building location circular descriptors that has a highest probability of matching the image angular detector as the determined one building location circular descriptor. 
(Mental Evaluation, Mental process – Comparing descriptors, determining/estimating a probability, and selection based on the probability are practically performable in the mind or with the aid of pen, paper, and/or a calculator. This is an abstract idea without any additional limitations.) Claim 14 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 14 is ineligible. Claim 15 wherein the comparing of the image circular descriptor to the building location circular descriptors includes, for each of at least some of the building location circular descriptors, using a circular earth mover's distance measurement of a distance between the image circular descriptor and the building location circular descriptor, and selecting one of the at least some building location circular descriptors that has a smallest measured distance to the image angular detector as the determined one building location circular descriptor. (Mental Evaluation, Mental Process - Comparing descriptors, using measurements, and selecting descriptors based on the data are practically performable in the mind with the aid of a pen, paper, and a calculator. This is an abstract idea without any additional limitations.) Claim 15 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 15 is ineligible. Claim 16 further comprising obtaining a first enumerated group of ranges of angles, obtaining a second enumerated group of ranges of distances, and (These fail to confer eligibility for the same reasons as the obtaining steps of the independent claims.) generating each of the building location circular descriptors by encoding information in that building location circular descriptor about some of the first latent space features by, for each of the at least some points of the structural elements that are visible from the building location of that building location circular descriptor, encoding information in that building location circular descriptor for one of 360 horizontal degrees from that building location to that point that includes one of the ranges of angles from the first enumerated group and one of the ranges of distances from the second enumerated group. (Mental Evaluation, Mental Process – Generating descriptors/describing things by encoding is practically performable in the mind or with the aid of a pen, paper, and/or a calculator. This is an abstract idea and fails to recite additional limitations to confer eligibility.) Claim 16 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 16 is ineligible. Claim 17 further comprising determining the position of the panorama image in the room by supplying, to a [mind], the panorama image and building location with which the determined one building location circular descriptor is associated, and receiving an adjusted position that is based on that building location and is adjusted to reflect the visual information of the panorama image. (Mental Evaluation, Mental Process – Modifying an estimated position of where a photograph was taken within an image/simulation is practically performable in the mind or with the aid of a pen, paper, and/or a calculator. This is an abstract idea without any additional limitations.) […] a refinement neural network […] (This is a generic computing element and, under MPEP 2106.05(f), fails to confer eligibility.) 
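For context on the circular earth mover's distance recited in claim 15 above: one standard formulation for equal-mass, one-dimensional circular histograms takes the L1 norm of the median-shifted difference of the two cumulative sums. The sketch below is a hedged editorial illustration of that kind of measure, not the applicant's code, and assumes descriptors can be treated as angular histograms with equal total mass.

```python
# Sketch of a circular earth mover's distance between two angular histograms
# (equal mass, ground distance measured in bin steps around the ring).
import numpy as np

def circular_emd(p: np.ndarray, q: np.ndarray) -> float:
    """EMD between two normalized histograms on a ring."""
    p = p / p.sum()
    q = q / q.sum()
    f = np.cumsum(p - q)                       # difference of cumulative distributions
    return float(np.abs(f - np.median(f)).sum())

angles = np.arange(360)
hist_a = np.exp(-((angles - 90.0) ** 2) / (2 * 15.0 ** 2))   # bump centered at 90 degrees
hist_b = np.roll(hist_a, 30)                                  # same bump rotated by 30 degrees
print(circular_emd(hist_a, hist_a))   # 0.0 for identical descriptors
print(circular_emd(hist_a, hist_b))   # roughly 30, i.e. the size of the rotation in bins
```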
Claim 17 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 17 is ineligible. Claim 18 wherein the associating of the panorama image with the determined position and orientation further includes, by the computing device: generating, for each of multiple building location circular descriptors associated with one of multiple building locations in the room, additional visual information for that building location circular descriptor that represents a view from the building location with which that building location circular descriptor is associated and that includes at least some of the second latent space features that are visible at the specified angular directions for that building location circular descriptor; and determining an acquisition location of an additional image captured in the room by comparing an additional image circular descriptor generated for the additional image to the multiple building location circular descriptors, including using the generated additional visual information for the multiple building location circular descriptors. (Mental Evaluation, Mental process – Associating images with a position on a map using descriptors and determining where a picture was taken based on a comparison of the available data are practically performable in the mind or with the aid of pen, paper, and/or a calculator. This is an abstract idea without any additional limitations.) Claim 18 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 18 is ineligible. Claim 19 further comprising generating a graph having multiple nodes and with at least one node representing each of multiple rooms of the building, associating the multiple building location circular descriptors with one of the multiple nodes that represents the room, and further associating, after determining the position of the panorama image, the panorama image with the one node that represents the room. (Mental Evaluation, Mental Process – Generating a graph with rooms of a building as nodes and associating descriptive information and positions where photos were taken with those nodes is practically performable in the mind or with the aid of a pen, paper, and/or a calculator. This is an abstract idea without any additional limitations.) Claim 19 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 19 is ineligible. Claim 20 wherein the comparing of the image circular descriptor to the building location circular descriptors includes using [inferential mental powers] to identify the determined one building location circular descriptor as being most similar to the image circular descriptor. (Mental Evaluation, Mental Process – comparing descriptors and identifying a most similar descriptor to another descriptor is practically performable in the mind or with the aid of a pen, paper, and/or a calculator. This is an abstract idea and fails to recite additional limitations to confer eligibility.) […] machine learning […] (This is a generic computing element and, under MPEP 2106.05(f), fails to confer eligibility.) Claim 20 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 20 is ineligible. Claim 22 wherein the image is a panorama image with 360 degrees horizontally of visual information, (This merely limits the abstract idea to a particular technological environment and, under MPEP 2106.05(h), fails to confer eligibility.) 
wherein the obtaining of the image circular descriptor includes (See the rejection for the obtaining of the image circular descriptor step from a claim from which this claim depends.) generating the image circular descriptor […] via analysis of the image by a [brain], and (Mental Evaluation, Mental Process – Comparing descriptors and identifying a most similar descriptor to another descriptor is practically performable in the mind or with the aid of a pen, paper, and/or a calculator. This is an abstract idea and fails to recite additional limitations to confer eligibility.) wherein the providing of the information about the determined position for the image includes presenting a floor plan for the building that includes a visual indication of the determined position for the image. (This fails to confer eligibility for the same reasons as the obtaining and presenting steps of the independent claims, which deal with data gathering/transfer and data display.) […] a trained neural network […] (This is a generic computing element and, under MPEP 2106.05(f), fails to confer eligibility.) Claim 22 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 22 is ineligible.

Claim 23

wherein the area associated with the building includes at least one of multiple rooms of the building, and wherein the structural elements of the building include multiple of a door or a window or an inter-wall border. (This merely limits the abstract idea to a particular technological field and, under MPEP 2106.05(h), fails to confer eligibility.) Claim 23 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 23 is ineligible.

Claim 24

wherein the area associated with the building includes at least one external area proximate to the building, and wherein the structural elements of the building include multiple of a door or a window or an inter-wall border. (This merely limits the abstract idea to a particular technological field and, under MPEP 2106.05(h), fails to confer eligibility.) Claim 24 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 24 is ineligible.

Claim 25

wherein the visual information for the image has less than 360 horizontal degrees of coverage, wherein the determined one additional circular descriptor is for a panorama image that is taken at the determined position and that has 360 horizontal degrees of coverage, (This merely limits the abstract idea to a particular technological field and, under MPEP 2106.05(h), fails to confer eligibility.) and wherein the comparing of the circular descriptor for the image to the additional circular descriptors includes matching the angular description for the image to a subset of the determined one additional circular descriptor for the panorama image. (Mental Evaluation, Mental Process – Comparing and matching descriptors is practically performable in the mind or with the aid of a pen, paper, and/or a calculator. This is an abstract idea and fails to recite additional limitations to confer eligibility.) Claim 25 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 25 is ineligible.
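Claim 25, treated above, contemplates matching a less-than-360-degree descriptor against a contiguous angular subset of a full panorama descriptor. A rough editorial sketch of that sliding-window comparison is below, with random placeholder features rather than the applicant's method.

```python
# Sketch for claim 25: match a descriptor covering fewer than 360 degrees against every
# contiguous angular window of a full 360-degree circular descriptor.
import numpy as np

def best_window(partial: np.ndarray, full: np.ndarray) -> tuple:
    """Return (start_degree, dissimilarity) of the best-matching window of `full`."""
    span = partial.shape[0]
    best_start, best_dist = 0, np.inf
    for start in range(full.shape[0]):
        idx = (start + np.arange(span)) % full.shape[0]   # wrap around the circle
        dist = np.linalg.norm(full[idx] - partial)
        if dist < best_dist:
            best_start, best_dist = start, dist
    return best_start, best_dist

rng = np.random.default_rng(1)
full_desc = rng.normal(size=(360, 8))                     # full panorama descriptor (stand-in)
partial_desc = full_desc[(200 + np.arange(120)) % 360]    # a 120-degree slice starting at 200
print(best_window(partial_desc, full_desc))               # -> (200, 0.0)
```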
Claim 27

wherein the recorded information includes a panorama image with visual information, wherein the structural elements include wall elements having at least one of a door or a window or an inter-wall border, and (This merely limits the abstract idea to a particular technological environment and, under MPEP 2106.05(h), fails to confer eligibility.) wherein the providing of the information about the determined position in the room includes presenting a floor plan for the building that includes the area, wherein the presented floor plan includes a visual indication of the determined position in the area. (This fails to confer eligibility for at least the same reasons as the presenting step in independent claim 1.) Claim 27 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 27 is ineligible.

Claim 28

wherein the area of the building is one of multiple rooms of the building. (This merely limits the abstract idea to a particular technological environment and, under MPEP 2106.05(h), fails to confer eligibility.) Claim 28 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 28 is ineligible.

Claim 29

wherein the area of the building is an external area adjacent to the building. (This merely limits the abstract idea to a particular technological environment and, under MPEP 2106.05(h), fails to confer eligibility.) Claim 29 fails to provide any additional limitations that confer eligibility at Step 2A, Prong 2 or Step 2B. Claim 29 is ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-10, 12-14, and 16-29: Colburn

Claim(s) 1-10, 12-14, and 16-29 is/are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by US 2020/0116493 A1 to Colburn et al. (Colburn).

Claim 1

Regarding claim 1, Colburn teaches: A computer-implemented method comprising: (Colburn Abstract “Techniques are described for using computing devices to perform automated operations to generate mapping information using inter-connected images of a defined area, and for using the generated mapping information in further automated manners. In at least some situations, the defined area includes an interior of a multi-room building, and the generated information includes a floor map of the building, such as from an automated analysis of multiple panorama images or other images acquired at various viewing locations within the building” – Computer implemented method.)
obtaining, by one or more computing devices, and for a house with multiple rooms, a rasterized two-dimensional floor plan of the house that has associated semantic information about locations of doors and windows and inter-wall borders of the multiple rooms; (Colburn [0014] “In some embodiments , one or more types of additional processing may be performed , such as to determine additional mapping - related information for a generated floor map or to otherwise associate additional information with a generated floor map. As one example, one or more types of additional information about a building may be received and associated with the floor map (e.g., with particular locations in the floor map ), such as additional images, annotations or other descriptions of particular rooms or other locations , overall dimension information, etc. As another example, in at least some embodiments , additional processing of images is performed to determine estimated distance information of one or more types, such as to measure sizes in images of objects of known size, and use such information to estimate room width, length and / or height. Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. In addition, if height information is estimated for one or more rooms, a 3D ( three - dimensional ) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed. Such generated floor maps and optionally additional associated information may further be used in various manners, as discussed elsewhere herein.” – A 2-D floor plan with multiple rooms and distances between elements is generated. [0022] “video and / or taking a succession of images, and may include a number of objects or other features (e.g., structural details ) that may be visible in images (e.g., video frames ) captured from the viewing location in the example of FIG . 1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and / or sliding doors ) , windows 196 , corners or edges 195 (including corner 195-1 in the northwest corner of the building 198 , and corner 195-2 in the northeast corner of the first room ), furniture 191-193 (e.g., a couch 191; chairs 192 , such as 192-1 and 192-2 ; tables 193 , such as 193-1 and 193-2 ; etc.), pictures or paintings or televisions or other objects 194 (such as 194-1 and 194-2 ) hung on walls, light fixtures, etc. The user may also optionally provide a textual or auditory identifier to be associated with a viewing location , such as “ entry ” 142a for viewing location 210A or "living room ” 1426 for viewing location 210B , while in other embodiments the ICA system may automatically generate such identifiers ( e.g. , by automatically analyzing video and / or other recorded information for a building to perform a corresponding automated determination, such as by using machine learning ) or the identifiers may not be used .” – The floor plan includes objects, such as doors, windows, semantic information for which is generated and incorporated by the system.) 
generating, by the one or more computing devices, building location description information for the house, including: (Colburn [0039] “While illustrated only with respect to room 229a and two viewing locations, it will be appreciated that similar analysis may be performed for each of the viewing locations 210A - 210H , and with respect to some or all of the rooms in the building.” generating a two-dimensional point cloud having a plurality of points that represents structure of the house by sampling structural locations of the house shown on the rasterized two-dimensional floor plan, including associating information with each point that includes a two-dimensional location of that point on the two-dimensional floor plan and includes normal direction information for a group of adjacent points for that point and includes semantic information for that point about any locations of the doors and windows and inter-wall borders corresponding to that point; (Colburn [0014] “In some embodiments , one or more types of additional processing may be performed , such as to determine additional mapping - related information for a generated floor map or to otherwise associate additional information with a generated floor map. As one example, one or more types of additional information about a building may be received and associated with the floor map (e.g., with particular locations in the floor map ), such as additional images, annotations or other descriptions of particular rooms or other locations , overall dimension information, etc. As another example, in at least some embodiments , additional processing of images is performed to determine estimated distance information of one or more types, such as to measure sizes in images of objects of known size, and use such information to estimate room width, length and / or height. Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. In addition, if height information is estimated for one or more rooms, a 3D ( three - dimensional ) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed. Such generated floor maps and optionally additional associated information may further be used in various manners, as discussed elsewhere herein.” – A 2D point map is generated with all relative spacing and angular information. [0028] “Based on a similar analysis of departing direction from viewing location 210B, arrival direction at viewing location 210C, and intervening velocity and location for some or all data points for which acceleration data is captured along the travel path 115 bc, the user's movement for travel path 115 bc may be modeled, and resulting direction 215-BC and corresponding distance between viewing locations 210B and 210C may be determined.” Given the above framework, a valid placement should satisfy these constraints as much as possible. The goal is to place the estimated room shapes (polygons or 3D shapes) into a global map such that the constraints on the initial placement is matched and satisfies the topological constraints. 
The main topological constraints that the room-shape matching should satisfy is to match the connecting passages between rooms, with the initial placements constraining the relative scale and alignment of the room shapes, with the room-shape matching algorithm thus less sensitive to small geometric and topological errors. […] The polygon points and camera centers are defined as a set of 2D points in homogenous coordinates and the edges are pairs of polygon node indices.” – The relative positions and orientations are harmonized to provide a map that relates all position and orientation information relative to and derived from and further relatable to further captured panoramic images. [0014] “As one example, one or more types of additional information about a building may be received and associated with the floor map (e.g., with particular locations in the floor map), such as additional images, annotations or other descriptions of particular rooms or other locations, overall dimension information, etc. ” [0038] “In addition, the image analysis identifies various other features of the room for possible later use, including connecting doorway passages 233 in and/or out of the room (as well as interior doorways or other openings 237 within the room), connecting window passages 234 (e.g., from the room to an exterior of the building), etc.—it will be appreciated that the example connecting passages are shown for only a subset of the possible connecting passages, and that some types of connecting passages (e.g., windows, interior doorways or other openings, etc.) may not be used in some embodiments.” – A 2-D point cloud map is generated that incorporates all information between all points, including adjacent points, within the space. This includes normal directions and associated annotation/semantic information that were mapped in the earlier elements.) determining, by supplying the two-dimensional point cloud to a first trained neural network, first latent space features associated with points of the two-dimensional point cloud; and (Colburn [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted). One example of a system for estimating room shape from an image is RoomNet (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2017 IEEE International Conference On Computer Vision, August 2017), and another example of a system for estimating room shape from an image is Room Net (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2018 IEEE/CVF Conference On Computer Vision And Pattern Recognition, June 2018). 
[…] In addition, if multiple room shape estimates are available for a room (e.g., from multiple viewing locations within the room), one may be selected for further use (e.g., based on positions of the viewing locations within the room, such as a most central), or instead the multiple shapes estimates may be combined, optionally in a weighted manner. Such automated estimation of a room shape may further be performed in at least some embodiments by using one or more techniques such as SfM (structure from motion), Visual SLAM (simultaneous localization and mapping), sensor fusion, etc. – 2D elements are mapped and input into a machine learning model to render a 3D model.) generating building location circular descriptors for a plurality of building locations in a specified grid pattern through the multiple rooms of the house, including, for each of the building locations, determining angular directions from the building location in 360 horizontal degrees to at least some points of the point cloud, and encoding, in one of the building location circular descriptors associated with the building location, information about some of the first latent space features that are associated with the at least some points; (Colburn [0041] “In this example, the panorama image acquired at viewing location 210D may be visible from only one other viewing location (viewing location 210C), and the information 230 b of FIG. 2B indicates that there may be some uncertainty with respect to the position of viewing location 210D in such a situation, such as is illustrated by indicators 210D-1 to 210D-N (in which the angle or direction between viewing locations 210C and 210D may be known, but in which the distance between viewing locations 210C and 210D has increased uncertainty). In other embodiments, such as when linking information is used for determining the relative positions, and/or if other information about dimensions is available (e.g., from other building metadata that is available, from analysis of sizes of known objects in images, etc.), such uncertainty may be reduced or eliminated. In this case, for example, while viewing location 210I outside the building may not be used as part of the final generation of the floor map due to its exterior location, its inter-connection to viewing location 210H may nonetheless be used when determining the relative global position of viewing location 210H, such that the relative global position of viewing location 210H is not shown with the same type of uncertainty in this example as that of viewing location 210D.” – Panoramic images are used together to generate an environment based on the elements in the images. [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location), from multiple images acquired in multiple directions from the viewing location (e.g., from a smartphone or other mobile device held by a user turning at that viewing location), etc. 
It will be appreciated that such a panorama image may in some situations be represented in a spherical coordinate system and cover up to 360° around horizontal and/or vertical axes, such that a user viewing a starting panorama image may move the viewing direction within the starting panorama image to different orientations to cause different images (or “views”) to be rendered within the starting panorama image (including, if the panorama image is represented in a spherical coordinate system, to convert the image being rendered into a planar coordinate system). Furthermore, acquisition metadata regarding the capture of such panorama images may be obtained and used in various manners, such as data acquired from IMU (inertial measurement unit) sensors or other sensors of a mobile device as it is carried by a user or otherwise moved between viewing locations. Additional details are included below related to the acquisition and usage of panorama images or other images for a building.” – The images are 360 degree images and have associated metadata, including inertially determined, relative position information. [0011] “In addition, in at least some embodiments, the automated analysis of the images may further identify additional information such as an estimated room shape and/or room type, such as by using machine learning to identify features or characteristics corresponding to different room shapes and/or room types—in other embodiments, at least some such information may be obtained in other manners, such as to receive estimated room shape information and optionally room type information from one or more users (e.g., based on user mark-up of one or more images in the room, such as to identify borders between walls, ceiling and floor; based on other user input; etc.). In some embodiments, the automated analysis of the images may further identify additional information in one or more images, such as dimensions of objects (e.g., objects of known size) and/or of some or all of the rooms, as well as estimated actual distances of images' viewing locations from walls or other features in their rooms. Additional details are included below regarding determining information from analysis of images that includes relative positions of images' viewing locations within rooms, including with respect to FIG. 2A and its associated description.” – For each position, data is generated and correlated such that each 360 degree view from each position is encoded in data, e.g., as a circular descriptor.) 
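As a rough picture of the "building location circular descriptor" limitation just mapped, the sketch below encodes, for each grid location, one feature slot per horizontal degree by keeping the latent features of the nearest point-cloud point visible in that direction. The degree-sized bins, the nearest-point rule, and all names are assumptions for illustration, not a characterization of Colburn or of the claims as filed:

import numpy as np

def location_circular_descriptor(location, points, point_features):
    # location:       (2,) x, y position on the floor plan
    # points:         (N, 2) two-dimensional point-cloud coordinates
    # point_features: (N, F) latent features associated with each point
    # returns:        (360, F) circular descriptor (zeros where no point falls)
    offsets = points - location
    angles = np.degrees(np.arctan2(offsets[:, 1], offsets[:, 0])) % 360.0
    dists = np.linalg.norm(offsets, axis=1)
    bins = angles.astype(int) % 360                 # one slot per horizontal degree

    descriptor = np.zeros((360, point_features.shape[1]))
    best = np.full(360, np.inf)                     # nearest point wins each slot
    for b, d, f in zip(bins, dists, point_features):
        if d < best[b]:
            best[b] = d
            descriptor[b] = f
    return descriptor

# Example: descriptors for a coarse grid of building locations (random stand-ins
# for the sampled point cloud and its "first latent space features").
rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(500, 2))
feats = rng.normal(size=(500, 8))
grid = [(x, y) for x in range(1, 10, 2) for y in range(1, 10, 2)]
descriptors = {loc: location_circular_descriptor(np.array(loc), points, feats) for loc in grid}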
generating, by the one or more computing devices, an image circular descriptor for a panorama image that is taken in one of the multiple rooms and has 360 horizontal degrees of visual information, including determining second latent space features associated with visual data of the panorama image by supplying the panorama image to a second trained neural network, and wherein the image circular descriptor encodes information identifying specified directions within the visual data to the second latent space features; (Colburn [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location), from multiple images acquired in multiple directions from the viewing location (e.g., from a smartphone or other mobile device held by a user turning at that viewing location), etc. It will be appreciated that such a panorama image may in some situations be represented in a spherical coordinate system and cover up to 360° around horizontal and/or vertical axes, such that a user viewing a starting panorama image may move the viewing direction within the starting panorama image to different orientations to cause different images (or “views”) to be rendered within the starting panorama image (including, if the panorama image is represented in a spherical coordinate system, to convert the image being rendered into a planar coordinate system). Furthermore, acquisition metadata regarding the capture of such panorama images may be obtained and used in various manners, such as data acquired from IMU (inertial measurement unit) sensors or other sensors of a mobile device as it is carried by a user or otherwise moved between viewing locations. Additional details are included below related to the acquisition and usage of panorama images or other images for a building.” – All data is mapped based on a relative angular/spherical coordinate system. [0011] “In addition, in at least some embodiments, the automated analysis of the images may further identify additional information such as an estimated room shape and/or room type, such as by using machine learning to identify features or characteristics corresponding to different room shapes and/or room types—in other embodiments, at least some such information may be obtained in other manners, such as to receive estimated room shape information and optionally room type information from one or more users (e.g., based on user mark-up of one or more images in the room, such as to identify borders between walls, ceiling and floor; based on other user input; etc.). In some embodiments, the automated analysis of the images may further identify additional information in one or more images, such as dimensions of objects (e.g., objects of known size) and/or of some or all of the rooms, as well as estimated actual distances of images' viewing locations from walls or other features in their rooms. Additional details are included below regarding determining information from analysis of images that includes relative positions of images' viewing locations within rooms, including with respect to FIG. 
2A and its associated description.” - Automated analysis correlates and links all data to make a single picture that can be viewed from any point and at any angle and relative to identified objects in the room. [0022] “The view capture may be performed by recording a video and/or taking a succession of images, and may include a number of objects or other features (e.g., structural details) that may be visible in images (e.g., video frames) captured from the viewing location—in the example of FIG. 1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and/or sliding doors), windows 196, corners or edges 195 (including corner 195-1 in the northwest corner of the building 198, and corner 195-2 in the northeast corner of the first room), furniture 191-193 (e.g., a couch 191; chairs 192, such as 192-1 and 192-2; tables 193, such as 193-1 and 193-2; etc.), pictures or paintings or televisions or other objects 194 (such as 194-1 and 194-2) hung on walls, light fixtures, etc. The user may also optionally provide a textual or auditory identifier to be associated with a viewing location, such as “entry” 142a for viewing location 210A or “living room” 142 b for viewing location 210B, while in other embodiments the ICA system may automatically generate such identifiers (e.g., by automatically analyzing video and/or other recorded information for a building to perform a corresponding automated determination, such as by using machine learning) or the identifiers may not be used.” – Information generated includes descriptive information, such as identification of objects in the images/rooms. [0026] “In order to determine the departure direction from point 137 more specifically, including relative to the direction 120A at which the video acquisition previously began for viewing location 210A (and at which the resulting panorama image begins), initial video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210A in order to identify matching frames/images. In particular, by matching one or more best frames in that panorama image that correspond to the information in the initial one or more video frames/images taken as the user departs from point 137, the departure direction from point 137 may be matched to the viewing direction for acquiring those matching panorama images—while not illustrated, the resulting determination may correspond to a particular degree of rotation from the starting direction 120A to the one or more matching frames/images of the panorama image for that departure direction. In a similar manner, in order to determine the arrival direction at point 138 more specifically, including relative to the direction 120B at which the video acquisition began for viewing location 210B (and at which the resulting panorama image begins), final video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210B in order to identify matching frames/images, and in particular to frames/images in direction 139 (opposite to the side of viewing location 210B at which the user arrives).” - Again, position and angular data are correlated. [0041] “As previously noted with respect to FIGS. 1C-1D and FIG. 
2A, relative positional information between two or more panorama images may be determined in various manners in various embodiments, including by analyzing metadata about the panorama acquisition, such as linking information as discussed with respect to FIGS. 1C-1D, and/or by analyzing the respective panorama images to determine common objects or features visible in multiple panorama images, as discussed further with respect to FIG. 2A. It will be noted that, as the number of viewing locations that are visible from each other increases, the precision of a location of a particular viewing location may similarly increase, such as for embodiments in which the relative position information is determined based at least in part on matching corresponding objects or other features in the panorama images. In this example, the panorama image acquired at viewing location 210D may be visible from only one other viewing location (viewing location 210C), and the information 230 b of FIG. 2B indicates that there may be some uncertainty with respect to the position of viewing location 210D in such a situation, such as is illustrated by indicators 210D-1 to 210D-N (in which the angle or direction between viewing locations 210C and 210D may be known, but in which the distance between viewing locations 210C and 210D has increased uncertainty). In other embodiments, such as when linking information is used for determining the relative positions, and/or if other information about dimensions is available (e.g., from other building metadata that is available, from analysis of sizes of known objects in images, etc.), such uncertainty may be reduced or eliminated. In this case, for example, while viewing location 210I outside the building may not be used as part of the final generation of the floor map due to its exterior location, its inter-connection to viewing location 210H may nonetheless be used when determining the relative global position of viewing location 210H, such that the relative global position of viewing location 210H is not shown with the same type of uncertainty in this example as that of viewing location 210D.” – All of the data is encoded relative to each position, such that each position in the grid (and/or at least the positions from which panoramas are taken) includes a 360 degree “circular” descriptor. [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted).” [0071] “After block 435, the routine continues to block 440 to use the obtained or acquired image and inner-connection information to determine, for the viewing locations of images inside the building, relative global positions of the viewing locations in a common coordinate system or other common frame of reference, such as to determine directions and optionally distances between the respective viewing locations. 
After block 440, the routine in block 450 analyzes the acquired or obtained panoramas or other images to determine, for each room in the building that has one or more viewing locations, a position within the room of those viewing locations, as discussed in greater detail elsewhere herein. In block 455, the routine further analyzes the images and/or the acquisition metadata for them to determine, for each room in the building, any connecting passages in or out of the room, as discussed in greater detail elsewhere herein. In block 460, the routine then receives or determines estimated room shape information and optionally room type information for some or all rooms in the building, such as based on analysis of images, information supplied by one or more users, etc., as discussed in greater detail elsewhere herein.” – All data is so correlated, including data generated by automation or received from a user.) comparing, by the one or more computing devices, the image circular descriptor to the building location circular descriptors to determine one of the building location circular descriptors whose encoded information best matches the encoded information of the image circular descriptor; (Colburn [0026] “In order to determine the departure direction from point 137 more specifically, including relative to the direction 120A at which the video acquisition previously began for viewing location 210A (and at which the resulting panorama image begins), initial video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210A in order to identify matching frames/images. In particular, by matching one or more best frames in that panorama image that correspond to the information in the initial one or more video frames/images taken as the user departs from point 137, the departure direction from point 137 may be matched to the viewing direction for acquiring those matching panorama images—while not illustrated, the resulting determination may correspond to a particular degree of rotation from the starting direction 120A to the one or more matching frames/images of the panorama image for that departure direction. In a similar manner, in order to determine the arrival direction at point 138 more specifically, including relative to the direction 120B at which the video acquisition began for viewing location 210B (and at which the resulting panorama image begins), final video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210B in order to identify matching frames/images, and in particular to frames/images in direction 139 (opposite to the side of viewing location 210B at which the user arrives).” [0041] “As previously noted with respect to FIGS. 1C-1D and FIG. 2A, relative positional information between two or more panorama images may be determined in various manners in various embodiments, including by analyzing metadata about the panorama acquisition, such as linking information as discussed with respect to FIGS. 1C-1D, and/or by analyzing the respective panorama images to determine common objects or features visible in multiple panorama images, as discussed further with respect to FIG. 2A. 
It will be noted that, as the number of viewing locations that are visible from each other increases, the precision of a location of a particular viewing location may similarly increase, such as for embodiments in which the relative position information is determined based at least in part on matching corresponding objects or other features in the panorama images.” – The descriptors of the images are compared to determine which is the closest and allow for a consistent position determination partially based on the information.) associating, by the one or more computing devices and based on the comparing, the panorama image with a determined position on the two- dimensional floor plan, wherein the determined position includes the building location in the one room associated with the determined one building location circular descriptor and further includes orientation information to correlate the determined angular directions for that building location to the identified specified directions for the panorama image; and (Colburn [0041] “As previously noted with respect to FIGS. 1C-1D and FIG. 2A, relative positional information between two or more panorama images may be determined in various manners in various embodiments, including by analyzing metadata about the panorama acquisition, such as linking information as discussed with respect to FIGS. 1C-1D, and/or by analyzing the respective panorama images to determine common objects or features visible in multiple panorama images, as discussed further with respect to FIG. 2A. It will be noted that, as the number of viewing locations that are visible from each other increases, the precision of a location of a particular viewing location may similarly increase, such as for embodiments in which the relative position information is determined based at least in part on matching corresponding objects or other features in the panorama images.” - The position is determined based on the comparison. [0008] “In at least some embodiments, the defined area includes an interior of a multi-room building (e.g., a house, office, etc.), and the generated information includes a floor map of the building, such as from an automated analysis of multiple panorama images or other images acquired at various viewing locations within the building—in at least some such embodiments, the generating is further performed without having or using detailed information about distances from the images' viewing locations to walls or other objects in the surrounding building. The generated floor map and/or other generated mapping-related information may be further used in various manners in various embodiments, including for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc.“ - The navigation contemplates the position being within and expressed relative to a room in a multi-room building.) using, by the one or more computing devices, the determined position of the panorama image on the two-dimensional floor plan of the house for navigation of at least the one room of the house. 
(Colburn [0008] “In at least some embodiments, the defined area includes an interior of a multi-room building (e.g., a house, office, etc.), and the generated information includes a floor map of the building, such as from an automated analysis of multiple panorama images or other images acquired at various viewing locations within the building—in at least some such embodiments, the generating is further performed without having or using detailed information about distances from the images' viewing locations to walls or other objects in the surrounding building. The generated floor map and/or other generated mapping-related information may be further used in various manners in various embodiments, including for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc.“ The determined floor plan and position data is used for navigation, e.g., by a mobile device.) Claim 2 Regarding Claim 2, Colburn teaches the features of claim 1 and further teaches: The computer-implemented method of claim 1 wherein the generating of the building location circular descriptors further includes obtaining a first enumerated group of ranges of incident angles, obtaining a second enumerated group of ranges of distances, and performing the encoding for each of the building location circular descriptors of the information about some of the first latent space features by, for each of the at least some points for the building location of that building location circular descriptor, encoding information in that building location circular descriptor for one of the 360 horizontal degrees from that building location to that point that includes one of the ranges of incident angles from the first enumerated group and one of the ranges of distances from the second enumerated group. (Colburn [0041] “In this example, the panorama image acquired at viewing location 210D may be visible from only one other viewing location (viewing location 210C), and the information 230 b of FIG. 2B indicates that there may be some uncertainty with respect to the position of viewing location 210D in such a situation, such as is illustrated by indicators 210D-1 to 210D-N (in which the angle or direction between viewing locations 210C and 210D may be known, but in which the distance between viewing locations 210C and 210D has increased uncertainty). In other embodiments, such as when linking information is used for determining the relative positions, and/or if other information about dimensions is available (e.g., from other building metadata that is available, from analysis of sizes of known objects in images, etc.), such uncertainty may be reduced or eliminated.” [0043] “In particular, a viewing location's position with respect to features in the room may be determined (as discussed with respect to FIG. 2A), and FIG. 2C further illustrates information 226 with respect to viewing location 210A to indicate such relative angles and optionally distance of the viewing location 210A to a southwest corner of the room, to a south wall of the room, and to the exterior doorway, with various other possible features (e.g., interior doorway to the hallway, northeast corner 195-2, etc.) also available to be used in this manner. 
Such information may be used to provide an initial estimated position of the estimated room shape 242 for room 229 a around viewing location 210A, such as by minimizing the total error for the initial placement of the estimated room shape with respect to each such feature's measured position information for the viewing location.” [0046] “For any camera center $C_i$, find the transformation matrix $T_i$ that projects the coordinates of the camera center to the global coordinate such that the pairwise camera angle relations is preserved as much as possible. For any camera center $C_j$ for which its pairwise angle $\theta$ to $C_i$ is known, calculate the distance ($d_{(i,j)}$) of that point from the line that passes through $C_i$ with angle of $\theta$. The error of the initial preferred placements is measured as the sum of all possible distances $d_{(i,j)}$. Therefore, given a set of pairwise panorama image relations (i,j), the placement problem is defined as finding the set of transformation matrixes $T_i$s such that $d$ constraint is bounded $d<\epsilon$. Given the above framework, a valid placement should satisfy these constraints as much as possible.” [0050] “camera centers are registered against a common global angle (e.g., global north)” – All of the angles for each position are registered relative to a North direction and with respect to all distances within the building domain. [0041] “As previously noted with respect to FIGS. 1C-1D and FIG. 2A, relative positional information between two or more panorama images may be determined in various manners in various embodiments, including by analyzing metadata about the panorama acquisition, such as linking information as discussed with respect to FIGS. 1C-1D, and/or by analyzing the respective panorama images to determine common objects or features visible in multiple panorama images, as discussed further with respect to FIG. 2A. It will be noted that, as the number of viewing locations that are visible from each other increases, the precision of a location of a particular viewing location may similarly increase, such as for embodiments in which the relative position information is determined based at least in part on matching corresponding objects or other features in the panorama images.” [0031] “The image 150 e includes several objects in the surrounding environment of the living room, such as windows 196, a picture or painting 194-1, chair 192-1, table 193-1, a lighting fixture 130 a, and inter-wall and floor and ceiling borders including border 195-2. In addition, because the panorama images for viewing locations 210A and 210B are linked, the image 150 e includes a generated virtual user-selectable control 141 b to visually indicate that the user may select that control to move from the location at which image 150 e was taken (the viewing location 210A) to the linked panorama image at viewing location 210B, with the additional text label 142 b of “living room” from FIG. 1B added along with the user-selectable control 141 b to reference that viewing location 210B.” [0041] “ In other embodiments, such as when linking information is used for determining the relative positions, and/or if other information about dimensions is available (e.g., from other building metadata that is available, from analysis of sizes of known objects in images, etc.), such uncertainty may be reduced or eliminated. 
In this case, for example, while viewing location 210I outside the building may not be used as part of the final generation of the floor map due to its exterior location, its inter-connection to viewing location 210H may nonetheless be used when determining the relative global position of viewing location 210H, such that the relative global position of viewing location 210H is not shown with the same type of uncertainty in this example as that of viewing location 210D.” – All of the spatial data is determined with respect to each position, such that this information is related for all object elements. When determining the final position of information from linking the elements of the images, this can be done based on angles and distances and/or based on metadata associated with objects within images that are identified.) Claim 3 Regarding claim 3, Colburn teaches the features of claim 1 and further teaches: further comprising using, by the one or more computing devices, the two-dimensional floor plan to further control navigation activities by an autonomous vehicle, including providing the two-dimensional floor plan for use by the autonomous vehicle in moving between the multiple rooms of the house. (Colburn [0008] “The generated floor map and/or other generated mapping-related information may be further used in various manners in various embodiments, including for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc. Additional details are included below regarding the automated generation and use of mapping information, and some or all of the techniques described herein may, in at least some embodiments, be performed via automated operations of a Floor Map Generation Manager (“FMGM”) system, as discussed further below.” [0021] “FIG. 1B depicts a block diagram of an exemplary building interior environment in which linked panorama images have been generated and are ready for use by the FMGM system to generate and provide a corresponding building floor map, as discussed in greater detail with respect to FIGS. 2A-2D, as well as for use in presenting the linked panorama images to users.” – Colburn teaches using the generated floor map to control the navigation of an autonomous vehicle.) Claim 4 Regarding claim 4, Colburn teaches the features of claim 1 and further teaches: wherein the using of the determined position further includes displaying, by the one or more computing devices, the two-dimensional floor plan showing the multiple rooms and including one or more visual indications on the displayed two-dimensional floor plan of the determined position and the orientation information for the panorama image in the one room. (Colburn Abstract “The generated floor map and other mapping-related information may be used in various manners, including for controlling navigation of devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding graphical user interfaces, etc.” [0014] “Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed—if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. 
In addition, if height information is estimated for one or more rooms, a 3D (three-dimensional) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed.“ [0018] “One or more users (not shown) of one or more client computing devices 175 may further interact over one or more computer networks 170 with the FMGM system 140 and optionally the ICA system 160, such as to obtain, display and interact with a generated floor map and/or one or more associated linked panorama images (e.g., to change between a floor map view and a view of a particular panorama image at a viewing location within or near the floor map;” – A floor map/plan is determined and is displayed. [0021] “In addition, while directional indicator 109 is provided for reference of the viewer, the mobile device and/or ICA system may not use such absolute directional information in at least some embodiments, such as to instead determine relative directions and distances between panorama images 210 without regard to actual geographical positions or directions.” - This is a visual indication of orientation. [0031] “Since image 150 e is in the direction of the viewing location 210 b but from a greater distance with a wider angle of view, a subset of image 150 e corresponds to an image view 150 h that would be visible in the same direction from the panorama image at viewing location 210B, as shown in FIG. 1 E with dashed lines for the purpose of illustration, although such dashed lines may also not be displayed as part of the image 150 e shown to a user. The image 150 e includes several objects in the surrounding environment of the living room, such as windows 196, a picture or painting 194-1, chair 192-1, table 193-1, a lighting fixture 130 a, and inter-wall and floor and ceiling borders including border 195-2. In addition, because the panorama images for viewing locations 210A and 210B are linked, the image 150 e includes a generated virtual user-selectable control 141 b to visually indicate that the user may select that control to move from the location at which image 150 e was taken (the viewing location 210A) to the linked panorama image at viewing location 210B, with the additional text label 142 b of “living room” from FIG. 1B added along with the user-selectable control 141 b to reference that viewing location 210B.” - These are visual indications of position.) Claim 5 Regarding claim 5, Colburn teaches: A computer-implemented method comprising: (Colburn Abstract “Techniques are described for using computing devices to perform automated operations to generate mapping information using inter - connected images of a defined area , and for using the generated mapping information in further automated manners . In at least some situations, the defined area includes an interior of a multi-room building, and the generated information includes a floor map of the building, such as from an automated analysis of multiple panorama images or other images acquired at various viewing locations within the building” – Computer implemented method.) 
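Before turning to the claim 5 elements, which largely parallel claim 1, the "comparing" and "associating" steps mapped above can be pictured as a circular match: the 360-slot image descriptor is rotated against each 360-slot building location descriptor, and the best-scoring location and rotation yield both the position and the orientation offset. The brute-force rotation search below is only an illustrative sketch under that assumption; neither Colburn nor the record describes this procedure:

import numpy as np

def match_descriptor(image_desc, location_descs):
    # image_desc:     (360, F) circular descriptor of the panorama image
    # location_descs: dict mapping location -> (360, F) building location descriptor
    # returns (best_location, best_rotation_degrees, best_score)
    best = (None, None, -np.inf)
    for loc, loc_desc in location_descs.items():
        for rot in range(360):                      # try every 1-degree circular shift
            score = float(np.sum(np.roll(image_desc, rot, axis=0) * loc_desc))
            if score > best[2]:
                best = (loc, rot, score)
    return best

# Usage with the hypothetical `descriptors` dict from the earlier sketch:
# location, orientation, score = match_descriptor(image_descriptor, descriptors)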
obtaining, by a computing device and for a building, building location description information including a plurality of building location circular descriptors for a plurality of building locations in the building, wherein each building location circular descriptor is associated with one of the building locations (Colburn [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location), from multiple images acquired in multiple directions from the viewing location (e.g., from a smartphone or other mobile device held by a user turning at that viewing location), etc. It will be appreciated that such a panorama image may in some situations be represented in a spherical coordinate system and cover up to 360° around horizontal and/or vertical axes, such that a user viewing a starting panorama image may move the viewing direction within the starting panorama image to different orientations to cause different images (or “views”) to be rendered within the starting panorama image (including, if the panorama image is represented in a spherical coordinate system, to convert the image being rendered into a planar coordinate system). Furthermore, acquisition metadata regarding the capture of such panorama images may be obtained and used in various manners, such as data acquired from IMU (inertial measurement unit) sensors or other sensors of a mobile device as it is carried by a user or otherwise moved between viewing locations. Additional details are included below related to the acquisition and usage of panorama images or other images for a building.” – All data is mapped based on a relative angular/spherical coordinate system. [0011] “In addition, in at least some embodiments, the automated analysis of the images may further identify additional information such as an estimated room shape and/or room type, such as by using machine learning to identify features or characteristics corresponding to different room shapes and/or room types—in other embodiments, at least some such information may be obtained in other manners, such as to receive estimated room shape information and optionally room type information from one or more users (e.g., based on user mark-up of one or more images in the room, such as to identify borders between walls, ceiling and floor; based on other user input; etc.). In some embodiments, the automated analysis of the images may further identify additional information in one or more images, such as dimensions of objects (e.g., objects of known size) and/or of some or all of the rooms, as well as estimated actual distances of images' viewing locations from walls or other features in their rooms. Additional details are included below regarding determining information from analysis of images that includes relative positions of images' viewing locations within rooms, including with respect to FIG. 2A and its associated description.” - Automated analysis correlate and links all data to make a single picture that can be viewed from any point and at any angle and relative to identified objects in the room. 
[0022] “The view capture may be performed by recording a video and/or taking a succession of images, and may include a number of objects or other features (e.g., structural details) that may be visible in images (e.g., video frames) captured from the viewing location—in the example of FIG. 1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and/or sliding doors), windows 196, corners or edges 195 (including corner 195-1 in the northwest corner of the building 198, and corner 195-2 in the northeast corner of the first room), furniture 191-193 (e.g., a couch 191; chairs 192, such as 192-1 and 192-2; tables 193, such as 193-1 and 193-2; etc.), pictures or paintings or televisions or other objects 194 (such as 194-1 and 194-2) hung on walls, light fixtures, etc. The user may also optionally provide a textual or auditory identifier to be associated with a viewing location, such as “entry” 142a for viewing location 210A or “living room” 142 b for viewing location 210B, while in other embodiments the ICA system may automatically generate such identifiers (e.g., by automatically analyzing video and/or other recorded information for a building to perform a corresponding automated determination, such as by using machine learning) or the identifiers may not be used.” – Information generated includes descriptive information, such as identification of objects in the images/rooms. [0026] “In order to determine the departure direction from point 137 more specifically, including relative to the direction 120A at which the video acquisition previously began for viewing location 210A (and at which the resulting panorama image begins), initial video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210A in order to identify matching frames/images. In particular, by matching one or more best frames in that panorama image that correspond to the information in the initial one or more video frames/images taken as the user departs from point 137, the departure direction from point 137 may be matched to the viewing direction for acquiring those matching panorama images—while not illustrated, the resulting determination may correspond to a particular degree of rotation from the starting direction 120A to the one or more matching frames/images of the panorama image for that departure direction. In a similar manner, in order to determine the arrival direction at point 138 more specifically, including relative to the direction 120B at which the video acquisition began for viewing location 210B (and at which the resulting panorama image begins), final video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210B in order to identify matching frames/images, and in particular to frames/images in direction 139 (opposite to the side of viewing location 210B at which the user arrives).” - Again, position and angular data are correlated. [0041] “As previously noted with respect to FIGS. 1C-1D and FIG. 2A, relative positional information between two or more panorama images may be determined in various manners in various embodiments, including by analyzing metadata about the panorama acquisition, such as linking information as discussed with respect to FIGS. 
1C-1D, and/or by analyzing the respective panorama images to determine common objects or features visible in multiple panorama images, as discussed further with respect to FIG. 2A. It will be noted that, as the number of viewing locations that are visible from each other increases, the precision of a location of a particular viewing location may similarly increase, such as for embodiments in which the relative position information is determined based at least in part on matching corresponding objects or other features in the panorama images. In this example, the panorama image acquired at viewing location 210D may be visible from only one other viewing location (viewing location 210C), and the information 230 b of FIG. 2B indicates that there may be some uncertainty with respect to the position of viewing location 210D in such a situation, such as is illustrated by indicators 210D-1 to 210D-N (in which the angle or direction between viewing locations 210C and 210D may be known, but in which the distance between viewing locations 210C and 210D has increased uncertainty). In other embodiments, such as when linking information is used for determining the relative positions, and/or if other information about dimensions is available (e.g., from other building metadata that is available, from analysis of sizes of known objects in images, etc.), such uncertainty may be reduced or eliminated. In this case, for example, while viewing location 210I outside the building may not be used as part of the final generation of the floor map due to its exterior location, its inter-connection to viewing location 210H may nonetheless be used when determining the relative global position of viewing location 210H, such that the relative global position of viewing location 210H is not shown with the same type of uncertainty in this example as that of viewing location 210D.” – All of the data is encoded relative to each position, such that each position in the grid (and/or at least the positions from which panoramas are taken) includes a 360 degree “circular” descriptor. [0071] “After block 435, the routine continues to block 440 to use the obtained or acquired image and inner-connection information to determine, for the viewing locations of images inside the building, relative global positions of the viewing locations in a common coordinate system or other common frame of reference, such as to determine directions and optionally distances between the respective viewing locations. After block 440, the routine in block 450 analyzes the acquired or obtained panoramas or other images to determine, for each room in the building that has one or more viewing locations, a position within the room of those viewing locations, as discussed in greater detail elsewhere herein. In block 455, the routine further analyzes the images and/or the acquisition metadata for them to determine, for each room in the building, any connecting passages in or out of the room, as discussed in greater detail elsewhere herein. In block 460, the routine then receives or determines estimated room shape information and optionally room type information for some or all rooms in the building, such as based on analysis of images, information supplied by one or more users, etc., as discussed in greater detail elsewhere herein.” – All data is so correlated, including data generated by automation or received from a user. The data is retrieved at each position.) 
and has first angular information about first latent space features identified for structural elements of the building at specified angular directions from the associated building location, (Colburn [0041] “In this example, the panorama image acquired at viewing location 210D may be visible from only one other viewing location (viewing location 210C), and the information 230 b of FIG. 2B indicates that there may be some uncertainty with respect to the position of viewing location 210D in such a situation, such as is illustrated by indicators 210D-1 to 210D-N (in which the angle or direction between viewing locations 210C and 210D may be known, but in which the distance between viewing locations 210C and 210D has increased uncertainty).” [0043] “In particular, a viewing location's position with respect to features in the room may be determined (as discussed with respect to FIG. 2A), and FIG. 2C further illustrates information 226 with respect to viewing location 210A to indicate such relative angles and optionally distance of the viewing location 210A to a southwest corner of the room, to a south wall of the room, and to the exterior doorway, with various other possible features (e.g., interior doorway to the hallway, northeast corner 195-2, etc.) also available to be used in this manner.” – Angular information is encoded for each position.) wherein the first latent space features are identified by a first trained neural network using a two-dimensional floor plan of the building; (Colburn [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted).” [0014] “Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed—if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. In addition, if height information is estimated for one or more rooms, a 3D (three-dimensional) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed.” – A deep neural network is used, taking information of the 2D floor map (e.g., room shape) as input. In training the deep neural network, a hidden layer encodes latent space feature variables, and training determines the values of those latent space feature variables.)
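The examiner's mapping equates the claimed first latent space features with hidden-layer values in Colburn's machine-learning processing. Purely for illustration, the toy stand-in below shows how a shared two-layer network could turn per-point floor-plan attributes into per-point latent features; the untrained random weights, the seven assumed input attributes (x, y, normal components, and semantic flags), and the eight-dimensional output are all hypothetical:

import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained network: a shared two-layer MLP applied to every point.
W1 = rng.normal(scale=0.1, size=(7, 32))
W2 = rng.normal(scale=0.1, size=(32, 8))

def point_latent_features(points_with_attrs):
    # points_with_attrs: (N, 7) array of per-point attributes -> (N, 8) latent features
    hidden = np.maximum(points_with_attrs @ W1, 0.0)   # ReLU
    return hidden @ W2

# Example: 500 points with random attributes in place of real floor-plan samples.
pts = rng.normal(size=(500, 7))
latent = point_latent_features(pts)                    # "first latent space features"
print(latent.shape)                                    # (500, 8)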
generating, by the computing device, an image circular descriptor for a panorama image that is captured in a room of the building and that includes visual information about at least some walls of the room, (Colburn [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location), from multiple images acquired in multiple directions from the viewing location (e.g., from a smartphone or other mobile device held by a user turning at that viewing location), etc. It will be appreciated that such a panorama image may in some situations be represented in a spherical coordinate system and cover up to 360° around horizontal and/or vertical axes, such that a user viewing a starting panorama image may move the viewing direction within the starting panorama image to different orientations to cause different images (or “views”) to be rendered within the starting panorama image (including, if the panorama image is represented in a spherical coordinate system, to convert the image being rendered into a planar coordinate system). Furthermore, acquisition metadata regarding the capture of such panorama images may be obtained and used in various manners, such as data acquired from IMU (inertial measurement unit) sensors or other sensors of a mobile device as it is carried by a user or otherwise moved between viewing locations. Additional details are included below related to the acquisition and usage of panorama images or other images for a building.” – All data is mapped based on a relative angular/spherical coordinate system. [0011] “In addition, in at least some embodiments, the automated analysis of the images may further identify additional information such as an estimated room shape and/or room type, such as by using machine learning to identify features or characteristics corresponding to different room shapes and/or room types—in other embodiments, at least some such information may be obtained in other manners, such as to receive estimated room shape information and optionally room type information from one or more users (e.g., based on user mark-up of one or more images in the room, such as to identify borders between walls, ceiling and floor; based on other user input; etc.). In some embodiments, the automated analysis of the images may further identify additional information in one or more images, such as dimensions of objects (e.g., objects of known size) and/or of some or all of the rooms, as well as estimated actual distances of images' viewing locations from walls or other features in their rooms. Additional details are included below regarding determining information from analysis of images that includes relative positions of images' viewing locations within rooms, including with respect to FIG. 2A and its associated description.” - Automated analysis correlate and links all data to make a single picture that can be viewed from any point and at any angle and relative to identified objects in the room. 
[0022] “The view capture may be performed by recording a video and/or taking a succession of images, and may include a number of objects or other features (e.g., structural details) that may be visible in images (e.g., video frames) captured from the viewing location—in the example of FIG. 1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and/or sliding doors), windows 196, corners or edges 195 (including corner 195-1 in the northwest corner of the building 198, and corner 195-2 in the northeast corner of the first room), furniture 191-193 (e.g., a couch 191; chairs 192, such as 192-1 and 192-2; tables 193, such as 193-1 and 193-2; etc.), pictures or paintings or televisions or other objects 194 (such as 194-1 and 194-2) hung on walls, light fixtures, etc. The user may also optionally provide a textual or auditory identifier to be associated with a viewing location, such as “entry” 142a for viewing location 210A or “living room” 142 b for viewing location 210B, while in other embodiments the ICA system may automatically generate such identifiers (e.g., by automatically analyzing video and/or other recorded information for a building to perform a corresponding automated determination, such as by using machine learning) or the identifiers may not be used.” – Information generated includes descriptive information, such as identification of objects in the images/rooms. [0026] “In order to determine the departure direction from point 137 more specifically, including relative to the direction 120A at which the video acquisition previously began for viewing location 210A (and at which the resulting panorama image begins), initial video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210A in order to identify matching frames/images. In particular, by matching one or more best frames in that panorama image that correspond to the information in the initial one or more video frames/images taken as the user departs from point 137, the departure direction from point 137 may be matched to the viewing direction for acquiring those matching panorama images—while not illustrated, the resulting determination may correspond to a particular degree of rotation from the starting direction 120A to the one or more matching frames/images of the panorama image for that departure direction. In a similar manner, in order to determine the arrival direction at point 138 more specifically, including relative to the direction 120B at which the video acquisition began for viewing location 210B (and at which the resulting panorama image begins), final video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210B in order to identify matching frames/images, and in particular to frames/images in direction 139 (opposite to the side of viewing location 210B at which the user arrives).” - Again, position and angular data are correlated. [0041] “As previously noted with respect to FIGS. 1C-1D and FIG. 2A, relative positional information between two or more panorama images may be determined in various manners in various embodiments, including by analyzing metadata about the panorama acquisition, such as linking information as discussed with respect to FIGS. 
1C-1D, and/or by analyzing the respective panorama images to determine common objects or features visible in multiple panorama images, as discussed further with respect to FIG. 2A. It will be noted that, as the number of viewing locations that are visible from each other increases, the precision of a location of a particular viewing location may similarly increase, such as for embodiments in which the relative position information is determined based at least in part on matching corresponding objects or other features in the panorama images. In this example, the panorama image acquired at viewing location 210D may be visible from only one other viewing location (viewing location 210C), and the information 230 b of FIG. 2B indicates that there may be some uncertainty with respect to the position of viewing location 210D in such a situation, such as is illustrated by indicators 210D-1 to 210D-N (in which the angle or direction between viewing locations 210C and 210D may be known, but in which the distance between viewing locations 210C and 210D has increased uncertainty). In other embodiments, such as when linking information is used for determining the relative positions, and/or if other information about dimensions is available (e.g., from other building metadata that is available, from analysis of sizes of known objects in images, etc.), such uncertainty may be reduced or eliminated. In this case, for example, while viewing location 210I outside the building may not be used as part of the final generation of the floor map due to its exterior location, its inter-connection to viewing location 210H may nonetheless be used when determining the relative global position of viewing location 210H, such that the relative global position of viewing location 210H is not shown with the same type of uncertainty in this example as that of viewing location 210D.” – All of the data is encoded relative to each position, such that each position in the grid (and/or at least the positions from which panoramas are taken) includes a 360 degree “circular” descriptor. [0071] “After block 435, the routine continues to block 440 to use the obtained or acquired image and inner-connection information to determine, for the viewing locations of images inside the building, relative global positions of the viewing locations in a common coordinate system or other common frame of reference, such as to determine directions and optionally distances between the respective viewing locations. After block 440, the routine in block 450 analyzes the acquired or obtained panoramas or other images to determine, for each room in the building that has one or more viewing locations, a position within the room of those viewing locations, as discussed in greater detail elsewhere herein. In block 455, the routine further analyzes the images and/or the acquisition metadata for them to determine, for each room in the building, any connecting passages in or out of the room, as discussed in greater detail elsewhere herein. In block 460, the routine then receives or determines estimated room shape information and optionally room type information for some or all rooms in the building, such as based on analysis of images, information supplied by one or more users, etc., as discussed in greater detail elsewhere herein.” – All data is so correlated, including data generated by automation or received from a user. The data is retrieved at each position.) 
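For illustration only, and not taken from the application, the claims, or Colburn, a per-degree circular descriptor of the kind discussed above might be computed from an equirectangular panorama along the lines of the Python sketch below. The function name image_circular_descriptor, the feature_dim parameter, and the use of per-column statistics as a stand-in for features produced by a trained neural network are assumptions of this sketch.

import numpy as np

def image_circular_descriptor(panorama: np.ndarray, feature_dim: int = 6) -> np.ndarray:
    """panorama: H x W x C equirectangular image spanning 360 horizontal degrees.
    Returns a 360 x feature_dim array holding one feature vector per horizontal degree.
    A trained network would normally produce these features; simple per-column
    statistics are used here only to show the angular layout of the descriptor."""
    width = panorama.shape[1]
    descriptor = np.zeros((360, feature_dim), dtype=np.float32)
    for degree in range(360):
        # Columns of the panorama that fall within this one-degree slice.
        start = int(width * degree / 360)
        end = max(start + 1, int(width * (degree + 1) / 360))
        column = panorama[:, start:end, :].astype(np.float32)
        # Stand-in features: per-channel mean and standard deviation.
        stats = np.concatenate([column.mean(axis=(0, 1)), column.std(axis=(0, 1))])
        descriptor[degree, :min(feature_dim, stats.size)] = stats[:feature_dim]
    return descriptor

In an actual system the per-degree features would come from a trained network rather than raw statistics; the sketch shows only the 360-bin angular layout of such a descriptor.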
wherein the image circular descriptor has second angular information about second latent space features identified from the visual information of the panorama image at specified directions by a second trained neural network; (Colburn [0026] “In order to determine the departure direction from point 137 more specifically, including relative to the direction 120A at which the video acquisition previously began for viewing location 210A (and at which the resulting panorama image begins), initial video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210A in order to identify matching frames/images. In particular, by matching one or more best frames in that panorama image that correspond to the information in the initial one or more video frames/images taken as the user departs from point 137, the departure direction from point 137 may be matched to the viewing direction for acquiring those matching panorama images—while not illustrated, the resulting determination may correspond to a particular degree of rotation from the starting direction 120A to the one or more matching frames/images of the panorama image for that departure direction. In a similar manner, in order to determine the arrival direction at point 138 more specifically, including relative to the direction 120B at which the video acquisition began for viewing location 210B (and at which the resulting panorama image begins), final video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210B in order to identify matching frames/images, and in particular to frames/images in direction 139 (opposite to the side of viewing location 210B at which the user arrives).” – Angular data is an element of the 2D data representing the floor plan. [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted).” [0014] “Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed—if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. In addition, if height information is estimated for one or more rooms, a 3D (three-dimensional) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed.” – A deep neural network is used, taking information of the 2D floor map (e.g., room shape) as input. In training the DEEP neural network, a hidden layer with latent space feature variables is trained to determine values of the latent space feature variables. 
The latent features relate all input and output parameters, including spatial and orientation features, such as those in the data at each point that has 360-degree data, e.g., circular descriptors.) comparing, by the computing device, the image circular descriptor to the building location circular descriptors to determine one of the building location circular descriptors that is in the room and has first angular information best matching the second angular information of the image circular descriptor; (Colburn [0026] “In order to determine the departure direction from point 137 more specifically, including relative to the direction 120A at which the video acquisition previously began for viewing location 210A (and at which the resulting panorama image begins), initial video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210A in order to identify matching frames/images. In particular, by matching one or more best frames in that panorama image that correspond to the information in the initial one or more video frames/images taken as the user departs from point 137, the departure direction from point 137 may be matched to the viewing direction for acquiring those matching panorama images—while not illustrated, the resulting determination may correspond to a particular degree of rotation from the starting direction 120A to the one or more matching frames/images of the panorama image for that departure direction. In a similar manner, in order to determine the arrival direction at point 138 more specifically, including relative to the direction 120B at which the video acquisition began for viewing location 210B (and at which the resulting panorama image begins), final video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210B in order to identify matching frames/images, and in particular to frames/images in direction 139 (opposite to the side of viewing location 210B at which the user arrives).” – Data regarding the surroundings of each point in a map is compared with the analogous data of an image. [0037] “Using feature 195-1 in the northwest corner of the room 229 a as an example, a corresponding viewing direction 227A in the direction of that feature from viewing location 210A is shown, with an associated frame in viewing location 210A′s panorama image being determined, and a corresponding viewing direction 228A with associated frame from viewing location 210C to that feature is also shown—given such matching frames/images to the same feature in the room from the two viewing locations, information in those two frames/images may be compared in order to determine a relative rotation and translation between viewing locations 210A and 210C (assuming that sufficient overlap in the two images is available). 
It will be appreciated that multiple frames from both viewing locations may include at least some of the same feature (e.g., corner 195-1), and that a given such frame may include other information in addition to that feature (e.g., portions of the west and north walls, the ceiling and/or floor, possible contents of the room, etc.)—for the purpose of this example, the pair of frames/images being compared from the two viewing locations corresponding to feature 195-1 may include the image/frame from each viewing location with the largest amount of overlap, although in actuality each image/frame from viewing location 210A in the approximate direction of 227A that includes any of corner 195-1 may be compared to each image/frame from viewing location 210C in the approximate direction of 228A that includes any of corner 195-1 (and similarly for any other discernible features in the room). Furthermore, by using the determined relative rotation and translation for multiple such matching frames/images for one or more features, the precision of the positions of the corresponding viewing locations may be increased.” associating, by the computing device and based on the comparing, the panorama image with a determined position and orientation in the room, the determined position based on the building location with which the determined one building location circular descriptor is associated, and the determined orientation identifying at least one direction from that building location corresponding to a specified part of the visible information in the panorama image; and (Colburn [0026] “It will be appreciated that the order of obtaining such linking information may vary, such as if the user instead started at viewing location 210B and captured linking information as he or she traveled along path 115 bc to viewing location 210C, and later proceeded from viewing location 210A to viewing location 210B along travel path 115 ab with corresponding linking information captured (optionally after moving from viewing location 210C to 210A without capturing linking information). In this example, FIG. 1D illustrates that the user departs from the viewing location 210A at a point 137 in a direction that is just west of due north (as previously indicated with respect to directional indicator 109 of FIG. 1B), proceeding in a primarily northward manner for approximately a first half of the travel path 115ab, and then beginning to curve in a more easterly direction until arriving at an incoming point 138 to viewing location 210B in a direction that is mostly eastward and a little northward. In order to determine the departure direction from point 137 more specifically, including relative to the direction 120A at which the video acquisition previously began for viewing location 210A (and at which the resulting panorama image begins), initial video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210A in order to identify matching frames/images. 
In particular, by matching one or more best frames in that panorama image that correspond to the information in the initial one or more video frames/images taken as the user departs from point 137, the departure direction from point 137 may be matched to the viewing direction for acquiring those matching panorama images—while not illustrated, the resulting determination may correspond to a particular degree of rotation from the starting direction 120A to the one or more matching frames/images of the panorama image for that departure direction. In a similar manner, in order to determine the arrival direction at point 138 more specifically, including relative to the direction 120B at which the video acquisition began for viewing location 210B (and at which the resulting panorama image begins), final video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210B in order to identify matching frames/images, and in particular to frames/images in direction 139 (opposite to the side of viewing location 210B at which the user arrives).” – The image is associated with position and orientation data of the navigation data that includes 360 relative position and orientation data (e.g., derived from prior images and inertial motion data) for all rooms and objects in a building.) presenting, by the computing device, information that includes the two- dimensional floor plan of the building and shows the room with a visual indication identifying at least the determined position for the panorama image, to cause use of the presented information for navigating the building. (Colburn [0008] “The generated floor map and/or other generated mapping-related information may be further used in various manners in various embodiments, including for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc. Additional details are included below regarding the automated generation and use of mapping information, and some or all of the techniques described herein may, in at least some embodiments, be performed via automated operations of a Floor Map Generation Manager (“FMGM”) system, as discussed further below.” – Elements including generated floor maps are displayed in a GUI. [0020] “After the viewing locations' videos and linking information are captured, the techniques may include analyzing video captured at each viewing location to create a panorama image from that viewing location that has visual data in multiple directions (e.g., a 360 degree panorama around a vertical axis), analyzing information to determine relative positions/directions between each of two or more viewing locations, creating inter-panorama positional/directional links in the panoramas to each of one or more other panoramas based on such determined positions/directions, and then providing information to display or otherwise present multiple linked panorama images for the various viewing locations within the house. Additional details related to embodiments of a system providing at least some such functionality of an ICA system are included in co-pending U.S. Non-Provisional patent application Ser. No. 15/649,434, filed Jul. 
13, 2017 and entitled “Connecting And Using Building Interior Data Acquired From Mobile Devices” (which includes disclosure of an example BICA system that is generally directed to obtaining and using panorama images from within one or more buildings or other structures); in U.S. Non-Provisional patent application Ser. No. 15/950,881, filed Apr. 11, 2018 and entitled “Presenting Image Transition Sequences Between Viewing Locations” (which includes disclosure of an example ICA system that is generally directed to obtaining and using panorama images from within one or more buildings or other structures); and in U.S. Provisional Patent Application No. 62/744,472, filed Oct. 11, 2018 and entitled “Automated Mapping Information Generation From Inter-Connected Images”; each of which is incorporated herein by reference in its entirety.” – The information displayed includes the positions of the images which can be viewed separately or stitched together with other images taken at the same position. [0035]-[0036] “FIGS. 2A-2D illustrate examples of automatically generating and presenting a floor map for a building using inter-connected panorama images of the building interior, such as based on the building 198 and inter-connected panorama images 210 discussed in FIGS. 1B-1H. In particular, FIG. 2A illustrates information 230 a about analysis techniques that may be performed using information in various panorama images (or other types of images) in order to determine approximate position of each viewing location within its room, as well as to optionally determine additional information as well (e.g., locations of connecting passages between rooms, relative positional information between viewing locations, estimates of room shapes, determinations of room types, etc.).” – The capture locations can be displayed on the map. See also, for example, FIG. 2C illustrating a 2-D floor plan with indications of locations of the image capture positions. Claim 6 Regarding claim 6, Colburn teaches the features of claim 5 and further teaches: wherein the presenting of the floor plan further includes visually indicating the determined orientation, (Colburn [0021] “In addition, while directional indicator 109 is provided for reference of the viewer, the mobile device and/or ICA system may not use such absolute directional information in at least some embodiments, such as to instead determine relative directions and distances between panorama images 210 without regard to actual geographical positions or directions.” – This is a visual indicator of orientation.) and wherein the method further comprises presenting, by the computing device and in response to a user selection of the visual indication on the presented floor plan, at least a portion of the panorama image corresponding to the determined orientation. (Colburn [0031]-[0033] “In addition, because the panorama images for viewing locations 210A and 210B are linked, the image 150 e includes a generated virtual user-selectable control 141 b to visually indicate that the user may select that control to move from the location at which image 150 e was taken (the viewing location 210A) to the linked panorama image at viewing location 210B, with the additional text label 142 b of “living room” from FIG. 1B added along with the user-selectable control 141 b to reference that viewing location 210B. The user may further manipulate the displayed panorama image view 150 e of FIG. 1E in various ways, with FIG. 
1F illustrating an altered view 150 f from the same panorama image corresponding to the user dragging, scrolling or otherwise moving the view direction to the left. Altered view 150 f includes some of the same information as view 150 e, and further includes additional objects visible to the left in the living room, including additional lighting FIG. 130b and table 193-2. The representation 141 b and corresponding text 142 b have also been altered to reflect the changed direction to viewing location 210B from the view 150 f. FIG. 1G illustrates a different altered view 150 g that the user may initiate from view 150 e of FIG. 1E, which in this example corresponds to the user zooming in, such that the objects from FIG. 1 E which continue to be visible in FIG. 1G are shown in an enlarged form. FIG. 1H continues the example, and illustrates the effect of selecting the control 141 b in one of views 150e, 150 f or 150 g, to cause a view 150 h of the panorama image at viewing location 210B to be shown. In this example, since the view from the viewing location 210B′s panorama image was initiated from viewing location 210A′s panorama image, the initial view 150 h shown in FIG. 1H is in the direction of the link 215-AB between the two viewing locations (as also shown in FIG. 1E), although the user can subsequently manipulate the panorama image for viewing location 210B in a manner similar to that discussed for the panorama image for viewing location 210A. Initial images to be displayed for viewing location 210B are able to be selected in other manners in other embodiments, with one example being the user changing from the panorama image at viewing location 210C to the panorama image of viewing location 210B, in which case the initial view of viewing location 210B′s panorama image in this example would be in the direction of link 215-BC.” – The GUI provides options for changing a virtual position within the virtual map and viewing the image captured at that position. The GUI provides selectable options for changing the orientation of the view in the panoramic image.) Claim 7 Regarding claim 7, Colburn teaches the features of claim 5 and further teaches: wherein the visual information of the panorama image includes 360 horizontal degrees of visual coverage from an acquisition location of the panorama image, (Colburn [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location), from multiple images acquired in multiple directions from the viewing location (e.g., from a smartphone or other mobile device held by a user turning at that viewing location), etc. It will be appreciated that such a panorama image may in some situations be represented in a spherical coordinate system and cover up to 360° around horizontal and/or vertical axes, such that a user viewing a starting panorama image may move the viewing direction within the starting panorama image to different orientations to cause different images (or “views”) to be rendered within the starting panorama image (including, if the panorama image is represented in a spherical coordinate system, to convert the image being rendered into a planar coordinate system). 
Furthermore, acquisition metadata regarding the capture of such panorama images may be obtained and used in various manners, such as data acquired from IMU (inertial measurement unit) sensors or other sensors of a mobile device as it is carried by a user or otherwise moved between viewing locations. Additional details are included below related to the acquisition and usage of panorama images or other images for a building.” – The images are 360 degree images and have associated metadata, including inertially determined, relative position information.) wherein the image circular descriptor includes, for each of the 360 horizontal degrees of visual coverage from the acquisition location, information about at least some of the second latent space features associated with any structural elements of the room that are visible in a direction from the acquisition location corresponding to the horizontal degree of visual coverage, and (Colburn [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted).” [0014] “Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed—if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. In addition, if height information is estimated for one or more rooms, a 3D (three-dimensional) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed.” – A deep neural network is used, taking information of the 2D floor map (e.g., room shape) as input. In training the DEEP neural network, a hidden layer with latent space feature variables is trained to determine values of the latent space feature variables. The latent space hidden layer node values are related to all data input and output by the weights and summation and are, therefore, “about” those data points. The first latent space data could be a weight and bias of a first hidden node.) wherein each of the building location circular descriptors includes, for each of 360 horizontal degrees from the building location associated with the building location circular descriptor, information about at least some of the first latent space features associated with any structural elements of a surrounding room that are visible in a direction from the that building location corresponding to the horizontal degree of visual coverage. 
(Colburn [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted).” [0014] “Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed—if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. In addition, if height information is estimated for one or more rooms, a 3D (three-dimensional) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed.” – A deep neural network is used, taking information of the 2D floor map (e.g., room shape) as input. In training the DEEP neural network, a hidden layer with latent space feature variables is trained to determine values of the latent space feature variables. The latent space hidden layer node values are related to all data input and output by the weights and summation and are, therefore, “about” those data points. The second latent space data could be a weight and bias of a second hidden node in the same or a different hidden layer in the deep neural network.) Claim 8 Regarding claim 8, Colburn teaches the features of claim 7, and further teaches: wherein the structural elements of the building include at least one door, at least one window, and at least one inter-wall border, (Colburn [0022] “The view capture may be performed by recording a video and/or taking a succession of images, and may include a number of objects or other features (e.g., structural details) that may be visible in images (e.g., video frames) captured from the viewing location—in the example of FIG. 1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and/or sliding doors), windows 196, corners or edges 195 (including corner 195-1 in the northwest corner of the building 198, and corner 195-2 in the northeast corner of the first room), furniture 191-193 (e.g., a couch 191; chairs 192, such as 192-1 and 192-2; tables 193, such as 193-1 and 193-2; etc.), pictures or paintings or televisions or other objects 194 (such as 194-1 and 194-2) hung on walls, light fixtures, etc.” [0031] “The image 150 e includes several objects in the surrounding environment of the living room, such as windows 196, a picture or painting 194-1, chair 192-1, table 193-1, a lighting fixture 130 a, and inter-wall and floor and ceiling borders including border 195-2.” [0037] “In particular, in the example of FIG. 
2A, individual images from within two or more panorama images (e.g., corresponding to separate video frames from recorded video used to generate the panorama images) may be analyzed to determine overlapping features and other similarities between such images. In the example of FIG. 2A, additional details are shown in room 229 a for viewing locations 210A and 210C, such as based on structural features (e.g., corners, borders, doorways, window frames, etc.) and/or content features (e.g., furniture) of the room.” – The structural elements of the building include at least one door, at least one window, and at least one inter-wall border.) and wherein the obtaining of the building location description information includes generating the building location circular descriptors, including generating from the two-dimensional floor plan a two-dimensional point cloud having a plurality of points, including associating information with each of the points that includes two-dimensional location information for the point and normal direction information for the point and (Colburn [0014] “In some embodiments , one or more types of additional processing may be performed , such as to determine additional mapping - related information for a generated floor map or to otherwise associate additional information with a generated floor map. As one example, one or more types of additional information about a building may be received and associated with the floor map (e.g., with particular locations in the floor map), such as additional images, annotations or other descriptions of particular rooms or other locations , overall dimension information, etc. As another example, in at least some embodiments , additional processing of images is performed to determine estimated distance information of one or more types, such as to measure sizes in images of objects of known size, and use such information to estimate room width, length and / or height. Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. In addition, if height information is estimated for one or more rooms, a 3D ( three - dimensional ) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed. Such generated floor maps and optionally additional associated information may further be used in various manners, as discussed elsewhere herein.” – A 2D point map is generated with all relative spacing and angular information. [0028] “Based on a similar analysis of departing direction from viewing location 210B, arrival direction at viewing location 210C, and intervening velocity and location for some or all data points for which acceleration data is captured along the travel path 115 bc, the user's movement for travel path 115 bc may be modeled, and resulting direction 215-BC and corresponding distance between viewing locations 210B and 210C may be determined.” [0046]-[0047] “Given the above framework, a valid placement should satisfy these constraints as much as possible. The goal is to place the estimated room shapes (polygons or 3D shapes) into a global map such that the constraints on the initial placement is matched and satisfies the topological constraints. 
The main topological constraints that the room-shape matching should satisfy is to match the connecting passages between rooms, with the initial placements constraining the relative scale and alignment of the room shapes, with the room-shape matching algorithm thus less sensitive to small geometric and topological errors. […]The polygon points and camera centers are defined as a set of 2D points in homogenous coordinates and the edges are pairs of polygon node indices.” The relative positions and orientations are harmonized to provide a map that relates all position and orientation information relative to and derived from and further relatable to further captured panoramic images. [0014] “As one example, one or more types of additional information about a building may be received and associated with the floor map (e.g., with particular locations in the floor map), such as additional images, annotations or other descriptions of particular rooms or other locations, overall dimension information, etc. ” [0038] “ In addition, the image analysis identifies various other features of the room for possible later use, including connecting doorway passages 233 in and/or out of the room (as well as interior doorways or other openings 237 within the room), connecting window passages 234 (e.g., from the room to an exterior of the building), etc.—it will be appreciated that the example connecting passages are shown for only a subset of the possible connecting passages, and that some types of connecting passages (e.g., windows, interior doorways or other openings, etc.) may not be used in some embodiments.” – A 2-D point cloud map is generated that incorporates all information between all points, including adjacent points, within the space. This includes normal directions and associated annotation/semantic information that were mapped in the earlier elements.) semantic information about any structural elements associated with the point, and including analyzing the points and the associated information to generate the first latent space features, wherein each of the points is associated with at least one of the first latent space features. (Colburn [0014] “As one example, one or more types of additional information about a building may be received and associated with the floor map (e.g., with particular locations in the floor map), such as additional images, annotations or other descriptions of particular rooms or other locations, overall dimension information, etc.” [0022] “The view capture may be performed by recording a video and/or taking a succession of images, and may include a number of objects or other features (e.g., structural details) that may be visible in images (e.g., video frames) captured from the viewing location—in the example of FIG. 1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and/or sliding doors), windows 196, corners or edges 195 (including corner 195-1 in the northwest corner of the building 198, and corner 195-2 in the northeast corner of the first room), furniture 191-193 (e.g., a couch 191; chairs 192, such as 192-1 and 192-2; tables 193, such as 193-1 and 193-2; etc.), pictures or paintings or televisions or other objects 194 (such as 194-1 and 194-2) hung on walls, light fixtures, etc. 
The user may also optionally provide a textual or auditory identifier to be associated with a viewing location, such as “entry” 142a for viewing location 210A or “living room” 142 b for viewing location 210B, while in other embodiments the ICA system may automatically generate such identifiers (e.g., by automatically analyzing video and/or other recorded information for a building to perform a corresponding automated determination, such as by using machine learning) or the identifiers may not be used.” [0038] “In addition, the image analysis identifies various other features of the room for possible later use, including connecting doorway passages 233 in and/or out of the room (as well as interior doorways or other openings 237 within the room), connecting window passages 234 (e.g., from the room to an exterior of the building), etc.—it will be appreciated that the example connecting passages are shown for only a subset of the possible connecting passages, and that some types of connecting passages (e.g., windows, interior doorways or other openings, etc.) may not be used in some embodiments.” [0063] “The FMGM system 340 may further, during its operation, store and/or retrieve various types of data on storage 320 (e.g., in one or more databases or other data structures), such as various types of user information 322, optionally linked panorama image information 324 (e.g., for analysis to generate floor maps; to provide to users of client computing devices 360 for display; etc.), generated floor maps and optionally other associated information 326 (e.g., generated and saved 3D models, building and room dimensions for use with associated floor plans, additional images and/or annotation information, etc.) and/or various types of optional additional information 328 (e.g., various analytical information related to presentation or other use of one or more building interiors or other environments captured by an ICA system).” – A 2-D point cloud map is generated that incorporates all information between all points, including adjacent points, within the space. This includes normal directions and associated annotation/semantic information that were mapped in the earlier elements.) Claim 9 Regarding claim 9, Colburn teaches the features of claim 7, and further teaches: further comprising determining the one building location circular descriptor having angular information best matching the information included in the image circular descriptor by performing the generating and the comparing without using any depth information acquired from any depth sensor about a depth from the acquisition location to any surrounding elements of the room. (Colburn [0020] “For example, in at least some such embodiments, such techniques may include using one or more mobile devices (e.g., a smart phone held by a user, a camera held by or mounted on a user or the user's clothing, etc.) to capture video data from a sequence of multiple viewing locations (e.g., video captured at each viewing location while a mobile device is rotated for some or all of a full 360 degree rotation at that viewing location) within multiple rooms of a house (or other building), and to further capture data linking the multiple viewing locations, but without having distances between the viewing locations being measured or having other measured depth information to objects in an environment around the viewing locations (e.g., without using any depth-sensing sensors separate from the camera).” - No depth sensing required.) 
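For illustration only, and not drawn from the application or Colburn, the comparison of an image circular descriptor against pre-computed building location circular descriptors, performed without any depth-sensor data, might be sketched in Python as follows. The function name best_matching_location, the dictionary-of-descriptors input, and the Euclidean distance metric are assumptions of this sketch.

import numpy as np

def best_matching_location(image_desc, location_descs):
    """image_desc: 360 x D array for the panorama; location_descs: mapping from
    a building-location identifier to its 360 x D descriptor.
    Returns (location_id, rotation_degrees, score) for the closest match; the
    rotation at the best match serves as the estimated orientation."""
    best_loc, best_rot, best_score = None, 0, float("inf")
    for loc_id, loc_desc in location_descs.items():
        for rotation in range(360):
            # Rolling the image descriptor corresponds to turning the camera,
            # so the winning offset aligns the panorama to the floor plan.
            rotated = np.roll(image_desc, rotation, axis=0)
            score = float(np.linalg.norm(rotated - loc_desc))  # lower is better
            if score < best_score:
                best_loc, best_rot, best_score = loc_id, rotation, score
    return best_loc, best_rot, best_score

Rolling the image descriptor through all 360 offsets plays the role of orientation synchronization; the offset at the best match gives the estimated orientation, and no depth measurement is consulted anywhere in the comparison.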
Claim 10 Regarding claim 10, Colburn teaches the features of claim 7 and further teaches: further comprising selecting the plurality of building locations in the building by specifying a grid of building locations covering floors of at least some rooms of multiple rooms of the building. (Colburn [0012] “After positions of images' viewing locations in their enclosing rooms and relative to each other in a common global coordinate system have been determined, and estimated room shape information is obtained for the building's rooms, the generation of the floor map for the building may further include automatically determining initial placement positions of each room's estimated room shape, by placing a room's estimated room shape around any image viewing locations that the room contains. In at least some embodiments, such initial placements are performed separately for each room, but using the determined relative positions of the viewing locations in the common global coordinate system. In this manner, a rough approximation of the floor map may be determined. Additional details are included below regarding automatically determining initial placement positions of each room's estimated room shape in the common global coordinate system, including with respect to FIG. 2C and its associated description.” [0071] “After block 435, the routine continues to block 440 to use the obtained or acquired image and inner-connection information to determine, for the viewing locations of images inside the building, relative global positions of the viewing locations in a common coordinate system or other common frame of reference, such as to determine directions and optionally distances between the respective viewing locations. After block 440, the routine in block 450 analyzes the acquired or obtained panoramas or other images to determine, for each room in the building that has one or more viewing locations, a position within the room of those viewing locations, as discussed in greater detail elsewhere herein.” – A coordinate/grid system is established for reference to specify locations within the building for image capture, analysis, recognition, data output, and navigation. This coordinate system is used throughout the rooms of the building and makes any point within the coordinate system/grid selectable and associable with the building, any of the rooms, and any of the objects or images captures therein.) Claim 12 Regarding claim 12, Colburn teaches the features of claim 7, and further teaches: wherein the comparing of the image circular descriptor to the building location circular descriptors further includes: analyzing the visual information to identify, for a characteristic of a specified type, at least one of the 360 horizontal degrees of visual coverage from the acquisition location for which the characteristic is present; (Colburn [0037] “As non-exclusive illustrative examples, the additional Information in FIG. 2A illustrates various viewing directions 227 from viewing location 210A that each has an associated frame in the panorama image for that viewing location, with the illustrated viewing directions 227 corresponding to various features in the room 229 a—it will be appreciated that only a subset of the possible features are illustrated. 
Similarly, the additional information also illustrates various viewing directions 228 from viewing location 210C that each has an associated frame in the panorama image for that viewing location, with the illustrated viewing directions 228 generally corresponding to the same features in the room 229 a as the viewing directions 227—however, some features may be visible from only one viewing location, such as for the northeast corner 195-2, and thus may not be used for the comparison and analysis of the panorama images from these two viewing locations (although it may be used for the comparison of panorama images from viewing locations 210A and 210B).” – Each angle is assessed to determine whether a characteristic, such as visibility of a room feature, from a particular position.) for each of at least some of the building location circular descriptors, comparing the image circular descriptor to the building location circular descriptor by: identifying one or more of the 360 horizontal degrees from the building location associated with the building location circular descriptor at which the characteristic is present; and synchronizing locations of each of the identified at least one of the 360 horizontal degrees of visual coverage from the acquisition location to locations of each of the identified one or more 360 horizontal degrees from the building location to determine if, relative to the synchronized locations, information at other horizontal degrees of coverage in the image circular descriptor matches information at other horizontal degrees of coverage in the building location circular descriptor; and (Colburn [0036] “In this example, various details are discussed with respect to the panorama images acquired at viewing locations 210A and 210C in the living room 229 a of the illustrated building—it will be appreciated that similar analysis may be performed for that same room by using the panorama image information for viewing location 210B (e.g., in comparison to the panorama images for viewing locations 210A and 210C individually in a pairwise fashion, or instead to simultaneously compare the panorama images for all three viewing locations), and that similar techniques may be performed for the other rooms 229 b-229 f. In this example, room 229 g does not have any viewing locations within or closely adjacent to the room, and thus an analysis may not be performed for it with respect to viewing location position within rooms, although information from other viewing locations with visibility into room 229 g (e.g., viewing locations 210G and 210H) may be used at least in part for other types of information acquired from analysis of panorama images.” – Images are compared to determine common features viewable in more than one image, suggesting an overlapping portion of a perspective view. [0037] “In particular, in the example of FIG. 2A, individual images from within two or more panorama images (e.g., corresponding to separate video frames from recorded video used to generate the panorama images) may be analyzed to determine overlapping features and other similarities between such images. In the example of FIG. 2A, additional details are shown in room 229 a for viewing locations 210A and 210C, such as based on structural features (e.g., corners, borders, doorways, window frames, etc.) and/or content features (e.g., furniture) of the room. As non-exclusive illustrative examples, the additional Information in FIG. 
2A illustrates various viewing directions 227 from viewing location 210A that each has an associated frame in the panorama image for that viewing location, with the illustrated viewing directions 227 corresponding to various features in the room 229 a—it will be appreciated that only a subset of the possible features are illustrated. Similarly, the additional information also illustrates various viewing directions 228 from viewing location 210C that each has an associated frame in the panorama image for that viewing location, with the illustrated viewing directions 228 generally corresponding to the same features in the room 229 a as the viewing directions 227—however, some features may be visible from only one viewing location, such as for the northeast corner 195-2, and thus may not be used for the comparison and analysis of the panorama images from these two viewing locations (although it may be used for the comparison of panorama images from viewing locations 210A and 210B).” – Again, some will have the same features and some will not. The ones that do can be synthesized to more accurately map the environment. selecting one of the at least some building location circular descriptors as the determined one building location circular descriptor based on the selected one building location circular descriptor having an identified synchronized location for which the information at the other horizontal degrees of coverage in the building location circular descriptor best matches the information at the other horizontal degrees of coverage in the image circular descriptor, and using the identified synchronized location to determine the orientation in the room for the panorama image. (Colburn [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted). One example of a system for estimating room shape from an image is RoomNet (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2017 IEEE International Conference On Computer Vision, August 2017), and another example of a system for estimating room shape from an image is Room Net (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2018 IEEE/CVF Conference On Computer Vision And Pattern Recognition, June 2018).” – Colburn uses the information determined regarding each position as input to a machine learning model that determines the orientation of the image relative to a room in a building.) Claim 13 Regarding claim 13, Colburn teaches the features of claim 12, and further teaches: wherein the characteristic of the specified type is one of a visible wall being orthogonal to a line along an identified horizontal degree of visual coverage, or a specified type of wall element being visible at the identified horizontal degree of visual coverage. 
(Colburn [0037] “It will be appreciated that multiple frames from both viewing locations may include at least some of the same feature (e.g., corner 195-1), and that a given such frame may include other information in addition to that feature (e.g., portions of the west and north walls, the ceiling and/or floor, possible contents of the room, etc.)—for the purpose of this example, the pair of frames/images being compared from the two viewing locations corresponding to feature 195-1 may include the image/frame from each viewing location with the largest amount of overlap, although in actuality each image/frame from viewing location 210A in the approximate direction of 227A that includes any of corner 195-1 may be compared to each image/frame from viewing location 210C in the approximate direction of 228A that includes any of corner 195-1 (and similarly for any other discernible features in the room).” – The automated analysis of Colburn identifies features, such as characteristics of walls (e.g., ceilings, discontinuities, corners, etc.) in each direction, which means, at each angle in the coordinate system established.) Claim 14 Regarding claim 14, Colburn teaches the features of claim 5, and further teaches: wherein the comparing of the image circular descriptor to the building location circular descriptors includes, for each of at least some of the building location circular descriptors, determining a probability that the image circular descriptor and the building location circular descriptor are a match by differing less than a specified threshold, and selecting one of the at least some building location circular descriptors that has a highest probability of matching the image angular detector as the determined one building location circular descriptor. (Colburn [0044] “After such an initial placement of each room's estimated room shape is made around the determined relative global positions of the viewing locations in the building's interior, additional information may be used to adjust the initial placements into final placements for use with the generated floor map. In particular, in at least some embodiments, one or more types of constraints are applied relative to inter-room placement, and an optimal or otherwise preferred solution is determined for those constraints. FIG. 2C further illustrates examples of such constraints, including by matching 231 connecting passage information for adjacent rooms so that the locations of those passages are co-located in the final placement. Further possible constraints include optional use of room shape information, such as by matching 232 shapes of adjacent rooms in order to connect those shapes (e.g., as shown for rooms 229 d and 229 e), although in other embodiments such information may not be used. FIG. 2C also illustrates information 238 about one or more exact or approximate dimensions that may be available for use as constraints in placing of rooms and/or determining dimensions of particular rooms, such as based on additional metadata available regarding the building, analysis of images from one or more viewing locations external to the building (not shown), etc.—if dimensions are estimated, they may be generated to attempt to obtain a specified threshold level of accuracy, such as +/−0.1 meters, 0.5 meters, 1 meter, 5 meters, etc. 
Exterior connecting passages may further be identified and used as constraints 239, such as to prevent another room from being placed at a location that has been identified as a passage to the building's exterior.” – The images are compared to determine the likely overlap, and the distance data for each position of image capture is modified to conform to constraints, including a threshold distance difference for each data point.) Claim 16 Regarding claim 16, Colburn teaches the features of claim 5, and further teaches: further comprising obtaining a first enumerated group of ranges of angles, obtaining a second enumerated group of ranges of distances, and generating each of the building location circular descriptors by encoding information in that building location circular descriptor about some of the first latent space features by, for each of the at least some points of the structural elements that are visible from the building location of that building location circular descriptor, encoding information in that building location circular descriptor for one of 360 horizontal degrees from that building location to that point that includes one of the ranges of angles from the first enumerated group and one of the ranges of distances from the second enumerated group. (Colburn [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted). One example of a system for estimating room shape from an image is RoomNet (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2017 IEEE International Conference On Computer Vision, August 2017), and another example of a system for estimating room shape from an image is Room Net (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2018 IEEE/CVF Conference On Computer Vision And Pattern Recognition, June 2018).” – Colburn uses the information determined regarding each position as input to a machine learning model that determines the orientation of the image relative to a room in a building. This information is encoded for each point/location in the coordinate system. When using this data as input to the machine learning model, the output is attributed to the particular points/locations. For example, this can allow for annotation or 3D rendering. To this end, each point in the 3D space is encoded with information “about” the latent variables in the neural network (the hidden layer’s weights and biases) and vice versa. The final product encodes data for each point in all directions (at all angles). It is initially derived from all 2D data that assesses all horizontal directions/angles from each point/location. This covers the stated interrelationships in the claim.) 
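For illustration only, and not drawn from the application or Colburn, a building location circular descriptor that encodes enumerated ranges of angles and distances, in the general manner recited in claim 16, might be sketched in Python as follows. The bin edges in ANGLE_RANGES and DISTANCE_RANGES, the treatment of the angle ranges as bearing bins, and the nearest-point-per-degree rule are all assumptions of this sketch.

import numpy as np

ANGLE_RANGES = np.arange(0, 361, 45)                     # eight 45-degree bearing ranges (assumed)
DISTANCE_RANGES = np.array([0.0, 1.0, 2.5, 5.0, 10.0])   # distance bin edges in meters (assumed)

def location_circular_descriptor(location_xy, points_xy):
    """location_xy: (2,) building location; points_xy: N x 2 structural-element points.
    Returns a 360 x 2 integer array giving, for each horizontal degree, the index of
    the angle range and the index of the distance range of the nearest visible point
    (or -1, -1 when no point falls in that one-degree slice)."""
    offsets = np.asarray(points_xy, dtype=float) - np.asarray(location_xy, dtype=float)
    bearings = (np.degrees(np.arctan2(offsets[:, 1], offsets[:, 0])) + 360.0) % 360.0
    distances = np.linalg.norm(offsets, axis=1)
    descriptor = np.full((360, 2), -1, dtype=np.int64)
    for degree in range(360):
        in_slice = (bearings >= degree) & (bearings < degree + 1)
        if not np.any(in_slice):
            continue
        nearest = distances[in_slice].min()
        descriptor[degree, 0] = np.searchsorted(ANGLE_RANGES, degree, side="right") - 1
        descriptor[degree, 1] = np.searchsorted(DISTANCE_RANGES, nearest, side="right") - 1
    return descriptor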
Claim 17 Regarding claim 17, Colburn teaches the features of claim 5, and further teaches: further comprising determining the position of the panorama image in the room by supplying, to a refinement neural network, the panorama image and building location with which the determined one building location circular descriptor is associated, and receiving an adjusted position that is based on that building location and is adjusted to reflect the visual information of the panorama image. (Colburn [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted).” [0014] “Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed—if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. In addition, if height information is estimated for one or more rooms, a 3D (three-dimensional) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed.” - A deep neural network is used, taking information of the 2D floor map (e.g., room shape) as input. The machine learning network takes all of this data in, including the 2D positions and the information about distances to objects and borders in all directions (at all angles). The machine learning model outputs a 3D model with 3D position information derived from the 2D information.) 
Claim 18 Regarding claim 18, Colburn teaches the features of claim 5, and further teaches: wherein the associating of the panorama image with the determined position and orientation further includes, by the computing device: generating, for each of multiple building location circular descriptors associated with one of multiple building locations in the room, additional visual information for that building location circular descriptor that represents a view from the building location with which that building location circular descriptor is associated and that includes at least some of [visible (See the 35 USC 112(b) rejection of claim 18)] features that are visible at the specified angular directions for that building location circular descriptor; and (Colburn [0012] “In addition, the generation of the floor map for the building may further include automatically determining, for each room in the building, the relative position within the room of any image viewing locations, and the positions of any connecting passages in and/or out of the room, such as based at least in part on automated analysis of each such image to determine directions to multiple features in the room (e.g., corners, doorways, etc.), thus allowing the relative position of the image to those multiple features to be determined from those determined directions. The connecting passages may include one or more of doorways, windows, stairways, non-room hallways, etc., and the automated analysis of the images may identify such features based at least in part on identifying the outlines of the passages, identifying different content within the passages than outside them (e.g., different colors, shading, etc.), etc. In addition, in at least some embodiments, the automated analysis of the images may further identify additional information such as an estimated room shape and/or room type, such as by using machine learning to identify features or characteristics corresponding to different room shapes and/or room types—in other embodiments, at least some such information may be obtained in other manners, such as to receive estimated room shape information and optionally room type information from one or more users (e.g., based on user mark-up of one or more images in the room, such as to identify borders between walls, ceiling and floor; based on other user input; etc.). In some embodiments, the automated analysis of the images may further identify additional information in one or more images, such as dimensions of objects (e.g., objects of known size) and/or of some or all of the rooms, as well as estimated actual distances of images' viewing locations from walls or other features in their rooms. Additional details are included below regarding determining information from analysis of images that includes relative positions of images' viewing locations within rooms, including with respect to FIG. 2A and its associated description.” – A 2D map of all features is determined in a coordinate system based on the panorama images captured at some of the positions. The distances between elements, including visible objects is determined. 
[0022] “The user may also optionally provide a textual or auditory identifier to be associated with a viewing location, such as “entry” 142a for viewing location 210A or “living room” 142 b for viewing location 210B, while in other embodiments the ICA system may automatically generate such identifiers (e.g., by automatically analyzing video and/or other recorded information for a building to perform a corresponding automated determination, such as by using machine learning) or the identifiers may not be used.” – The determination of this data can be automated. [0037] “Similarly, the additional information also illustrates various viewing directions 228 from viewing location 210C that each has an associated frame in the panorama image for that viewing location, with the illustrated viewing directions 228 generally corresponding to the same features in the room 229 a as the viewing directions 227—however, some features may be visible from only one viewing location, such as for the northeast corner 195-2, and thus may not be used for the comparison and analysis of the panorama images from these two viewing locations (although it may be used for the comparison of panorama images from viewing locations 210A and 210B). Using feature 195-1 in the northwest corner of the room 229 a as an example, a corresponding viewing direction 227A in the direction of that feature from viewing location 210A is shown, with an associated frame in viewing location 210A′s panorama image being determined, and a corresponding viewing direction 228A with associated frame from viewing location 210C to that feature is also shown—given such matching frames/images to the same feature in the room from the two viewing locations, information in those two frames/images may be compared in order to determine a relative rotation and translation between viewing locations 210A and 210C (assuming that sufficient overlap in the two images is available). It will be appreciated that multiple frames from both viewing locations may include at least some of the same feature (e.g., corner 195-1), and that a given such frame may include other information in addition to that feature (e.g., portions of the west and north walls, the ceiling and/or floor, possible contents of the room, etc.)—for the purpose of this example, the pair of frames/images being compared from the two viewing locations corresponding to feature 195-1 may include the image/frame from each viewing location with the largest amount of overlap, although in actuality each image/frame from viewing location 210A” – The overall floor plan is based on a comparison of images and so is the determination of a position based on image data after a map has been constructed. These determinations include relative angles and distances between all points and visible elements, such as objects in a room.) determining an acquisition location of an additional image captured in the room by comparing an additional image circular descriptor generated for the additional image to the multiple building location circular descriptors, including using the generated additional visual information for the multiple building location circular descriptors. 
(Colburn [0037] “Similarly, the additional information also illustrates various viewing directions 228 from viewing location 210C that each has an associated frame in the panorama image for that viewing location, with the illustrated viewing directions 228 generally corresponding to the same features in the room 229 a as the viewing directions 227—however, some features may be visible from only one viewing location, such as for the northeast corner 195-2, and thus may not be used for the comparison and analysis of the panorama images from these two viewing locations (although it may be used for the comparison of panorama images from viewing locations 210A and 210B). Using feature 195-1 in the northwest corner of the room 229 a as an example, a corresponding viewing direction 227A in the direction of that feature from viewing location 210A is shown, with an associated frame in viewing location 210A′s panorama image being determined, and a corresponding viewing direction 228A with associated frame from viewing location 210C to that feature is also shown—given such matching frames/images to the same feature in the room from the two viewing locations, information in those two frames/images may be compared in order to determine a relative rotation and translation between viewing locations 210A and 210C (assuming that sufficient overlap in the two images is available). It will be appreciated that multiple frames from both viewing locations may include at least some of the same feature (e.g., corner 195-1), and that a given such frame may include other information in addition to that feature (e.g., portions of the west and north walls, the ceiling and/or floor, possible contents of the room, etc.)—for the purpose of this example, the pair of frames/images being compared from the two viewing locations corresponding to feature 195-1 may include the image/frame from each viewing location with the largest amount of overlap, although in actuality each image/frame from viewing location 210A” – The overall floor plan is based on a comparison of images, and so is the determination of a position based on image data after a map has been constructed. The data will be compared with the data from each point/capture location to determine the current position of a captured image.) Claim 19 Regarding claim 19, Colburn teaches the features of claim 18, and further teaches: further comprising generating a graph having multiple nodes and with at least one node representing each of multiple rooms of the building, associating the multiple building location circular descriptors with one of the multiple nodes that represents the room, and further associating, after determining the position of the panorama image, the panorama image with the one node that represents the room. (Colburn [0071] “After block 435, the routine continues to block 440 to use the obtained or acquired image and inner-connection information to determine, for the viewing locations of images inside the building, relative global positions of the viewing locations in a common coordinate system or other common frame of reference, such as to determine directions and optionally distances between the respective viewing locations.” – The entirety of the data is based on a virtual representation of a physical coordinate system, which is the same as a graph.
[0057] “In addition, in at least some embodiments, further pruning and optimization is performed to convert the matched room-shape nodes, lines, and polygons into a final output, such as to prune, merge, and unify the polygons and represent the wall widths and unknown/unobserved regions in the house.” – Rooms are modeled as nodes within the coordinate system. [0008] “ for subsequently using the generated mapping information in one or more further automated manners. In at least some embodiments, the defined area includes an interior of a multi-room building (e.g., a house, office, etc.), and the generated information includes a floor map of the building, such as from an automated analysis of multiple panorama images or other images acquired at various viewing locations within the building—in at least some such embodiments, the generating is further performed without having or using detailed information about distances from the images' viewing locations to walls or other objects in the surrounding building. The generated floor map and/or other generated mapping-related information may be further used in various manners in various embodiments, including for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc.” – The system can use a camera for navigation, to determine a position (e.g., within a room) of a mobile device, e.g., based on image comparison to data in the generated map.) Claim 20 Regarding claim 20, Colburn teaches the features of claim 5 and further teaches: wherein the comparing of the image circular descriptor to the building location circular descriptors includes using machine learning to identify the determined one building location circular descriptor as being most similar to the image circular descriptor. (Colburn [0022] “The user may also optionally provide a textual or auditory identifier to be associated with a viewing location, such as “entry” 142a for viewing location 210A or “living room” 142 b for viewing location 210B, while in other embodiments the ICA system may automatically generate such identifiers (e.g., by automatically analyzing video and/or other recorded information for a building to perform a corresponding automated determination, such as by using machine learning) or the identifiers may not be used.” – Machine learning may be used to identify elements in images used to make the map that can be used in the comparison for navigation. [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted).” – Also, machine learning is used to determine a location within the map based on an image and data associated with a capture position. [0058] “In addition, textual labels have been added in the example of FIG. 
2C for each of the rooms 229 a-229 f, such as based on an automated analysis of the information to identify estimated room types as discussed above (e.g., by using machine learning to match room features to types of room), or instead from other sources (e.g., textual labels associated with the panorama images during their acquisition and generation process, information manually supplied by a user, etc.).” – Also, the labels of rooms can be generated by a machine learning model and used in the comparison of a captured image with the map data to determine a current position for navigation.) Claim 21 Regarding claim 21, Colburn teaches: A non-transitory computer-readable medium having stored contents that cause one or more computing devices to perform automated operations including at least: [0066] “Some or all of the components, systems and data structures may also be stored (e.g., as software instructions or structured data) on a non-transitory computer-readable storage mediums” – CRM.) obtaining, by the one or more computing devices, and for an image captured in an area associated with a building and including visual information about at least some structural elements of the building, an image circular descriptor for the image that includes information identifying features associated with the at least some structural elements at specified directions within the visual information; (Colburn [0008] “The generated floor map and/or other generated mapping-related information may be further used in various manners in various embodiments, including for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc.” [0009] “ The determination of the relative positions of the images' viewing locations may be performed in various manners in various embodiments, including to use information from the images themselves (e.g., by successively identifying common features in two different images to determine their relative positions to each other), from the received information about the inter-connected images (e.g., from previously generated links and/or directions between at least some pairs of images), and/or from metadata about acquisition of the images (e.g., by analyzing information about a path traveled by a device or user between viewing locations in order to determine their relative positions).” [0011] “In addition, the generation of the floor map for the building may further include automatically determining, for each room in the building, the relative position within the room of any image viewing locations, and the positions of any connecting passages in and/or out of the room, such as based at least in part on automated analysis of each such image to determine directions to multiple features in the room (e.g., corners, doorways, etc.), thus allowing the relative position of the image to those multiple features to be determined from those determined directions. The connecting passages may include one or more of doorways, windows, stairways, non-room hallways, etc., and the automated analysis of the images may identify such features based at least in part on identifying the outlines of the passages, identifying different content within the passages than outside them (e.g., different colors, shading, etc.), etc. 
In addition, in at least some embodiments, the automated analysis of the images may further identify additional information such as an estimated room shape and/or room type, such as by using machine learning to identify features or characteristics corresponding to different room shapes and/or room types—in other embodiments, at least some such information may be obtained in other manners, such as to receive estimated room shape information and optionally room type information from one or more users (e.g., based on user mark-up of one or more images in the room, such as to identify borders between walls, ceiling and floor; based on other user input; etc.). In some embodiments, the automated analysis of the images may further identify additional information in one or more images, such as dimensions of objects (e.g., objects of known size) and/or of some or all of the rooms, as well as estimated actual distances of images' viewing locations from walls or other features in their rooms.” – Received images are processed to determine information such as objects and/or room features in the images. This can be done for comparison with other data associated with positions on the map (with associated annotation data) to determine a location in a navigation setting. [0018] “One or more users (not shown) of one or more client computing devices 175 may further interact over one or more computer networks 170 with the FMGM system 140 and optionally the ICA system 160, such as to obtain, display and interact with a generated floor map and/or one or more associated linked panorama images (e.g., to change between a floor map view and a view of a particular panorama image at a viewing location within or near the floor map; to change the horizontal and/or vertical viewing direction from which a corresponding view of a panorama image is displayed, such as to determine a portion of a panorama image in a 3D spherical coordinate system to which a current user viewing direction is directed, and to render a corresponding planar image that illustrates that portion of the panorama image without the curvature or other distortions present in the original panorama image; etc.). In addition, while not illustrated in FIG. 1A, a floor map (or portion of it) may be linked to or otherwise associated with one or more other types of information, including for a floor map of a multi-story or otherwise multi-level building to have multiple associated sub-floor maps for different stories or levels that are interlinked (e.g., via connecting stairway passages), for a two-dimensional (“2D”) floor map of a building to be linked to or otherwise associated with a three-dimensional (“3D”) rendering of the building (referred to at times as a “dollhouse view”), etc. In addition, while not illustrated in FIG. 1A, in some embodiments the client computing devices 175 (or other devices, not shown), may receive and use generated floor maps and/or other generated mapping-related information in additional manners, such as to control or assist automated navigation activities by those devices (e.g., by autonomous vehicles or other devices), whether instead of or in addition to display of the generate information.” – The data of the image captured and data generated therefrom can be used for navigation to determine the position of a captured image. 
[0020] “After the viewing locations' videos and linking information are captured, the techniques may include analyzing video captured at each viewing location to create a panorama image from that viewing location that has visual data in multiple directions (e.g., a 360 degree panorama around a vertical axis), analyzing information to determine relative positions/directions between each of two or more viewing locations, creating inter-panorama positional/directional links in the panoramas to each of one or more other panoramas based on such determined positions/directions, and then providing information to display or otherwise present multiple linked panorama images for the various viewing locations within the house.” - The data of the image captured and data generated therefrom can be used for navigation.”) obtaining, by the one or more computing devices, building location circular descriptors each associated with a building location and including angular information about features associated with points of structural elements of the building at specified angular directions from the associated building location; (Colburn [0062] “ ICA server computing system(s) 380 (e.g., on which an ICA system executes to generate and provide linked panorama images 386), optionally other computing systems 390 (e.g., used to store and provide additional information related to buildings; used to capture building interior data; used to store and provide information to client computing devices, such as linked panorama images instead of server computing systems 380 or 300 or instead additional supplemental information associated with those panoramas and their encompassing buildings or other surrounding environment; etc.), and optionally other navigable devices 395 that receive and use floor maps and optionally other generated information for navigation purposes (e.g., for use by semi-autonomous or fully autonomous vehicles or other devices).” – Stored data associated with the generated map is retrievable from storage.) comparing, by the one or more computing devices, the image circular descriptor to the building location circular descriptors to determine one of the building location circular descriptors that has angular information best matching the information included in the image circular descriptor; (Colburn [0037] “for the purpose of this example, the pair of frames/images being compared from the two viewing locations corresponding to feature 195-1 may include the image/frame from each viewing location with the largest amount of overlap, although in actuality each image/frame from viewing location 210A in the approximate direction of 227A that includes any of corner 195-1 may be compared to each image/frame from viewing location 210C in the approximate direction of 228A that includes any of corner 195-1 (and similarly for any other discernible features in the room).” – The information associated with each position of the generated map is compared, for example, to determine which ones have the most matching information/overlap.) 
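The comparing step recited here is easier to see with a concrete, rotation-aware matching routine, since the capture orientation of the query image is generally unknown. The following sketch scores each stored building-location descriptor under every circular shift and keeps the best alignment; the cosine metric, 360-element vectors, and all names are assumptions made for illustration, not statements about how the claims or Colburn perform the comparison.

import numpy as np

def rotational_similarity(image_desc, location_desc):
    """Best cosine similarity over all circular shifts, plus the shift (in degrees)."""
    best_score, best_shift = -1.0, 0
    for shift in range(len(location_desc)):
        rotated = np.roll(location_desc, shift)
        score = float(np.dot(image_desc, rotated) /
                      (np.linalg.norm(image_desc) * np.linalg.norm(rotated) + 1e-9))
        if score > best_score:
            best_score, best_shift = score, shift
    return best_score, best_shift

def best_location(image_desc, location_descs):
    """Return the building location whose descriptor best matches, with its rotation."""
    results = {name: rotational_similarity(image_desc, d) for name, d in location_descs.items()}
    name = max(results, key=lambda n: results[n][0])
    return name, results[name]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    stored = {"entry": rng.random(360), "living_room": rng.random(360)}
    query = np.roll(stored["entry"], 45) + rng.normal(0, 0.01, 360)  # rotated, noisy view
    print(best_location(query, stored))   # expected: ("entry", (≈1.0, 45))

The circular shift models the unknown starting orientation of the captured image; the shift that maximizes the score also recovers a relative orientation, which is consistent with the relative-rotation discussion in Colburn [0037].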
associating, by the one or more computing devices, the image with a determined position for the building that is based on the associated building location for the determined one building location circular descriptor; and (Colburn [0037] “Furthermore, by using the determined relative rotation and translation for multiple such matching frames/images for one or more features, the precision of the positions of the corresponding viewing locations may be increased.” – The positions of images are associated based on the similarity/overlap of the images.) providing, by the one or more computing devices, information for the image about the determined position for the building. (Colburn [0018] “may receive and use generated floor maps and/or other generated mapping-related information in additional manners, such as to control or assist automated navigation activities by those devices (e.g., by autonomous vehicles or other devices), whether instead of or in addition to display of the generate information.” – The data comparison is used to aid in navigation, information about the determined position of the image capture.) Claim 22 Regarding claim 22, Colburn teaches the features of claim 21 and further teaches: wherein the image is a panorama image with 360 degrees horizontally of visual information, (Colburn [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location)” – 360 degree panorama images.) wherein the obtaining of the image circular descriptor includes generating the image circular descriptor by the one or more computing devices via analysis of the image by a trained neural network, and (Colburn [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted). One example of a system for estimating room shape from an image is RoomNet (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2017 IEEE International Conference On Computer Vision, August 2017), and another example of a system for estimating room shape from an image is Room Net (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2018 IEEE/CVF Conference On Computer Vision And Pattern Recognition, June 2018).” – Neural nets provide 3D position info based on the input data from the generated floor plan and images.) 
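For the claim 22 limitation that the image circular descriptor is generated via analysis of the image by a trained neural network, a minimal stand-in is a circular 1-D convolution over per-column panorama features, producing one value per horizontal degree. The kernel below is random (untrained) and the whole pipeline is an assumption for illustration; the RoomNet-style estimators Colburn cites are substantially more involved.

import numpy as np

rng = np.random.default_rng(2)

def column_features(panorama):
    """Collapse an (H, 360) grayscale panorama to one feature per horizontal degree."""
    return panorama.mean(axis=0)                      # shape (360,)

def conv1d_circular(signal, kernel):
    """1-D convolution with circular padding, so degree 359 neighbors degree 0."""
    k = len(kernel)
    pad = k // 2
    wrapped = np.concatenate([signal[-pad:], signal, signal[:pad]])
    return np.array([np.dot(wrapped[i:i + k], kernel) for i in range(len(signal))])

def image_circular_descriptor(panorama, kernel=None):
    if kernel is None:
        kernel = rng.normal(0, 1.0, 5)                # stand-in for learned weights
    feats = column_features(panorama)
    return np.tanh(conv1d_circular(feats, kernel))    # shape (360,), one value per degree

if __name__ == "__main__":
    panorama = rng.random((64, 360))                  # hypothetical 64x360 grayscale panorama
    desc = image_circular_descriptor(panorama)
    print(desc.shape, desc[:5])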
wherein the providing of the information about the determined position for the image includes presenting a floor plan for the building that includes a visual indication of the determined position for the image. (Colburn [0008] “The generated floor map and/or other generated mapping-related information may be further used in various manners in various embodiments, including for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc.” – The generated floor plan is presented in a GUI, the generated floor plan including all positions in the room.) Claim 23 Regarding claim 23, Colburn teaches the features of claim 21 and further teaches: wherein the area associated with the building includes at least one of multiple rooms of the building, and (Colburn [0015] “The described techniques provide various benefits in various embodiments, including to allow floor maps of multi-room buildings and other structures to be automatically generated from images acquired in the buildings or other structures, including without having or using detailed information about distances from images' viewing locations to walls or other objects in a surrounding building or other structure.” – The building can be a multi-room building, including room areas in the generated floorplan.) wherein the structural elements of the building include multiple of a door or a window or an inter-wall border. (Colburn [0022] “in the example of FIG. 1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and/or sliding doors), windows 196,” – Doors and windows… [0031] “The image 150 e includes several objects in the surrounding environment of the living room, such as windows 196, a picture or painting 194-1, chair 192-1, table 193-1, a lighting fixture 130 a, and inter-wall and floor and ceiling borders including border 195-2.” – Inter-wall borders.) Claim 24 Regarding claim 24, Colburn teaches the features of claim 21 and further teaches: wherein the area associated with the building includes at least one external area proximate to the building, and (Colburn [0023] “ This process may repeat from some or all rooms of the building and optionally external to the building, as illustrated for viewing locations 210C-210J. The acquired video and/or other images for each viewing location are further analyzed to generate a panorama image for each of viewing locations 210A-210J, including in some embodiments to match objects and other features in different images. In addition to generating such panorama images, further analysis may be performed in order to clink' at least some of the panoramas together with lines 215 between them, such as to determine relative positional information between pairs of viewing locations that are visible to each other and, to store corresponding inter-panorama links (e.g., links 215-AB, 215-BC and 215-AC between viewing locations A and B, B and C, and A and C, respectively), and in some embodiments and situations to further link at least some viewing locations that are not visible to each other (e.g., link 215-BE between viewing locations B and E).” – The areas include areas external from the building.) wherein the structural elements of the building include multiple of a door or a window or an inter-wall border. (Colburn [0022] “in the example of FIG. 
1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and/or sliding doors), windows 196,” – Doors and windows… [0031] “The image 150 e includes several objects in the surrounding environment of the living room, such as windows 196, a picture or painting 194-1, chair 192-1, table 193-1, a lighting fixture 130 a, and inter-wall and floor and ceiling borders including border 195-2.” – Inter-wall borders.) Claim 25 Regarding claim 25, Colburn teaches the features of claim 25 and further teaches: wherein the visual information for the image has less than 360 horizontal degrees of coverage, (Colburn [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location)” – Some panorama images used are 360 degrees and some are not (i.e., less than 360, because 360 is a full horizontal angular view).) wherein the determined one additional circular descriptor is for a panorama image that is taken at the determined position and that has 360 horizontal degrees of coverage, (Colburn [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location)” – Some panorama images used are 360 degrees.) and wherein the comparing of the circular descriptor for the image to the additional circular descriptors includes matching the angular description for the image to a subset of the determined one additional circular descriptor for the panorama image. (Colburn [0041] “In this example, the panorama image acquired at viewing location 210D may be visible from only one other viewing location (viewing location 210C), and the information 230 b of FIG. 2B indicates that there may be some uncertainty with respect to the position of viewing location 210D in such a situation, such as is illustrated by indicators 210D-1 to 210D-N (in which the angle or direction between viewing locations 210C and 210D may be known, but in which the distance between viewing locations 210C and 210D has increased uncertainty).” – The data of each image is compared to determine angular image data for each capture location. [0043] “In particular, a viewing location's position with respect to features in the room may be determined (as discussed with respect to FIG. 2A), and FIG. 2C further illustrates information 226 with respect to viewing location 210A to indicate such relative angles and optionally distance of the viewing location 210A to a southwest corner of the room, to a south wall of the room, and to the exterior doorway, with various other possible features (e.g., interior doorway to the hallway, northeast corner 195-2, etc.) also available to be used in this manner.
Such information may be used to provide an initial estimated position of the estimated room shape 242 for room 229 a around viewing location 210A, such as by minimizing the total error for the initial placement of the estimated room shape with respect to each such feature's measured position information for the viewing location.” These further refine the data at each point to present a further refined map with final angular information determined for and relative to each point.) Claim 26 Regarding claim 26, Colburn teaches: A system comprising: one or more hardware processors of one or more computing devices; and one or more memories with stored instructions that, when executed by at least one of the one or more hardware processors, cause at least one of the one or more computing devices to perform automated operations including at least: (Colburn [0061] “FIG. 3 is a block diagram illustrating an embodiment of one or more server computing systems 300 executing an implementation of a FMGM system 340—the server computing system(s) and FMGM system may be implemented using a plurality of hardware components that form electronic circuits suitable for and configured to, when in combined operation, perform at least some of the techniques described herein. In the illustrated embodiment, each server computing system 300 includes one or more hardware central processing units (“CPU”) or other hardware processors 305, various input/output (“I/O”) components 310, storage 320, and memory 330” – A computing system with processor and memory configured to perform operations of the application.) obtaining description information for an area of a building that includes building location circular descriptors for a plurality of building locations in the area, wherein each building location circular descriptor is associated with one of the building locations and has angular information about features associated with structural elements of the building at specified angular directions from the associated building location; (Colburn [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location), from multiple images acquired in multiple directions from the viewing location (e.g., from a smartphone or other mobile device held by a user turning at that viewing location), etc. It will be appreciated that such a panorama image may in some situations be represented in a spherical coordinate system and cover up to 360° around horizontal and/or vertical axes, such that a user viewing a starting panorama image may move the viewing direction within the starting panorama image to different orientations to cause different images (or “views”) to be rendered within the starting panorama image (including, if the panorama image is represented in a spherical coordinate system, to convert the image being rendered into a planar coordinate system). Furthermore, acquisition metadata regarding the capture of such panorama images may be obtained and used in various manners, such as data acquired from IMU (inertial measurement unit) sensors or other sensors of a mobile device as it is carried by a user or otherwise moved between viewing locations. 
Additional details are included below related to the acquisition and usage of panorama images or other images for a building.” – All data is mapped based on a relative angular/spherical coordinate system. [0011] “In addition, in at least some embodiments, the automated analysis of the images may further identify additional information such as an estimated room shape and/or room type, such as by using machine learning to identify features or characteristics corresponding to different room shapes and/or room types—in other embodiments, at least some such information may be obtained in other manners, such as to receive estimated room shape information and optionally room type information from one or more users (e.g., based on user mark-up of one or more images in the room, such as to identify borders between walls, ceiling and floor; based on other user input; etc.). In some embodiments, the automated analysis of the images may further identify additional information in one or more images, such as dimensions of objects (e.g., objects of known size) and/or of some or all of the rooms, as well as estimated actual distances of images' viewing locations from walls or other features in their rooms. Additional details are included below regarding determining information from analysis of images that includes relative positions of images' viewing locations within rooms, including with respect to FIG. 2A and its associated description.” - Automated analysis correlate and links all data to make a single picture that can be viewed from any point and at any angle and relative to identified objects in the room. [0022] “The view capture may be performed by recording a video and/or taking a succession of images, and may include a number of objects or other features (e.g., structural details) that may be visible in images (e.g., video frames) captured from the viewing location—in the example of FIG. 1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and/or sliding doors), windows 196, corners or edges 195 (including corner 195-1 in the northwest corner of the building 198, and corner 195-2 in the northeast corner of the first room), furniture 191-193 (e.g., a couch 191; chairs 192, such as 192-1 and 192-2; tables 193, such as 193-1 and 193-2; etc.), pictures or paintings or televisions or other objects 194 (such as 194-1 and 194-2) hung on walls, light fixtures, etc. The user may also optionally provide a textual or auditory identifier to be associated with a viewing location, such as “entry” 142a for viewing location 210A or “living room” 142 b for viewing location 210B, while in other embodiments the ICA system may automatically generate such identifiers (e.g., by automatically analyzing video and/or other recorded information for a building to perform a corresponding automated determination, such as by using machine learning) or the identifiers may not be used.” – Information generated includes descriptive information, such as identification of objects in the images/rooms. [0026] “In order to determine the departure direction from point 137 more specifically, including relative to the direction 120A at which the video acquisition previously began for viewing location 210A (and at which the resulting panorama image begins), initial video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210A in order to identify matching frames/images. 
In particular, by matching one or more best frames in that panorama image that correspond to the information in the initial one or more video frames/images taken as the user departs from point 137, the departure direction from point 137 may be matched to the viewing direction for acquiring those matching panorama images—while not illustrated, the resulting determination may correspond to a particular degree of rotation from the starting direction 120A to the one or more matching frames/images of the panorama image for that departure direction. In a similar manner, in order to determine the arrival direction at point 138 more specifically, including relative to the direction 120B at which the video acquisition began for viewing location 210B (and at which the resulting panorama image begins), final video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210B in order to identify matching frames/images, and in particular to frames/images in direction 139 (opposite to the side of viewing location 210B at which the user arrives).” - Again, position and angular data are correlated. [0041] “As previously noted with respect to FIGS. 1C-1D and FIG. 2A, relative positional information between two or more panorama images may be determined in various manners in various embodiments, including by analyzing metadata about the panorama acquisition, such as linking information as discussed with respect to FIGS. 1C-1D, and/or by analyzing the respective panorama images to determine common objects or features visible in multiple panorama images, as discussed further with respect to FIG. 2A. It will be noted that, as the number of viewing locations that are visible from each other increases, the precision of a location of a particular viewing location may similarly increase, such as for embodiments in which the relative position information is determined based at least in part on matching corresponding objects or other features in the panorama images. In this example, the panorama image acquired at viewing location 210D may be visible from only one other viewing location (viewing location 210C), and the information 230 b of FIG. 2B indicates that there may be some uncertainty with respect to the position of viewing location 210D in such a situation, such as is illustrated by indicators 210D-1 to 210D-N (in which the angle or direction between viewing locations 210C and 210D may be known, but in which the distance between viewing locations 210C and 210D has increased uncertainty). In other embodiments, such as when linking information is used for determining the relative positions, and/or if other information about dimensions is available (e.g., from other building metadata that is available, from analysis of sizes of known objects in images, etc.), such uncertainty may be reduced or eliminated. 
In this case, for example, while viewing location 210I outside the building may not be used as part of the final generation of the floor map due to its exterior location, its inter-connection to viewing location 210H may nonetheless be used when determining the relative global position of viewing location 210H, such that the relative global position of viewing location 210H is not shown with the same type of uncertainty in this example as that of viewing location 210D.” – All of the data is encoded relative to each position, such that each position in the grid (and/or at least the positions from which panoramas are taken) includes a 360 degree “circular” descriptor. [0071] “After block 435, the routine continues to block 440 to use the obtained or acquired image and inner-connection information to determine, for the viewing locations of images inside the building, relative global positions of the viewing locations in a common coordinate system or other common frame of reference, such as to determine directions and optionally distances between the respective viewing locations. After block 440, the routine in block 450 analyzes the acquired or obtained panoramas or other images to determine, for each room in the building that has one or more viewing locations, a position within the room of those viewing locations, as discussed in greater detail elsewhere herein. In block 455, the routine further analyzes the images and/or the acquisition metadata for them to determine, for each room in the building, any connecting passages in or out of the room, as discussed in greater detail elsewhere herein. In block 460, the routine then receives or determines estimated room shape information and optionally room type information for some or all rooms in the building, such as based on analysis of images, information supplied by one or more users, etc., as discussed in greater detail elsewhere herein.” – All data is so correlated, including data generated by automation or received from a user. The data is retrieved at each position.) generating an additional circular descriptor for information recorded at a recording location in the area, wherein the additional circular descriptor includes information identifying features associated with at least some of the structural elements that are identifiable from the recorded information at specified directions from the recording location; (Colburn [0040] “In some embodiments, an automated determination of a position within a room of a viewing location and/or of an estimated room shape may be further performed using machine learning, such as via a deep convolution neural network that estimates a 3D layout of a room from a panorama image (e.g., a rectangular, or “box” shape; non-rectangular shapes; etc.). Such determination may include analyzing the panorama image to align the image so that the floor is level and the walls are vertical (e.g., by analyzing vanishing points in the image) and to identify and predict corners and boundaries, with the resulting information fit to a 3D form (e.g., using 3D layout parameters, such as for an outline of floor, ceiling and walls to which image information is fitted). 
One example of a system for estimating room shape from an image is RoomNet (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2017 IEEE International Conference On Computer Vision, August 2017), and another example of a system for estimating room shape from an image is Room Net (as discussed in “RoomNet: End-to-End Room Layout Estimation” by Chen-Yu Lee et al., 2018 IEEE/CVF Conference On Computer Vision And Pattern Recognition, June 2018). In addition, in some embodiments humans may provide manual indications of estimated room shapes for rooms from images, which may be used in generation of a corresponding floor map, as well as later used to train models for use in corresponding subsequent automated generation of room shapes for other rooms from their images.” – Further images are analyzed to determine data about the point of capture, including 360 degree location and annotation data.) comparing the additional circular descriptor to the building location circular descriptors to determine one of the building location circular descriptors that has angular information best matching the information included in the additional circular descriptor; (Colburn [0024] “While the example of FIGS. 1C and 1D uses information about a travel path that the user takes between viewing locations to perform linking operations between panorama images for those viewing locations, linking operations between panorama images may be performed in part or in whole using other techniques in other embodiments, such as by identifying the same features in different panorama images that have overlapping fields of view (e.g., for different panorama images in the same room) and by using the relative locations of those features in the different images to determine relative position information between the viewing locations of the panorama images, with additional related details discussed with respect to FIG. 2A.” [0037] “It will be appreciated that multiple frames from both viewing locations may include at least some of the same feature (e.g., corner 195-1), and that a given such frame may include other information in addition to that feature (e.g., portions of the west and north walls, the ceiling and/or floor, possible contents of the room, etc.)—for the purpose of this example, the pair of frames/images being compared from the two viewing locations corresponding to feature 195-1 may include the image/frame from each viewing location with the largest amount of overlap” – Image data from different captured images is processed to determine which images have the most overlap (e.g., have the most similar information)). associating, based on the comparing, the recorded information with a position in the area that is determined for the recording location based on the building location associated with the determined one building location circular descriptor; and providing information about the determined position in the area for the recorded information. (Colburn [0038] “After analyzing multiple such features in room 229 a between the panorama images from the viewing locations 210A and 210C, various information may be determined regarding the positions of the viewing locations 210A and 210C in the room 229 a. 
Note that in this example the viewing location 210C is on the border between rooms 229 a and 229 c, and thus may provide information for and be associated with one or both of those rooms, as well as may provide some information regarding room 229 d based on overlap through the doorway to that room with the panorama image acquired from viewing location 210D. In addition, the image analysis identifies various other features of the room for possible later use, including connecting doorway passages 233 in and/or out of the room (as well as interior doorways or other openings 237 within the room), connecting window passages 234 (e.g., from the room to an exterior of the building), etc.—it will be appreciated that the example connecting passages are shown for only a subset of the possible connecting passages, and that some types of connecting passages (e.g., windows, interior doorways or other openings, etc.) may not be used in some embodiments.” – The comparison of capture position associated data provides information about the positions, including relative positions and doorways or other connections between rooms with the points of image capture.) Claim 27 Regarding claim 27, Colburn teaches the features of claim 26 and further teaches: wherein the recorded information includes a panorama image with visual information, (Colburn [0009] “In at least some embodiments and situations, some or all of the images acquired for a building may be panorama images that are each acquired at one of multiple viewing locations in or around the building, such as to optionally generate a panorama image at a viewing location from a video at that viewing location (e.g., a 360° video taken from a smartphone or other mobile device held by a user turning at that viewing location)” – Panorama images with visual information.) wherein the structural elements include wall elements having at least one of a door or a window or an inter-wall border, and (Colburn [0022] “in the example of FIG. 1B, such objects or other features include the doorways 190 and 197 (e.g., with swinging and/or sliding doors), windows 196,” – Doors and windows… [0031] “The image 150 e includes several objects in the surrounding environment of the living room, such as windows 196, a picture or painting 194-1, chair 192-1, table 193-1, a lighting fixture 130 a, and inter-wall and floor and ceiling borders including border 195-2.” – Inter-wall borders.) wherein the providing of the information about the determined position in the room includes presenting a floor plan for the building that includes the area, wherein the presented floor plan includes a visual indication of the determined position in the area. (Colburn [0008] “The generated floor map and/or other generated mapping-related information may be further used in various manners in various embodiments, including for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc.” – The generated floor plan/map is presented in a GUI, the generated floor plan/map including all positions in the room.) Claim 28 Regarding claim 28, Colburn teaches the features of claim 26, and further teaches: wherein the area of the building is one of multiple rooms of the building. 
(Colburn [0015] “The described techniques provide various benefits in various embodiments, including to allow floor maps of multi-room buildings and other structures to be automatically generated from images acquired in the buildings or other structures, including without having or using detailed information about distances from images' viewing locations to walls or other objects in a surrounding building or other structure.” – The building can be a multi-room building, including room areas in the generated floorplan.) Claim 29 Regarding claim 29, Colburn teaches the features of claim 26, and further teaches: wherein the area of the building is an external area adjacent to the building. (Colburn [0023] “ This process may repeat from some or all rooms of the building and optionally external to the building, as illustrated for viewing locations 210C-210J. The acquired video and/or other images for each viewing location are further analyzed to generate a panorama image for each of viewing locations 210A-210J, including in some embodiments to match objects and other features in different images. In addition to generating such panorama images, further analysis may be performed in order to clink' at least some of the panoramas together with lines 215 between them, such as to determine relative positional information between pairs of viewing locations that are visible to each other and, to store corresponding inter-panorama links (e.g., links 215-AB, 215-BC and 215-AC between viewing locations A and B, B and C, and A and C, respectively), and in some embodiments and situations to further link at least some viewing locations that are not visible to each other (e.g., link 215-BE between viewing locations B and E).” – The areas include areas external from the building.) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 11: Colburn and Christopher Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0116493 A1 to Colburn et al. (Colburn) in view of NPL “K-Nearest Neighbor” by Christopher (Christopher). 
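Christopher is relied on for k-nearest-neighbor searching, which in this context would rank grid building locations by descriptor distance to the image circular descriptor. A minimal sketch under that assumption follows; the Euclidean metric, grid names, and vector sizes are illustrative and do not come from Christopher, Colburn, or the claims.

import numpy as np

def k_nearest_locations(image_desc, grid_descriptors, k=3):
    """Return the k grid building locations whose descriptors are closest (Euclidean)."""
    distances = [(float(np.linalg.norm(image_desc - desc)), name)
                 for name, desc in grid_descriptors.items()]
    return sorted(distances)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    grid = {f"grid_{i}": rng.random(360) for i in range(25)}    # hypothetical 5x5 grid of locations
    query = grid["grid_7"] + rng.normal(0, 0.02, 360)           # image taken near grid_7
    print(k_nearest_locations(query, grid))                     # grid_7 should rank first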
Claim 11

Regarding claim 11, Colburn teaches the features of claim 10, and further teaches:

wherein the comparing of the image circular descriptor to the building location circular descriptors includes performing a [similarity] search of the building locations of the grid, (Colburn [0037] “It will be appreciated that multiple frames from both viewing locations may include at least some of the same feature (e.g., corner 195-1), and that a given such frame may include other information in addition to that feature (e.g., portions of the west and north walls, the ceiling and/or floor, possible contents of the room, etc.)—for the purpose of this example, the pair of frames/images being compared from the two viewing locations corresponding to feature 195-1 may include the image/frame from each viewing location with the largest amount of overlap” [0036] “it will be appreciated that similar analysis may be performed for that same room by using the panorama image information for viewing location 210B (e.g., in comparison to the panorama images for viewing locations 210A and 210C individually in a pairwise fashion, or instead to simultaneously compare the panorama images for all three viewing locations), and that similar techniques may be performed for the other rooms 229 b-229 f.” – Colburn contemplates conducting a similarity search to determine an image with the largest overlap.)

including identifying the determined one building location by repeatedly moving from at least one current building location in the grid to at least one neighbor building location in the grid if the at least one neighbor building location has a smaller dissimilarity with the image circular descriptor than does the at least one current building location. (Colburn [0010] “In other embodiments, the images may not be inter-connected, in which additional operations may optionally be performed to connect pairs of them. Additional details are included below regarding determining relative positions of images' viewing locations to each other in a common global coordinate system or other common frame of reference, including with respect to FIGS. 2A and 2B and their associated descriptions.” – Elements of circular descriptors for multiple building locations… [0011] “In addition, the generation of the floor map for the building may further include automatically determining, for each room in the building, the relative position within the room of any image viewing locations, and the positions of any connecting passages in and/or out of the room, such as based at least in part on automated analysis of each such image to determine directions to multiple features in the room (e.g., corners, doorways, etc.), thus allowing the relative position of the image to those multiple features to be determined from those determined directions. The connecting passages may include one or more of doorways, windows, stairways, non-room hallways, etc., and the automated analysis of the images may identify such features based at least in part on identifying the outlines of the passages, identifying different content within the passages than outside them (e.g., different colors, shading, etc.), etc.
In addition, in at least some embodiments, the automated analysis of the images may further identify additional information such as an estimated room shape and/or room type, such as by using machine learning to identify features or characteristics corresponding to different room shapes and/or room types—in other embodiments, at least some such information may be obtained in other manners, such as to receive estimated room shape information and optionally room type information from one or more users (e.g., based on user mark-up of one or more images in the room, such as to identify borders between walls, ceiling and floor; based on other user input; etc.). In some embodiments, the automated analysis of the images may further identify additional information in one or more images, such as dimensions of objects (e.g., objects of known size) and/or of some or all of the rooms, as well as estimated actual distances of images' viewing locations from walls or other features in their rooms.” – Further elements of circular descriptors for multiple building locations… [0012] “After positions of images' viewing locations in their enclosing rooms and relative to each other in a common global coordinate system have been determined, and estimated room shape information is obtained for the building's rooms, the generation of the floor map for the building may further include automatically determining initial placement positions of each room's estimated room shape, by placing a room's estimated room shape around any image viewing locations that the room contains. In at least some embodiments, such initial placements are performed separately for each room, but using the determined relative positions of the viewing locations in the common global coordinate system. In this manner, a rough approximation of the floor map may be determined.” [0013] “After determining the initial placement positions of each room's estimated room shape in the common global coordinate system, the generation of the floor map for the building may further include automatically determining final placements of the estimated room shapes for the building's rooms, including by considering positions of rooms relative to each other. The automatic determination of the final placements of the estimated room shapes to complete the floor map may include applying constraints of one or more types, including connecting passages between rooms (e.g., to co-locate or otherwise match connecting passage information in two or more rooms that the passage connects), and optionally constraints of other types (e.g., locations of the building exterior where rooms should not be located, shapes of adjacent rooms, overall dimensions of the building and/or of particular rooms in the building, an exterior shape of some or all of the building, etc.). In some embodiments and in situations with a building having multiple stories or otherwise having multiple levels, the connecting passage information may further be used to associate corresponding portions on different sub-maps of different floors or levels.” - Further elements of circular descriptors for multiple building locations… [0014] “In some embodiments, one or more types of additional processing may be performed, such as to determine additional mapping-related information for a generated floor map or to otherwise associate additional information with a generated floor map. 
As one example, one or more types of additional information about a building may be received and associated with the floor map (e.g., with particular locations in the floor map), such as additional images, annotations or other descriptions of particular rooms or other locations, overall dimension information, etc. As another example, in at least some embodiments, additional processing of images is performed to determine estimated distance information of one or more types, such as to measure sizes in images of objects of known size, and use such information to estimate room width, length and/or height. Estimated size information for one or more rooms may be associated with the floor map, stored and optionally displayed—if the size information is generated for all rooms within a sufficient degree of accuracy, a more detailed floor plan of the building may further be generated, such as with sufficient detail to allow blueprints or other architectural plans to be generated. In addition, if height information is estimated for one or more rooms, a 3D (three-dimensional) model of some or all of the 2D (two dimensional) floor map may be created, associated with the floor map, stored and optionally displayed. Such generated floor maps and optionally additional associated information may further be used in various manners, as discussed elsewhere herein.” – Further elements of circular descriptors for multiple building locations…)

Colburn contemplates using a similarity search, based on elements of circular descriptors, for building locations, but does not appear to explicitly teach, but Colburn in view of Christopher teaches:

wherein the comparing of the image circular descriptor to the building location circular descriptors includes performing a nearest-neighbor search of the building locations of the grid, including identifying the determined one [data associated with a point] by repeatedly moving from at least one [point] in the grid to at least one neighbor [point] in the grid if the at least one neighbor [point] has a smaller dissimilarity with the [data associated with the at least one current point] than does the at least [point’s associated data]. (Christopher Page 2, First Paragraph “K-nearest neighbors (KNN) is a type of supervised learning algorithm used for both regression and classification. KNN tries to predict the correct class for the test data by calculating the distance between the test data and all the training points. Then select the K number of points which is closet to the test data. The KNN algorithm calculates the probability of the test data belonging to the classes of ‘K’ training data and class holds the highest probability will be selected. In the case of regression, the value is the mean of the ‘K’ selected training points.” Page 5, Bullets
“Step-1: Select the number K of the neighbors
Step-2: Calculate the Euclidean distance of K number of neighbors
Step-3: Take the K nearest neighbors as per the calculated Euclidean distance.
Step-4: Among these k neighbors, count the number of the data points in each category.
Step-5: Assign the new data points to that category for which the number of the neighbor is maximum.
Step-6: Our model is ready.”
Page 11, First Paragraph “Lets consider for simple case with two dimension plot. If we look mathematically, the simple intuition is to calculate the euclidean distance from point of interest ( of whose class we need to determine) to all the points in training set. Then we take class with majority points.
This is called brute force method.” – K-nearest neighbors is a similarity search that determines clusters, such as points within the same room. Every point is compared to determine similarity. Page 11, Second Paragraph “k-d tree is a hierarchical binary tree. When this algorithm is used for k-NN classification, it rearranges the whole dataset in a binary tree structure, so that when test data is provided, it would give out the result by traversing through the tree, which takes less time than brute search.” Also see the figure from page 12 of Christopher, which illustrates the k-d tree structure.)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the similarity search of Colburn by the K-nearest neighbor k-d tree similarity search of Christopher because the person of ordinary skill in the art would be motivated, based on the expressed aim of Colburn to determine positions that are within the same room based on a similarity search, to look to Christopher for the K-nearest neighbors similarity search using the k-d tree that can easily and quickly identify each category (room) with clusters of capture positions likely to be in each room. (Colburn [0011] “In addition, the generation of the floor map for the building may further include automatically determining, for each room in the building, the relative position within the room of any image viewing locations, and the positions of any connecting passages in and/or out of the room, such as based at least in part on automated analysis of each such image to determine directions to multiple features in the room (e.g., corners, doorways, etc.), thus allowing the relative position of the image to those multiple features to be determined from those determined directions.” [0037] “It will be appreciated that multiple frames from both viewing locations may include at least some of the same feature (e.g., corner 195-1), and that a given such frame may include other information in addition to that feature (e.g., portions of the west and north walls, the ceiling and/or floor, possible contents of the room, etc.)—for the purpose of this example, the pair of frames/images being compared from the two viewing locations corresponding to feature 195-1 may include the image/frame from each viewing location with the largest amount of overlap”; Christopher Page 4, First Paragraph “Suppose there are two categories, i.e., Category A and Category B, and we have a new data point x1, so this data point will lie in which of these categories. To solve this type of problem, we need a K-NN algorithm. With the help of K-NN, we can easily identify the category or class of a particular dataset.” Page 11, Second Paragraph “k-d tree is a hierarchical binary tree. When this algorithm is used for k-NN classification, it rearranges the whole dataset in a binary tree structure, so that when test data is provided, it would give out the result by traversing through the tree, which takes less time than brute search.”)

Claim 15: Colburn and Masson

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0116493 A1 to Colburn et al. (Colburn) in view of NPL “Robust statistical distances for machine learning” by Masson (Masson).
Claim 15

Regarding claim 15, Colburn teaches the features of claim 5 and further teaches:

wherein the comparing of the image circular descriptor to the building location circular descriptors includes, for each of at least some of the building location circular descriptors, using a [similarity] distance measurement of a distance between the image circular descriptor and the building location circular descriptor, and selecting one of the at least some building location circular descriptors that has a smallest measured distance to the image angular detector as the determined one building location circular descriptor. (Colburn [0026] “FIG. 1D provides additional information 103, including about portions of the path 115 ab and 115 bc that reflect the user moving from viewing location 210A to viewing location 210B, and subsequently from viewing location 210B to 210C, respectively. It will be appreciated that the order of obtaining such linking information may vary, such as if the user instead started at viewing location 210B and captured linking information as he or she traveled along path 115 bc to viewing location 210C, and later proceeded from viewing location 210A to viewing location 210B along travel path 115 ab with corresponding linking information captured (optionally after moving from viewing location 210C to 210A without capturing linking information). In this example, FIG. 1D illustrates that the user departs from the viewing location 210A at a point 137 in a direction that is just west of due north (as previously indicated with respect to directional indicator 109 of FIG. 1B), proceeding in a primarily northward manner for approximately a first half of the travel path 115ab, and then beginning to curve in a more easterly direction until arriving at an incoming point 138 to viewing location 210B in a direction that is mostly eastward and a little northward. In order to determine the departure direction from point 137 more specifically, including relative to the direction 120A at which the video acquisition previously began for viewing location 210A (and at which the resulting panorama image begins), initial video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210A in order to identify matching frames/images. In particular, by matching one or more best frames in that panorama image that correspond to the information in the initial one or more video frames/images taken as the user departs from point 137, the departure direction from point 137 may be matched to the viewing direction for acquiring those matching panorama images—while not illustrated, the resulting determination may correspond to a particular degree of rotation from the starting direction 120A to the one or more matching frames/images of the panorama image for that departure direction. In a similar manner, in order to determine the arrival direction at point 138 more specifically, including relative to the direction 120B at which the video acquisition began for viewing location 210B (and at which the resulting panorama image begins), final video information captured as the user travels along travel path 115 ab may be compared to the frames of the panorama image for viewing location 210B in order to identify matching frames/images, and in particular to frames/images in direction 139 (opposite to the side of viewing location 210B at which the user arrives).” – The image/video data and other data are compared to determine a position.
[0036] “it will be appreciated that similar analysis may be performed for that same room by using the panorama image information for viewing location 210B (e.g., in comparison to the panorama images for viewing locations 210A and 210C individually in a pairwise fashion, or instead to simultaneously compare the panorama images for all three viewing locations), and that similar techniques may be performed for the other rooms 229 b-229 f.” – A similarity analysis is used to determine which point and its associated data in the map corresponds to the point of image capture to determine location.) Colburn teaches using a similarity analysis to determine whether a captured image and its associated data corresponds to, based on the similarity, the data associated with a particular position on a map, but Colburn does not appear to explicitly teach, but Colburn in view of Masson teaches: wherein the comparing of the image circular descriptor to the building location circular descriptors includes, for each of at least some of the building location circular descriptors, using a circular earth mover's distance measurement of a distance between the image circular descriptor and the building location circular descriptor, and selecting one of the at least some building location circular descriptors that has a smallest measured distance to the image angular detector as the determined one building location circular descriptor. (Masson Page 23, Second Paragraph “When comparing data sets, statistical tests can tell you whether or not two data samples are likely to be generated by the same process or if they are related in some way. However, there are cases where, rather than deciding whether to reject a statistical hypothesis, you want to measure how similar or far apart the data sets are without any assumptions. Statistical distances, as distances between samples, are an interesting answer to that problem.” Page 25, Second Paragraph “Another interesting statistical distance is the Earth Mover’s Distance (EMD), also known as the first Wasserstein distance. Its formal definition is a little technical, but its physical interpretation, which gives it its name, is easy to understand: imagine the two datasets to be piles of earth, and the goal is to move the first pile around to match the second. The Earth Mover’s Distance is the minimum amount of work involved, where “amount of work” is the amount of earth you have to move multiplied by the distance you have to move it. The EMD can also be shown to be equal to the area between the two empirical CDFs.” – Earth Mover’s Distance can be used as a similarity measure.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the similarity determination of Colburn by the statistical similarity determinations of Masson because the person of ordinary skill in the art would be motivated based on the aim of Colburn to increase certainty of position determinations to look to Masson’s statistical similarity methods that measure similarity without any assumptions that would cause uncertainty. (Colburn [0041] “In this example, the panorama image acquired at viewing location 210D may be visible from only one other viewing location (viewing location 210C), and the information 230 b of FIG. 
2B indicates that there may be some uncertainty with respect to the position of viewing location 210D in such a situation, such as is illustrated by indicators 210D-1 to 210D-N (in which the angle or direction between viewing locations 210C and 210D may be known, but in which the distance between viewing locations 210C and 210D has increased uncertainty). In other embodiments, such as when linking information is used for determining the relative positions, and/or if other information about dimensions is available (e.g., from other building metadata that is available, from analysis of sizes of known objects in images, etc.), such uncertainty may be reduced or eliminated.”; Masson Page 23, Second Paragraph “When comparing data sets, statistical tests can tell you whether or not two data samples are likely to be generated by the same process or if they are related in some way. However, there are cases where, rather than deciding whether to reject a statistical hypothesis, you want to measure how similar or far apart the data sets are without any assumptions. Statistical distances, as distances between samples, are an interesting answer to that problem.”)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US 2019/0020817 A1 to Shan et al. (Teaches a similar use of panoramic images to generate a map without needing a distance sensor)
US 2022/0076019 A1 to Moulon et al. (Teaches determining a position in a mapped environment based on image capture information)
WO 2019/118599 A2 to Sheffield et al. (Teaches virtualizing a 3D environment including objects)
WO 2005/091894 A2 to Stone et al. (Teaches virtualizing a 3D environment including objects in a store)
NPL: “Zillow Indoor Dataset: Annotated Floor Plans With 360° Panoramas and 3D Room Layouts” by Cruz et al. (Teaches rendering 3D maps from panoramic images)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAY MICHAEL WHITE whose telephone number is (571) 272-7073. The examiner can normally be reached Mon-Fri 11:00-7:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.M.W./
Examiner, Art Unit 2188

/RYAN F PITARO/
Supervisory Patent Examiner, Art Unit 2188
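For readers who want a concrete picture of the two techniques the §103 rejections turn on, the sketch below is purely illustrative and is not taken from the Office Action, the claims, or the cited references. It assumes each "circular descriptor" can be modeled as a normalized histogram over angular bins around a capture or grid position, computes a circular earth mover's distance in the spirit of Masson's one-dimensional description (the linear EMD is the area between the two cumulative distributions; the circular variant subtracts the median of the cumulative differences to account for wrap-around), and then pairs the greedy neighbor-to-neighbor grid search recited in claim 11 with the smallest-distance selection recited in claim 15. All names here (circular_emd, greedy_grid_search, the 3x3 grid) are hypothetical.

# Illustrative sketch only; not from the Office Action, the claims, or the cited references.
# Assumptions: a "circular descriptor" is a normalized histogram over N angular bins, and
# the grid of candidate building locations maps (row, col) cells to such histograms.
import numpy as np

def circular_emd(p, q):
    """Earth mover's distance between two histograms on a circle (unit bin spacing).
    The linear 1-D EMD is the sum of |cumulative differences|; on a circle, subtracting
    the median of the cumulative differences accounts for mass moving across the wrap-around."""
    d = np.cumsum(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))
    return float(np.sum(np.abs(d - np.median(d))))

def greedy_grid_search(image_desc, grid, start):
    """Claim 11-style search: repeatedly move to an adjacent grid cell whose descriptor is
    less dissimilar to the image descriptor, and stop when no neighbor improves."""
    current = start
    current_cost = circular_emd(image_desc, grid[current])
    while True:
        r, c = current
        neighbors = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0) and (r + dr, c + dc) in grid]
        if not neighbors:
            return current, current_cost
        best = min(neighbors, key=lambda cell: circular_emd(image_desc, grid[cell]))
        best_cost = circular_emd(image_desc, grid[best])
        if best_cost >= current_cost:
            return current, current_cost  # local minimum: no neighbor is less dissimilar
        current, current_cost = best, best_cost

# Toy usage with a hypothetical 3x3 grid of 36-bin descriptors.
rng = np.random.default_rng(0)
grid = {(r, c): rng.dirichlet(np.ones(36)) for r in range(3) for c in range(3)}
query = rng.dirichlet(np.ones(36))
cell, dist = greedy_grid_search(query, grid, start=(0, 0))        # claim 11 style
nearest = min(grid, key=lambda k: circular_emd(query, grid[k]))   # claim 15 style: smallest distance
print(cell, round(dist, 4), nearest)

Under these assumptions, greedy_grid_search moves to whichever adjacent grid cell's descriptor is less dissimilar to the image descriptor and stops at a local minimum, while the final min(...) line simply picks the single grid cell with the smallest measured distance.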

Prosecution Timeline

Aug 27, 2022
Application Filed
Dec 31, 2025
Non-Final Rejection — §101, §102, §103
Mar 11, 2026
Interview Requested
Mar 24, 2026
Applicant Interview (Telephonic)
Mar 24, 2026
Examiner Interview Summary

Prosecution Projections

1-2
Expected OA Rounds
12%
Grant Probability
99%
With Interview (+100.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
