Prosecution Insights
Last updated: April 19, 2026
Application No. 18/393,427

SYSTEMS AND METHODS FOR EMERGENCY RESPONSE MAPPING AND VISUALIZATION IN THREE DIMENSIONS USING ORTHOGRAPHIC PROJECTION INFORMATION

Final Rejection (§103, §112)
Filed: Dec 21, 2023
Examiner: CHEN, FRANK S
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: FNV IP B.V.
OA Round: 4 (Final)
Grant Probability: 82% (Favorable)
OA Rounds: 5-6
To Grant: 2y 2m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 82% (539 granted / 657 resolved), +20.0% vs TC avg; grants above average.
Interview Lift: +8.8% among resolved cases with interview (moderate, roughly +9%).
Avg Prosecution: 2y 2m (fast prosecutor); 24 applications currently pending.
Total Applications: 681 across all art units (career history).

Statute-Specific Performance

§101: 10.1% (-29.9% vs TC avg)
§103: 55.9% (+15.9% vs TC avg)
§102: 4.8% (-35.2% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Deltas are measured against an estimated Tech Center average. Based on career data from 657 resolved cases.
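These cards are simple ratios over the examiner's career data. As a quick sanity check, here is a minimal Python sketch (the definitions are assumptions; the tool's actual methodology is not shown on this page) that reproduces the headline allowance figure and backs the implied Tech Center averages out of the deltas:

```python
# Minimal sketch: reproduce the dashboard arithmetic from the counts above.
# The delta/average definitions are assumptions, not the tool's methodology.

granted, resolved = 539, 657
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")         # -> 82.0%
print(f"Delta vs TC avg: {allow_rate - 0.62:+.1%}")   # implied TC avg: 62.0%

# Each statute card shows (examiner rate, delta vs TC avg), so the
# implied Tech Center average for that statute is rate - delta.
statute_cards = {"§101": (10.1, -29.9), "§103": (55.9, +15.9),
                 "§102": (4.8, -35.2), "§112": (11.1, -28.9)}
for statute, (rate, delta) in statute_cards.items():
    print(f"{statute}: examiner {rate}%, implied TC avg {rate - delta:.1f}%")
```

All four statute deltas back out to the same 40.0% baseline, consistent with the deltas being measured against a single estimated Tech Center average.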

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

2. Claims 1, 12, and 14 are currently amended.
3. Claim 21 is new.
4. Claims 1-21 are pending in the present application.

Claim Rejections - 35 USC § 112

5. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

6. Claims 1-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claim 1 recites the limitation “wherein the 3D view is different from the 3D interior survey data,” which is not supported by Applicants’ Specification. In Applicants’ Specification, 3D interior survey data corresponds to a 3D model, 3D scan data, or a point cloud of a particular floor (Applicant’s Specification at paragraphs [0063], [0065], and [0067]). The 3D view at paragraphs [0018]-[0020] is described as a 3D view of at least a portion of the building that includes an overlay representation of the top-down view 2D orthographic projection of the particular floor. Applicants’ Specification at paragraph [0014] recites that the top-down view 2D orthographic projection is generated from a portion of the 3D interior survey. Therefore, nothing states or describes how the 3D view is different from the 3D interior survey data. For purposes of this examination, Examiner shall interpret the statement “wherein the 3D view is different from the 3D interior survey data” to mean that a portion of the 3D interior survey data is presented for viewing in 3D, while the 3D interior survey data itself includes a 3D point cloud scan of every single floor of every single building.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 7. Claims 1, 5-9, and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Eduardo Juarez (U.S. Patent Application Publication No. 2024/0087254 A1) in view of Moulon et al. (U.S. Patent Application Publication No. 2021/0125397 A1) and further in view of Wang et al. (U.S. Patent Application Publication No. 2018/0140197 A1). 8. Regarding Claim 1 (Currently amended), Juarez discloses A method (Abstract reciting “Systems and methods are provided for a virtual reality security surveillance system (SSS) to generate a virtual reality map to display virtual reality presentations of a facility. …”) comprising: identifying a building of interest, wherein the building of interest is associated with an emergency incident or emergency incident report, (see FIG. 1; paragraph [0030] reciting “Structure data 232 comprises information regarding specific structures (e.g., office buildings, hotels, manufacturing plants, etc.) and/or specific areas (e.g., parks, parking lots, plazas, etc.) where security surveillance is being conducted. …”; paragraph [0048] reciting “Situational status component 216 obtains situational status information in real-time from security surveillance responder terminals 130, non-security terminals 140, and/or non-terminal identification components 150 in incident situations. Situational status information may include any information that provides additional context about the nature of the situation within the vicinity of a given user—e.g., number of other persons in the same vicinity as the user with either terminal 130, 140, an injury sustained by the user (or a person in the vicinity of the user), a reaction being experienced by the user (or a person in the vicinity of the user), an incident or event occurring in the vicinity of the user, a request for specific type of help needed or number of responders needed, and/or images, sounds, or video of the vicinity.”; paragraph [0049] reciting “Such situational status information may be provided as preconfigured messages/data the user can automatically send by tapping or otherwise selecting an associated buttons, icon or tile from their emergency management application, for example, or may be customized messages/data the user types or records into their unit via their emergency management application, for example. 
Examples of buttons, icons, or tiles that may be useful to transmit preconfigured messages via the target terminal version of the app may include one or more of the following: “CPR Needed,” or “Immobilized,” or “Children in Vicinity,” “Move on to other targets,” and the like. Example customized messages the non-security terminals 140 may transmit may include messages typed by the user of the unit, such as: “There is a person in my vicinity in need of CPR, please send CPR certified emergency responder as soon as possible,” or “I am in the vicinity of the incident but there is a clear path for my escape, no need to assist me, move on to others,” or “I am trapped and cannot find an exit,” or, “I've injured my leg and am immobile, please send two or more security surveillance responders to carry me out,” or “the smoke from the fire is becoming unbearable, people around me are losing consciousness and I expect I will shortly too,” and the like.”; paragraph [0073] reciting “The information collected from the sensor components in the environment, in combination with information of the environment stored in the structure data 232, equipment data 234, and pre-tagged data 236 of storage 206, and information of the environment collected from the user location component 208, camera component 210, display object component 212, situational status component 216 and equipment location component 224 of the SSS 100 (collectively, the “environment information”), may be used by the SSS 100 to generate a virtual reality map of the environment with the virtual reality generation component 228. …”; paragraph [0074] reciting “Using the obtained environment information, the real-time images, and/or other environment information of the environment, either in combination or individually, the SSS 100 may generate real-time 3D visualizations of one or more portions of the environment using the virtual reality generation component 228. As a non-limiting example, a real-time 3D visualization may display the layout of the entire environment, or a particular portion of the environment (i.e., a particular building in a campus, a floor of a building, a room on a particular floor of a building, etc.) with all of the structures, persons, objects, and incidents that are present in real-time in the particular floor of the environment at a given moment in time. A real-time 3D visualization may further display all of the persons, objects, and incidents in the particular portion of the environment at their accurate locations in real-time. …” Therefore, the buildings and areas of structure data 232 can be displayed in real-time 3D visualization layout. Any incidents including emergency incidents such as injuries to a person (or CPR requirement and emergency responder requests) can also be visually displayed in the environment at their accurate locations and in real-time. Thus, any building in the structure data 232 can be visualized in 3D with any real-time incidents also visualized on that building (accurate location).) and is an unfamiliar building for emergency responders; (paragraph [0045] reciting “Security surveillance responders may include public persons, groups, or entities. 
For instance, public security surveillance responders might include: a private security organization, a security department, a person security officer or group of security officers; a police department, a division of a police department (e.g., a task force, bomb squad, etc.), a person police officer or group of police officers; a fire department, a division of a fire department, a person fireman or group of firemen; a federal law enforcement agency (FBI, CIA, etc.), … For instance, private security surveillance responders might include security guards, property patrolmen, or any other private entity, person, or group of persons designated as such, and the like.” Public emergency responders will be unfamiliar with the buildings that they respond to.) determining a particular floor within the building of interest, wherein the particular floor is included in one or more floors of the building of interest; (see FIG. 3A, 3B, and 3C wherein a floor of a facility 310 is shown in a building shown to have multiple floors and incidents 330 and 340 are shown as well, indicating this is a floor of interest inside a building of interest.) and generating a 3D view of at least a portion of the building of interest to provide visualization information to the emergency responders, (see FIG. 3A, 3B, and 3C wherein the 3D view of a floor is achieved.; paragraph [0085] reciting “... The virtual reality presentation 300 may display a section of the facility 310 for security surveillance administrators to view. As a non-limiting example, virtual reality presentation 300 displays a real-time 3D visualization of the layout 312 of a particular floor of facility 310 with all of the structures 314, persons 316 and objects 318 that are present in real-time on the particular floor of facility 310 at a given moment in time. …”; paragraph [0103] reciting “The alerts may be text messages, audio messages, and/or video messages. The alerts may also include real-time images of the layout 312, structure 314, persons 316, objects 318, incidents 340, and/or devices that are in a particular area of facility 310. The SSS 100 may send alerts to one or more devices of persons 316 that have been selected for alerts to be sent to. The SSS 100 may automatically send alerts to one or more devices of persons 316 according to pre-defined settings. As an example, the SSS 100 may have pre-defined settings that a particular area of facility 310 is restricted, and SSS 100 may send alerts to any devices of persons 316 that enter into the restricted area. Authorized persons may also choose particular person 316 from one or more virtual reality presentations 300 of facility 310 to send one or more alerts to. Authorized persons may further identify particular areas from one or more virtual reality presentations 300 of the facility 310 to label and establish one or more settings to, e.g., label as restricted areas and cause the SSS 100 to automatically send alerts to any devices of persons 316 that enter into any of the restricted areas.” Thus, the emergency responders such as a police unit or firefighting unit can be presented 3D real time virtual visual layout 312 of the building with images of persons, objects, incidents, etc. displayed within the floor of the building being investigated for emergency incident.) 
While not explicitly disclosed by Juarez, Moulon discloses obtaining a top-down view two-dimensional (2D) orthographic projection of three-dimensional (3D) interior survey data corresponding to the particular floor, wherein the top-down view 2D orthographic projection includes one or more visual landmarks; (paragraph [0013] reciting “… As another non-exclusive example, if the images from the image group are video frames from a video acquired in one or more rooms, SLAM and/or SfM techniques may be used to generate a 3D point cloud for each of the room(s), with the 3D point cloud(s) representing a 3D shape of each of the room(s) and including 3D points along walls of the room and at least some of the ceiling and floor of the room, and optionally with 3D points corresponding to other objects in the room(s), if any. …”; paragraph [0010] reciting “… In at least some embodiments, the defined area includes an interior of a multi-room building (e.g., a house, office, etc.), and the generated information includes a 3D (three-dimensional) floor map model of the building that is generated from an analysis of image frames of continuous video acquired along a path through the interior of the building, with the image analysis identifying shapes and sizes of objects in the building interior (e.g., doors, windows, walls, etc.), as well as determining borders between walls, floors and ceilings. … In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. …”; A 2D orthographically projected floor map is generated from 3D floor map model wherein landmarks such as doors/windows/walls, etc. are also included. 3D survey data corresponds to the scanned data such as point clouds that are used to generated the 3D model of the floorplan.) wherein the 3D view is different from the 3D interior survey data, and includes a graphical overlay representation of the top-down view 2D orthographic projection of the particular floor rendered within the 3D view, (see FIG. 2N and 2M; paragraph [0010] reciting “…. In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. 
The generated 3D floor map model and/or 2D floor map and/or other generated mapping-related information may be further used in one or more manners in various embodiments, such as for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc. Additional details are included below regarding the automated operations of the computing device(s) involved in the generating of the mapping information, and some or all of the techniques described herein may, in at least some embodiments, be performed via automated operations of a Visual data-To-Floor Map (“VTFM”) system, as discussed further below.” The 2D floorplan, an orthographic top view, is just the 3D floor map without height information. Thus, the 3D view of the 3D floor map shows a graphical overlay of height data (walls, etc.) atop the top-down 2D orthographic projection of the floorplan of every floor that is scanned in Moulon.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present application to modify Juarez with Moulon so that the floors in Juarez can be generated using the technique of Moulon. Moulon captures images of floors for image analysis to generate a 3D floor plan while also generating a 2D orthographic floor plan from the 3D plan. This technique can obviously be applied to Juarez because Juarez also needs to generate accurate virtual 3D floors of all of the buildings in order to visualize the incidents correctly. While the combination of Juarez and Moulon does not explicitly disclose, Wang discloses and wherein the 3D view includes one or more control elements for manipulating the 3D view. (paragraph [0048] reciting “… In some embodiments, operator commands to change a location or orientation are entered by touching the screen with a finger 203 or a device such as a touch pen 204. The touch commands can include pressing areas of the display showing virtual buttons 205, 206 for rotating the view in opposite directions about a first axis, and virtual buttons 207, 208 for rotating the view in opposite directions about a second axis that is orthogonal to the first axis. The touching commands can include swiping, pinching, or spreading gestures to change the view by rotating, zooming, or panning. …” While only virtual buttons for rotating are disclosed, similar buttons for zooming, panning, or moving can be added to the screen for the user to use on the 3D view of the floorplan in virtual space.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present application to modify Juarez and Moulon with Wang so that virtual buttons can be displayed on screen for the user to actuate to manipulate the rotating, panning, zooming, or translating of the floorplan in virtual space. This is an obvious modification because Juarez at paragraph [0076] discloses administration controls for panning, zooming, or rotating the 3D visualization of the floorplan model. Therefore, it is obviously beneficial that virtual buttons are displayed on the screen as administration controls. 9.
Regarding Claim 5 (Original), Moulon further discloses The method of claim 1, wherein obtaining the top-down view 2D orthographic projection comprises: obtaining one or more portions of 3D scan or 3D mapping data corresponding to the particular floor, wherein the one or more portions of 3D scan or 3D mapping data comprise the 3D interior survey data; (paragraph [0010] reciting “… The captured video may, for example, be 360° video (e.g., video with frames that are each a spherical panorama image having 360° of coverage along at least one plane, such as 360° of coverage along a horizontal plane and around a vertical axis) acquired using a video acquisition device with a spherical camera having one or more fisheye lenses to capture 360 degrees horizontally, and in at least some such embodiments, the generating of the mapping information is further performed without having or using information acquired from any depth-sensing equipment about distances from the acquisition locations of the video/images to walls or other objects in the surrounding building interior. …”) and performing orthographic projection of the one or more portions of 3D scan or 3D mapping data onto a 2D projection plane to thereby generate the top-down view 2D orthographic projection. (paragraph [0010] reciting “… In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. …”) 8. Regarding Claim 6 (Original), Moulon further discloses The method of claim 5, wherein the 2D projection plane is a horizontal plane parallel to a floor surface of the particular floor represented in the one or more portions of 3D scan or 3D mapping data. (paragraph [0013] reciting “… SfM analysis techniques may be used to generate a 3D point cloud for each of one or more rooms in which those images were acquired, with the 3D point cloud(s) representing a 3D shape of each of the room(s) and including 3D points along walls of the room and at least some of the ceiling and floor of the room, …” paragraph [0038] reciting “… Such a point cloud may be further analyzed to determine planar areas, such as to correspond to walls, the ceiling, floor, etc., as well as in some cases to detect features such as windows, doorways and other inter-room openings, etc. …” The point cloud is used to determine floor plane which is horizontally parallel to the actual floor of the area captured by sensors. This would have been an obvious modification to Juarez) 9. Regarding Claim 7 (Original), Moulon further discloses The method of claim 5, wherein the 2D projection plane is a horizontal plane coplanar with a floor surface of the particular floor represented in the one or more portions of 3D scan or 3D mapping data. 
(paragraph [0013] reciting “… SfM analysis techniques may be used to generate a 3D point cloud for each of one or more rooms in which those images were acquired, with the 3D point cloud(s) representing a 3D shape of each of the room(s) and including 3D points along walls of the room and at least some of the ceiling and floor of the room, …” paragraph [0038] reciting “… Such a point cloud may be further analyzed to determine planar areas, such as to correspond to walls, the ceiling, floor, etc., as well as in some cases to detect features such as windows, doorways and other inter-room openings, etc. …” The point cloud is used to determine floor plane which is coplanar to the actual physical floor of the area being captured by sensors. This would have been an obvious modification to Juarez) 10. Regarding Claim 8 (Original), Moulon further discloses The method of claim 1, wherein the top-down view 2D orthographic projection is generated from a portion of the 3D interior survey data associated with respective 3D height coordinates less than or equal to a configured threshold height value. (paragraph [0017] reciting “… In addition, if estimated size information includes height information (e.g., from floors to ceilings, such as may be obtained from results of SfM and/or MVS and/or SLAM processing), a 3D model (e.g., with full height information represented) and/or 2.5D (two-and-a-half dimensional) model (e.g., with partial representations of height shown) of some or all of the 2D (two-dimensional) floor map may be created (optionally with information from in-room images projected on the walls of the models), associated with the floor map, stored and optionally displayed. …” The height from 3D analysis is at most ceiling height.) 11. Regarding Claim 9 (Original), Moulon further discloses The method of claim 8, wherein the configured threshold height value is equal to a ceiling height for the particular floor within the building of interest. (paragraph [0017] reciting “… In addition, if estimated size information includes height information (e.g., from floors to ceilings, such as may be obtained from results of SfM and/or MVS and/or SLAM processing), a 3D model (e.g., with full height information represented) and/or 2.5D (two-and-a-half dimensional) model (e.g., with partial representations of height shown) of some or all of the 2D (two-dimensional) floor map may be created (optionally with information from in-room images projected on the walls of the models), associated with the floor map, stored and optionally displayed. …” The height from 3D analysis is at most ceiling height.) 12. Regarding Claim 12 (Currently amended), Juarez discloses A system (Abstract reciting “Systems and methods are provided for a virtual reality security surveillance system (SSS) to generate a virtual reality map to display virtual reality presentations of a facility. …”) comprising: one or more processors; and one or more computer-readable storage media having computer-readable instructions stored thereon, wherein the computer-readable instructions, when executed by the one or more processors, cause the one or more processors to: (paragraph [0010] reciting “FIG. 
6 is an example computing component that includes one or more hardware processors and machine-readable storage media storing a set of machine-readable/machine-executable instructions that, when executed, cause the one or more hardware processors to perform an illustrative method for implementing virtual reality assisted security and distress location system according to various embodiments of the technology described in the present disclosure.”) identify a building of interest, wherein the building of interest is associated with an emergency incident or emergency incident report, (see FIG. 1; paragraph [0030] reciting “Structure data 232 comprises information regarding specific structures (e.g., office buildings, hotels, manufacturing plants, etc.) and/or specific areas (e.g., parks, parking lots, plazas, etc.) where security surveillance is being conducted. …”; paragraph [0048] reciting “Situational status component 216 obtains situational status information in real-time from security surveillance responder terminals 130, non-security terminals 140, and/or non-terminal identification components 150 in incident situations. Situational status information may include any information that provides additional context about the nature of the situation within the vicinity of a given user—e.g., number of other persons in the same vicinity as the user with either terminal 130, 140, an injury sustained by the user (or a person in the vicinity of the user), a reaction being experienced by the user (or a person in the vicinity of the user), an incident or event occurring in the vicinity of the user, a request for specific type of help needed or number of responders needed, and/or images, sounds, or video of the vicinity.”; paragraph [0049] reciting “Such situational status information may be provided as preconfigured messages/data the user can automatically send by tapping or otherwise selecting an associated buttons, icon or tile from their emergency management application, for example, or may be customized messages/data the user types or records into their unit via their emergency management application, for example. Examples of buttons, icons, or tiles that may be useful to transmit preconfigured messages via the target terminal version of the app may include one or more of the following: “CPR Needed,” or “Immobilized,” or “Children in Vicinity,” “Move on to other targets,” and the like. 
Example customized messages the non-security terminals 140 may transmit may include messages typed by the user of the unit, such as: “There is a person in my vicinity in need of CPR, please send CPR certified emergency responder as soon as possible,” or “I am in the vicinity of the incident but there is a clear path for my escape, no need to assist me, move on to others,” or “I am trapped and cannot find an exit,” or, “I've injured my leg and am immobile, please send two or more security surveillance responders to carry me out,” or “the smoke from the fire is becoming unbearable, people around me are losing consciousness and I expect I will shortly too,” and the like.”; paragraph [0073] reciting “The information collected from the sensor components in the environment, in combination with information of the environment stored in the structure data 232, equipment data 234, and pre-tagged data 236 of storage 206, and information of the environment collected from the user location component 208, camera component 210, display object component 212, situational status component 216 and equipment location component 224 of the SSS 100 (collectively, the “environment information”), may be used by the SSS 100 to generate a virtual reality map of the environment with the virtual reality generation component 228. …”; paragraph [0074] reciting “Using the obtained environment information, the real-time images, and/or other environment information of the environment, either in combination or individually, the SSS 100 may generate real-time 3D visualizations of one or more portions of the environment using the virtual reality generation component 228. As a non-limiting example, a real-time 3D visualization may display the layout of the entire environment, or a particular portion of the environment (i.e., a particular building in a campus, a floor of a building, a room on a particular floor of a building, etc.) with all of the structures, persons, objects, and incidents that are present in real-time in the particular floor of the environment at a given moment in time. A real-time 3D visualization may further display all of the persons, objects, and incidents in the particular portion of the environment at their accurate locations in real-time. …” Therefore, the buildings and areas of structure data 232 can be displayed in real-time 3D visualization layout. Any incidents including emergency incidents such as injuries to a person (or CPR requirement and emergency responder requests) can also be visually displayed in the environment at their accurate locations and in real-time. Thus, any building in the structure data 232 can be visualized in 3D with any real-time incidents also visualized on that building (accurate location).) and is an unfamiliar building for emergency responders; (paragraph [0045] reciting “Security surveillance responders may include public persons, groups, or entities. 
For instance, public security surveillance responders might include: a private security organization, a security department, a person security officer or group of security officers; a police department, a division of a police department (e.g., a task force, bomb squad, etc.), a person police officer or group of police officers; a fire department, a division of a fire department, a person fireman or group of firemen; a federal law enforcement agency (FBI, CIA, etc.), … For instance, private security surveillance responders might include security guards, property patrolmen, or any other private entity, person, or group of persons designated as such, and the like.” Public emergency responders will be unfamiliar with the buildings that they respond to.) determine a particular floor within the building of interest, wherein the particular floor is included in one or more floors of the building of interest; (see FIG. 3A, 3B, and 3C wherein a floor of a facility 310 is shown in a building shown to have multiple floors and incidents 330 and 340 are shown as well, indicating this is a floor of interest inside a building of interest.) and generate a 3D view of at least a portion of the building of interest to provide visualization information to the emergency responders, (see FIG. 3A, 3B, and 3C wherein the 3D view of a floor is achieved.; paragraph [0085] reciting “... The virtual reality presentation 300 may display a section of the facility 310 for security surveillance administrators to view. As a non-limiting example, virtual reality presentation 300 displays a real-time 3D visualization of the layout 312 of a particular floor of facility 310 with all of the structures 314, persons 316 and objects 318 that are present in real-time on the particular floor of facility 310 at a given moment in time. …”; paragraph [0103] reciting “The alerts may be text messages, audio messages, and/or video messages. The alerts may also include real-time images of the layout 312, structure 314, persons 316, objects 318, incidents 340, and/or devices that are in a particular area of facility 310. The SSS 100 may send alerts to one or more devices of persons 316 that have been selected for alerts to be sent to. The SSS 100 may automatically send alerts to one or more devices of persons 316 according to pre-defined settings. As an example, the SSS 100 may have pre-defined settings that a particular area of facility 310 is restricted, and SSS 100 may send alerts to any devices of persons 316 that enter into the restricted area. Authorized persons may also choose particular person 316 from one or more virtual reality presentations 300 of facility 310 to send one or more alerts to. Authorized persons may further identify particular areas from one or more virtual reality presentations 300 of the facility 310 to label and establish one or more settings to, e.g., label as restricted areas and cause the SSS 100 to automatically send alerts to any devices of persons 316 that enter into any of the restricted areas.” Thus, the emergency responders such as a police unit or firefighting unit can be presented 3D real time virtual visual layout 312 of the building with images of persons, objects, incidents, etc. displayed within the floor of the building being investigated for emergency incident.) 
While not explicitly disclosed by Juarez, Moulon discloses obtain a top-down view two-dimensional (2D) orthographic projection of three-dimensional (3D) interior survey data corresponding to the particular floor, wherein the top-down view 2D orthographic projection includes one or more visual landmarks; (paragraph [0013] reciting “… As another non-exclusive example, if the images from the image group are video frames from a video acquired in one or more rooms, SLAM and/or SfM techniques may be used to generate a 3D point cloud for each of the room(s), with the 3D point cloud(s) representing a 3D shape of each of the room(s) and including 3D points along walls of the room and at least some of the ceiling and floor of the room, and optionally with 3D points corresponding to other objects in the room(s), if any. …”; paragraph [0010] reciting “… In at least some embodiments, the defined area includes an interior of a multi-room building (e.g., a house, office, etc.), and the generated information includes a 3D (three-dimensional) floor map model of the building that is generated from an analysis of image frames of continuous video acquired along a path through the interior of the building, with the image analysis identifying shapes and sizes of objects in the building interior (e.g., doors, windows, walls, etc.), as well as determining borders between walls, floors and ceilings. … In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. …”; A 2D orthographically projected floor map is generated from 3D floor map model wherein landmarks such as doors/windows/walls, etc. are also included. 3D survey data corresponds to the scanned data such as point clouds that are used to generated the 3D model of the floorplan.) wherein the 3D view is different from the 3D interior survey data, and includes a graphical overlay representation of the top-down view 2D orthographic projection of the particular floor rendered within the 3D view, (see FIG. 2N and 2M; paragraph [0010] reciting “…. In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. 
The generated 3D floor map model and/or 2D floor map and/or other generated mapping-related information may be further used in one or more manners in various embodiments, such as for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc. Additional details are included below regarding the automated operations of the computing device(s) involved in the generating of the mapping information, and some or all of the techniques described herein may, in at least some embodiments, be performed via automated operations of a Visual data-To-Floor Map (“VTFM”) system, as discussed further below.” The 2D floorplan, an orthographic top view, is just the 3D floor map without height information. Thus, the 3D view of the 3D floor map shows a graphical overlay of height data (walls, etc.) atop the top-down 2D orthographic projection of the floorplan of every floor that is scanned in Moulon.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present application to modify Juarez with Moulon so that the floors in Juarez can be generated using the technique of Moulon. Moulon captures images of floors for image analysis to generate a 3D floor plan while also generating a 2D orthographic floor plan from the 3D plan. This technique can obviously be applied to Juarez because Juarez also needs to generate accurate virtual 3D floors of all of the buildings in order to visualize the incidents correctly. While the combination of Juarez and Moulon does not explicitly disclose, Wang discloses and wherein the 3D view includes one or more control elements for manipulating the 3D view. (paragraph [0048] reciting “… In some embodiments, operator commands to change a location or orientation are entered by touching the screen with a finger 203 or a device such as a touch pen 204. The touch commands can include pressing areas of the display showing virtual buttons 205, 206 for rotating the view in opposite directions about a first axis, and virtual buttons 207, 208 for rotating the view in opposite directions about a second axis that is orthogonal to the first axis. The touching commands can include swiping, pinching, or spreading gestures to change the view by rotating, zooming, or panning. …” While only virtual buttons for rotating are disclosed, similar buttons for zooming, panning, or moving can be added to the screen for the user to use on the 3D view of the floorplan in virtual space.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present application to modify Juarez and Moulon with Wang so that virtual buttons can be displayed on screen for the user to actuate to manipulate the rotating, panning, zooming, or translating of the floorplan in virtual space. This is an obvious modification because Juarez at paragraph [0076] discloses administration controls for panning, zooming, or rotating the 3D visualization of the floorplan model. Therefore, it is obviously beneficial that virtual buttons are displayed on the screen as administration controls. 13.
Regarding Claim 13 (Original), Moulon further discloses The system of claim 12, wherein, to obtain the top-down view 2D orthographic projection, the computer-readable instructions cause the one or more processors to: obtain one or more portions of 3D scan or 3D mapping data corresponding to the particular floor, wherein the one or more portions of 3D scan or 3D mapping data comprise the 3D interior survey data; (paragraph [0010] reciting “… The captured video may, for example, be 360° video (e.g., video with frames that are each a spherical panorama image having 360° of coverage along at least one plane, such as 360° of coverage along a horizontal plane and around a vertical axis) acquired using a video acquisition device with a spherical camera having one or more fisheye lenses to capture 360 degrees horizontally, and in at least some such embodiments, the generating of the mapping information is further performed without having or using information acquired from any depth-sensing equipment about distances from the acquisition locations of the video/images to walls or other objects in the surrounding building interior. …”) and perform orthographic projection of the one or more portions of 3D scan or 3D mapping data onto a 2D projection plane to thereby generate the top-down view 2D orthographic projection. (paragraph [0010] reciting “… In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. …”) 14. Regarding Claim 14 (Currently amended), Juarez discloses One or more non-transitory computer-readable media comprising computer-readable instructions, which when executed by one or more processors (paragraph [0010] reciting “FIG. 6 is an example computing component that includes one or more hardware processors and machine-readable storage media storing a set of machine-readable/machine-executable instructions that, when executed, cause the one or more hardware processors to perform an illustrative method for implementing virtual reality assisted security and distress location system according to various embodiments of the technology described in the present disclosure.”) of an emergency response mapping and visualization service, cause the emergency response mapping and visualization service to: (Abstract reciting “Systems and methods are provided for a virtual reality security surveillance system (SSS) to generate a virtual reality map to display virtual reality presentations of a facility. …”) identify a building of interest, wherein the building of interest is associated with an emergency incident or emergency incident report, (see FIG. 1; paragraph [0030] reciting “Structure data 232 comprises information regarding specific structures (e.g., office buildings, hotels, manufacturing plants, etc.) and/or specific areas (e.g., parks, parking lots, plazas, etc.) where security surveillance is being conducted. 
…”; paragraph [0048] reciting “Situational status component 216 obtains situational status information in real-time from security surveillance responder terminals 130, non-security terminals 140, and/or non-terminal identification components 150 in incident situations. Situational status information may include any information that provides additional context about the nature of the situation within the vicinity of a given user—e.g., number of other persons in the same vicinity as the user with either terminal 130, 140, an injury sustained by the user (or a person in the vicinity of the user), a reaction being experienced by the user (or a person in the vicinity of the user), an incident or event occurring in the vicinity of the user, a request for specific type of help needed or number of responders needed, and/or images, sounds, or video of the vicinity.”; paragraph [0049] reciting “Such situational status information may be provided as preconfigured messages/data the user can automatically send by tapping or otherwise selecting an associated buttons, icon or tile from their emergency management application, for example, or may be customized messages/data the user types or records into their unit via their emergency management application, for example. Examples of buttons, icons, or tiles that may be useful to transmit preconfigured messages via the target terminal version of the app may include one or more of the following: “CPR Needed,” or “Immobilized,” or “Children in Vicinity,” “Move on to other targets,” and the like. Example customized messages the non-security terminals 140 may transmit may include messages typed by the user of the unit, such as: “There is a person in my vicinity in need of CPR, please send CPR certified emergency responder as soon as possible,” or “I am in the vicinity of the incident but there is a clear path for my escape, no need to assist me, move on to others,” or “I am trapped and cannot find an exit,” or, “I've injured my leg and am immobile, please send two or more security surveillance responders to carry me out,” or “the smoke from the fire is becoming unbearable, people around me are losing consciousness and I expect I will shortly too,” and the like.”; paragraph [0073] reciting “The information collected from the sensor components in the environment, in combination with information of the environment stored in the structure data 232, equipment data 234, and pre-tagged data 236 of storage 206, and information of the environment collected from the user location component 208, camera component 210, display object component 212, situational status component 216 and equipment location component 224 of the SSS 100 (collectively, the “environment information”), may be used by the SSS 100 to generate a virtual reality map of the environment with the virtual reality generation component 228. …”; paragraph [0074] reciting “Using the obtained environment information, the real-time images, and/or other environment information of the environment, either in combination or individually, the SSS 100 may generate real-time 3D visualizations of one or more portions of the environment using the virtual reality generation component 228. As a non-limiting example, a real-time 3D visualization may display the layout of the entire environment, or a particular portion of the environment (i.e., a particular building in a campus, a floor of a building, a room on a particular floor of a building, etc.) 
with all of the structures, persons, objects, and incidents that are present in real-time in the particular floor of the environment at a given moment in time. A real-time 3D visualization may further display all of the persons, objects, and incidents in the particular portion of the environment at their accurate locations in real-time. …” Therefore, the buildings and areas of structure data 232 can be displayed in real-time 3D visualization layout. Any incidents including emergency incidents such as injuries to a person (or CPR requirement and emergency responder requests) can also be visually displayed in the environment at their accurate locations and in real-time. Thus, any building in the structure data 232 can be visualized in 3D with any real-time incidents also visualized on that building (accurate location).) and is an unfamiliar building for emergency responders (paragraph [0045] reciting “Security surveillance responders may include public persons, groups, or entities. For instance, public security surveillance responders might include: a private security organization, a security department, a person security officer or group of security officers; a police department, a division of a police department (e.g., a task force, bomb squad, etc.), a person police officer or group of police officers; a fire department, a division of a fire department, a person fireman or group of firemen; a federal law enforcement agency (FBI, CIA, etc.), … For instance, private security surveillance responders might include security guards, property patrolmen, or any other private entity, person, or group of persons designated as such, and the like.” Public emergency responders will be unfamiliar with the buildings that they respond to.) determine a particular floor within the building of interest, wherein the particular floor is included in one or more floors of the building of interest; (see FIG. 3A, 3B, and 3C wherein a floor of a facility 310 is shown in a building shown to have multiple floors and incidents 330 and 340 are shown as well, indicating this is a floor of interest inside a building of interest.) and generate a 3D view of at least a portion of the building of interest to provide visualization information to the emergency responders, (see FIG. 3A, 3B, and 3C wherein the 3D view of a floor is achieved.; paragraph [0085] reciting “... The virtual reality presentation 300 may display a section of the facility 310 for security surveillance administrators to view. As a non-limiting example, virtual reality presentation 300 displays a real-time 3D visualization of the layout 312 of a particular floor of facility 310 with all of the structures 314, persons 316 and objects 318 that are present in real-time on the particular floor of facility 310 at a given moment in time. …”; paragraph [0103] reciting “The alerts may be text messages, audio messages, and/or video messages. The alerts may also include real-time images of the layout 312, structure 314, persons 316, objects 318, incidents 340, and/or devices that are in a particular area of facility 310. The SSS 100 may send alerts to one or more devices of persons 316 that have been selected for alerts to be sent to. The SSS 100 may automatically send alerts to one or more devices of persons 316 according to pre-defined settings. As an example, the SSS 100 may have pre-defined settings that a particular area of facility 310 is restricted, and SSS 100 may send alerts to any devices of persons 316 that enter into the restricted area. 
Authorized persons may also choose particular person 316 from one or more virtual reality presentations 300 of facility 310 to send one or more alerts to. Authorized persons may further identify particular areas from one or more virtual reality presentations 300 of the facility 310 to label and establish one or more settings to, e.g., label as restricted areas and cause the SSS 100 to automatically send alerts to any devices of persons 316 that enter into any of the restricted areas.” Thus, the emergency responders such as a police unit or firefighting unit can be presented 3D real time virtual visual layout 312 of the building with images of persons, objects, incidents, etc. displayed within the floor of the building being investigated for emergency incident.) While not explicitly disclosed by Juarez, Moulon discloses obtain a top-down view two-dimensional (2D) orthographic projection of three- dimensional (3D) interior survey data corresponding to the particular floor, wherein the top- down view 2D orthographic projection includes one or more visual landmarks; (paragraph [0013] reciting “… As another non-exclusive example, if the images from the image group are video frames from a video acquired in one or more rooms, SLAM and/or SfM techniques may be used to generate a 3D point cloud for each of the room(s), with the 3D point cloud(s) representing a 3D shape of each of the room(s) and including 3D points along walls of the room and at least some of the ceiling and floor of the room, and optionally with 3D points corresponding to other objects in the room(s), if any. …”; paragraph [0010] reciting “… In at least some embodiments, the defined area includes an interior of a multi-room building (e.g., a house, office, etc.), and the generated information includes a 3D (three-dimensional) floor map model of the building that is generated from an analysis of image frames of continuous video acquired along a path through the interior of the building, with the image analysis identifying shapes and sizes of objects in the building interior (e.g., doors, windows, walls, etc.), as well as determining borders between walls, floors and ceilings. … In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. …”; A 2D orthographically projected floor map is generated from 3D floor map model wherein landmarks such as doors/windows/walls, etc. are also included. 3D survey data corresponds to the scanned data such as point clouds that are used to generated the 3D model of the floorplan.) wherein the 3D view is different from the 3D interior survey data, and includes a graphical overlay representation of the top-down view 2D orthographic projection of the particular floor rendered within the 3D view, (see FIG. 2N and 2M; paragraph [0010] reciting “…. 
In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. The generated 3D floor map model and/or 2D floor map and/or other generated mapping-related information may be further used in one or more manners in various embodiments, such as for controlling navigation of mobile devices (e.g., autonomous vehicles), for display on one or more client devices in corresponding GUIs (graphical user interfaces), etc. Additional details are included below regarding the automated operations of the computing device(s) involved in the generating of the mapping information, and some or all of the techniques described herein may, in at least some embodiments, be performed via automated operations of a Visual data-To-Floor Map (“VTFM”) system, as discussed further below.” The 2D floorplan, an orthographic top view, is just the 3D floor map without height information. Thus, the 3D view of the 3D floor map shows a graphical overlay of height data (walls, etc.) atop the top-down 2D orthographic projection of the floorplan of every floor that is scanned in Moulon.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present application to modify Juarez with Moulon so that the floors in Juarez can be generated using the technique of Moulon. Moulon captures images of floors for image analysis to generate a 3D floor plan while also generating a 2D orthographic floor plan from the 3D plan. This technique can obviously be applied to Juarez because Juarez also needs to generate accurate virtual 3D floors of all of the buildings in order to visualize the incidents correctly. and wherein the 3D view includes one or more control elements for manipulating the 3D view. 15.
15. Regarding Claim 15 (Original), Moulon further discloses The one or more non-transitory computer-readable media of claim 14, wherein the computer-readable instructions further cause the emergency response mapping and visualization service to: obtain one or more portions of 3D scan or 3D mapping data corresponding to the particular floor, wherein the one or more portions of 3D scan or 3D mapping data comprise the 3D interior survey data; (paragraph [0010] reciting “… The captured video may, for example, be 360° video (e.g., video with frames that are each a spherical panorama image having 360° of coverage along at least one plane, such as 360° of coverage along a horizontal plane and around a vertical axis) acquired using a video acquisition device with a spherical camera having one or more fisheye lenses to capture 360 degrees horizontally, and in at least some such embodiments, the generating of the mapping information is further performed without having or using information acquired from any depth-sensing equipment about distances from the acquisition locations of the video/images to walls or other objects in the surrounding building interior. …”) and perform orthographic projection of the one or more portions of 3D scan or 3D mapping data onto a 2D projection plane to thereby generate the top-down view 2D orthographic projection. (paragraph [0010] reciting “… In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. …”)

16. Regarding Claim 16 (New), Moulon further discloses The system of claim 13, wherein the 2D projection plane is a horizontal plane parallel to a floor surface of the particular floor represented in the one or more portions of 3D scan or 3D mapping data. (paragraph [0013] reciting “… SfM analysis techniques may be used to generate a 3D point cloud for each of one or more rooms in which those images were acquired, with the 3D point cloud(s) representing a 3D shape of each of the room(s) and including 3D points along walls of the room and at least some of the ceiling and floor of the room, …”; paragraph [0038] reciting “… Such a point cloud may be further analyzed to determine planar areas, such as to correspond to walls, the ceiling, floor, etc., as well as in some cases to detect features such as windows, doorways and other inter-room openings, etc. …” The point cloud is used to determine the floor plane, which is horizontal and parallel to the actual floor of the area captured by the sensors. This would have been an obvious modification to Juarez.)
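The parallel-plane language of claim 16 (and the coplanar variant of claim 17, treated next) can be pictured with a small sketch. Here the floor plane is estimated with a low-percentile height heuristic standing in for the planar-area analysis Moulon describes; the function names and the percentile value are assumptions made for illustration only.

```python
import numpy as np

def estimate_floor_height(points: np.ndarray, percentile: float = 2.0) -> float:
    """Estimate the height of the floor surface as a low percentile of the
    z coordinates (a stand-in for a robust plane fit such as RANSAC)."""
    return float(np.percentile(points[:, 2], percentile))

def project_onto_horizontal_plane(points: np.ndarray, z_plane: float) -> np.ndarray:
    """Orthographically project every point onto the plane z = z_plane:
    x and y pass through unchanged, z is replaced.  With z_plane set to the
    estimated floor height the plane is coplanar with the floor (claim 17);
    any other fixed z gives a horizontal plane parallel to it (claim 16)."""
    projected = points.copy()
    projected[:, 2] = z_plane
    return projected
```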
17. Regarding Claim 17 (Previously presented), Moulon further discloses The system of claim 13, wherein the 2D projection plane is a horizontal plane coplanar with a floor surface of the particular floor represented in the one or more portions of 3D scan or 3D mapping data. (paragraph [0013] reciting “… SfM analysis techniques may be used to generate a 3D point cloud for each of one or more rooms in which those images were acquired, with the 3D point cloud(s) representing a 3D shape of each of the room(s) and including 3D points along walls of the room and at least some of the ceiling and floor of the room, …”; paragraph [0038] reciting “… Such a point cloud may be further analyzed to determine planar areas, such as to correspond to walls, the ceiling, floor, etc., as well as in some cases to detect features such as windows, doorways and other inter-room openings, etc. …” The point cloud is used to determine the floor plane, which is coplanar with the actual physical floor of the area being captured by the sensors. This would have been an obvious modification to Juarez.)

18. Regarding Claim 18 (Previously presented), Moulon further discloses The system of claim 12, wherein the top-down view 2D orthographic projection is generated from a portion of the 3D interior survey data associated with respective 3D height coordinates less than or equal to a configured threshold height value.

19. Regarding Claim 19 (Previously presented), Moulon further discloses The system of claim 18, wherein the configured threshold height value is equal to a ceiling height for the particular floor within the building of interest. (paragraph [0017] reciting “… In addition, if estimated size information includes height information (e.g., from floors to ceilings, such as may be obtained from results of SfM and/or MVS and/or SLAM processing), a 3D model (e.g., with full height information represented) and/or 2.5D (two-and-a-half dimensional) model (e.g., with partial representations of height shown) of some or all of the 2D (two-dimensional) floor map may be created (optionally with information from in-room images projected on the walls of the models), associated with the floor map, stored and optionally displayed. …” The height from the 3D analysis is at most the ceiling height.)
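Claims 18 and 19 describe a pre-filter on the survey data: keep only points whose height coordinate is at or below a configured threshold, with claim 19 pinning that threshold to the ceiling height. Under the same N×3 point-cloud assumption as above, the filter is a one-liner; the function name is ours, not the application's.

```python
import numpy as np

def filter_below_threshold(points: np.ndarray, threshold_height: float) -> np.ndarray:
    """Keep only 3D points with z <= threshold_height (e.g. the ceiling
    height of the particular floor), so that returns from above the floor
    of interest do not leak into its top-down projection."""
    return points[points[:, 2] <= threshold_height]
```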
20. Claims 2-3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Juarez in view of Moulon, in view of Wang, and further in view of Yu et al. (U.S. Patent Application Publication No. 2022/0157021 A1).

21. Regarding Claim 2 (Currently amended), while the combination of Juarez, Moulon, and Wang does not explicitly disclose, Yu discloses The method of claim 1, further comprising: outputting the generated 3D view in response to a request for visualization information or orientation information for determining a location of the emergency incident. (paragraph [0144] reciting “After a user clicks the button “View Alarm Events” in FIG. 7A, operations shown in FIGS. 7B to 7E may be performed sequentially. A building where the alarm event is located may be determined first (FIG. 7B), and the building may be cut out and split floor by floor (FIG. 7C) to obtain the floor where the alarm event is located, and then a position where the alarm event is located on the floor may be enlarged (FIGS. 7D and 7E), to display the 3D virtual image at the first position where the alarm event is located. …” The 3D split view of the building is generated when the button “View Alarm Events” is clicked by a user.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present application to modify Juarez, Moulon, and Wang with Yu so that a user interface with a button is presented for the user to select which alarms or incidents to view. This is an obviously beneficial modification, as control of which incident to view goes to the user.

22. Regarding Claim 3 (Currently amended), Juarez further discloses The method of claim 2, further comprising: receiving location information indicative of the location of the emergency incident within the particular floor, wherein the location information is based on one or more of the visual landmarks included in the overlay representation of the top-down view 2D orthographic projection. (paragraph [0097] reciting “FIG. 3C illustrates an example incident symbol for an incident displayed in a virtual reality presentation 300 similar to that shown in FIG. 3B. The SSS 100 not only can detect both the presence and locations of person 316 in the facility 310, but also the presence of any incidents, events, environmental changes, and/or emergencies, hereafter referred to as “incidents 340,” that occur in the facility 310. An incident 340 may include, but is not limited to, a fire, electrical blackout, water leakage, injury, sickness, use of lethal weapons, robbery, gun violence, bomb, etc. An incident 340 may also include a need to escape a dangerous situation.” Fires and water leakages will always involve some object within the facility, and that object is a landmark. Since the fire or leakage results in an incident being displayed, the displayed incident will be placed over or near some physical landmark that is the cause/source of the fire or water leakage. Thus, the incident location will be based on some landmark within the 3D building floor.)

23. Regarding Claim 11 (Currently amended), while the combination of Juarez, Moulon, and Wang does not explicitly disclose, Yu discloses The method of claim 1, wherein identifying the building of interest is based on a determination that the building of interest corresponds to location information associated with the emergency incident. (paragraph [0144] reciting “After a user clicks the button “View Alarm Events” in FIG. 7A, operations shown in FIGS. 7B to 7E may be performed sequentially. A building where the alarm event is located may be determined first (FIG. 7B), and the building may be cut out and split floor by floor (FIG. 7C) to obtain the floor where the alarm event is located, and then a position where the alarm event is located on the floor may be enlarged (FIGS. 7D and 7E), to display the 3D virtual image at the first position where the alarm event is located. For example, the first position is a position where the triangle mark is located in FIG. 7D and FIG. 7E, and the 3D virtual image at the position where the triangle mark is located may be displayed.” The building is identified by pressing the button to view the alarm events present within the building.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present application to modify Juarez, Moulon, and Wang with Yu so that a user interface with a button is presented for the user to select which alarms or incidents to view. This is an obviously beneficial modification, as control of which incident to view goes to the user.
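Claim 11's building-identification step amounts to a spatial lookup that matches the incident's reported location against known building footprints. A minimal sketch using hypothetical axis-aligned footprints; the data shapes and names are invented for illustration.

```python
def identify_building_of_interest(incident_xy, footprints):
    """Return the id of the first building whose axis-aligned footprint
    (xmin, ymin, xmax, ymax) contains the incident location, mirroring
    the determination that the building of interest corresponds to the
    incident's location information."""
    x, y = incident_xy
    for building_id, (xmin, ymin, xmax, ymax) in footprints.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return building_id
    return None  # incident falls outside every known footprint
```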
24. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Juarez in view of Moulon, in view of Wang, and further in view of Cier et al. (U.S. Patent Application Publication No. 2024/0233260 A1).

25. Regarding Claim 4 (Original), while the combination of Juarez, Moulon, and Wang does not explicitly disclose, Cier further discloses The method of claim 1, wherein the top-down view 2D orthographic projection is obtained based on using an identifier of the particular floor to query a database, (paragraph [0049] reciting “… In addition, in this example a user-selectable control 228 is added to indicate a current story that is displayed for the floor plan, and to allow the end-user to select a different story to be displayed …”; paragraph [0057] reciting “… The BALDPM system 140 may further, during its operation, store and/or retrieve various types of data on storage 320 (e.g., in one or more databases or other data structures), such as various types of user/device data 143 and/or 144 and/or 156, images and floor plans and other associated information 155 and/or …”) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present application to modify the combination of Juarez, Moulon, and Wang with Cier so that the user has graphical user interface controls to select various floors of a building for display. This is an obviously beneficial modification, since it allows the user the freedom to select various virtual floors for display. Moulon further discloses wherein the database includes a respective top-down view 2D orthographic projection corresponding to each floor of the one or more floors of the building of interest. (paragraph [0010] reciting “… In addition, in at least some embodiments, the mapping-related information generated from the analysis of the video image frames (or other sequence of images) includes a 2D (two-dimensional) floor map of the building, such as an overhead view (e.g., an orthographic top view) of a schematic floor map, but without including or displaying height information in the same manner as visualizations of the 3D floor map model—if the 3D floor map model is generated first based on three-dimensional information obtained from the image analysis, such a 2D floor map may, for example, be generated from the 3D floor map model by removing height-related information for the rooms of the building. …” A 3D view includes the top-down view of a 2D floor map.)
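Claim 4's database limitation is a keyed lookup: a floor identifier selects a precomputed projection. The sketch below uses SQLite purely for illustration; the table and column names are invented, not taken from the application or Cier.

```python
import sqlite3

def fetch_floor_projection(db_path: str, floor_id: str):
    """Query a database for the stored top-down 2D orthographic projection
    of a particular floor, keyed by its floor identifier."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT projection_png FROM floor_projections WHERE floor_id = ?",
            (floor_id,),
        ).fetchone()
    return row[0] if row else None
```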
26. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Juarez in view of Moulon, in view of Wang, and further in view of Fleischman et al. (U.S. Patent Application Publication No. 2021/0375062 A1).

27. Regarding Claim 21 (New), while the combination of Juarez, Moulon, and Wang does not explicitly disclose, Fleischman discloses The method of claim 1, wherein the one or more control elements includes an incident location navigation control element, and wherein user selection of the incident location navigation control element causes the 3D view to automatically zoom to a rendered view of a 3D environment surrounding the location of the incident. (paragraph [0035] reciting “… The visualization interface allows the user to view the 3D model in two ways. First, the visualization interface provides a 2D overhead map interface representing the corresponding floorplan of the environment from the floorplan storage 136. The 2D overhead map is an interactive interface in which each relative camera location indicated on the 2D map is interactive, such that clicking on a point on the map navigates to the portion of the 3D model corresponding to the selected point in space. …” Therefore, the user can click on the incident 340 button in Juarez, as shown in FIG. 3C, and be automatically navigated to that 3D model area at the selected incident point 340.) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present application to modify the combination of Juarez, Moulon, and Wang with Fleischman so that, when an incident point occurs, the user can easily click on it to view the 3D scene of the incident. This is an obviously beneficial modification, since it allows the viewer a closer look at the incident and the surrounding 3D area in which the incident is occurring.

Allowable Subject Matter

28. Claims 10 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including ALL of the limitations of the base claim and any intervening claims.

29. The following is a statement of reasons for the indication of allowable subject matter: Claim 10 recites the limitation wherein the portion of the 3D interior survey data excludes 3D points or 3D data associated with light fixtures or ceiling-mounted objects represented in the 3D interior survey data, which is neither disclosed nor suggested by the cited references, either singly or in combination.

30. Claim 20 recites the limitation wherein the portion of the 3D interior survey data excludes 3D points or 3D data associated with light fixtures or ceiling-mounted objects represented in the 3D interior survey data, which is neither disclosed nor suggested by the cited references, either singly or in combination.
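For intuition only, the allowable limitation of claims 10 and 20, which excludes 3D points for light fixtures and other ceiling-mounted objects, could be approximated as a band filter just below the ceiling. This crude height-band heuristic is our illustration, not the application's disclosed method, which may well rely on segmentation or object classification.

```python
import numpy as np

def exclude_ceiling_mounted(points: np.ndarray, ceiling_height: float,
                            band: float = 0.3) -> np.ndarray:
    """Drop 3D points lying within `band` meters below the ceiling, a rough
    proxy for removing light fixtures and ceiling-mounted objects before
    generating the top-down projection."""
    return points[points[:, 2] < ceiling_height - band]
```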
Response to Arguments

31. Applicant’s arguments, see Remarks, filed 3/11/2026, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Wang.

32. On page 10 of the Remarks, Applicants argue that the claimed 3D view is structurally distinct from the 3D interior survey data. The actual claim language states wherein the 3D view is different from the 3D interior survey data and, as discussed above, this is not disclosed by the Specification. In Applicants’ Specification, the 3D interior survey data are used to generate the 2D orthographic projection of the 3D floor plan of a floor of the building, and the 3D view is viewing some portion of that 2D orthographic projection of the 3D floor plan of a floor of a building. Therefore, the 3D view is viewing some portion of the floor plan that is created from the 3D interior survey data.

33. Moulon discusses generating point clouds of each scanned floor to generate a 3D floor map of each floor of a building. The 3D view corresponds to viewing a portion of the generated 3D floor map using the scanned point clouds. However, this 3D view is of just a single floor map, as disclosed in Juarez. Thus, the 3D view is viewing a single 3D floor map out of multiple 3D floor maps and is therefore different from viewing the plurality of 3D floor maps all at once.

34. Applicants further argue that Moulon fails to disclose rendering a visualization construct that is architecturally separate from the 3D model, but this argument is moot since it is not claimed.

35. On page 11 of the Remarks, Applicants argue that there is a lack of a 2D orthographic projection. Moulon at paragraph [0010] discloses a 2D floor map like an overhead orthographic top view of the schematic floor map. The 2D floor map is merely a 3D floor map without the height. Therefore, a 3D floor map model is an orthographic projection of height onto an orthographic 2D floor map. Applicants’ Specification at paragraph [0064] recites “Systems and techniques are described herein that can be used to provide orthographic projections of the interior of a building, where the orthographic projection comprises a 2D representation generated or otherwise obtained from 3D model and/or scan data of the interior of the building. In one illustrative example, the orthographic projections described herein can be utilized in combination with the example 3D mapping and visualization system(s) described above with respect to FIGS. 1-3. For instance, in some embodiments, the rendered floor view 212 of FIG. 2 and/or FIG. 3 can be an orthographic projection generated from 3D mapping or scan data of a model of the interior of the building (and corresponding to the particular building floor being presented in rendered floor view 212).” (emphasis added) The 2D floor plan is from the 3D model that is generated from an interior scan. This is exactly what Moulon discloses: a 2D floor plan that is just the removal of height information from a 3D floor map, where the 3D floor map in Moulon is generated from scanning of the floor. Since Moulon’s 2D floor map is just a schematic floor map without height information, the 3D floor map is just the 2D floor map with orthographic height information rendered.

36. On page 12 of the Remarks, Applicants argue that the overlay requirement is not satisfied by the existence of 2D and 3D representations. Again, there is no 3D view in Applicants’ Specification that is not viewing something generated from the 3D survey data. Thus, Examiner interprets the 3D view as merely viewing a portion of the 3D floor maps out of a plurality of 3D floor maps, which is what Juarez discloses.

37. Moulon discloses a 2D overhead orthographic view that has removed height information, which means that the 3D schematic floor plan is a 2D floor plan with orthographic height projection. The bottom layer is the 2D layer, and with orthographic height it becomes a 3D schematic floor plan.

38. On page 13 of the Remarks, Applicants argue that the combination does not supply the missing structure. Again, the structure of an entire building is partially seen, one floor at a time. A floor with height information built upon a 2D overhead floor map is shown in Juarez modified by Moulon. This height information is added to an orthographic floor plan; thus, the 3D schematic floor plan is also orthographic but in 3D.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
CONTACT

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK S CHEN, whose telephone number is (571) 270-7993. The examiner can normally be reached Mon - Fri, 8-11:30 and 1:30-6.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FRANK S CHEN/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Dec 21, 2023
Application Filed
Jul 08, 2025
Non-Final Rejection — §103, §112
Aug 07, 2025
Response Filed
Aug 21, 2025
Final Rejection — §103, §112
Oct 21, 2025
Response after Non-Final Action
Nov 25, 2025
Request for Continued Examination
Dec 05, 2025
Response after Non-Final Action
Dec 08, 2025
Non-Final Rejection — §103, §112
Mar 03, 2026
Applicant Interview (Telephonic)
Mar 03, 2026
Examiner Interview Summary
Mar 11, 2026
Response Filed
Mar 22, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597111
SYSTEMS AND METHODS FOR DULL GRADING
2y 5m to grant • Granted Apr 07, 2026
Patent 12596007
DISPLAY CONTROL APPARATUS, DISPLAY SYSTEM, DISPLAY METHOD, AND COMPUTER READABLE MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Patent 12592029
SYSTEMS AND METHODS FOR MEDIA CONTENT GENERATION
2y 5m to grant • Granted Mar 31, 2026
Patent 12586308
GENERATING OBJECT REPRESENTATIONS USING NEURAL NETWORKS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS
2y 5m to grant • Granted Mar 24, 2026
Patent 12586293
SCENE RECONSTRUCTION FROM MONOCULAR VIDEO
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
82%
Grant Probability
91%
With Interview (+8.8%)
2y 2m
Median Time to Grant
High
PTA Risk
Based on 657 resolved cases by this examiner. Grant probability derived from career allow rate.
