Prosecution Insights
Last updated: April 19, 2026
Application No. 17/564,300

Systems And Methods To Generate A Floorplan Of A Building

Final Rejection: §103, §112, double patenting (DP)
Filed: Dec 29, 2021
Examiner: HOCKER, JOHN PAUL
Art Unit: 2189
Tech Center: 2100 — Computer Architecture & Software
Assignee: Opal AI Inc.
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
OA Rounds: 3-4
To Grant: 3y 9m
Grant Probability With Interview: 87%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 84 granted / 146 resolved; +2.5% vs TC avg)
Interview Lift: +29.7% (strong, roughly +30%), measured across resolved cases with an interview
Typical Timeline: 3y 9m avg prosecution; 16 currently pending
Career History: 162 total applications across all art units
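The headline examiner figures are simple ratios over resolved cases. A minimal sketch of that arithmetic — only the 84-granted-of-146-resolved career figure comes from the panel above; the with/without-interview split below is invented for illustration, since the per-group counts are not shown:

```python
def allow_rate_pct(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Career figure shown above: 84 granted out of 146 resolved cases.
career = allow_rate_pct(84, 146)  # ~57.5%, displayed as 58%

# Interview lift = allowance rate with an interview minus the rate without.
# These group counts are hypothetical (they sum to 84/146 but the real
# split is not shown on the panel).
with_interview = allow_rate_pct(33, 42)      # hypothetical group
without_interview = allow_rate_pct(51, 104)  # remainder of 84/146
lift = with_interview - without_interview    # positive lift, ~+30 points
```

The same subtraction, applied to the examiner's actual per-group counts, is presumably what produces the +29.7% figure displayed above.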

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§103: 36.3% (-3.7% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)

Chart note: black line = Tech Center average estimate. Based on career data from 146 resolved cases.
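Each delta in this table is just the examiner's statute-specific rate minus the Tech Center average estimate, which (working backwards from the displayed deltas) appears to be about 40% for every statute here. A quick check of the arithmetic:

```python
# Tech Center average estimate implied by the displayed deltas (~40% each).
TC_AVG = 40.0

examiner_rates = {"101": 15.9, "103": 36.3, "102": 20.0, "112": 16.6}
deltas = {statute: round(rate - TC_AVG, 1) for statute, rate in examiner_rates.items()}
# Reproduces the displayed values: -24.1, -3.7, -20.0, -23.4
```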

Office Action

§103, §112, double patenting (DP)
DETAILED ACTION

AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Status of Claims
Claims 1, 3-7, 12, 14, 15 and 17-20 are amended; claims 2, 8 and 11 are canceled; and claims 21-23 are newly added. Claims 1, 3-7, 9, 10 and 12-23 are pending. Claims 1, 3-7, 9, 10 and 12-23 are rejected (Final Rejection).

Related Co-Pending Applications/Patents
Examiner notes that there is a later-filed, commonly owned patent (U.S. Patent No. 12,204,821 B2, hereinafter “the ‘821 patent”) that appears to include the same FIGS. 1-7 as the present application. Examiner will continue to consider double patenting and/or obviousness-type double patenting issues during the prosecution of (e.g., after any claim amendments in) the current application.

Information Disclosure Statements
The information disclosure statements (IDS) submitted on 04/14/2022, 06/01/2022 and 08/02/2023, respectively, are in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDSs have been considered by the examiner. Note: TANG et al. (U.S. Patent Application Publication No. 2021/0225090), which forms part of the basis for at least one of the 35 U.S.C. § 103 rejections below, was cited in Applicant’s IDS of 06/01/2022.

Response to Amendments
Applicant’s amendments (dated 09/30/2025) obviate the prior claim, specification and drawing objections. For these reasons, the previous claim, specification and drawing objections have been withdrawn.

Response to Arguments
Applicant’s arguments filed 12/12/2025 with respect to the rejections under 35 U.S.C. § 101 have been fully considered and they are persuasive. Regarding Applicant’s § 103 arguments: Applicant argues that the cited references fail to disclose the amended claim limitations, as recited in claims 1, 7 and 14.
These limitations are newly added and were therefore not addressed in the previous rejection; the arguments are accordingly moot. The amendments are addressed by the old and new grounds of rejection under 35 U.S.C. § 103 set forth below.

Specification
The amendment filed 12/12/2025 is objected to under 35 U.S.C. 132(a) because it introduces new matter into the disclosure. 35 U.S.C. 132(a) states that no amendment shall introduce new matter into the disclosure of the invention. The added material, which is not supported by the original disclosure, appears at Para. [0073] and is as follows: “e.g., outputting or displaying”. Applicant is required to cancel the new matter in the reply to this Office action.

Claim Objections
Claims 1 and 7 are objected to for informalities. Claim 1 recites “… capturing one or more images of at least a portion of a building the one or more images comprising pixels …”, which appears to be missing a comma between “building” and “the one or more …”. Appropriate correction is required. Claim 7 recites “Generating …”, which should be lowercase like all other method steps. Appropriate correction is required.

Claim Rejections - 35 U.S.C. § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1, 3-7, 9, 10 and 12-23 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement.
The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention. Claim 1 has been amended to recite “displaying a reconstructed floorplan based on the rendering.” Applicant indicates that support for this limitation is allegedly provided in the original claims. However, it is not clear where in the original claims the above-cited limitation is supported. Accordingly, Applicant has not particularly pointed out where each of the newly added claim limitations originates in the original disclosure. Claim 1 is therefore rejected for failing to comply with the written description requirement. Claims 7 and 14 recite substantially similar limitations as claim 1; therefore, they are rejected under 35 U.S.C. 112(a) for the same reasons. Claims 3-6, 9, 10, 12, 13 and 15-23 depend, respectively, from one or more of rejected claims 1, 7 and 14. Therefore, claims 3-6, 9, 10, 12, 13 and 15-23 are also rejected under the same rationale, since these claims inherit the respective deficiencies of claims 1, 7 and 14.

Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-6, 14, 16-19 and 22 are rejected under 35 U.S.C. § 103 as being unpatentable over TANG et al. (U.S. Patent Application Publication No. 2021/0225090, hereinafter TANG), in view of TIWARI et al. (U.S. Patent Application Publication No. 2018/0121571, hereinafter TIWARI).

Regarding claim 1, TANG teaches a method (abstract: devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data) comprising: capturing one or more images of a room (capture images and depth data around the user in a room, Para. [0008] of TANG), the one or more images comprising pixels (pixels of image data, Para. [0101] of TANG); generating, based on the one or more images, a three-dimensional polygonal mesh representation ([a point cloud representation corresponds to a three-dimensional polygonal mesh representation per Para. [0080] of Applicant’s specification]; See TANG teaches “3D point cloud may be generated based on depth camera information received concurrently with the images”, Para. [0013] of TANG; See also a 3D reconstructed mesh may be generated as the semantic 3D representation 445, Para. [0102] of TANG) of the room (the images are of a room of a physical environment, Para. [0012]; [a room of a physical environment is interpreted as corresponding to a functional unit (at least a portion) of a building, or it is at least obvious that a room is a portion of a building (see TIWARI below)]); See also FIGS.
1-12B of TANG and corresponding description); converting the three-dimensional polygonal mesh representation to a rendering ([Applicant’s claim 4 indicates that a floorplan corresponds to a rendering (e.g., of a room)]; TANG teaches generate floorplans and measurements using three-dimensional (3D) representations of a physical environment … the 3D representations of the physical environment may be generated based on sensor data, such as image and depth sensor data, Para. [0006] of TANG); identifying, in the three-dimensional polygonal mesh representation, a first room (3D semantic data may be segmented into a plurality of horizontal layers that are used to identify where the wall edges of the room are located, Para. [0010] of TANG; See also 3D semantic point cloud can then be used to determine specific measurements of the door or window, Para. [0011] of TANG; See also FIGS. 1-12B of TANG and corresponding description); determining an authenticity of an element indicated in the three-dimensional polygonal mesh representation (Para. [0146] of TANG, which is discussed further in the mapping of the next claim limitation, includes determining, based on a height threshold, whether a semantically identified wall is a floor-to-ceiling wall or a cubicle wall; [the floor-to-ceiling wall is interpreted as corresponding to a “real” or “authentic” room wall, whereas the cubicle wall is interpreted as not corresponding to a real/authentic room wall]; See also each semantic label includes a confidence value … (e.g., 0.9 to represent a 90% confidence the semantic label has classified the particular data point correctly), Para. [0102] of TANG; [confidence is interpreted as corresponding to likelihood, which is a type of authenticity value per Applicant’s claim 5]; [Examiner’s interpretation is that each of the height threshold and confidence value can be interpreted to correspond to realness/authenticity values]; See also FIGS.
1-12B of TANG and corresponding description), wherein determining the authenticity comprises determining a likelihood of each pixel of the one or more images (identify semantic labels for pixels of image data, Para. [0101] of TANG) being a part of an edge or a corner by applying a corner likelihood model and an edge likelihood model on the pixel (each semantic label includes a confidence value … (e.g., 0.9 to represent a 90% confidence the semantic label has classified the particular data point correctly), Para. [0102] of TANG; [a confidence is interpreted as corresponding to a likelihood]; See also a floorplan creation process identifies wall structures (e.g., wall edges) based on a 2D representation that encodes 3D semantic data in multiple layers, Paras. [0010] & [0104] of TANG; See also classifying corners and small walls based on the 3D representation using a more computationally intensive neural network, generating a transitional 2D floorplan based on the classified corners and small walls, determining refinements for the transitional 2D floorplan using a standardization algorithm, and generating the final 2D floorplan of the physical environment based on the determined refinements for the transitional 2D floorplan, Para. [0109] of TANG; See also FIGS. 
1-12B of TANG and corresponding description, including discussion of corners); one of including a structure (e.g., wall per Applicant’s claim 2) or excluding the structure (e.g., wall) in a rendering (e.g., floorplan per Applicant’s claim 4) of the first room, based on the authenticity of the element indicated in the three-dimensional polygonal mesh representation, wherein the structure corresponds to a first wall associated with the one of the edge or the corner of the first room (generate 2D representations (e.g., 2D semantic layer 1026) for each 3D semantic layer … generate a height map of the 2D semantic layers … the 2D semantic height map 1028 can be used to determine whether a semantically identified wall is a floor-to-ceiling wall that should be included in the floorplan, or if the semantically identified wall does not reach the height of the ceiling (e.g., a cubicle wall) based on an identified height threshold in comparison to the identified height of the ceiling, then the system (e.g., floorplan unit 1010) can determine to not include that particular wall in the edge map and associated floorplan, Para. [0146] of TANG; See also FIGS. 1-12B of TANG and corresponding description); and displaying a reconstructed floorplan based on the rendering (display a 2D floorplan of a physical environment based on a 3D representation (e.g., a 3D point cloud, a 3D mesh reconstruction, a semantic 3D point cloud, etc.) of the physical environment using one or more of the techniques disclosed herein, Para. [0074] of TANG; See also display 620 that includes the preview 2D floorplan 630, which includes edge map walls 632a, 632b, 632c (e.g., representing walls 134, 130, 132, respectively), boundary 634 a (e.g., representing door 150), boundary 634 b (e.g., representing window 152), bounding box 636 a (e.g., representing table 142), and bounding box 636 b (e.g., representing chair 140), Para. [0115] of TANG). 
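The TANG passage mapped above (Para. [0146]) describes a height-map test: a semantically labeled wall is kept in the floorplan only if it reaches (near) ceiling height, so partial-height partitions such as cubicle walls are dropped. A rough sketch of that kind of test; the function name, data layout, and the 0.9 ratio threshold are invented for illustration and are not taken from TANG:

```python
def include_in_floorplan(wall_height: float, ceiling_height: float,
                         ratio_threshold: float = 0.9) -> bool:
    """Keep a wall only if it spans (nearly) floor to ceiling.

    The 0.9 ratio is a placeholder for TANG's "identified height
    threshold in comparison to the identified height of the ceiling".
    """
    return wall_height >= ratio_threshold * ceiling_height

# Hypothetical walls read off a 2D semantic height map (heights in meters).
ceiling = 2.7
walls = [("632a", 2.7), ("632b", 2.65), ("cubicle", 1.5)]
kept = [name for name, height in walls if include_in_floorplan(height, ceiling)]
# kept -> ["632a", "632b"]; the 1.5 m cubicle wall is excluded
```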
TANG does not appear to explicitly disclose that the “room” is “of a building”. However, TIWARI is in the field of processing of images of rooms of a floor plan (Para. [0010] of TIWARI) and teaches that a room is at least a portion of a building and generating a three-dimensional polygonal mesh representation of at least a portion of a first building (user repeats the step of capturing dimensions of a 360 degree image in at least one additional room (e.g., room B) of the building, Para. [0101] of TIWARI; See also the repeated step: “step provides an initial set of vertices, (dfloor, θ), for a polygon representation of the room geometry in two dimensions (2D)”, Para. [0100] of TIWARI; See also processing of interior building images for each room of the floor plan, Para. [0010] of TIWARI; See also FIGS. 1-8F of TIWARI and corresponding description). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the room scanning-to-floorplan application of TANG with the multiple-room scanning/floorplan application of TIWARI [to arrive at the claimed features] for the purpose of optimally placing security sensors/components within a building (Paras. [0004]-[0010] of TIWARI; Para. [0004]: “maximize the probability of detecting the entry of an intruder into a room or to minimize the time taken to detect an intrusion”; Para. [0011]: “optimal placement for each room is achieved … based on connectivity between rooms”).

Regarding claim 3, TANG as modified by TIWARI teaches the method of claim 1 (as shown above), further comprising: identifying, in the three-dimensional polygonal mesh representation, a second room (user repeats the step of capturing dimensions of a 360 degree image in at least one additional room (e.g., room B) of the building, Para.
[0101] of TIWARI; See also the repeated step: “step provides an initial set of vertices, (dfloor, θ), for a polygon representation of the room geometry in two dimensions (2D)”, Para. [0100] of TIWARI); identifying, in the three-dimensional polygonal mesh representation, a second wall in the second room (characteristics of the semantically rich building floor plan and/or the captured image can be mapped to a computer database to determine information about the building, such as the type of window, wall and/or door, Para. [0169] of TIWARI); and determining that the second wall in the second room and the first wall in the first room are a shared wall (user indicates 302 how the rooms are connected to create the floor plan of the building … for example, the user can indicate which adjacent walls are shared between the first and second rooms … alternately, adjacent rooms can be determined by using the compass readings associated with room corner, Para. [0101] of TIWARI).

Regarding claim 4, TANG as modified by TIWARI teaches the method of claim 1 (as shown above), wherein the rendering of the first room is one of (“one of” is interpreted as requiring only one of the following:) a floorplan of the at least the portion of the first building or a three-dimensional drawing of the at least the portion of the first building (the determinations discussed above in the mapping of claim 1 are related to whether to include or not include (exclude) a wall in a floorplan, Para. [0146] of TANG; See also user may use typical drawing tools to modify the model floor plan and also provide measurement information for different rooms so that the floor plan matches closely with the building, Para. [0096] of TIWARI).
Regarding claim 5, TANG as modified by TIWARI teaches the method of claim 1 (as shown above), wherein determining the authenticity of the element indicated in the three-dimensional polygonal mesh representation further comprises executing a simulation procedure (each semantic label includes a confidence value … (e.g., 0.9 to represent a 90% confidence the semantic label has classified the particular data point correctly), Para. [0102] of TANG; See also a floorplan creation process identifies wall structures (e.g., wall edges) based on a 2D representation that encodes 3D semantic data in multiple layers, Para. [0010] of TANG; See also FIGS. 1-12B of TANG and corresponding description, including discussion of corners; See also simulation techniques in TIWARI (e.g., Para. [0188] of TIWARI)).

Regarding claim 6, TANG as modified by TIWARI teaches the method of claim 1 (as shown above), wherein determining the authenticity of the element indicated in the three-dimensional polygonal mesh representation further comprises executing at least one of (“at least one of” is interpreted as requiring only one of the following:) a learning procedure, an artificial intelligence procedure, or an augmented intelligence procedure (semantic unit 430 uses a machine learning model, where a semantic segmentation model may be configured to identify semantic labels for pixels or voxels of image data … in some implementations, the machine learning model is a neural network (e.g., an artificial neural network), decision tree, support vector machine, Bayesian network, or the like, Para. [0101] of TANG).
Regarding claim 14, TANG discloses a system (abstract: devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data) comprising: a floorplan generating device (abstract: devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data) comprising: a first memory (non-transitory memory, Para. [0043] of TANG) that stores computer-executable instructions (the one or more programs are stored in the non-transitory memory … programs include instructions, Para. [0043] of TANG); and a first processor configured to execute the computer-executable instructions stored in the first memory (a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein, Para. [0043] of TANG) to at least: generate, from at least one or more images, a three-dimensional polygonal mesh representation ([a point cloud representation corresponds to a three-dimensional polygonal mesh representation per Para. [0080] of Applicant’s specification]; See TANG teaches “3D point cloud may be generated based on depth camera information received concurrently with the images”, Para. [0013] of TANG; See also a 3D reconstructed mesh may be generated as the semantic 3D representation 445, Para. [0102] of TANG) of a room (the images are of a room of a physical environment, Para. [0012]; [a room of a physical environment is interpreted as corresponding to a functional unit (at least a portion) of a building, or it is at least obvious that a room is a portion of a building (see TIWARI below)]); See also FIGS. 
1-12B of TANG and corresponding description); convert the three-dimensional polygonal mesh representation to a rendering ([Applicant’s claim 4 indicates that a floorplan corresponds to a rendering (e.g., of a room)]; TANG teaches generate floorplans and measurements using three-dimensional (3D) representations of a physical environment … the 3D representations of the physical environment may be generated based on sensor data, such as image and depth sensor data, Para. [0006] of TANG); identify, in the three-dimensional polygonal mesh representation (e.g., point cloud representation), a first room (3D semantic data may be segmented into a plurality of horizontal layers that are used to identify where the wall edges of the room are located, Para. [0010] of TANG; See also 3D semantic point cloud can then be used to determine specific measurements of the door or window, Para. [0011] of TANG; See also FIGS. 1-12B of TANG and corresponding description); determine an authenticity of an element indicated in the three-dimensional polygonal mesh representation corresponding to the first room (Para. [0146] of TANG, which is discussed further in the mapping of the next claim limitation, includes determining, based on a height threshold, whether a semantically identified wall is a floor-to-ceiling wall or a cubicle wall; [the floor-to-ceiling wall is interpreted as corresponding to a “real” or “authentic” room wall, whereas the cubicle wall is interpreted as not corresponding to a real/authentic room wall]; See also each semantic label includes a confidence value … (e.g., 0.9 to represent a 90% confidence the semantic label has classified the particular data point correctly), Para. [0102] of TANG; [confidence is interpreted as corresponding to likelihood, which is a type of authenticity value per Applicant’s claim 5]; [Examiner’s interpretation is that each of the height threshold and confidence value can be interpreted to correspond to realness/authenticity values]; See also FIGS.
1-12B of TANG and corresponding description), wherein to determine the authenticity comprises determining a likelihood of each pixel of the at least one or more images (identify semantic labels for pixels of image data, Para. [0101] of TANG) being a part of one of an edge or a corner by simulating a corner likelihood function and an edge likelihood function on the pixel (each semantic label includes a confidence value … (e.g., 0.9 to represent a 90% confidence the semantic label has classified the particular data point correctly), Para. [0102] of TANG; [a confidence is interpreted as corresponding to a likelihood]; See also a floorplan creation process identifies wall structures (e.g., wall edges) based on a 2D representation that encodes 3D semantic data in multiple layers, Paras. [0010] & [0104] of TANG; See also classifying corners and small walls based on the 3D representation using a more computationally intensive neural network, generating a transitional 2D floorplan based on the classified corners and small walls, determining refinements for the transitional 2D floorplan using a standardization algorithm, and generating the final 2D floorplan of the physical environment based on the determined refinements for the transitional 2D floorplan, Para. [0109] of TANG; See also FIGS. 
1-12B of TANG and corresponding description, including discussion of corners); one of include a structure (e.g., wall per Applicant’s claim 2) or exclude the structure (e.g., wall) in the rendering (e.g., floorplan per Applicant’s claim 4) of the first room, based on the authenticity of the element indicated in the three-dimensional polygonal mesh representation, wherein the structure corresponds to a first wall associated with the one of the edge or the corner of the first room (generate 2D representations (e.g., 2D semantic layer 1026) for each 3D semantic layer … generate a height map of the 2D semantic layers … the 2D semantic height map 1028 can be used to determine whether a semantically identified wall is a floor-to-ceiling wall that should be included in the floorplan, or if the semantically identified wall does not reach the height of the ceiling (e.g., a cubicle wall) based on an identified height threshold in comparison to the identified height of the ceiling, then the system (e.g., floorplan unit 1010) can determine to not include that particular wall in the edge map and associated floorplan, Para. [0146] of TANG; See also FIGS. 1-12B of TANG and corresponding description); and display a reconstructed floorplan based on the rendering (display a 2D floorplan of a physical environment based on a 3D representation (e.g., a 3D point cloud, a 3D mesh reconstruction, a semantic 3D point cloud, etc.) of the physical environment using one or more of the techniques disclosed herein, Para. [0074] of TANG; See also display 620 that includes the preview 2D floorplan 630, which includes edge map walls 632a, 632b, 632c (e.g., representing walls 134, 130, 132, respectively), boundary 634 a (e.g., representing door 150), boundary 634 b (e.g., representing window 152), bounding box 636 a (e.g., representing table 142), and bounding box 636 b (e.g., representing chair 140), Para. [0115] of TANG). TANG does not explicitly disclose that the “room” is “of a first building”. 
However, TIWARI is in the field of processing of images of rooms of a floor plan (Para. [0010] of TIWARI) and teaches that a room is at least a portion of a building and generate a three-dimensional polygonal mesh representation of at least a portion of a first building (user repeats the step of capturing dimensions of a 360 degree image in at least one additional room (e.g., room B) of the building, Para. [0101] of TIWARI; See also the repeated step: “step provides an initial set of vertices, (dfloor, θ), for a polygon representation of the room geometry in two dimensions (2D)”, Para. [0100] of TIWARI; See also processing of interior building images for each room of the floor plan, Para. [0010] of TIWARI; See also FIGS. 1-8F of TIWARI and corresponding description). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the room scanning-to-floorplan application of TANG with the multiple-room scanning/floorplan application of TIWARI [to arrive at the claimed features] for the purpose of optimally placing security sensors/components within a building (Paras. [0004]-[0010] of TIWARI; Para. [0004]: “maximize the probability of detecting the entry of an intruder into a room or to minimize the time taken to detect an intrusion”; Para. [0011]: “optimal placement for each room is achieved … based on connectivity between rooms”).

Regarding claim 16, TANG as modified teaches the system of claim 14, wherein the floorplan generating device is one of a personal device or a cloud computer (the server 110 is a remote server located outside of the physical environment 105 (e.g., a cloud server, central server, etc.), Para. [0061] of TANG; See also the operating environment 100 includes a server 110 and a device 120 … in an exemplary implementation, the operating environment 100 does not include a server 110, and the methods described herein are performed on the device 120, Para.
[0060] of TANG), and wherein the computer-executable instructions are included in a downloadable software application (the term “software” is meant to be synonymous with any code or program that can be in a processor of a host computer, regardless of whether the implementation is in hardware, firmware or as a software computer product available on a disc, a memory storage device, or for download from a remote machine, Para. [0084] of TIWARI).

Regarding claim 17, TANG as modified teaches the system of claim 16, wherein the downloadable software application (the term “software” is meant to be synonymous with any code or program that can be in a processor of a host computer, regardless of whether the implementation is in hardware, firmware or as a software computer product available on a disc, a memory storage device, or for download from a remote machine, Para. [0084] of TIWARI; See also the applications are configured to manage the user experience, Paras. [0066]-[0067] of TANG, which is owned by Apple; [the smartphone applications in Apple’s TANG reference are interpreted as being downloadable]), when executed, implements at least one of (“at least one of” is interpreted as requiring only one of the following:) a simulation procedure, a learning procedure, an artificial intelligence procedure, or an augmented intelligence procedure (simulation techniques, Paras. [0188], [0227] & [0295] of TIWARI; See also machine learning model is a neural network (e.g., an artificial neural network), decision tree, support vector machine, Bayesian network, or the like, Para. [0101] of TANG).

Regarding claim 18, TANG as modified teaches the system of claim 14, wherein the floorplan generating device is a cloud computer (the server 110 is a remote server located outside of the physical environment 105 (e.g., a cloud server, central server, etc.), Para.
[0061] of TANG), and wherein the system further comprises: a personal device (an electronic device having a processor (e.g., a smart phone), Para. [0012] of TANG) comprising: a second memory that stores additional computer-executable instructions (FIG. 3 shows the device 120, which includes the memory 320, which stores the following programs, modules and data structures, or a subset thereof including an optional operating system 330 and one or more applications 340, Para. [0083] of TANG); and a second processor configured to execute the additional computer-executable instructions stored in the second memory (device 120 includes one or more processing units 302 (e.g., microprocessors), Para. [0079]; See also FIG. 3 shows connection of processing units 302 and memory 320) to at least: capture a first image of the first room in the at least the portion of the first building (exemplary method first involves displaying, at an electronic device having a processor (e.g., a smart phone), a live camera image feed (e.g., live video) comprising a sequence of images of a physical environment … for example, as a user captures video while walking around a room to capture images of different parts of the room from multiple perspectives, these images are displayed live on a mobile device so that the user sees what he/she is capturing, Para. [0012] of TANG); capture a second image of a second room in the at least the portion of the first building (see sequence of images captured as discussed above with reference to Para. [0012] of TANG; See also provides a floorplan that includes 2D top-down view of a room(s) based on separately identifying wall structures (wall edges, door, & windows), Para. [0117] of TANG; See also user repeats the step of capturing dimensions of a 360 degree image in at least one additional room (e.g., room B) of the building, Para. 
[0101] of TIWARI); generate, based in part on the first image and the second image, the three-dimensional polygonal mesh representation (the repeated step: “step provides an initial set of vertices, (dfloor, θ), for a polygon representation of the room geometry in two dimensions (2D)”, Para. [0100] of TIWARI; See also processing of interior building images for each room of the floor plan, Para. [0010] of TIWARI; See also FIGS. 1-8F of TIWARI and corresponding description; See also TANG teaches “3D point cloud may be generated based on depth camera information received concurrently with the images”, Para. [0013] of TANG; See also a 3D reconstructed mesh may be generated as the semantic 3D representation 445, Para. [0102] of TANG); and upload the three-dimensional polygonal mesh representation to the cloud computer (the server 110 is a remote server located outside of the physical environment 105 (e.g., a cloud server, central server, etc.), Para. [0061] of TANG; See also FIGS. 1-12B of TANG and corresponding description; See also server 114a along with the database may take the form of a web application hosted on a webserver or a cloud computing platform, located at a different location, which is accessible by various customer entities to perform different tasks across the three process stages, Para. [0089] of TIWARI), for generating a floorplan of the at least the portion of the first building (this feature is mapped/addressed in claim 14 above).

Regarding claim 19, TANG as modified teaches the system of claim 14, wherein the structure is a wall of the first room (the floor-to-ceiling wall from Para.
[0146] of TANG was interpreted as the authentic structure/wall in the mapping of claim 14 above) and wherein the first processor is further configured to execute the computer-executable instructions stored in the first memory to at least: identify a second room in the at least the portion of the first building based on evaluating the three-dimensional polygonal mesh representation (user repeats the step of capturing dimensions of a 360 degree image in at least one additional room (e.g., room B) of the building, Para. [0101] of TIWARI; See also the repeated step: “step provides an initial set of vertices, (dfloor, θ), for a polygon representation of the room geometry in two dimensions (2D), Para. [0100] of TIWARI; See also characteristics of the semantically rich building floor plan and/or the captured image can be mapped to a computer database to determine information about the building, such as the type of window, wall and/or door, Para. [0169] of TIWARI); and determine that the wall of the first room is a shared wall that is shared between the first room and the second room (user indicates 302 how the rooms are connected to create the floor plan of the building … for example, the user can indicate which adjacent walls are shared between the first and second rooms … alternately, adjacent rooms can be determined by using the compass readings associated with room corner, Para. [0101] of TIWARI). Regarding claim 22, TANG as modified teaches the method of claim 1, further comprising generating, based on the determining of the authenticity, a likelihood diagram that illustrates the likelihood of one of various corners or various edges present in a layout (each semantic label includes a confidence value. 
For example, a particular point may be labeled as an object (e.g., table), and the data point would include x,y,z coordinates and a confidence value as a decimal value (e.g., 0.9 to represent a 90% confidence the semantic label has classified the particular data point correctly). In some implementations, a 3D reconstructed mesh may be generated as the semantic 3D representation 445, Para. [0102] of TANG). Claims 7 and 9-12 are rejected under 35 U.S.C. § 103 as being unpatentable over TANG et al. (U.S. Patent Application Publication No. 2021/0225090, hereinafter TANG), in view of SAMSON et al. (U.S. Patent Application Publication No. 2015/0324940, hereinafter SAMSON). Regarding claim 7, TANG discloses a method (abstract: devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data) executed by a processor (a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein, Para. [0043] of TANG), the method comprising: Generating a three-dimensional polygonal mesh representation ([a point cloud representation corresponds to a three-dimensional polygonal mesh representation per Para. [0080] of Applicant’s specification]; See TANG teaches “3D point cloud may be generated based on depth camera information received concurrently with the images”, Para. [0013] of TANG; See also a 3D reconstructed mesh may be generated as the semantic 3D representation 445, Para. [0102] of TANG) of a room (the images are of a room of a physical environment, Para. 
[0012]; [a room of a physical environment is interpreted as corresponding to a functional unit (at least a portion) of a building, or it is at least obvious that a room is a portion of a building (see SAMSON below)]; See also FIGS. 1-12B of TANG and corresponding description); generating a reconstructed floorplan by operating upon the three-dimensional polygonal mesh representation (generating a final 2D floorplan of the physical environment based on the 3D representation, wherein generating the final 2D floorplan uses a different process than generating the live preview of the preliminary 2D floorplan … for example, the different process uses a more computationally intensive neural network with fine-tuning (e.g., corner correction), Para. [0109] of TANG); evaluating the three-dimensional polygonal mesh representation to determine ([Examiner notes that “to determine” is not positively recited and the “determine” limitation is not clearly required by the claim]) an authenticity of an element included in the three-dimensional polygonal mesh representation (Para. [0146] of TANG, which is discussed more in the immediately following claim limitation mapping, includes determining, based on a height threshold, whether a semantically identified wall is a floor-to-ceiling wall or a cubicle wall; [the floor-to-ceiling wall is interpreted as corresponding to a “real” or “authentic” room wall, whereas the cubicle wall is interpreted as not corresponding to a real/authentic room wall]; See also each semantic label includes a confidence value … (e.g., 0.9 to represent a 90% confidence the semantic label has classified the particular data point correctly), Para. [0102] of TANG; [confidence is interpreted as corresponding to likelihood, which is a type of authenticity value per Applicant’s claim 5]; [Examiner’s interpretation is that each of the height threshold and confidence value can be interpreted to correspond to realness/authenticity values]; See also FIGS.
1-12B of TANG and corresponding description), wherein the evaluating comprises determining a likelihood of each pixel of the one or more images (identify semantic labels for pixels of image data, Para. [0101] of TANG) being a part of an edge or a corner by applying a corner likelihood model and an edge likelihood model on the pixel (each semantic label includes a confidence value … (e.g., 0.9 to represent a 90% confidence the semantic label has classified the particular data point correctly), Para. [0102] of TANG; [a confidence is interpreted as corresponding to a likelihood]; See also a floorplan creation process identifies wall structures (e.g., wall edges) based on a 2D representation that encodes 3D semantic data in multiple layers, Paras. [0010] & [0104] of TANG; See also classifying corners and small walls based on the 3D representation using a more computationally intensive neural network, generating a transitional 2D floorplan based on the classified corners and small walls, determining refinements for the transitional 2D floorplan using a standardization algorithm, and generating the final 2D floorplan of the physical environment based on the determined refinements for the transitional 2D floorplan, Para. [0109] of TANG; See also FIGS. 
1-12B of TANG and corresponding description, including discussion of corners); and excluding a structure in the reconstructed floorplan, based on determining a lack of authenticity of the element included in the three-dimensional polygonal mesh representation (generate 2D representations (e.g., 2D semantic layer 1026) for each 3D semantic layer … generate a height map of the 2D semantic layers … the 2D semantic height map 1028 can be used to determine whether a semantically identified wall is a floor-to-ceiling wall that should be included in the floorplan, or if the semantically identified wall does not reach the height of the ceiling (e.g., a cubicle wall) based on an identified height threshold in comparison to the identified height of the ceiling, then the system (e.g., floorplan unit 1010) can determine to not include that particular wall in the edge map and associated floorplan, Para. [0146] of TANG; See also FIGS. 1-12B of TANG and corresponding description); refining the reconstructed floorplan (determining refinements for the transitional 2D floorplan using a standardization algorithm, Para. [0109] of TANG); and outputting a rendered floorplan based on refining the reconstructed floorplan (display a 2D floorplan of a physical environment based on a 3D representation (e.g., a 3D point cloud, a 3D mesh reconstruction, a semantic 3D point cloud, etc.) of the physical environment using one or more of the techniques disclosed herein, Para. [0074] of TANG; See also display 620 that includes the preview 2D floorplan 630, which includes edge map walls 632a, 632b, 632c (e.g., representing walls 134, 130, 132, respectively), boundary 634 a (e.g., representing door 150), boundary 634 b (e.g., representing window 152), bounding box 636 a (e.g., representing table 142), and bounding box 636 b (e.g., representing chair 140), Para. 
[0115] of TANG; See also generating the final 2D floorplan of the physical environment based on the determined refinements for the transitional 2D floorplan, Para. [0109] of TANG). TANG does not appear to explicitly disclose that the “room” is “of a building” and does not explicitly disclose the refining comprising comparing the reconstructed floorplan to a reference floorplan of at least a portion of a second building. However, SAMSON is in the field of 3D floor/house plan design (Paras. [0002]-[0004] of SAMSON) and teaches that a room is at least a portion of a building/home (home specification includes number of rooms, Para. [0056] of SAMSON) and the refining comprising comparing the reconstructed floorplan to a reference floorplan of at least a portion of a second building (user can quickly select a different collection 404, home model list 408, and ultimately different home model 410 for view … based upon this displayed information, the user has quick access to a number of available home plans for comparison purposes … once a user has selected a desired home model to customize, the user can proceed by selecting a “Configure & Price” button 418, Para. [0056] of SAMSON; See also FIGS. 1-7B of SAMSON and corresponding description). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the room scanning-to-floorplan application of TANG with the floorplan comparison tool of SAMSON [to arrive at the claimed features] for the purpose of providing transparent pricing options of buildings/homes to an end user (Paras. [0003]-[0007] of SAMSON). 
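The height-threshold test that this mapping draws from TANG's Para. [0146] reduces to a simple filter: a semantically identified wall is kept in the floorplan only if it reaches (near) ceiling height, and shorter partitions such as cubicle walls are excluded. A minimal sketch under that reading; the function name, the wall records, and the 0.9 ratio are illustrative assumptions, not taken from TANG:

```python
# Hypothetical sketch of the height-threshold wall test described in
# TANG Para. [0146]: a semantically identified wall is included in the
# floorplan edge map only if its height reaches a threshold fraction of
# the ceiling height; shorter partitions (e.g., cubicle walls) are not.
# All names and the 0.9 ratio are illustrative assumptions.

def filter_walls(walls, ceiling_height, ratio=0.9):
    """Keep only (near) floor-to-ceiling walls for the floorplan."""
    threshold = ratio * ceiling_height
    return [w for w in walls if w["height"] >= threshold]

walls = [
    {"id": "wall_a", "height": 2.7},   # floor-to-ceiling wall
    {"id": "cubicle", "height": 1.5},  # partition, excluded from floorplan
]
kept = filter_walls(walls, ceiling_height=2.7)
# only "wall_a" survives the filter
```

The same shape of test would apply whether heights come from a 2D semantic height map, as in TANG, or from any other per-wall height estimate.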
Regarding claim 9, TANG as modified by SAMSON teaches the method of claim 7 (as shown above), wherein refining the reconstructed floorplan comprises executing at least one of (“at least one of” is interpreted as requiring only one of the following:) a simulation procedure, a learning procedure, an artificial intelligence procedure, or an augmented intelligence procedure (machine learning model is a neural network (e.g., an artificial neural network), decision tree, support vector machine, Bayesian network, or the like, Para. [0101] of TANG; See also generating a final 2D floorplan of the physical environment based on the 3D representation, wherein generating the final 2D floorplan uses a different process than generating the live preview of the preliminary 2D floorplan … for example, the different process uses a more computationally intensive neural network with fine-tuning (e.g., corner correction), generating a transitional 2D floorplan based on the classified corners and small walls, determining refinements for the transitional 2D floorplan using a standardization algorithm, and generating the final 2D floorplan of the physical environment based on the determined refinements for the transitional 2D floorplan, Para. [0109] of TANG). 
Regarding claim 10, TANG as modified by SAMSON teaches the method of claim 7 (as shown above), wherein refining the reconstructed floorplan further comprises: executing a manual interactive procedure that includes at least one of (“at least one of” is interpreted as requiring only one of the following:) eliminating an object present in the reconstructed floorplan, modifying a first measurement in the reconstructed floorplan, and introducing a second measurement into the reconstructed floorplan (the central or primary display 326A could show a virtual real-time 3D model and/or 2D floor plan of the house being designed by the customer, while the flanking left and right displays 326B, 326C could show different customizable/swappable options and associated prices, dimensions, etc., Para. [0049] of SAMSON; See also FIGS. 1-7B of SAMSON and corresponding description). Regarding claim 12, TANG as modified by SAMSON teaches the method of claim 7 (as shown above), wherein the element included in the three-dimensional polygonal mesh representation is one of an edge or a corner, and wherein the structure is a first wall in a first room (providing the floorplan further includes generating the edge map by identifying walls in the physical environment based on the 2D semantic data for multiple horizontal layers, updating the edge map by identifying wall attributes (e.g., doors and windows) in the physical environment based on the 3D semantic data, updating the edge map by identifying objects in the physical environment based on the 3D semantic data, and generating the floorplan based on the updated edge map that includes the identified walls, identified wall attributes, and identified objects, Para. [0030] of TANG; See also “wall edges”, “wall structures” and boundary/“boundaries of a wall attribute”, Paras. 
[0104], [0107], [0117], [0148], [0150], [0152] & [0154] of TANG; [Examiner is interpreting edges/lines/boundaries of a 2D floorplan to correspond to a wall, e.g., of a room in a building]), and wherein the structure is a first wall in a first room (the floor-to-ceiling wall from Para. [0146] of TANG was interpreted as the authentic structure/wall in the mapping of claim 1 above). Claim 13 is rejected under 35 U.S.C. § 103 as being unpatentable over TANG et al. (U.S. Patent Application Publication No. 2021/0225090, hereinafter TANG), in view of SAMSON et al. (U.S. Patent Application Publication No. 2015/0324940, hereinafter SAMSON), and further in view of TIWARI et al. (U.S. Patent Application Publication No. 2018/0121571, hereinafter TIWARI). Regarding claim 13, TANG as modified by SAMSON teaches the method of claim 12 (as shown above) but appears to fail to explicitly disclose evaluating the three-dimensional polygonal mesh representation to identify a second room in the at least the portion of the first building; identifying a second wall in the second room; and determining that the second wall in the second room is same as the first wall in the first room. However, TIWARI is in the field of processing of images of rooms of a floor plan (Para. [0010] of TIWARI) and teaches evaluating the three-dimensional polygonal mesh representation to identify a second room in the at least the portion of the first building (user repeats the step of capturing dimensions of a 360 degree image in at least one additional room (e.g., room B) of the building, Para. [0101] of TIWARI; See also the repeated step: “step provides an initial set of vertices, (dfloor, θ), for a polygon representation of the room geometry in two dimensions (2D), Para. 
[0100] of TIWARI); identifying a second wall in the second room (characteristics of the semantically rich building floor plan and/or the captured image can be mapped to a computer database to determine information about the building, such as the type of window, wall and/or door, Para. [0169] of TIWARI); and determining that the second wall in the second room is the same as the first wall in the first room (user indicates 302 how the rooms are connected to create the floor plan of the building … for example, the user can indicate which adjacent walls are shared between the first and second rooms … alternately, adjacent rooms can be determined by using the compass readings associated with room corner, Para. [0101] of TIWARI). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the room scanning-to-floorplan application of TANG (as modified by SAMSON) with the multiple room scanning/floorplan application of TIWARI [to arrive at the claimed features] for the purpose of optimally placing security sensors/components within a building (Paras. [0004]-[0010] of TIWARI: Para. [0004]: “maximize the probability of detecting the entry of an intruder into a room or to minimize the time taken to detect an intrusion”, Para. [0011]: “optimal placement for each room is achieved … based on connectivity between rooms”). Claims 15 and 21 are rejected under 35 U.S.C. § 103 as being unpatentable over TANG et al. (U.S. Patent Application Publication No. 2021/0225090, hereinafter TANG), in view of TIWARI et al. (U.S. Patent Application Publication No. 2018/0121571, hereinafter TIWARI), and further in view of PHALAK (U.S. Patent Application Publication No. 2021/0279950, hereinafter PHALAK).
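The shared-wall determination cited above from TIWARI Para. [0101] (deciding that a wall captured while scanning one room is the same physical wall captured in an adjacent room) can be sketched as a segment comparison once each room's walls are reduced to 2D polygon edges; the segment representation and tolerance below are illustrative assumptions, not TIWARI's method:

```python
# Hypothetical sketch of a shared-wall check in the spirit of TIWARI
# Para. [0101]: two rooms share a wall when a wall segment of one room
# coincides (within tolerance) with a wall segment of the other,
# regardless of traversal direction. Representation and tolerance are
# illustrative assumptions.

def same_wall(seg_a, seg_b, tol=0.05):
    """Walls are (x1, y1, x2, y2) segments; match within tol, either direction."""
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = seg_a, seg_b
    close = lambda p, q: abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol
    forward = close((ax1, ay1), (bx1, by1)) and close((ax2, ay2), (bx2, by2))
    reverse = close((ax1, ay1), (bx2, by2)) and close((ax2, ay2), (bx1, by1))
    return forward or reverse

# Room A's east wall and room B's west wall, captured in separate scans:
room_a_wall = (3.0, 0.0, 3.0, 4.0)
room_b_wall = (3.02, 4.0, 3.0, 0.0)  # same wall, opposite traversal
assert same_wall(room_a_wall, room_b_wall)
```

TIWARI's alternatives (user indication, or compass readings at room corners) would replace or seed this geometric check rather than change its output.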
Regarding claim 15, TANG as modified by TIWARI teaches the system of claim 14 (as shown above), wherein the at least the portion of the first building is a floor of one of a single-story building or a multi-storied building (number of floors, Para. [0096] of TIWARI; [a number of floors is interpreted as corresponding to the first building including a floor of one of a single-story building or a multi-storied building]) but appears to fail to explicitly disclose wherein the three-dimensional polygonal mesh representation comprises a Manhattan style configuration and a non-Manhattan style configuration. PHALAK, however, is in the field of room layout or floorplan estimation from an image (Para. [0005] of PHALAK) and teaches wherein the three-dimensional polygonal mesh representation (generate a 3D point cloud with two room labels and a wall label for each point, Para. [0177] of PHALAK) comprises a Manhattan style configuration and a non-Manhattan style configuration (Manhattan-style room shapes, Para. [0156] of PHALAK; See also non-Manhattan layouts, Para. [0170] of PHALAK). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the room scanning-to-floorplan application of TANG (as modified by TIWARI) with the Manhattan style configuration of PHALAK [to arrive at the claimed features] for the purpose of efficiently generating a floorplan from scans of indoor scenes (Para. [0014] of PHALAK). Regarding claim 21, TANG as modified by TIWARI teaches the method of claim 1 (as shown above) but appears to fail to explicitly disclose wherein: the one or more images are captured by an electronic device, the electronic device comprising a camera and a LiDAR device, and the pixels of the one or more images include metadata associated with parameters comprising distance metadata, wherein the distance metadata is generated by the LiDAR device. 
PHALAK, however, is in the field of room layout or floorplan estimation from an image (Para. [0005] of PHALAK) and teaches wherein: the one or more images are captured by an electronic device, the electronic device comprising a camera and a LiDAR device, and the pixels of the one or more images include metadata associated with parameters comprising distance metadata, wherein the distance metadata is generated by the LiDAR device (as LIDAR scanners and depth cameras become more affordable and widely used for robotics applications, 3D-videos became readily-available sources of input for robotics systems or AR/VR applications, Para. [0279] of PHALAK). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the room scanning-to-floorplan application of TANG (as modified by TIWARI) with the LiDAR-based image capture of PHALAK [to arrive at the claimed features] for the purpose of efficiently generating a floorplan from scans of indoor scenes (Para. [0014] of PHALAK). Claim 20 is rejected under 35 U.S.C. § 103 as being unpatentable over TANG et al. (U.S. Patent Application Publication No. 2021/0225090, hereinafter TANG), in view of TIWARI et al. (U.S. Patent Application Publication No. 2018/0121571, hereinafter TIWARI), and further in view of MOULON et al. (U.S. Patent Application Publication No. 2021/0125397, hereinafter MOULON). Regarding claim 20, TANG as modified by TIWARI teaches the system of claim 14 (as shown above), wherein the structure is a wall of the first room (the floor-to-ceiling wall from Para. [0146] of TANG was interpreted as the authentic structure/wall in the mapping of claim 14 above) and wherein the first processor is further configured to execute the computer-executable instructions stored in the first memory to at least: determine the authenticity of the element indicated in the three-dimensional polygonal mesh representation (Para.
[0146] of TANG, which is discussed more in the immediately following claim limitation mapping, includes determining, based on a height threshold, whether a semantically identified wall is a floor-to-ceiling wall or a cubicle wall; [the floor-to-ceiling wall is interpreted as corresponding to a “real” or “authentic” room wall, whereas the cubicle wall is interpreted as not corresponding to a real/authentic room wall]; See also each semantic label includes a confidence value … (e.g., 0.9 to represent a 90% confidence the semantic label has classified the particular data point correctly), Para. [0102] of TANG; [confidence is interpreted as corresponding to likelihood, which is a type of authenticity value per Applicant’s claim 5]; [Examiner’s interpretation is that each of the height threshold and confidence value can be interpreted to correspond to realness/authenticity values]; See also FIGS. 1-12B of TANG and corresponding description) but fails to explicitly disclose determine the authenticity of the element indicated in the three-dimensional polygonal mesh representation based on comparing a reconstructed floorplan of the at least the portion of the first building to a reference floorplan of at least a portion of a second building. However, MOULON is in the field of automatically generating mapping information from video (Para. [0002] of MOULON) and teaches determine the authenticity of an element indicated in the three-dimensional polygonal mesh representation (generating a corresponding floor map for the building, Para. [0005] of MOULON; See also form hypotheses of likely wall locations, Para. [0013] of MOULON; Regarding three-dimensional polygonal mesh representation see 3D point cloud, Para. [0013] of MOULON) based on comparing a reconstructed floorplan of the at least the portion of the first building to a reference floorplan of at least a portion of a second building (FIG.
2N illustrates a modified floor map 230n that includes additional information of various types, Para. [0040]; [modified floor map is interpreted as corresponding to a reconstructed floorplan]; See also form hypotheses of likely wall locations … as part of doing so, machine learning techniques may be used in at least some embodiments to predict which aggregated plane/normal information corresponds to flat walls, such as based on prior training, Para. [0013] of MOULON; See also system provided by application 155 executing on one or more mobile visual data acquisition devices 185, such as with respect to one or more buildings or other structures, Para. [0020] of MOULON; [because the machine learning is applied to more than one building it is interpreted as corresponding to comparing the modified floorplan to a reference floorplan]). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the room scanning-to-floorplan application of TANG with the scanning-to-floorplan application of MOULON [to arrive at the claimed features] for the purpose of automatically generating a floor map of a building from video captured in the building’s interior (Para. [0002] of MOULON). Claim 23 is rejected under 35 U.S.C. § 103 as being unpatentable over TANG et al. (U.S. Patent Application Publication No. 2021/0225090), in view of TIWARI et al. (U.S. Patent Application Publication No. 2018/0121571), and further in view of LI et al. (U.S. Patent Application Publication No. 2021/0287430). Regarding claim 23, TANG as modified by TIWARI teaches the method of claim 1 (as shown above) and generating, based on the rendering, a top view mean normal rendering and a top view projection rendering of the first room (a floorplan includes a 2D top-down view of a room, Para.
[0007] of TANG; [Applicant’s claim 4 indicates that a floorplan corresponds to a rendering (e.g., of a room)]; See also TANG teaches generate floorplans and measurements using three-dimensional (3D) representations of a physical environment … the 3D representations of the physical environment may be generated based on sensor data, such as image and depth sensor data, Para. [0006] of TANG; See also walls may be identified by generating 2D semantic data (e.g., in layers), using the 2D semantic data to generate an edge map using a neural network, and determining vector parameters to standardize the edge map in a 3D normalized plan, Para. [0021] of TANG; [a 3D normalized plan is interpreted as corresponding to a mean normal rendering/plan]; See also generating 2D representations (e.g., furniture icons or flat 2D bounding boxes) of the 3D bounding boxes, Para. [0018] of TANG; [generating 2D representations of the 3D bounding boxes is interpreted as corresponding to projecting because it transforms 3D coordinates into a 2D plane]) but appears to fail to disclose: comparing at least one of the top view mean normal rendering or the top view projection rendering of the first room to one or more template renderings; and determining, based on the comparing, an authenticity of an existence of the structure in the first room. However, LI is in the field of reconstructing three-dimensional meshes of objects from two-dimensional images (Para. [0002] of LI) and teaches comparing at least one of the top view mean normal rendering or the top view projection rendering (similarity between an individual instance mesh and the exemplar mesh may be measured by computing the IoU between their rendered silhouettes, Para. 
[0143] of LI); and determining, based on the comparing, an authenticity of an existence of the structure in the first room (a subset of reconstructed meshes whose viewpoints roughly match may be selected … to do so, from the meshes reconstructed for all the training images, the instance with the most reliable reconstruction results, such as the instance whose rendered silhouette has the largest intersection over union (IoU) with its corresponding ground truth silhouette, may be chosen as an exemplar, Para. [0143] of LI). It would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify the room scanning-to-floorplan application of TANG (as modified by TIWARI) with the reliable reconstruction application of LI [to arrive at the claimed features] for the purpose of recovering 3D shapes, textures, and camera pose from a 2D image of an object (Para. [0003] of LI). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
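For reference, the silhouette similarity measure that the claim 23 rejection cites from LI Para. [0143] is the standard intersection-over-union (IoU) of two binary masks. A minimal sketch with illustrative toy masks (the masks themselves are made-up examples, not from LI):

```python
# Minimal sketch of the silhouette IoU measure cited from LI Para.
# [0143]: similarity between two rendered silhouettes is the
# intersection over union of their binary masks. The 2x3 masks below
# are illustrative toy data.

def silhouette_iou(mask_a, mask_b):
    """IoU of two equal-sized binary masks (lists of 0/1 rows)."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a & b
            union += a | b
    return inter / union if union else 0.0

a = [[1, 1, 0],
     [1, 1, 0]]
b = [[0, 1, 1],
     [0, 1, 1]]
# intersection = 2, union = 6, so IoU = 1/3
assert abs(silhouette_iou(a, b) - 1/3) < 1e-9
```

In LI's usage, a high IoU against a ground-truth silhouette marks a reconstruction as reliable, which is what the examiner maps to "authenticity" of a structure.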
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN P HOCKER whose telephone number is (571)272-0501. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rehana Perveen can be reached on (571)272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. JOHN P. HOCKER Examiner Art Unit 2189 /JOHN P HOCKER/Examiner, Art Unit 2189 /REHANA PERVEEN/Supervisory Patent Examiner, Art Unit 2189

Prosecution Timeline

Dec 29, 2021
Application Filed
Aug 07, 2025
Non-Final Rejection — §103, §112, §DP
Nov 17, 2025
Interview Requested
Nov 24, 2025
Applicant Interview (Telephonic)
Nov 24, 2025
Examiner Interview Summary
Dec 12, 2025
Response Filed
Apr 03, 2026
Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601250
MONITORING A WELL BARRIER
2y 5m to grant Granted Apr 14, 2026
Patent 12530512
CIRCUIT SIMULATION BASED ON AN RTL COMPONENT IN COMBINATION WITH BEHAVIORAL COMPONENTS
2y 5m to grant Granted Jan 20, 2026
Patent 12505124
METHOD AND SYSTEM FOR CREATING A RULE FOR A BUSINESS FLOW DIAGRAM
2y 5m to grant Granted Dec 23, 2025
Patent 12487797
SMART PROGRAMMING METHOD FOR INTEGRATED CNC-ROBOT
2y 5m to grant Granted Dec 02, 2025
Patent 8515929
Online Propagation of Data Updates
2y 5m to grant Granted Aug 20, 2013
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
58%
Grant Probability
87%
With Interview (+29.7%)
3y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 146 resolved cases by this examiner. Grant probability derived from career allow rate.
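As a sanity check, the headline projections above can be reproduced from the stated career data (84 grants out of 146 resolved cases, and a +29.7-point interview lift). The arithmetic below assumes the lift is simply added to the base rate, which is an illustrative simplification of however the product actually models it:

```python
# Reproducing the projection figures from the examiner's career data.
# Assumption: the with-interview figure is the base rate plus the
# reported +29.7-point lift (a simplification for illustration).

granted, resolved = 84, 146
base = granted / resolved                # career allow rate, ~57.5%
interview_lift = 0.297                   # reported +29.7-point lift
with_interview = base + interview_lift   # ~87.2%

assert round(base * 100) == 58           # matches the 58% shown
assert round(with_interview * 100) == 87 # matches the 87% shown
```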
