Prosecution Insights
Last updated: April 19, 2026
Application No. 18/202,043

Multi-Room 3D Floor Plan Generation

Non-Final OA §103
Filed: May 25, 2023
Examiner: CLOTHIER, MATTHEW MORRIS
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 3 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 3-4
To Grant: 1y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (above average; 3 granted / 3 resolved; +38.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; resolved cases with interview)
Avg Prosecution: 1y 11m (fast prosecutor)
Total Applications: 32 across all art units (29 currently pending)

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 65.2% (+25.2% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Tech Center average is an estimate • Based on career data from 3 resolved cases
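The four "vs TC avg" deltas above are mutually consistent with a single flat baseline: a 40% Tech Center estimate reproduces every displayed figure. A minimal sketch of that arithmetic, assuming the percentages are this examiner's rejection mix by statute and that the baseline is the inferred 40% (neither assumption is stated on the page):

```python
# Hypothetical reconstruction of the "vs TC avg" deltas shown above.
# Assumption: each figure is the share of this examiner's rejections citing
# that statute; a flat 40% Tech Center baseline reproduces every delta.
examiner_mix = {"§101": 6.1, "§103": 65.2, "§102": 21.2, "§112": 6.1}
tc_baseline = 40.0  # inferred from the deltas, not stated on the page

for statute, share in examiner_mix.items():
    print(f"{statute}: {share:.1f}% ({share - tc_baseline:+.1f}% vs TC avg)")
```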

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/2/2026 has been entered.

Response to Amendment

2. This action is in response to the amendment filed on 3/2/2026. Claims 1, 10, 12, 16, and 20 have been amended. Claims 9 and 11 have been cancelled. Claims 21-23 have been added. Claims 1-8, 10, and 12-23 are pending in the application.

Response to Arguments

3. Applicant’s arguments filed on 3/2/2026 with respect to claim 1, and similarly claims 16 and 20, regarding the rejection under 35 U.S.C. 103, specifically that the prior art does not teach the limitation(s): “determining a 3D positional relationship between the first 3D floor plan and the second 3D floor plan based on: displaying a first layout representing the first scan and second layout representing the second scan in a user interface; receiving input providing a first relative positioning between the first layout and the second layout; automatically determining a second positioning between the first layout and the second layout based on the first relative positioning, the second positioning aligning a boundary of the first layout with a boundary of the second layout; and determining the 3D positional relationship based on the second positioning;” have been fully considered, but are moot because of new grounds of rejection. Claim 1, and similarly claims 16 and 20, are now rejected as being unpatentable over Cier in view of Osokin, and further in view of AppLearning.

4. Regarding the arguments directed to claims 2-8, 10, 12-15, and 17-19, those claims depend from independent claims 1 and 16, respectively. Applicant does not separately argue any claim other than independent claim 1, and similarly claims 16 and 20. The limitations of the dependent claims, in combination with the cited references, were previously established as explained below.

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-4, 13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cier et al. (US-11252329-B1, hereinafter "Cier") in view of Osokin et al. (US-2024/0312163-A1, hereinafter "Osokin"), and further in view of AppLearning (NPL: "Blender 3.0 3D Architecture 2- Attach a Room Using the Snap Tool.", https://www.youtube.com/watch?v=z5kP1yTzLtA, Jan. 20, 2022.). (Examiner’s note: A screenshot printout with timecodes and captions has been provided for the AppLearning YouTube video.)

7.
As per claim 1, Cier discloses: A method comprising: at a device having a processor: (Cier, column 37, lines 22-25, “Thus, in some embodiments, some or all of the described techniques may be performed by hardware means that include one or more processors ...”) generating a first three-dimensional (3D) floor plan of a first room (Cier, column 3, lines 33-44, “In at least some embodiments, the determined position for the mobile computing device is based at least in part on performing a SLAM (Simultaneous Localization And Mapping) ... including in at least some such embodiments to use the additional data captured by the mobile computing device to generate an estimated three-dimensional (“3D”) shape of the enclosing room” and Claim 1, “... generating, by the mobile computing device, and using the 3D first and second shapes of the first and second rooms and the determined positions of the first and second panorama images, a partial floor plan for the building, including using the movement data of the mobile computing device to determine relative positions of the 3D first and second shapes of the first and second rooms; …”) of a multi-room environment based on a first 3D representation of a boundary feature [[and an object]] of the first room, (Cier, column 22, lines 10-26, “In particular, images (e.g., video frames) captured in the living room of the house 198 may be analyzed in order to determine an estimated 3D shape of the living room, such as from a 3D point cloud of features detected in the video frames ... Such a point cloud may be further analyzed to determine planar areas, such as to correspond to walls, the ceiling, floor, etc., as well as in some cases to detect features such as windows, doorways and other inter-room openings, etc. …” and Claim 1, “... generating, by the mobile computing device, and using the 3D first and second shapes of the first and second rooms and the determined positions of the first and second panorama images, a partial floor plan for the building, including using the movement data of the mobile computing device to determine relative positions of the 3D first and second shapes of the first and second rooms; …”) the first 3D representation determined based on a first scan; (Cier, Claim 1, “… obtaining, by a mobile computing device with one or more image sensors and one or more inertial measurement unit (IMU) sensors and that is in a first area of a first room of a building, a first panorama image that is captured in the first area by a camera device separate from the mobile computing device and that has 360 degrees of horizontal coverage around a vertical axis, and first additional data that is captured in the first area by the mobile computing device and includes motion data from the IMU sensors and additional visual data from the image sensors; …”) generating a second 3D floor plan of a second room (Cier, column 3, lines 33-44, “In at least some embodiments, the determined position for the mobile computing device is based at least in part on performing a SLAM (Simultaneous Localization And Mapping) ... including in at least some such embodiments to use the additional data captured by the mobile computing device to generate an estimated three-dimensional (“3D”) shape of the enclosing room” and Claim 1, “... 
generating, by the mobile computing device, and using the 3D first and second shapes of the first and second rooms and the determined positions of the first and second panorama images, a partial floor plan for the building, including using the movement data of the mobile computing device to determine relative positions of the 3D first and second shapes of the first and second rooms; …”) of the multi-room environment based on a second 3D representation of a boundary feature [[and an object]] of the second room, (Cier, column 22, lines 10-26, “In particular, images (e.g., video frames) captured in the living room of the house 198 may be analyzed in order to determine an estimated 3D shape of the living room, such as from a 3D point cloud of features detected in the video frames ... Such a point cloud may be further analyzed to determine planar areas, such as to correspond to walls, the ceiling, floor, etc., as well as in some cases to detect features such as windows, doorways and other inter-room openings, etc. …” and Claim 1, “... generating, by the mobile computing device, and using the 3D first and second shapes of the first and second rooms and the determined positions of the first and second panorama images, a partial floor plan for the building, including using the movement data of the mobile computing device to determine relative positions of the 3D first and second shapes of the first and second rooms; …”) the second 3D representation determined based on a second scan that is distinct from the first scan; (Cier, Claim 1, “... obtaining, by the mobile computing device, a second panorama image that is captured by the camera device in the second area of the second room and has 360 degrees of horizontal coverage around a vertical axis, and second additional data captured in the second area by the mobile computing device that includes motion data from the IMU sensors and further visual data from the image sensors; …”) determining a 3D positional relationship between the first 3D floor plan and the second 3D floor plan based on: (Cier, Claim 1, “... determining, by the mobile computing device, a first location and orientation of the mobile computing device in the first room based at least in part on analyzing the motion data and additional visual data of the first additional data, and a second location and orientation of the camera device in the first room that is relative to the first location and orientation of the mobile computing device and is based at least in part on visual data of the first panorama image; …” and Claim 1, “... generating, by the mobile computing device, and using the 3D first and second shapes of the first and second rooms and the determined positions of the first and second panorama images, a partial floor plan for the building, including using the movement data of the mobile computing device to determine relative positions of the 3D first and second shapes of the first and second rooms; …”) displaying a first layout representing the first scan and second layout representing the second scan in a user interface; (Cier, Claim 1, “... 
displaying, by the mobile computing device and to a user located in the building, the generated partial floor plan for the building with visual indications overlaid to show the determined positions of the first and second panorama images in the respective first and second rooms, to enable determination of additional areas of the building to acquire additional panorama images.”) [[receiving input providing a first relative positioning between the first layout and the second layout;]] [[automatically determining a second positioning between the first layout and the second layout based on the first relative positioning, the second positioning aligning a boundary of the first layout with a boundary of the second layout; and]] [[determining the 3D positional relationship based on the second positioning;]] and generating a combined 3D floor plan based on the determined 3D positional relationship between the first 3D floor plan and the second 3D floor plan. (Cier, Claim 1, “... generating, by the mobile computing device, and using the 3D first and second shapes of the first and second rooms and the determined positions of the first and second panorama images, a partial floor plan for the building, including using the movement data of the mobile computing device to determine relative positions of the 3D first and second shapes of the first and second rooms; and displaying, by the mobile computing device and to a user located in the building, the generated partial floor plan for the building with visual indications overlaid to show the determined positions of the first and second panorama images in the respective first and second rooms, to enable determination of additional areas of the building to acquire additional panorama images.”) 8. Cier doesn't explicitly disclose but Osokin discloses: [[generating a first three-dimensional (3D) floor plan of a first room of a multi-room environment based on a first 3D representation of a boundary feature]] and an object [[of the first room, the first 3D representation determined based on a first scan;]] [[generating a second 3D floor plan of a second room of the multi-room environment based on a second 3D representation of a boundary feature]] and an object [[of the second room, the second 3D representation determined based on a second scan that is distinct from the first scan;]] (Osokin, [0020], “The system may also generate 3D models from the three-dimensional scan, three-dimensional shells, and/or two-dimensional floor plans. In some cases, the 3D model may be a semantic 3D model derived from the 3D scan, such a CAD model. The 3D models are also closed and orthogonal. In some cases, the system may also select pre-generated and/or generate accurate 3D models (e.g., again CAD models) of the objects within the physical environment and insert the 3D objects into the 3D model.” and [0118], “At 1204, the system may detect, based at least in part on the 3D scan, an object and, at 1206, system may generate point cloud data associated with the object. For example, the system may detect a unit of furniture within the scan of the 3D environment as part of the segregation and classification of the 3D scan.” and [0122]-[0123], “As discussed above with respect to process 1200, the system may detect and insert objects, such as furniture, within a 3D model or shell using one or more machine learned models and/or networks. 
In process 1300 the system may generate point cloud data from 3D models, such as CAD models to use in training the one or more machine learned models and/or networks. At 1302, the system may receive an object model. For example, the system may receive a 3D model of a unit of furniture, such as chair. In some case, the object model may be received from a manufacturer or other third party associated with the unit of furniture.” and [0101], “In some cases, the system may generate a 2D floor model and/or 3D model (such as a CAD model) of a physical environment. In the examples discussed herein, the system may generate a 3D shell of a physical environment based at least in part on configuring and orthogonalizing wall segments.” and [0109], “FIG. 11 illustrates an example flow diagram showing a process 1100 for determining open door locations in a shell according to some implementations. As discussed above with respect to FIG. 10 , the system may be configured to insert objects such as doors and windows into the 3D shell.” and [0051], “As discussed above, the system (such as system 100 of FIG. 1 ) may receive a 3D environment scan representative of a physical environment, such as one or more rooms, and generate a 2D floor plan model of the physical environment. In some cases, the floor plan model may be projected to form a shell that is a 3D representation of the physical environment.”) 9. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of Cier to include the disclosure of generating floor plans of two or more rooms based on a representation of a boundary feature and an object, of Osokin. The motivation for this modification could have been to create a more complete representation of a room by not only featuring its walls and dimensions, but also items within the room such as couches, chairs, and tables. 10. Cier in view of Osokin doesn't explicitly disclose but AppLearning discloses: receiving input providing a first relative positioning between the first layout and the second layout; (AppLearning, Timecode: 2:05, User selects room object using the snap tool.; Timecode: 2:09-2:17, User presses the “g” key and is able to freely move the room around with the user’s cursor (with mouse, touchscreen, or other interface). Pressing the “x” key keeps the room on the same axis as the second room.; Examiner’s note: As the user moves the cursor after selecting the room, the user is able to move the room relative to the second room.) automatically determining a second positioning between the first layout and the second layout based on the first relative positioning, the second positioning aligning a boundary of the first layout with a boundary of the second layout; and (AppLearning, Description: “Second video of the tutorial series. We make a room extension for our house project by duplicating an object and snapping it onto another one by using the snap to face option. 
The Archimesh add-on is being used for this project.”; Timecode: 1:31-1:36, User specifies wanting to join the left and right room and using a snap tool to do so.; Timecode: 1:51-1:57, User configures the snap tool to snap on object faces.; Timecode: 2:40-2:53, User states that careful movement of the room with the cursor to the face to be snapped to (or in this case, wall of second room) will snap the room into place.; Timecode: 3:01, Clicking finishes the snapping process when the room’s walls are adjacent to each other.; Examiner’s note: Timecodes 2:40-2:53 show how the rooms are automatically determining a second and final positioning by aligning and snapping the wall boundaries together.) determining the 3D positional relationship based on the second positioning; (AppLearning, Description: “Second video of the tutorial series. We make a room extension for our house project by duplicating an object and snapping it onto another one by using the snap to face option. The Archimesh add-on is being used for this project.”; Timecode: 2:40-2:53, User states that careful movement of the room with the cursor to the face to be snapped to (or in this case, wall of second room) will snap the room into place.; Timecode: 3:01, Clicking finishes the snapping process when the room’s walls are adjacent to each other.; Examiner’s note: Timecodes 2:40-2:53 show how the rooms are automatically determining a second and final positioning by aligning and snapping the wall boundaries together.)

11. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of Cier in view of Osokin to include the disclosure of receiving input providing a first relative positioning between a first and second room layout, automatically determining a second positioning by aligning the boundary of the first and second room layouts, and determining a final 3D positional relationship, of AppLearning. The motivation for this modification could have been to assist a user in easily joining 3D representations of rooms together. By “snapping” together rooms, this would allow a user to quickly generate a complete floor plan of a building.

12. As per claim 2, Cier in view of Osokin, and further in view of AppLearning discloses: The method of claim 1, wherein the 3D positional relationship between the first 3D floor plan and the second 3D floor plan is determined based on a re-localization of a scanning device in the first room during the second scanning process. (Cier, column 3, lines 1-11, “... a combination of acquisition location and orientation for a target image is referred to at times herein as a ‘pose’ or an ‘acquisition position’ or merely ‘position’ of the target image.” and column 29, lines 56-64, “Pose data determined for a phone from its additional data may be discontinuous, such as due to gaps in acquisition of the additional data—if so, an attempt to interpolate a pose path through these gaps may be used by “re-localizing” (finding a correspondence between an old stream of camera and depth data available before the gap and a new stream of camera and depth data available after the gap).” and Claim 1, “... obtaining, by the mobile computing device, movement data captured by the mobile computing device as it moves along a travel path from the first area of the first room to a second area of a second room of the building; ...
determining, by the mobile computing device, a first location and orientation of the mobile computing device in the first room based at least in part on analyzing the motion data and additional visual data of the first additional data, and a second location and orientation of the camera device in the first room that is relative to the first location and orientation of the mobile computing device and is based at least in part on visual data of the first panorama image; ...”)

13. As per claim 3, Cier in view of Osokin, and further in view of AppLearning discloses: The method of claim 2, wherein the re-localization of the scanning device in the first room comprises matching feature points from the first scan with feature points from the second scan. (Cier, column 3, lines 1-11, “... a combination of acquisition location and orientation for a target image is referred to at times herein as a ‘pose’ or an ‘acquisition position’ or merely ‘position’ of the target image.” and column 29, lines 56-64, “Pose data determined for a phone from its additional data may be discontinuous, such as due to gaps in acquisition of the additional data—if so, an attempt to interpolate a pose path through these gaps may be used by “re-localizing” (finding a correspondence between an old stream of camera and depth data available before the gap and a new stream of camera and depth data available after the gap).” and Claim 16, “... determining, by the one or more computing devices, the second location in each of the one or more rooms of the camera device relative to the first location in that room of the mobile computing device, including analyzing the visual data of the at least one panorama image captured in that room by the camera device and the additional visual data that is captured by the mobile computing device in that room to identify features visible in both that visual data and that additional visual data, and using positions of the identified features as part of the determining of the second location in that room of the camera device; ...”)

14. As per claim 4, Cier in view of Osokin, and further in view of AppLearning discloses: The method of claim 2 further comprising providing an indication indicating that the re-localization is complete. (Cier, column 28, line 13-column 29, line 61, “As a non-exclusive example embodiment, the automated operations of the ILDM system may include the following operations to determine acquisition positions (e.g., acquisition locations and optionally acquisition orientations) ... Inter-panorama visual matching for coordinate system fusion. Pose data determined for a phone from its additional data may be discontinuous, such as due to gaps in acquisition of the additional data—if so, an attempt to interpolate a pose path through these gaps may be used by “re-localizing” …” and column 30, line 62-column 31, line 5, “Once automated acquisition position information is determined for such target panorama images, the information may be used in a variety of manners, such as one or more of the following: Displaying an approximate floorplan, and determining which areas of the house have been well-covered by panoramas and what other areas should be captured next. This allows users to capture higher-quality data, and avoid costly mistakes which currently require returning to the site to capture additional images or even a new tour.”)
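Claims 2-4 turn on re-localizing the scanning device by matching feature points between the two scans. As a concrete illustration of the general technique those claims describe (not Cier's or the application's actual implementation), a minimal sketch: given already-matched 3D feature points from both scans, the rigid transform that re-localizes the second scan in the first scan's frame can be recovered with the Kabsch algorithm, and a residual threshold can stand in for claim 4's "re-localization is complete" indication.

```python
import numpy as np

def relocalize(pts_scan1: np.ndarray, pts_scan2: np.ndarray, tol: float = 0.05):
    """Kabsch: rigid (R, t) mapping matched scan-2 points onto scan-1 points.

    pts_scan1, pts_scan2: (N, 3) arrays of corresponding 3D feature points
    (the correspondence search itself, e.g. descriptor matching, is upstream).
    """
    c1, c2 = pts_scan1.mean(axis=0), pts_scan2.mean(axis=0)
    H = (pts_scan2 - c2).T @ (pts_scan1 - c1)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c1 - R @ c2
    residual = np.linalg.norm(pts_scan2 @ R.T + t - pts_scan1, axis=1).mean()
    complete = residual < tol                       # cf. claim 4's indication
    return R, t, complete
```

15.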
As per claim 13, Cier in view of Osokin, and further in view of AppLearning discloses: The method of claim 1, wherein determining the 3D positional relationship between the first 3D floor plan and the second 3D floor plan comprises: determining preliminary 3D positional relationship; and (Cier, Claim 1, “… obtaining, by the mobile computing device, movement data captured by the mobile computing device as it moves along a travel path from the first area of the first room to a second area of a second room of the building; …” and column 30, lines 29-36, “By using multiple localization techniques together, benefits can be achieved, including to use different techniques in different situations, and to use results of some techniques as initial estimates that are updated by other techniques (e.g., using motion pattern matching and/or camera marker recognition as initial estimates used by optimization-based techniques such as depth/point cloud matching and RGB feature matching).”) adjusting the preliminary 3D positional relationship based on an optimization using one or more constraints. (Cier, column 22, lines 39-50, “FIG. 2L illustrates additional information 255I corresponding to, after estimated room shapes are determined for the rooms of the illustrated floor of the house 198, positioning the rooms' estimated room shapes relative to each other, based at least in part in this example on connecting inter-room passages between rooms and matching room shape information between adjoining rooms—in at least some embodiments, such information may be treated as constraints on the positioning of the rooms, and an optimal or otherwise preferred solution is determined for those constraints. Examples of such constraints in FIG. 2L include matching 231 connecting passage information …”)

16. Claim 16 is similar in scope to claim 1 except for additional limitations that Cier in view of Osokin, and further in view of AppLearning discloses: A system comprising: a non-transitory computer-readable storage medium; and one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the system to perform operations comprising: (Cier, column 37, lines 22-29, “Thus, in some embodiments, some or all of the described techniques may be performed by hardware means that include one or more processors ... such as by execution of software instructions of the one or more software programs ...” and column 37, lines 44-47, “Some or all of the components, systems and data structures may also be stored (e.g., as software instructions or structured data) on a non-transitory computer-readable storage mediums ...”)

17. Claim 17, which is similar in scope to claims 2 and 16, is thus rejected under the same rationale as described above.

18. Claim 18, which is similar in scope to claims 3, 16, and 17, is thus rejected under the same rationale as described above.

19. Claim 19, which is similar in scope to claims 4, 16, and 17, is thus rejected under the same rationale as described above.

20. Claim 20, which is similar in scope to claims 1 and 16, is thus rejected under the same rationale as described above.

21. Claims 5 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Cier et al. (US-11252329-B1, hereinafter "Cier") in view of Osokin et al.
(US-2024/0312163-A1, hereinafter "Osokin"), further in view of AppLearning (NPL: "Blender 3.0 3D Architecture 2- Attach a Room Using the Snap Tool.", https://www.youtube.com/watch?v=z5kP1yTzLtA, Jan. 20, 2022.), and further in view of Preston (US-2023/0206549-A1).

22. As per claim 5, Cier in view of Osokin, and further in view of AppLearning discloses: The method of claim 1, wherein determining the 3D positional relationship between the first 3D floor plan and the second 3D floor plan comprises, during the second scanning process: [[re-localizing a scanning device in the first room; and]] tracking a position of the scanning device as the scanning device moves from the first room to the second room. (Cier, Claim 1, “... obtaining, by the mobile computing device, movement data captured by the mobile computing device as it moves along a travel path from the first area of the first room to a second area of a second room of the building; ... determining, by the mobile computing device, a first location and orientation of the mobile computing device in the first room based at least in part on analyzing the motion data and additional visual data of the first additional data, and a second location and orientation of the camera device in the first room that is relative to the first location and orientation of the mobile computing device and is based at least in part on visual data of the first panorama image; ...”)

23. Cier in view of Osokin, and further in view of AppLearning doesn't explicitly disclose but Preston discloses: re-localizing a scanning device in the first room; and (Preston, [0037], “In certain embodiments, it may be desirable to add another space. In these embodiments, the present technology may end scanning and taking images for a first space. A user may then begin to scan an additional space. To relocalize, the method may guide a user to a previously scanned area which will enable the capturing device to regain tracking of a position within the space by initializing a previously established coordinate system. When relocalization is completed, a user may be prompted to scan while walking to the new space so that the new space is connected to a three-dimensional map in which the originally scanned space is located.”)

24. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Cier in view of Osokin, and further in view of AppLearning to include the disclosure of re-localizing a scanning device in the first room in order to track it as the device moves between rooms, of Preston. The motivation for this modification could have been to help provide the scanning device with a point of reference in order to quickly orient the device so that the completed scans are lined up and in the same coordinate system. This is particularly helpful when starting from one location and moving into a plurality of different rooms for each individual scan.

25. As per claim 7, Cier in view of Osokin, further in view of AppLearning, and further in view of Preston discloses: The method of claim 5, wherein tracking the position of the scanning device comprises visual inertial odometry (VIO) based on images captured by the scanning device during the second scan. (Cier, column 3, lines 61-65, “...
in other embodiments, the automated determination of the position for the mobile computing device may be based at least in part on other analyses, such as via Wi-Fi triangulation, Visual Inertial Odometry (“VIO”), etc.” and column 4, lines 9-15, “... the automated operations by the ILDM system may further include determining an additional estimated room shape for the enclosing room (e.g., an additional 3D room shape) based at least in part on an analysis of the visual data in the target image, such as based at least in part on performing a MVS (multiple-view stereovision) and/or Visual Odometry (“VO”) analysis ...”) 26. As per claim 8, Cier in view of Osokin, further in view of AppLearning, and further in view of Preston discloses: The method of claim 5 further comprising initiating capture of sensor data for the second scan based on determining that the scanning device is within the second room. (Cier, Claim 1, “... obtaining, by the mobile computing device, a second panorama image that is captured by the camera device in the second area of the second room and has 360 degrees of horizontal coverage around a vertical axis, and second additional data captured in the second area by the mobile computing device that includes motion data from the IMU sensors and further visual data from the image sensors; …” and column 15, lines 34-40, “An embodiment of the ICA system (e.g., ICA system 160 on server computing system(s) 180; a copy of some or all of the ICA system executing on a mobile computing device of the user, such as ICA application system 155 executing in memory 152 on device 185; etc.) may automatically perform or assist in the capturing of the data representing the building interior …”) 27. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Cier et al. (US-11252329-B1, hereinafter "Cier") in view of Osokin et al. (US-2024/0312163-A1, hereinafter "Osokin"), further in view of AppLearning (NPL: "Blender 3.0 3D Architecture 2- Attach a Room Using the Snap Tool.", https://www.youtube.com/watch?v=z5kP1yTzLtA, Jan. 20, 2022.), further in view of Preston (US-2023/0206549-A1), and further in view of Huber (US-11830136-B2). 28. As per claim 6, Cier in view of Osokin, further in view of AppLearning, and further in view of Preston discloses: The method of claim 5, wherein determining the 3D positional relationship between the first 3D floor plan and the second 3D floor plan comprises, during the second scanning process, (See rejection of claim 5.) 29. Cier in view of Osokin, further in view of AppLearning, and further in view of Preston doesn't explicitly disclose but Huber discloses: tracking the position of the scanning device as the scanning device moves from one story to another story of multiple stories in the multi-room environment. (Huber, column 7, lines 16-36, “An interesting potential application of this technique exists for deeper integration of this approach with SLAM systems. Since this technique works by moving forward through the data in time, it may be implemented in real-time during SLAM processing to provide better modeling results. For example, if the floor is leveled in real-time, the IMU estimates of gravity vector may more appropriately match the map that is being built and lead to better scan to map matching. Additionally, if building stories are revisited, adjusting the new data to the correct level may improve the alignment of revisited parts of the story preventing double registration of these regions. 
More specifically, in some instances, real time correction of floor data while scanning, referred to as “live scanning” may allow one to use an established floor level while scanning to improve the process of scanning in difficult environments. For example, if, instead of waiting until one leaves a floor to detect a floor level and adjust the data, one adjusts the data in real time as soon as there is enough data to establish such an adjustment, one might use the floor level as a step in an optimization process to align new scan data.”) 30. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 5 of Cier in view of Osokin, further in view of AppLearning, and further in view of Preston to include the disclosure of tracking the position of a scanning device as the scanning device moves from one story to another story of multiple stories in the multi-room environment, of Huber. The motivation for this modification could have been to help provide a method to create a 3D representation of a multi-floor building, associating the floors with each other and overcoming the difficulties from other scanning methods that work primarily on the same floor plane. 31. Claims 10, 12, 21 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Cier et al. (US-11252329-B1, hereinafter "Cier") in view of Osokin et al. (US-2024/0312163-A1, hereinafter "Osokin"), further in view of AppLearning (NPL: "Blender 3.0 3D Architecture 2- Attach a Room Using the Snap Tool.", https://www.youtube.com/watch?v=z5kP1yTzLtA, Jan. 20, 2022.), and further in view of Tiwari et al. (US-10606963-B2, hereinafter "Tiwari"). 32. As per claim 10, Cier in view of Osokin, and further in view of AppLearning discloses: The method of claim 1, (See rejection of claim 1.) 33. Cier in view of Osokin, and further in view of AppLearning doesn't explicitly disclose but Tiwari discloses: wherein the presenting comprises orienting the first layout and the second layout based on cardinal directions associated with the first scan and second scan. (Tiwari, column 2, lines 34-39, “In additional embodiments, the sensors can include a camera, a gyroscope, an accelerometer and a digital compass. Adjacent rooms can be identified by digital compass readings associated with room corners. Further, multiple floors can be aligned based on user input or digital compass readings associated with floors.” and column 10, lines 37-48, “Alternately, adjacent rooms can be determined by using the compass readings associated with room corners. The room corners with same or closer readings can be joined to automatically assemble the floor plan from rooms. By repeating these steps in each room of the building the user creates a floor plan of the building. While capturing the 360 image, the mapping module 302 auto detects doorways and window frames such that the mapping module builds these features of the room/building into the floor plan. The mapping module allows the user to build upon the stored floor-to-wall 360 image of the room to create a three dimensional (3D) floor plan.”) 34. 
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Cier in view of Osokin, and further in view of AppLearning to include the disclosure of orienting the first layout and the second layout based on cardinal directions associated with the first scan and second scan, of Tiwari. The motivation for this modification could have been to assist a user in determining a common direction for multiple room layouts and make it easier to determine the positional relationship between rooms.

35. As per claim 12, Cier in view of Osokin, further in view of AppLearning, and further in view of Tiwari discloses: The method of claim 1, wherein the aligning comprises aligning corners, walls, or doors. (Tiwari, column 10, lines 33-45, “The user indicates 302 how the rooms are connected to create the floor plan of the building. For example, the user can indicate which adjacent walls are shared between the first and second rooms. Alternately, adjacent rooms can be determined by using the compass readings associated with room corners. The room corners with same or closer readings can be joined to automatically assemble the floor plan from rooms. By repeating these steps in each room of the building the user creates a floor plan of the building. While capturing the 360 image, the mapping module 302 auto detects doorways and window frames such that the mapping module builds these features of the room/building into the floor plan.”)

36. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Cier in view of Osokin, further in view of AppLearning to include the disclosure of an alignment process comprising corners, walls, or doors, of Tiwari. The motivation for this modification could have been to automatically line up multiple rooms of a floor plan so that the boundaries match in an appropriate way. This can be used to assist a user in lining up room floor plans that are adjacent to each other.

37. As per claim 21, Cier in view of Osokin, further in view of AppLearning, and further in view of Tiwari discloses: The method of claim 1, wherein the user interface provides feedback (Tiwari, column 18, line 63-column 19, line 8, “The system 100 provides a user interface that illustrates the floor plan and allows a user to drag a specific component and drop it at a particular location on the floor plan. … The module keeps track of the component icon location, obtains the sub-set of valid locations that are in the vicinity of the icon, and for example, highlights these valid locations 232a-f in the vicinity, as shown in FIG. 2G. This provides a visual/audio/haptic feedback to guide the user about the potential valid placement options.”) regarding the first positioning. (AppLearning, Timecode: 2:05, User selects room object using the snap tool.; Timecode: 2:09-2:17, User presses the “g” key and is able to freely move the room around with the user’s cursor (with mouse, touchscreen, or other interface). Pressing the “x” key keeps the room on the same axis as the second room.; Examiner’s note: As the user moves the cursor after selecting the room, the user is able to move the room relative to the second room.)
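The feedback behavior pieced together in the claim 21-23 mappings (highlighting from Tiwari, the snap event from AppLearning) reduces to a distance-threshold test during the drag. A hypothetical sketch, with invented callback names and an illustrative threshold value, of positioning feedback that fires when two layouts come within snapping range:

```python
SNAP_DISTANCE = 0.25  # metres; illustrative threshold, not from the record

def on_drag(moving_edge: float, fixed_edge: float, highlight, play_sound) -> float:
    """Per drag-event check of the nearest pair of facing wall boundaries.

    highlight / play_sound are injected UI callbacks (hypothetical names).
    Returns the moving boundary's new coordinate, snapped when in range.
    """
    gap = fixed_edge - moving_edge
    if abs(gap) <= SNAP_DISTANCE:
        highlight(fixed_edge)     # visual feedback on positioning (cf. claim 21)
        play_sound("snap")        # sound at adjacency (cf. claim 23)
        return fixed_edge         # boundaries now adjacent
    return moving_edge            # free first positioning continues
```

38.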
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Cier in view of Osokin, further in view of AppLearning to include the disclosure of the user interface providing feedback to a user regarding a positioning, of Tiwari. The motivation for this modification could have been to give the user information regarding a room positioning. For instance, feedback could inform the user when two rooms are properly lined up to be joined and no longer require interaction from the user.

39. As per claim 23, Cier in view of Osokin, further in view of AppLearning, and further in view of Tiwari discloses: The method of claim 21, wherein the feedback is a sound produced (Tiwari, column 18, line 63-column 19, line 8, “The system 100 provides a user interface that illustrates the floor plan and allows a user to drag a specific component and drop it at a particular location on the floor plan. … The module keeps track of the component icon location, obtains the sub-set of valid locations that are in the vicinity of the icon, and for example, highlights these valid locations 232a-f in the vicinity, as shown in FIG. 2G. This provides a visual/audio/haptic feedback to guide the user about the potential valid placement options.”) when the first 3D floor plan and the second 3D floor plan reach adjacent positions. (AppLearning, Description: “Second video of the tutorial series. We make a room extension for our house project by duplicating an object and snapping it onto another one by using the snap to face option. The Archimesh add-on is being used for this project.”; Timecode: 1:31-1:36, User specifies wanting to join the left and right room and using a snap tool to do so.; Timecode: 1:51-1:57, User configures the snap tool to snap on object faces.; Timecode: 2:40-2:53, User states that careful movement of the room with the cursor to the face to be snapped to (or in this case, wall of second room) will snap the room into place.; Timecode: 3:01, Clicking finishes the snapping process when the room’s walls are adjacent to each other.; Examiner’s note: Timecodes 2:40-2:53 show how the rooms are automatically determining a second and final positioning by aligning and snapping the wall boundaries together.)

40. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Cier in view of Osokin, further in view of AppLearning to include the disclosure of sound feedback generated for a user regarding a positioning, of Tiwari. The motivation for this modification could have been to give the user an audio notification regarding a room positioning. For instance, audio feedback, such as a “snapping sound,” could inform the user when two rooms are properly lined up to be joined and no longer require interaction from the user.

41. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Cier et al. (US-11252329-B1, hereinafter "Cier") in view of Osokin et al. (US-2024/0312163-A1, hereinafter "Osokin"), further in view of AppLearning (NPL: "Blender 3.0 3D Architecture 2- Attach a Room Using the Snap Tool.", https://www.youtube.com/watch?v=z5kP1yTzLtA, Jan. 20, 2022.), and further in view of Lambert et al. (US 2023/0138762 A1, hereinafter "Lambert").
42. As per claim 14, Cier in view of Osokin, and further in view of AppLearning discloses: The method of claim 13, wherein the one or more constraints comprise a constraint corresponding to: (See rejection of claim 13.)

43. Cier in view of Osokin, and further in view of AppLearning doesn't explicitly disclose but Lambert discloses: a difference between representations of a door between adjacent rooms; a difference between representations of a window between adjacent rooms; or a difference between representations of a wall between adjacent rooms. (Lambert, page 2, ¶ 0013, “In at least some embodiments, the automated operations of the MIGM system include analyzing visual data of pairs of target images that have little-to-no overlap in visual coverage in order to identify target images that are likely to be acquired at acquisition locations proximate to each other (e.g., in adjacent rooms or other adjacent areas), performing a global optimization operation to refine the alignment of inter-image directions and acquisition locations (in combination, referred to at times herein as inter-image “pose” information) and optionally distances for some or all of the multiple target images for the building into globally aligning those target images using a common coordinate system, and then using information identified from the images' visual data that includes structural room layouts (e.g., locations of walls and inter-wall borders) and structural wall elements (e.g., windows, doorways and non-doorway wall openings, etc.) along with the aligned global information of those target images to generate a floor plan of the building …”)

44. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 13 of Cier in view of Osokin, and further in view of AppLearning to include the disclosure of using multiple building constraints such as differences between representations of doors, windows, and walls between adjacent rooms, of Lambert. The motivation for this modification could have been to help translate and orient the captured 3D room data to align into a common coordinate system, allowing all captured data to be correlated with each other.

45. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Cier et al. (US-11252329-B1, hereinafter "Cier") in view of Osokin et al. (US-2024/0312163-A1, hereinafter "Osokin"), further in view of AppLearning (NPL: "Blender 3.0 3D Architecture 2- Attach a Room Using the Snap Tool.", https://www.youtube.com/watch?v=z5kP1yTzLtA, Jan. 20, 2022.), further in view of Tiwari et al. (US-10606963-B2, hereinafter "Tiwari"), and further in view of Li et al. (US-11592969-B2, hereinafter "Li").

46. As per claim 15, Cier in view of Osokin, and further in view of AppLearning discloses: The method of claim 1, (See rejection of claim 1.)

47. Cier in view of Osokin, and further in view of AppLearning doesn't explicitly disclose but Tiwari discloses: [[The method of claim 1,]] wherein generating the combined 3D floor plan comprises merging representations of a wall between adjacent rooms (Tiwari, column 10, lines 33-48, “The user indicates 302 how the rooms are connected to create the floor plan of the building. For example, the user can indicate which adjacent walls are shared between the first and second rooms. Alternately, adjacent rooms can be determined by using the compass readings associated with room corners.
The room corners with same or closer readings can be joined to automatically assemble the floor plan from rooms. By repeating these steps in each room of the building the user creates a floor plan of the building. ... The mapping module allows the user to build upon the stored floor-to-wall 360 image of the room to create a three dimensional (3D) floor plan.”) [[and reprojecting a door or window based on the merging.]] 48. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Cier in view of Osokin, and further in view of AppLearning to include the disclosure of generating a combined 3D floor plan by merging representations of a wall between adjacent rooms, of Tiwari. The motivation for this modification could have been to help to create a complete set of connected rooms that make up multiple rooms in a building. 49. Cier in view of Osokin, further in view of AppLearning, and further in view of Tiwari doesn't explicitly disclose but Li discloses: [[The method of claim 1, wherein generating the combined 3D floor plan comprises merging representations of a wall between adjacent rooms]] and reprojecting a door or window based on the merging. (Li, column 17, lines 25-43, “(6) a room object spatial reprojection cost, such as to measure a degree to which wall feature positions match expected positions when an image reprojection is performed. For example, given known wall features for a room shape (e.g., wall features automatically determined with an object detection pipeline, manually annotated by one or more users, etc.), and an algorithm for object localization within an image (e.g., the ‘Faster R-CNN’ algorithm, or the faster region-based convolutional neural network algorithm), bounding boxes may be generated from panorama images for wall openings (e.g., doors, windows and other wall openings), and a wall feature position may be computed from a candidate/target room's room shape in a panorama image for a current room to be connected to the candidate/target room via inter-room wall openings of the rooms—such a reprojected wall feature bounding box may then be compared with bounding boxes generated for the panorama image for the current room for a corresponding wall feature (e.g., a wall opening).”) 50. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Cier in view of Osokin, further in view of AppLearning, and further in view of Tiwari to include the disclosure of reprojecting a door or window based on the merging of adjacent 3D rooms, of Li. The motivation for this modification could have been to ensure that the merging process properly accounts for the doors and windows so that they do not become misaligned after merging. This makes sure that doors and windows are in their expected locations after the room merge process. 51. Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Cier et al. (US-11252329-B1, hereinafter "Cier") in view of Osokin et al. (US-2024/0312163-A1, hereinafter "Osokin"), further in view of AppLearning (NPL: "Blender 3.0 3D Architecture 2- Attach a Room Using the Snap Tool.", https://www.youtube.com/watch?v=z5kP1yTzLtA, Jan. 20, 2022.), further in view of Tiwari et al. (US-10606963-B2, hereinafter "Tiwari"), and further in view of Hauenstein et al. (US-2019/0065027-A1, hereinafter "Hauenstein"). 52. 
As per claim 22, Cier in view of Osokin, further in view of AppLearning, and further in view of Tiwari discloses: The method of claim 21, wherein the feedback (Tiwari, column 18, line 63-column 19, line 8, “The system 100 provides a user interface that illustrates the floor plan and allows a user to drag a specific component and drop it at a particular location on the floor plan. … The module keeps track of the component icon location, obtains the sub-set of valid locations that are in the vicinity of the icon, and for example, highlights these valid locations 232a-f in the vicinity, as shown in FIG. 2G. This provides a visual/audio/haptic feedback to guide the user about the potential valid placement options.”) [[is an animation]] of the first 3D floor plan and the second 3D floor plan moving to adjacent positions (AppLearning, Description: “Second video of the tutorial series. We make a room extension for our house project by duplicating an object and snapping it onto another one by using the snap to face option. The Archimesh add-on is being used for this project.”; Timecode: 1:31-1:36, User specifies wanting to join the left and right room and using a snap tool to do so.; Timecode: 1:51-1:57, User configures the snap tool to snap on object faces.; Timecode: 2:40-2:53, User states that careful movement of the room with the cursor to the face to be snapped to (or in this case, wall of second room) will snap the room into place.; Timecode: 3:01, Clicking finishes the snapping process when the room’s walls are adjacent to each other.; Examiner’s note: Timecodes 2:40-2:53 show how the rooms are automatically determining a second and final positioning by aligning and snapping the wall boundaries together.) and a sound (Tiwari, column 18, line 63-column 19, line 8, “The system 100 provides a user interface that illustrates the floor plan and allows a user to drag a specific component and drop it at a particular location on the floor plan. … The module keeps track of the component icon location, obtains the sub-set of valid locations that are in the vicinity of the icon, and for example, highlights these valid locations 232a-f in the vicinity, as shown in FIG. 2G. This provides a visual/audio/haptic feedback to guide the user about the potential valid placement options.”) when the first 3D floor plan and the second 3D floor plan reach the adjacent positions. (AppLearning, Description: “Second video of the tutorial series. We make a room extension for our house project by duplicating an object and snapping it onto another one by using the snap to face option. The Archimesh add-on is being used for this project.”; Timecode: 1:31-1:36, User specifies wanting to join the left and right room and using a snap tool to do so.; Timecode: 1:51-1:57, User configures the snap tool to snap on object faces.; Timecode: 2:40-2:53, User states that careful movement of the room with the cursor to the face to be snapped to (or in this case, wall of second room) will snap the room into place.; Timecode: 3:01, Clicking finishes the snapping process when the room’s walls are adjacent to each other.; Examiner’s note: Timecodes 2:40-2:53 show how the rooms are automatically determining a second and final positioning by aligning and snapping the wall boundaries together.) 53. 
Cier in view of Osokin, further in view of AppLearning, and further in view of Tiwari doesn't explicitly disclose but Hauenstein discloses: [[The method of claim 21, wherein the feedback]] is an animation [[of the first 3D floor plan and the second 3D floor plan moving to adjacent positions and a sound when the first 3D floor plan and the second 3D floor plan reach the adjacent positions.]] (Hauenstein, [0391], “In some embodiments, if the respective virtual user interface object moves beyond a furthest extent of its maximum resting state based on movement of the input, the respective virtual user interface object snaps back (e.g., in an animated transition) to its maximum resting state when the input lifts off. For example, if a virtual roof of a 3D building model can be displayed resting directly on the 3D building model and hovering up to twelve inches above the building model (e.g., the resting state of the virtual roof is between zero and twelve inches from the building model), if a user lifts the virtual roof fifteen inches above the building model, when the user input lifts off, the virtual roof snaps back to twelve inches above the building model.”)

54. Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to modify the method of claim 1 of Cier in view of Osokin, further in view of AppLearning, and further in view of Tiwari to include the disclosure of animated feedback generated for a user regarding a positioning, of Hauenstein. The motivation for this modification could have been to give the user a visual notification regarding a room positioning. For instance, animation feedback, such as rooms visibly “snapping together,” could inform the user when two rooms are properly lined up to be joined and no longer require interaction from the user.

Conclusion

55. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW CLOTHIER whose telephone number is (571)272-4667. The examiner can normally be reached Mon-Fri 8:00am-4:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW CLOTHIER/Examiner, Art Unit 2614

/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614
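For orientation, the disputed claim 1 limitation describes a two-stage positioning: the user supplies a rough first relative positioning, and the system then computes a second positioning that pulls the nearest wall boundaries flush, from which the 3D relationship follows. A minimal sketch of that flow under simplifying assumptions (axis-aligned 2D footprints, flat shared floor); all names here are illustrative, not the application's or the references' code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layout:
    """Axis-aligned 2D footprint of a room's 3D floor plan (illustrative)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def shifted(self, dx: float, dy: float) -> "Layout":
        return Layout(self.x_min + dx, self.y_min + dy,
                      self.x_max + dx, self.y_max + dy)

def second_positioning(fixed: Layout, dragged: Layout) -> Layout:
    """Snap the dragged layout so its nearest facing boundary aligns with
    the fixed layout's boundary (the automatically determined positioning)."""
    gaps = {
        "left":  fixed.x_min - dragged.x_max,   # dragged sits left of fixed
        "right": fixed.x_max - dragged.x_min,   # dragged sits right of fixed
        "below": fixed.y_min - dragged.y_max,
        "above": fixed.y_max - dragged.y_min,
    }
    side, gap = min(gaps.items(), key=lambda kv: abs(kv[1]))
    if side in ("left", "right"):
        return dragged.shifted(gap, 0.0)
    return dragged.shifted(0.0, gap)

room_a = Layout(0.0, 0.0, 5.0, 4.0)
first = Layout(5.3, 0.2, 9.3, 4.2)          # user input: first relative positioning
second = second_positioning(room_a, first)  # walls pulled flush at x = 5.0
offset_3d = (second.x_min, second.y_min, 0.0)  # 3D relationship, flat floors assumed
```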
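The refinement step of claims 13-14 (a preliminary relationship adjusted by optimizing over constraints such as door, window, and wall mismatches between adjacent rooms) can likewise be sketched. Assuming translation-only refinement, the least-squares adjustment is simply the mean residual of the matched features; this illustrates the general technique, not Cier's or Lambert's formulation:

```python
import numpy as np

def refine_relationship(preliminary_offset: np.ndarray,
                        feats_room1: np.ndarray,
                        feats_room2: np.ndarray) -> np.ndarray:
    """Adjust a preliminary inter-room translation so shared features
    (door/window/wall representations seen from both rooms) coincide.

    feats_room1: (N, 3) feature positions in room 1's coordinate frame.
    feats_room2: (N, 3) the same features in room 2's frame.
    For translation only, the offset minimizing the summed squared
    differences is the preliminary estimate plus the mean residual.
    """
    residuals = feats_room1 - (feats_room2 + preliminary_offset)
    return preliminary_offset + residuals.mean(axis=0)
```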

Prosecution Timeline

May 25, 2023
Application Filed
May 03, 2025
Non-Final Rejection — §103
Aug 14, 2025
Applicant Interview (Telephonic)
Aug 15, 2025
Examiner Interview Summary
Aug 15, 2025
Response Filed
Nov 18, 2025
Final Rejection — §103
Feb 25, 2026
Applicant Interview (Telephonic)
Feb 26, 2026
Examiner Interview Summary
Mar 02, 2026
Request for Continued Examination
Mar 05, 2026
Response after Non-Final Action
Apr 04, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530842
AIRBORNE LiDAR POINT CLOUD FILTERING METHOD AND DEVICE BASED ON SUPER-VOXEL GROUND SALIENCY
2y 5m to grant • Granted Jan 20, 2026
Patent 12499800
IN-VEHICLE DISPLAY DEVICE
2y 5m to grant • Granted Dec 16, 2025
Study what changed to get past this examiner. Based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 1y 11m
PTA Risk: High
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
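The headline probability follows directly from the career allow rate, which with three resolved cases is 3/3 = 100%. A minimal sketch of that derivation; the smoothed variant is only a sanity check on the small sample, not the page's method:

```python
granted, resolved = 3, 3                 # career figures shown above
allow_rate = granted / resolved          # 1.00 -> the displayed 100%

# Three resolved cases make the raw rate fragile; add-one (Laplace)
# smoothing gives a more conservative read, shown for comparison only.
smoothed = (granted + 1) / (resolved + 2)  # 0.80
print(f"raw allow rate: {allow_rate:.0%}; smoothed: {smoothed:.0%}")
```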
