DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office action is responsive to the amendment received on 10/27/2025.
In the response to the Non-Final Office Action, the applicant states that no claims have been amended. Accordingly, claims 1-20 remain pending in the current application.
Response to Arguments
Applicant's arguments filed 10/27/2025 have been fully considered but they are not persuasive.
Regarding claim 1, the applicant argues that the cited art fails to teach or suggest “generating boundary tolerance data for each boundary estimate, each boundary tolerance data creating a plane buffer that extends a corresponding boundary estimate by a predetermined distance; locating an intersection of the first plane segment and the second plane segment using the boundary estimate data and the boundary tolerance data; and constructing a 3D layout model that includes at least a boundary segment connecting the first plane segment and the second plane segment at the intersection.” The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
Stekovic discloses “generating boundary tolerance data for each boundary estimate”. For example, in paragraph [0053], Stekovic teaches keeping planes corresponding to the floor, ceiling, walls, and/or other objects in an environment, while planes corresponding to other objects or components are discarded; Stekovic further teaches identifying corners, edges, and boundaries based on the intersections of the planes; Stekovic furthermore teaches finding an optimal set of layout polygons for the image. In paragraph [0079], Stekovic teaches that this process is iterated until a layout is determined, i.e., until a tolerance is met. In paragraph [0103], Stekovic teaches that iterative layout refinement is performed by the layout estimation system 600; Stekovic further teaches that when the rendered depth map has values that are smaller than the values of the original depth map by some threshold, the layout estimation system determines that there is a mistake in the layout that can be fixed by adding one or more planes; Stekovic furthermore teaches that the threshold relates to the boundary tolerance data and can be any suitable value, such as 0.1, 0.2, 0.3, or another value. In addition, Stekovic teaches that the layout estimation system 600 determines that layout components should not be in front of other objects in the room, i.e., layout components located in front of other objects in the room are not within tolerance. In paragraph [0107], Stekovic teaches that invalid depth values are discarded from all calculations; Stekovic further teaches iterative layout refinement to meet requirements; Stekovic furthermore teaches that invalid depth values are not within tolerance. In paragraph [0111], Stekovic teaches discarding at least one plane of the plurality of planes that belongs to at least one class other than the subset of one or more classes. In paragraph [0117], Stekovic teaches a greater distance.
In paragraph [0122], Stekovic teaches adjusting the location of the 3D model from the first location and the first pose to the second location and the second pose in an output image. In paragraph [0123], Stekovic teaches that the location and the property of the three-dimensional model are adjusted based on semantic information; Stekovic further teaches that the range of possible movements and the stretched and compressed amounts of a boundary constitute boundary tolerance.
Stekovic further discloses “each boundary tolerance data creating a plane buffer that extends a corresponding boundary estimate by a distance”. For example, in Figs. 11A-C and paragraph [0081], Stekovic teaches that the boundary in the room layout extends in the Z direction by a room depth distance, as illustrated in Figs. 11A-C. In paragraph [0123], Stekovic teaches adjusting the property of the selected three-dimensional model; Stekovic further teaches a buffer for a range of possible movements and a buffer for a stretched amount; Stekovic furthermore teaches manipulating the 3D model, where the stretched 3D model extends the plane boundaries by a distance.
Stekovic furthermore discloses “locating an intersection of the first plane segment and the second plane segment using the boundary estimate data and the boundary tolerance data”. For example, in paragraph [0053], Stekovic teaches that the plane intersections represent vertices of candidate polygons for the room layout. In paragraph [0068], Stekovic teaches computing the intersections of the planes. In Fig. 6B and paragraph [0078], Stekovic teaches that an intersection of three different planes provides a candidate layout vertex or corner; Stekovic further teaches that an intersection of two different planes provides a candidate layout edge. In paragraph [0123], Stekovic teaches a range of possible movements; Stekovic further teaches manipulating the 3D model, and thereby the intersections of its planes, by some amount, such as by being stretched or compressed, among others.
In addition, Stekovic discloses “constructing a 3D layout model that includes at least a boundary segment connecting the first plane segment and the second plane segment at the intersection”. For example, in Fig. 1 and paragraph [0052], Stekovic teaches generating an estimated 3D layout of the room;
[image: media_image1.png (greyscale)]
In paragraph [0053], Stekovic teaches determining the 3D layout of the environment based on the polygons. In paragraph [0054], Stekovic teaches reconstructing the 3D layout of a room, including walls, floors, and ceilings, from a single perspective view; Stekovic further teaches generating the 3D layout using a color image. In Fig. 11C and paragraph [0081], Stekovic teaches a room layout; Stekovic further teaches a resulting 3D reconstructed layout;
[image: media_image2.png (greyscale)]
In Fig. 12C and paragraph [0081], Stekovic teaches a resulting 3D reconstructed layout. In Fig. 14 and paragraph [0114], Stekovic teaches determining a three-dimensional layout of the environment based on the one or more polygons. In paragraph [0120], Stekovic teaches generating an output image based on the three-dimensional layout of the environment.
Lin discloses “a predetermined value”. For example, in paragraph [0006], Lin teaches that the error of the target 3D parameter and the training 3D parameter is less than a preset threshold value. In Fig. 4 and paragraph [0053], Lin teaches that when the error of the target 3D parameter θ₃ and the training 3D parameter θ₂ is less than a preset threshold value, the second training phase ends and the target 3D encoder 152 acts as the 3D encoder 150. In paragraph [0062], Lin teaches estimating the wall-floor, the wall-ceiling, and the wall-wall boundaries directly.
Regarding claim 1, the applicant also argues that Stekovic does not construct the 3D model by using the “boundary estimate data” and the “boundary tolerance data”. The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons.
Claim 1 does not recite “constructing the 3D model by using the ‘boundary estimate data’ and the ‘boundary tolerance data’”.
What is claimed is: “constructing a 3D layout model that includes at least a boundary segment connecting the first plane segment and the second plane segment at the intersection”.
Stekovic discloses “constructing a 3D layout model that includes at least a boundary segment connecting the first plane segment and the second plane segment at the intersection”. For example, in Fig. 1 and paragraph [0052], Stekovic teaches generating an estimated 3D layout of the room;
[image: media_image1.png (greyscale)]
In paragraph [0053], Stekovic teaches determining the 3D layout of the environment based on the polygons. In paragraph [0054], Stekovic teaches reconstructing the 3D layout of a room, including walls, floors, and ceilings, from a single perspective view; Stekovic further teaches generating the 3D layout using a color image. In Fig. 11C and paragraph [0081], Stekovic teaches a room layout; Stekovic further teaches a resulting 3D reconstructed layout;
[image: media_image2.png (greyscale)]
In Fig. 12C and paragraph [0081], Stekovic teaches a resulting 3D reconstructed layout. In Fig. 14 and paragraph [0114], Stekovic teaches determining a three-dimensional layout of the environment based on the one or more polygons. In paragraph [0120], Stekovic teaches generating an output image based on the three-dimensional layout of the environment.
Claims 8 and 15 contain limitations similar to those recited in claim 1. Therefore, claims 8 and 15 are not allowable for the same reasons as discussed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-9, 11-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Stekovic (US 20210150805 A1) in view of Lin (US 20200211284 A1).
Regarding claim 1, Stekovic discloses a computer-implemented method comprising (Fig. 1; [0052]: methods, systems, apparatuses, and computer-readable media perform three-dimensional (3D) layout estimation from one or more images; generate an estimated 3D layout of the room;
[image: media_image1.png (greyscale)]
):
receiving a digital image, the digital image comprising two-dimensional data (Fig. 1; [0052]: receive a single input image 102 of a room; [0066]: a digital camera; Fig. 14; [0109]: an input image of an environment; [0124]: a camera captures images);
generating instance segmentation data using the digital image ([0053]: segment planes of an environment depicted in an input image; segments include the floor, ceiling, walls, and other objects in an environment; [0070]: perform semantic segmentation; Fig. 6B; [0075]: generate a semantic segmentation; Fig. 8; [0076]: the classes and segmentations include a class for floor, a class for wall, and a class for ceiling), the instance segmentation data including segmentation masks identifying architectural elements in the digital image ([0053]: define a structure by one or more floors, ceilings, walls, and other objects; segments include the floor, ceiling, walls, and other objects in an environment; Fig. 6B; [0075]: generate a semantic segmentation; Fig. 7; [0075]: the floor is illustrated with a hatched pattern 704, a first side wall is shown with a hatched pattern 706, a back wall is shown with a hatched pattern 708; mask;
[image: media_image3.png (greyscale)]
; Fig. 8; [0076]: the classes and segmentations includes a class for floor, a class for wall, and a class for ceiling);
generating depth data using the digital image ([0072]: obtain depth information; the depth information is inferred from a single color image using machine learning techniques; Fig. 6B; [0077]: the depth is estimated and generated using a neural network);
generating a set of planes, each plane being generated using the depth data of a corresponding segmentation mask ([0053]: keep planes corresponding to the floor, ceiling, walls, and other objects in an environment; [0068]: detect planes in an environment; the parameter determination engine 634 uses depth information to determine 3D parameters of the planes for the layout; Fig. 7; [0075]: the floor is illustrated with a hatched pattern 704, a first side wall is shown with a hatched pattern 706, a back wall is shown with a hatched pattern 708; [0112]: the normal vector for a plane is represented by a vector that is orthogonal to the plane; [0113]: determine the one or more three-dimensional parameters of the one or more planes using the depth information), the set of planes including at least a first plane and a second plane ([0068]: the planes include a first plane and a second plane; [0072]: the layout planes include a first plane and a second plane; [0073]: the layout planes; [0112]: each plane of the one or more planes);
generating boundary estimate data for the set of planes using corresponding boundary data of the segmentation masks ([0053]: identify corners, edges and boundaries based on the intersections of the planes; calculate 3D parameters for the remaining planes; [0060]: uses predefined room types to estimate the layout edges; [0068]: The corners and the boundaries are recovered by computing the intersections of the planes; Fig. 7; [0075]: the floor is illustrated with a hatched pattern 704, a first side wall is shown with a hatched pattern 706, a back wall is shown with a hatched pattern 708;
[image: media_image3.png (greyscale)]
; [0108]: semantic boundaries, and relations between the planes);
generating a set of plane segments by bounding the set of planes using the boundary estimate data ([0053]: identify corners, edges and boundaries based on the intersections of the planes; determine polygons based on the corners and edges; [0062]: estimate the wall-floor, the wall-ceiling, and the wall-wall boundaries directly; [0068]: the corners and the boundaries are recovered by computing the intersections of the planes), the set of plane segments include a first plane segment corresponding to a bounding of the first plane and a second plane segment corresponding to a bounding of the second plane ([0062]: estimate the wall-floor, the wall-ceiling, and the wall-wall boundaries directly; there is only one boundary per image column for the ceiling and the floor; Fig. 7; [0075]: the floor is illustrated with a hatched pattern 704, a first side wall is shown with a hatched pattern 706, a back wall is shown with a hatched pattern 708;
[image: media_image3.png (greyscale)]
; [0108]: semantic boundaries, and relations between the planes),
generating boundary tolerance data for each boundary estimate ([0053]: keep planes corresponding to the floor, ceiling, walls, and/or other objects in an environment, while planes corresponding to other objects or components are discarded; identify corners, edges and boundaries based on the intersections of the planes; find an optimal set of layout polygons for the image; [0079]: this process is iterated until a layout is determined; [0103]: iterative layout refinement is performed by the layout estimation system 600; [0107]: invalid depth values are discarded from all calculations; iterative layout refinement; [0117]: a greater distance; [0122]: adjust the location of the 3D model from the first location and from the first pose to the second location and to the second pose in an output image; [0123]: the location, and the property of the three-dimensional model are adjusted based on semantic information; the range of possible movements and the stretched and compressed amounts of a boundary are the boundary tolerance), each boundary tolerance data creating a plane buffer that extends a corresponding boundary estimate by a distance (Fig. 11A-C; [0081]: the boundary in the room layout extends in the Z direction by a room depth distance as illustrated in Fig. 11A-C; [0123]: adjust the property of the selected three-dimensional model; a buffer for a range of possible movements; manipulate the 3D model; the stretched 3D model extends the plane boundaries by a distance);
locating an intersection of the first plane segment and the second plane segment using the boundary estimate data and the boundary tolerance data ([0053]: the plane intersections represent vertices of candidate polygons for the room layout; [0068]: compute the intersections of the planes; Fig. 6B; [0078]: intersection of three different planes provide a candidate layout vertex or corner; an intersection of two different planes provides a candidate layout edge.); and
constructing a 3D layout model that includes at least a boundary segment connecting the first plane segment and the second plane segment at the intersection (Fig. 1; [0052]: generate an estimated 3D layout of the room;
[image: media_image1.png (greyscale)]
; [0053]: determine the 3D layout of the environment based on the polygons; [0054]: reconstruct the 3D layout of a room, including walls, floors, and ceilings from a single perspective view; generate the 3D layout using a color image; Fig. 11C; [0081]: room layout; a resulting 3D reconstructed layout;
[image: media_image2.png (greyscale)]
; Fig. 12C; [0081]: a resulting 3D reconstructed layout; Fig. 14; [0114]: determine a three-dimensional layout of the environment based on the one or more polygons; [0120]: generate an output image based on the three-dimensional layout of the environment).
Stekovic fails to explicitly disclose a predetermined value.
In the same field of endeavor, Lin teaches:
a predetermined value ([0006]: the error of the target 3D parameter and the training 3D parameter is less than a preset threshold value; Fig. 4; [0053]: when the error of the target 3D parameter θ₃ and the training 3D parameter θ₂ is less than a preset threshold value, the second training phase ends and the target 3D encoder 152 acts as the 3D encoder 150; [0062]: estimates the wall-floor, the wall-ceiling, and the wall-wall boundaries directly).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stekovic to include a predetermined value as taught by Lin. The motivation for doing so would have been to estimate the wall-floor, the wall-ceiling, and the wall-wall boundaries directly, and to improve the results not only quantitatively but also qualitatively, as taught by Lin in paragraphs [0062] and [0093].
Regarding claim 2, Stekovic in view of Lin discloses the computer-implemented method of claim 1, wherein the depth data is generated via a convolutional neural network (CNN) using the digital image (Stekovic; [0070]: the machine learning system 632 includes one or more CNNs; [0072]: obtain depth information; the depth information is inferred from a single color image using machine learning techniques; Fig. 6B; [0077]: the depth is estimated using a neural network).
Regarding claim 4, Stekovic in view of Lin discloses the computer-implemented method of claim 1, wherein:
the set of planes further include at least a third plane (Stekovic; [0053]: one or more floors, ceilings, walls include a third plane; [0054]: a room includes walls, floors, and ceilings; [0068]: the planes include a first plane, a second plane, and a third plane; [0072-0073]; Fig. 7; [0075]: the floor is illustrated with a hatched pattern 704, a first side wall is shown with a hatched pattern 706, a back wall is shown with a hatched pattern 708;
[image: media_image3.png (greyscale)]
; [0112]: each plane of the one or more planes; a third plane); and
the segmentation masks identify a wall, a floor, or a ceiling (recited in the alternative, so only one need be taught; Stekovic; [0053]: one or more floors, ceilings, walls; [0054]: a room includes walls, floors, and ceilings; Fig. 6B; [0075]: generate a semantic segmentation; Fig. 7; [0075]: the floor is illustrated with a hatched pattern 704, a first side wall is shown with a hatched pattern 706, a back wall is shown with a hatched pattern 708; mask;
[image: media_image3.png (greyscale)]
; Fig. 8; [0076]: the classes and segmentations include a class for floor, a class for wall, and a class for ceiling).
Regarding claim 5, Stekovic in view of Lin discloses the computer-implemented method of claim 1, further comprising:
generating measurement data based on the 3D layout model (Lin; [0105]: estimate the 3D cuboid representation of the spatial layout for the indoor scene; the indoor navigation and localization generate measurements; the virtual object arrangement in the rooms contains measurements; [0106]: output the 3D indoor scene image),
wherein the measurement data indicates a dimension between a first locus on the first plane and a second locus on the second plane (Lin; Fig. 11; [0099]: the re-projected results; [0105]: estimate the 3D cuboid representation of the spatial layout for the indoor scene; the indoor navigation and localization generate measurements; the virtual object arrangement in the rooms contains measurements; [0106]: output the 3D indoor scene image).
The same motivation as applied to claim 1 applies here.
Regarding claim 6, Stekovic in view of Lin discloses the computer-implemented method of claim 1, further comprising:
performing an action using the 3D layout model (Stekovic; [0122]: manipulate the three-dimensional model, and adjust at least one of a pose, a location, and a property of the three-dimensional model in an output image based on the user input; the process 1400 adjusts the location of the 3D model from the first location and from the first pose to the second location and to the second pose in an output image; [0123]),
wherein the action includes outputting the 3D layout model to an input/output device or controlling an actuator using the 3D layout model (recited in the alternative, so only one need be taught; Stekovic; [0120]: generate an output image based on the three-dimensional layout of the environment; [0122]: the process 1400 adjusts the location of the 3D model from the first location and from the first pose to the second location and to the second pose in an output image; [0123]: allow a user to interact with the 3D model through a user interface).
Regarding claim 7, Stekovic in view of Lin discloses the computer-implemented method of claim 1, further comprising:
receiving another 3D layout model that is generated based on another digital image (Lin; [0014]: the input picture is provided by a camera, the camera has a camera intrinsic matrix; [0015]: the camera location of a camera outputs the output image; [0045]: the layout shown on image space is the projected cube from 3D space with the corresponding transformations on cuboid relative to the camera pose; [0069]: camera pose; Fig. 11; [0099]: receive multiple re-projected results in FIG. 11);
generating camera pose data by matching one or more segmentation masks of the digital image with one or more another segmentation masks of the another digital image (Lin; Fig. 2; [0041]: one is the real corner inside the room, i.e., inner corner; the other is the intersected points with the camera margins and shown on the borders of image; [0069]: the parameters for the camera pose are decomposed into translation vector T and rotation matrix R; [0070]: contain the extrinsic parameters for camera pose); and
generating a unified 3D layout model by aligning the 3D layout model with the another 3D layout model using the camera pose data (Stekovic; Fig. 6B; [0079]: detect missing edges from the differences in depth for the 3D layout estimate and the input image; the missing edges are added and aligned to the set of candidate edges, as shown in image 613;
[image: media_image4.png (greyscale)]
).
Regarding claim 8, Stekovic discloses a system (Fig. 1; [0052]: methods, systems, apparatuses, and computer-readable media perform three-dimensional (3D) layout estimation from one or more images; generate an estimated 3D layout of the room;
[image: media_image1.png (greyscale)]
) comprising:
one or more processors (Fig. 15; [0129]: one or more processors 1510);
one or more computer memory in data communication with the one or more processors, the one or more computer memory having computer readable data stored thereon, the computer readable data including instructions that, when executed by one or more processors, cause the one or more processors to perform a method ([0126]: the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations; [0127]: a plurality of instructions executable by one or more processors; Fig. 15; [0129]: one or more processors; Fig. 15; [0130]: one or more non-transitory storage devices), the method including
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 also applies to claim 8.
Regarding claim 9, Stekovic in view of Lin discloses the system of claim 8.
The remaining claim limitations are similar to those recited in claim 2. Therefore, the same rationale used to reject claim 2 also applies to claim 9.
Regarding claim 11, Stekovic in view of Lin discloses the system of claim 8.
The remaining claim limitations are similar to those recited in claim 4. Therefore, the same rationale used to reject claim 4 also applies to claim 11.
Regarding claim 12, Stekovic in view of Lin discloses the system of claim 8, wherein the method further comprises:
The remaining claim limitations are similar to those recited in claim 5. Therefore, the same rationale used to reject claim 5 also applies to claim 12.
Regarding claim 13, Stekovic in view of Lin discloses the system of claim 8, wherein the method further comprises:
The remaining claim limitations are similar to those recited in claim 6. Therefore, the same rationale used to reject claim 6 also applies to claim 13.
Regarding claim 14, Stekovic in view of Lin discloses the system of claim 8, wherein the method further comprises:
The remaining claim limitations are similar to those recited in claim 7. Therefore, the same rationale used to reject claim 7 also applies to claim 14.
Regarding claim 15, Stekovic discloses one or more non-transitory computer readable mediums having computer readable data stored thereon, the computer readable data including instructions that, when executed by one or more processors, cause the one or more processors to perform a method (Fig. 1; [0052]: methods, systems, apparatuses, and computer-readable media perform three-dimensional (3D) layout estimation from one or more images; generate an estimated 3D layout of the room;
[image: media_image1.png (greyscale)]
; [0126]: the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations; [0127]: a plurality of instructions executable by one or more processors; Fig. 15; [0129]: one or more processors; Fig. 15; [0130]: one or more non-transitory storage devices), the method comprising:
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 also applies to claim 15.
Regarding claim 16, Stekovic in view of Lin discloses the one or more non-transitory computer readable mediums of claim 15.
The remaining claim limitations are similar to those recited in claim 2. Therefore, the same rationale used to reject claim 2 also applies to claim 16.
Regarding claim 18, Stekovic in view of Lin discloses the one or more non-transitory computer readable mediums of claim 15, wherein:
The remaining claim limitations are similar to those recited in claim 4. Therefore, the same rationale used to reject claim 4 also applies to claim 18.
Regarding claim 19, Stekovic in view of Lin discloses the one or more non-transitory computer readable mediums of claim 15,
wherein the segmentation masks further identify a window or a door (recited in the alternative, so only one need be taught; Stekovic; Fig. 10; [0080]: door as illustrated in Fig. 10;
[image: media_image5.png (greyscale)]
[image: media_image6.png (greyscale)]
).
Stekovic in view of Lin further discloses wherein the segmentation masks further identify a window or a door (Lin; Fig. 5; [0098]: layout segmentation; window as illustrated in Fig. 5;
[image: media_image7.png (greyscale)]
).
Regarding claim 20, Stekovic in view of Lin discloses the one or more non-transitory computer readable mediums of claim 15, wherein the method further comprises:
The remaining claim limitations are similar to those recited in claim 5. Therefore, the same rationale used to reject claim 5 also applies to claim 20.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Stekovic (US 20210150805 A1) in view of Lin (US 20200211284 A1), and further in view of Shi (US 20200111251 A1).
Regarding claim 3, Stekovic in view of Lin discloses the computer-implemented method of claim 1, further comprising:
generating the depth data using depth sensor measurements in association with the digital image (Stekovic; [0020]: the depth information is obtained from one or more depth sensors; [0072]: the depth information is obtained from the one or more depth sensors).
Stekovic in view of Lin fails to explicitly disclose:
performing laser measurements via a laser range finder; and
the depth sensors are lasers.
In the same field of endeavor, Shi teaches:
performing laser measurements via a laser range finder ([0007]: the raw point cloud data is acquired by a laser scanner with respect to the indoor scene; [0041]: the initial 3D model is obtained, and finally, taking benefits of high quality of laser data and the distance between the wall surface-objects and the wall surface; [0049]: a laser scanner 13, preferably a backpack laser scanner, is used to acquire raw point cloud data for a processor 12 to compute and re-construct a 3D model; [0104]: utilize the point cloud data provided by the laser scanner 13 and perform high-precision 3D modeling for complex indoor scenes); and
the depth sensors are lasers ([0049]: a laser scanner 13, preferably a backpack laser scanner, is used to acquire raw point cloud data for a processor 12 to compute and re-construct a 3D model; [0085]: some useful information is obtained from the laser pulses emitted by the laser scanner 13; [0105]: utilize the point cloud data provided by the laser scanner 13 and perform high-precision 3D modeling for complex indoor scenes; [0106]: the raw point cloud data from the laser scanner 13 is reconstructed to a 3D indoor model of the indoor scene).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Stekovic in view of Lin to include performing laser measurements via a laser range finder, with the depth sensors being lasers, as taught by Shi. The motivation for doing so would have been to acquire the raw point cloud data by a laser scanner with respect to the indoor scene; to acquire raw point cloud data by a laser scanner for a processor 12 to compute and re-construct a 3D model; and to improve wall classification, as taught by Shi in paragraphs [0007], [0049], and [0100].
Regarding claim 10, Stekovic in view of Lin discloses the system of claim 8, further comprising:
The remaining claim limitations are similar to those recited in claim 3. Therefore, the same rationale used to reject claim 3 also applies to claim 10.
Regarding claim 17, Stekovic in view of Lin discloses the one or more non-transitory computer readable mediums of claim 15, wherein the method further comprises:
The remaining claim limitations are similar to those recited in claim 3. Therefore, the same rationale used to reject claim 3 also applies to claim 17.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun whose telephone number is (571)272-5630. The examiner can normally be reached 9:00AM-6:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAI TAO SUN/Primary Examiner, Art Unit 2616