DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Examiner acknowledges the reply filed on 12/18/2025, in which claims 1, 6-8, 13-17, and 20 have been amended. Currently, claims 1-20 are pending in this application.
Based on this reply:
The drawing objections have been withdrawn.
The specification objection has been withdrawn.
The previous 35 U.S.C. 112 rejections have been withdrawn.
Response to Arguments
Applicant's arguments filed 12/18/2025 have been fully considered, but they are not persuasive. On pages 12-17, Applicant argued that the cited references do not teach the new amendments to claim 1; however, as outlined in the rejections below, the teachings of Lewis do correspond to the claimed limitation of determining that there is not at least one additional edge/corner of the room.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 8, and 15 recite the limitation “and receiving the second beam of light reflected from one of the edge and from the pair of surfaces”. The current phrasing renders the meaning of this limitation unclear. In particular, the limitation can plausibly be read in two ways: either “from one of [[the edge] and [the pair of surfaces]]” or “from [one of the (at least one) edge] and from [the pair of surfaces]”. The former represents an alternative between the edge and the pair of surfaces, while the latter requires both the “one of the (at least one) edge” and the “pair of surfaces”. As best understood by the examiner, the former interpretation is the intended one, as this amendment appears to rephrase and clarify the previous limitation, which was in the alternative form; thus, the second “from” of the current amendment was likely intended to be removed. Examination will proceed under this assumption.
Claims 2-7, 9-14, and 16-20 are further rejected due to claim dependency.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 8-12, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Zweigle et al. (US 20190325592 A1), hereinafter Zweigle, in view of Yuasa (US 20220026206 A1) and Lewis et al. (US 20200100066 A1), hereinafter Lewis.
Regarding claim 1, Zweigle teaches:
A system of generating a two-dimensional (2D) image of an environment ([0024-25] “The present invention relates to a device that includes a system having a 2D scanner that works cooperatively with an inertial measurement unit to generate an annotated two-dimensional map of an environment… Referring now to FIGS. 1-5, an embodiment of a system 30”), the system comprising:
a 2D scanner having a first light source, an image sensor, […] and a controller ([0030] “The controller 68 is coupled to a wall 70 of body 34. In an embodiment, the wall 70 is coupled to or integral with the handle 36. The controller 68 is electrically coupled to the 2D scanner 50, the 3D camera 60, a power source 72, an inertial measurement unit (IMU) 74, a laser line projector 76, and a haptic feedback device 77.”),
[…]
the controller being operable to determine a distance value to object points in the environment based at least in part on a beam of light emitted by the first light source and the receiving of the beam of light reflected from the object points ([0037] “Coupled to the controller 68 is the 2D scanner 50. The 2D scanner 50 measures 2D coordinates in a plane. In the exemplary embodiment, the scanning is performed by steering light within a plane to illuminate object points in the environment. The 2D scanner 50 collects the reflected (scattered) light from the object points to determine 2D coordinates of the object points in the 2D plane. In an embodiment, the 2D scanner 50 scans a spot of light over an angle while at the same time measuring an angle value and corresponding distance value to each of the illuminated object points.”);
at least one processor operably coupled to the 2D scanner, the at least one processor executing non-transitory executable instructions ([0029] “controller 68 which has one or more processors that is operable to perform the methods described herein.”) to execute a method comprising:
generating a plan view map of the environment ([0005] “the one or more processors being further responsive to generate a 2D image of the environment based at least in part in response to a signal from the selected first sensor or the at least one second sensor.”; [0062] “The method 120 then proceeds to block 164 where a 2D map 176 is generated of the scanned area as shown in FIG. 16.”);
[…]
detecting at least one edge including the edge, based at least in part on emitting a second beam of light from at least one of the first light source and the second light source, and receiving the second beam of light reflected from one of the edge and from the pair of surfaces ([0061] “The method 160 starts in block 162 where the facility or area is scanned to acquire scan data 170, such as that shown in FIG. 15. The scanning is performed by carrying the system 30 through the area to be scanned. The system 30 measures distances from the system 30 to an object, such as a wall for example, and also a pose of the system 30 in an embodiment the user interacts with the system 30 via actuator 38… the two dimensional locations of the measured points on the scanned objects (e.g. walls, doors, windows, cubicles, file cabinets etc.) may be determined.” Note that the “second beam of light” has no clearly distinguishing features from the first beam of light, and the scan of Zweigle can reasonably be split into two groups of points to meet the limitations of “the first beam” and “the second beam”. Further, the example scans of FIGS. 14-15 visibly include points detected from the corners.);
wherein the edge comprises at least one of an edge of a room within the environment and a corner of the room within the environment (FIG. 15; [0061] “Using the registration process desired herein, the two dimensional locations of the measured points on the scanned objects (e.g. walls, doors, windows, cubicles, file cabinets etc.) may be determined.” The corners of FIG. 15 are clearly contemplated as the intersection of two walls of a room.);
[…]
Zweigle does not teach:
a second light source
the second light source being configured to emit a visible light,
emitting light from the second light source towards an edge defined by at least a pair of surfaces;
determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on light emitted by the at least one of the first light source and the second light source and reflected by at least one of the object points in the environment;
defining the room on the plan view map based at least in part on the detecting one of the corner and the edge, in response to determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on the light emitted by the at least one of the first light source and the second light source and reflected by the at least one of the object points in the environment.
Yuasa, in the same field of endeavor, teaches:
a second light source,… the second light source being configured to emit a visible light ([0013] “Further, in the surveying instrument according to a preferred embodiment, each of the distance measuring light and the tracking light is an invisible light, the laser pointer light is a visible light”; [0091] “Further, the light emitter 56 projects, as a laser pointer light, a laser beam having a wavelength in a red visible light region concurrently with the above-described distance measurement operation and tracking operation… Here, since the laser pointer light is coaxial with the distance measuring light, an irradiating position of the distance measuring light coincides with an irradiating position of the laser pointer light.”),
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Zweigle with the visible laser pointer of Yuasa to give a visual indication of the pointing of the laser distance measurement.
The combination also thus teaches:
emitting light from the second light source towards an edge defined by at least a pair of surfaces (The light emitted towards the corners by the 2D scanner of Zweigle would be accompanied by the visible laser pointer light of Yuasa.);
The combination still does not teach:
determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on light emitted by the at least one of the first light source and the second light source and reflected by at least one of the object points in the environment;
defining the room on the plan view map based at least in part on the detecting one of the corner and the edge, in response to determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on the light emitted by the at least one of the first light source and the second light source and reflected by the at least one of the object points in the environment.
Lewis, in the same field of floor plan generation, teaches:
determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment (FIG. 3, block 48 shows a determination as to whether there are any more walls in the room. Given that Lewis determines corners based on the intersection of walls, and that the number of walls in a room directly corresponds to the number of corners, the determination of whether any walls remain is equivalent to determining whether any corners remain.),
based on light emitted by the at least one of the first light source and the second light source and reflected by at least one of the object points in the environment ([0029] “In a first example, the system can determine whether there are more walls to capture by promoting the user and soliciting a reply. In a second example, the system can have a predetermined number of walls threshold and determine whether a number of captured walls meets the predetermined threshold.” Note that this decision would be understood to be made based at least in part on what has already been measured.);
defining the room on the plan view map based at least in part on the detecting one of the corner and the edge, in response to determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on the light emitted by the at least one of the first light source and the second light source and reflected by the at least one of the object points in the environment ([0021] “The system described herein produces a dimensioned floor plan, room by room, using a user device, such as a mobile device. The system uses the user device's internal and external sensors to track its position in space and defines points in a coordinate space which define a wall. The wall points are collected one wall at a time until all the walls of a room have been defined. The intersecting points of the walls are used to define the shape and measurements of that room.”; [0030] “Once all the walls of the room have been defined, the system takes a line segment that defines each wall and projects 3D coordinates of the line segment into a 2D coordinate space. The system then calculates the intersecting points of the walls to determine the corners of a room. The calculated corner points are used to define and measure a length and a position of the real-world wall. FIG. 7 is an illustration showing the system determining the corners of a room.”).
While the system of Lewis acquires the data points in a different manner, the data itself is similar in nature, which is to say, location points on a wall, and thus it would have been obvious that the technique could be applied similarly to either data set.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have implemented the room-defining technique of Lewis in the system of Zweigle in view of Yuasa in order to create more readily interpretable maps.
Regarding claim 2, Zweigle in view of Yuasa and Lewis teach the system of claim 1, as described above, and further teach:
wherein the method further comprises associating the detected edge with a location on the plan view map (Lewis: [0021] “The intersecting points of the walls are used to define the shape and measurements of that room.”; FIGS. 11 and 12 show the map of the rooms.).
Regarding claim 3, Zweigle in view of Yuasa and Lewis teach the system of claim 1, as described above, and further teach:
wherein the method further comprises detecting a plurality of edges, and generating a polygon on the plan view map defined by the edges (Lewis: [0030] “Once all the walls of the room have been defined, the system takes a line segment that defines each wall and projects 3D coordinates of the line segment into a 2D coordinate space. The system then calculates the intersecting points of the walls to determine the corners of a room. The calculated corner points are used to define and measure a length and a position of the real-world wall.” The description of Lewis, along with FIGS. 11 and 12, conveys the formation of a room shape via a series of lines, thus a polygon.).
Regarding claim 4, Zweigle in view of Yuasa and Lewis teach the system of claim 3, as described above, and further teach:
wherein the detecting of the edge includes measuring a plurality of first points on a first surface of the pair of surfaces and measuring a plurality of second points on a second surface of the pair of surfaces (Zweigle: The descriptions of [0037-38] describe a scanning system which one of ordinary skill in the art would understand to capture multiple points per wall surface. Further supported by the plurality of points visibly conveyed in FIGS. 15-16.).
Regarding claim 5, Zweigle in view of Yuasa and Lewis teach the system of claim 4, as described above, and further teach:
wherein the edge is defined by a first line and a second line, the first line being defined by the plurality of first points, the second line being defined by the plurality of second points (Lewis: [0027] “Specifically, the system captures two points on the wall (a first wall point and a second wall point) and defines an infinite line”; [0030] “Once all the walls of the room have been defined, the system takes a line segment that defines each wall and projects 3D coordinates of the line segment into a 2D coordinate space. The system then calculates the intersecting points of the walls to determine the corners of a room.”).
Regarding claim 8, the method of claim 8 matches the scope of the system of claim 1 and is rejected for the same reasons.
Regarding claim 9, the method of claim 9 matches the scope of the system of claim 2 and is rejected for the same reasons.
Regarding claim 10, the method of claim 10 matches the scope of the system of claim 3 and is rejected for the same reasons.
Regarding claim 11, the method of claim 11 matches the scope of the system of claim 4 and is rejected for the same reasons.
Regarding claim 12, the method of claim 12 matches the scope of the system of claim 5 and is rejected for the same reasons.
Regarding claim 15, Zweigle teaches:
A system of generating a two-dimensional (2D) image of an environment ([0024-25] “The present invention relates to a device that includes a system having a 2D scanner that works cooperatively with an inertial measurement unit to generate an annotated two-dimensional map of an environment… Referring now to FIGS. 1-5, an embodiment of a system 30”), the system comprising:
at least one processor (FIG. 10, processor 78);
a 2D scanner sized and weighted to be carried by a single person, having a first light source, […], an image sensor ([0030] “The controller 68 is coupled to a wall 70 of body 34. In an embodiment, the wall 70 is coupled to or integral with the handle 36. The controller 68 is electrically coupled to the 2D scanner 50, the 3D camera 60, a power source 72, an inertial measurement unit (IMU) 74, a laser line projector 76, and a haptic feedback device 77.”; [0051] “In the exemplary embodiment, the system 30 is a handheld portable device that is sized and weighted to be carried by a single person during operation.”),
an inertial measurement unit having a first plurality of sensors ([0040] “Also coupled to the controller 86 is the IMU 74. The IMU 74 is a position/orientation sensor that may include accelerometers 94 (inclinometers), gyroscopes 96, a magnetometers or compass 98, and altimeters.”; FIG. 10),
wherein the first light source steers a beam of light within a first plane to illuminate object points in the environment, and the image sensor is arranged to receive light reflected from the object points ([0037] “Coupled to the controller 68 is the 2D scanner 50. The 2D scanner 50 measures 2D coordinates in a plane. In the exemplary embodiment, the scanning is performed by steering light within a plane to illuminate object points in the environment. The 2D scanner 50 collects the reflected (scattered) light from the object points to determine 2D coordinates of the object points in the 2D plane. In an embodiment, the 2D scanner 50 scans a spot of light over an angle while at the same time measuring an angle value and corresponding distance value to each of the illuminated object points.”);
a mobile computing device removably coupled to the 2D scanner, the mobile computing device having a second plurality of sensors ([0027] “The mobile device holder 41 is configured to securely couple a mobile device 43 to the housing 32… In an embodiment, the mobile device 43 is coupled to communicate with a controller 68 (FIG. 10).”; FIG. 10 shows multiple potential sensors within mobile device 43.);
wherein the at least one processor executes instructions ([0029] “controller 68 which has one or more processors that is operable to perform the methods described herein.”) that perform:
generating a plan view map of the environment ([0005] “the one or more processors being further responsive to generate a 2D image of the environment based at least in part in response to a signal from the selected first sensor or the at least one second sensor.”; [0062] “The method 120 then proceeds to block 164 where a 2D map 176 is generated of the scanned area as shown in FIG. 16.”);
[…]
detecting at least one edge including the edge, based at least in part on emitting a second beam of light from at least one of the first light source and the second light source, and receiving the second beam of light reflected from one of the edge and from the pair of surfaces ([0061] “The method 160 starts in block 162 where the facility or area is scanned to acquire scan data 170, such as that shown in FIG. 15. The scanning is performed by carrying the system 30 through the area to be scanned. The system 30 measures distances from the system 30 to an object, such as a wall for example, and also a pose of the system 30 in an embodiment the user interacts with the system 30 via actuator 38… the two dimensional locations of the measured points on the scanned objects (e.g. walls, doors, windows, cubicles, file cabinets etc.) may be determined.” Note that the “second beam of light” has no clearly distinguishing features from the first beam of light, and the scan of Zweigle can reasonably be split into two groups of points to meet the limitations of “the first beam” and “the second beam”. Further, the example scans of FIGS. 14-15 visibly include points detected from the corners.);
wherein the edge comprises at least one of an edge of a room within the environment and a corner of the room within the environment (FIG. 15; [0061] “Using the registration process desired herein, the two dimensional locations of the measured points on the scanned objects (e.g. walls, doors, windows, cubicles, file cabinets etc.) may be determined.” The corners of FIG. 15 are clearly contemplated as the intersection of two walls of a room.);
[…]
Zweigle does not teach:
a second light source
emitting light from the second light source towards an edge defined by at least a pair of surfaces;
determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on light emitted by the at least one of the first light source and the second light source and reflected by at least one of the object points in the environment;
defining the room on the plan view map based at least in part on the detecting one of the corner and the edge, in response to determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on the light emitted by the at least one of the first light source and the second light source and reflected by the at least one of the object points in the environment.
Yuasa, in the same field of endeavor, teaches:
a second light source ([0013] “Further, in the surveying instrument according to a preferred embodiment, each of the distance measuring light and the tracking light is an invisible light, the laser pointer light is a visible light”; [0091] “Further, the light emitter 56 projects, as a laser pointer light, a laser beam having a wavelength in a red visible light region concurrently with the above-described distance measurement operation and tracking operation… Here, since the laser pointer light is coaxial with the distance measuring light, an irradiating position of the distance measuring light coincides with an irradiating position of the laser pointer light.”),
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Zweigle with the visible laser pointer of Yuasa to give a visual indication of the pointing of the laser distance measurement.
The combination also thus teaches:
emitting light from the second light source towards an edge defined by at least a pair of surfaces (The light emitted towards the corners by the 2D scanner of Zweigle would be accompanied by the visible laser pointer light of Yuasa.);
The combination still does not teach:
determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on light emitted by the at least one of the first light source and the second light source and reflected by at least one of the object points in the environment;
defining the room on the plan view map based at least in part on the detecting one of the corner and the edge, in response to determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on the light emitted by the at least one of the first light source and the second light source and reflected by the at least one of the object points in the environment.
Lewis, in the same field of floor plan generation, teaches:
determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment (FIG. 3, block 48 shows a determination as to whether there are any more walls in the room. Given that Lewis determines corners based on the intersection of walls, and that the number of walls in a room directly corresponds to the number of corners, the determination of whether any walls remain is equivalent to determining whether any corners remain.),
based on light emitted by the at least one of the first light source and the second light source and reflected by at least one of the object points in the environment ([0029] “In a first example, the system can determine whether there are more walls to capture by promoting the user and soliciting a reply. In a second example, the system can have a predetermined number of walls threshold and determine whether a number of captured walls meets the predetermined threshold.” Note that this decision would be understood to be made based at least in part on what has already been measured.);
defining the room on the plan view map based at least in part on the detecting one of the corner and the edge, in response to determining that there is not at least one additional edge of the room within the environment, and that there is not at least one additional corner of the room within the environment, based on the light emitted by the at least one of the first light source and the second light source and reflected by the at least one of the object points in the environment ([0021] “The system described herein produces a dimensioned floor plan, room by room, using a user device, such as a mobile device. The system uses the user device's internal and external sensors to track its position in space and defines points in a coordinate space which define a wall. The wall points are collected one wall at a time until all the walls of a room have been defined. The intersecting points of the walls are used to define the shape and measurements of that room.”; [0030] “Once all the walls of the room have been defined, the system takes a line segment that defines each wall and projects 3D coordinates of the line segment into a 2D coordinate space. The system then calculates the intersecting points of the walls to determine the corners of a room. The calculated corner points are used to define and measure a length and a position of the real-world wall. FIG. 7 is an illustration showing the system determining the corners of a room.”).
While the system of Lewis acquires the data points in a different manner, the data itself is similar in nature, which is to say, location points on a wall, and thus it would have been obvious that the technique could be applied similarly to either data set.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have implemented the room-defining technique of Lewis in the system of Zweigle in view of Yuasa in order to create more readily interpretable maps.
Regarding claim 16, Zweigle in view of Yuasa and Lewis teach the system of claim 15, as described above, and further teach:
wherein the at least one processor executes instructions that perform associating the detected edge with a location on the plan view map (Lewis: [0021] “The intersecting points of the walls are used to define the shape and measurements of that room.”; FIGS. 11 and 12 show the map of the rooms.).
Regarding claim 17, Zweigle in view of Yuasa and Lewis teach the system of claim 15, as described above, and further teach:
wherein the at least one processor executes instructions that perform detecting a plurality of edges, and generating a polygon on the plan view map defined by the edges (Lewis: [0030] “Once all the walls of the room have been defined, the system takes a line segment that defines each wall and projects 3D coordinates of the line segment into a 2D coordinate space. The system then calculates the intersecting points of the walls to determine the corners of a room. The calculated corner points are used to define and measure a length and a position of the real-world wall.” The description of Lewis, along with FIGS. 11 and 12, conveys the formation of a room shape via a series of lines, thus a polygon.).
Regarding claim 18, Zweigle in view of Yuasa and Lewis teach the system of claim 17, as described above, and further teach:
wherein the detecting of the edge includes measuring a plurality of first points on a first surface of the pair of surfaces and measuring a plurality of second points on a second surface of the pair of surfaces (Zweigle: The descriptions of [0037-38] describe a scanning system which one of ordinary skill in the art would understand to capture multiple points per wall surface. Further supported by the plurality of points visibly conveyed in FIGS. 15-16.).
Regarding claim 19, Zweigle in view of Yuasa and Lewis teach the system of claim 18, as described above, and further teach:
wherein the edge is defined by a first line and a second line, the first line being defined by the plurality of first points, the second line being defined by the plurality of second points (Lewis: [0027] “Specifically, the system captures two points on the wall (a first wall point and a second wall point) and defines an infinite line”; [0030] “Once all the walls of the room have been defined, the system takes a line segment that defines each wall and projects 3D coordinates of the line segment into a 2D coordinate space. The system then calculates the intersecting points of the walls to determine the corners of a room.”).
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zweigle in view of Yuasa and Lewis, and further in view of Barber et al. (R. Barber, M. Mata, M. J. L. Boada, J. M. Armingol and M. A. Salichs, "A perception system based on laser information for mobile robot topologic navigation," IEEE 2002 28th Annual Conference of the Industrial Electronics Society. IECON 02, Seville, Spain, 2002, pp. 2779-2784 vol.4), hereinafter Barber.
Regarding claim 6, Zweigle in view of Yuasa and Lewis teach the system of claim 1, as described above, but do not explicitly teach:
wherein the method further comprises: detecting a plurality of edges based at least in part on emitting the second beam of light from the at least one of the first light source and the second light source and receiving the reflected second beam of light; and
defining a doorway on the plan view map based on the plurality of edges.
Barber, in the same field of endeavor, teaches:
wherein the method further comprises: detecting a plurality of edges based at least in part on emitting the second beam of light from the at least one of the first light source and the second light source and receiving the reflected second beam of light (Pg. 2780, Section III, A: “To detect a door, two segments which belong to the same straight line and which are separated a distance equal to the door width (this distance will be of 80 cm in our experiment) will be needed.” The ends of these two segments adjacent to the space correspond to the plurality of edges. See Fig. 1.); and
defining a doorway on the plan view map based on the plurality of edges (Pg. 2780, Section III, A: “In figure 1 the segment disposition, for the system to detect the door event, is shown.”; Pg. 2783, Section V: “The next step is to calculate the straight lines present in the data sample by the Hough transform. In figure 9 the segments calculated from the straight lines are represented… From the segments already calculated, the detection techniques developed are applied.”; See also Figs. 9 and 10 for visual labelling of the door on the 2D map.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Zweigle in view of Yuasa and Lewis with the door detection technique of Barber for the benefit of adding semantic information, such as doorway locations, to the generated maps.
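For illustration of Barber's teaching as applied above (two collinear wall segments whose facing ends are separated by approximately the door width, 80 cm in Barber's experiment), the detection step can be sketched as follows. This sketch is the examiner's illustration only, is not part of the record; the tolerance value and all names are illustrative assumptions, not from the reference:

```python
DOOR_WIDTH = 0.80   # metres, per Barber's experiment (Pg. 2780, Section III, A)
TOLERANCE = 0.10    # assumed matching tolerance; not specified by Barber

def find_doorways(segments):
    # segments: (start, end) intervals of wall segments already known to lie
    # on the same straight line, e.g. fit via the Hough transform (Section V).
    segs = sorted(segments)
    doors = []
    for (s1, e1), (s2, e2) in zip(segs, segs[1:]):
        gap = s2 - e1  # distance between facing segment ends
        if abs(gap - DOOR_WIDTH) <= TOLERANCE:
            # the two facing ends correspond to the claimed plurality of edges
            doors.append((e1, s2))
    return doors

# Two wall segments with a 0.8 m gap between them -> one doorway detected
doorways = find_doorways([(0.0, 2.0), (2.8, 5.0)])
```

The endpoints returned for each detected gap are the edge positions from which the doorway is then defined on the 2D map, corresponding to the labelled doors of Barber's Figs. 9 and 10.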
Regarding claim 13, the limitations of method claim 13 match the scope of system claim 6 and are rejected for the same reasons.
Regarding claim 20, Zweigle in view of Yuasa and Lewis teach the system of claim 15, as described above, but do not explicitly teach:
wherein the at least one processor executes instructions that perform: detecting a plurality of edges based at least in part on emitting the second beam of light from the at least one of the first light source and the second light source, and receiving the reflected second beam of light; and
defining a doorway on the plan view map based on the plurality of edges.
Barber, in the same field of endeavor, teaches:
wherein the at least one processor executes instructions that perform: detecting a plurality of edges based at least in part on emitting the second beam of light from the at least one of the first light source and the second light source, and receiving the reflected second beam of light (Pg. 2780, Section III, A: “To detect a door, two segments which belong to the same straight line and which are separated a distance equal to the door width (this distance will be of 80 cm in our experiment) will be needed.” The ends of these two segments adjacent to the space correspond to the plurality of edges. See Fig. 1.); and
defining a doorway on the plan view map based on the plurality of edges (Pg. 2780, Section III, A: “In figure 1 the segment disposition, for the system to detect the door event, is shown.”; Pg. 2783, Section V: “The next step is to calculate the straight lines present in the data sample by the Hough transform. In figure 9 the segments calculated from the straight lines are represented… From the segments already calculated, the detection techniques developed are applied.”; See also Figs. 9 and 10 for visual labelling of the door on the 2D map.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Zweigle in view of Yuasa and Lewis with the door detection technique of Barber for the benefit of adding semantic information, such as doorway locations, to the generated maps.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zweigle in view of Yuasa and Lewis and further in view of Chao et al. (US 20130267260 A1), hereinafter Chao.
Regarding claim 7, Zweigle in view of Yuasa and Lewis teach the system of claim 1, as described above, but do not explicitly teach:
wherein the method further comprises: displaying an icon on the plan view map in an area occupied by an operator of the 2D scanner.
Chao, in the related field of updating and modifying location information, teaches:
wherein the method further comprises: displaying an icon on the plan view map in an area occupied by an operator of the 2D scanner. ([0049] “In particular implementations, MS 43 may receive positioning assistance data for indoor positioning operations from servers 40, 50 or 55… Other assistance data received by the MS may include, for example, local maps of indoor areas for display or to aid in navigation… By obtaining and displaying such a map, an MS may overlay a current location of the MS (and user) over the displayed map to provide the user with additional context.”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Zweigle in view of Yuasa and Lewis with the user-location display of Chao for the benefit of aiding the operator's navigation and providing additional context.
Regarding claim 14, the limitations of method claim 14 match the scope of system claim 7 and are rejected for the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
ElKaissi et al. (M. ElKaissi, M. Elgamel, M. Bayoumi and B. Zavidovique, "SEDLRF: A New Door Detection System for Topological Maps," 2006 International Workshop on Computer Architecture for Machine Perception and Sensing, Montreal, QC, Canada, 2006, pp. 75-80) teaches a door detection system for 2D line scanning systems.
Gowda et al. (US 20220292549 A1) defines a room and creates a polygonal representation based on defined corners of the room.
Roland et al. (US 10060730 B2) teaches a handheld mobile scanning device which identifies room corners based on the intersection of lines fit to data points scanned across the walls of the room.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN C. GRANT whose telephone number is (571)272-0402. The examiner can normally be reached Monday - Friday, 9:30 am - 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yuqing Xiao can be reached at (571)270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEAN C. GRANT/Examiner, Art Unit 3645
/YUQING XIAO/Supervisory Patent Examiner, Art Unit 3645