DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of Claims
Claims 1, 2, 4-7, 9, 11-14, 16-21 and 23-25 are pending and are examined below.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which Applicant may become aware in the specification.
Claim Rejections - 35 USC § 101
35 USC 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 4-7, 9, 11-14, 16-21, 23 and 24 are rejected under 35 USC 101 because the claimed invention is directed to an abstract idea without significantly more. The rejected claims are shown below with formatting and annotations that will be referred to throughout the analysis. Abstract ideas are in bold, followed by the abstract idea grouping in brackets. Additional elements are underlined, followed by the category under which each additional element fails to integrate the abstract idea into a practical application, listed in brackets.
Claim 1
A method of localising a robot using a scale plan of a designated environment, comprising:
identifying reference features from the scale plan [mental process];
generating a predicted pose of the robot's current location on the scale plan based on the identified reference features [mental process];
scanning on-site reference features of the current location [mental process]; and
generating a map for the current location to localise the robot, based on the scanned on-site reference features of the current location and last N frames of historical scans of previous on-site reference features of a plurality of locations in the designated environment that the robot traversed through before reaching the current location [mental process];
the last N frames being fewer than total frames of the historical scans of the previous on-site reference features [mental process].
Claim 2
The method according to claim 1, further comprising:
calculating a weight for the predicted pose based on the generated map and the scale plan [mental process]; and/or
updating the predicted pose of the robot on the scale plan based on the calculated weight [mental process].
Claim 4
The method according to claim 2, further comprising converting the scale plan to a first gradient direction map and converting the generated map to a second gradient direction map for calculating the weight for the predicted pose, wherein the first gradient direction map includes a first gradient direction representing a respective reference normal direction of surface to each reference feature identified on the scale plan and the second gradient direction map includes a second gradient direction representing a respective on-site normal direction of a surface to each on-site reference feature scanned at the current location [mental process].
Claim 5
The method according to claim 2, further comprising converting the scale plan to a gradient magnitude map for calculating the weight for the predicted pose, wherein the gradient magnitude map includes a gradient of magnitude representing a respective distribution for each reference feature identified on the scale plan, with a centre of each identified reference feature carrying a higher magnitude value than an edge of the respective identified reference feature [mental process].
Claim 6
The method according to claim 5, wherein the predicted pose comprises a plurality of particles, each representing a potential location of the robot associated with a respective particle weight, wherein the particle weight for at least one of the plurality of particles is calculated to be proportional to the magnitude value of a corresponding cell in the gradient magnitude map [mental process].
Claim 7
The method according to claim 6, wherein calculating the weight for the predicted pose comprises calculating the particle weight for each of the plurality of particles [mental process].
Claim 9
The method according to claim 6, wherein the generated map comprises a sensed pose of the robot and the scanned on-site reference features of the current location comprise a fixed or immovable object of the designated environment, the particle weight for at least one of the plurality of particles is calculated to be inversely proportional to a distance between the sensed pose and the fixed or immovable object [mental process].
Claim 11
The method according to claim 1, wherein the scanned on-site reference features of the current location is stored as a latest frame of historical scans for generating the map when the robot traverses to a next location from the current location [insignificant extra-solution activity – storing data].
Claim 12
The method according to claim 1 wherein two neighbouring frames of the last N frames of historical scans share an overlap of on-site reference features [mental process].
Claim 13
A system for localising a robot using a scale plan of a designated environment, comprising:
a sensor module, configured to detect on-site reference features [field of use and technological environment]; and
a processor ["apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer], configured to:
identify reference features from the scale plan [mental process];
generate a predicted pose of the robot's current location on the scale plan based on the identified reference features [mental process];
scan on-site reference features of the current location [mental process]; and
generate a map for the current location to localise the robot, based on the scanned on-site reference features of the current location and last N frames of historical scans of previous on-site reference features of a plurality of locations in the designated environment that the robot traversed through before reaching the current location [mental process];
the last N frames being fewer than total frames of the historical scans of the previous on-site reference features [mental process].
Claims 14, 16, 17, 18, 19, 20, 21, 23 and 24
Claims 14, 16, 17, 18, 19, 20, 21, 23 and 24 recite subject matter similar to that of claims 2, 4, 5, 6, 7, 6, 9, 11 and 12, respectively, and are rejected under similar grounds.
Step 1
The claims are directed to a process, machine, manufacture, or composition of matter. The analysis proceeds to Step 2A, Prong I.
Step 2A, Prong I
The claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. See MPEP 2106.04(II)(A)(1) and MPEP 2106.04(a)-(c).
Examiner asserts that the foregoing bolded limitations constitute a mathematical concept, a certain method of organizing human activity, and/or a mental process because, under the broadest reasonable interpretation, the limitations can be performed in the human mind or by a human using pen and paper. Accordingly, the claims recite at least one abstract idea, and the analysis proceeds to Step 2A, Prong II.
Step 2A, Prong II
The claims are analyzed to determine whether the claims, as a whole, integrate the abstract idea(s) into a practical application. See MPEP 2106.04(II)(A)(2) and MPEP 2106.04(d)(2). It must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” See MPEP 2106.05(f)-(h).
The additional element(s) of the claim at issue do not integrate the abstract idea into a practical application. Further, looking at the additional element(s) as an ordered combination or as a whole, the additional element(s) add nothing that is not already present when looking at the element(s) individually. For instance, there is no indication that the additional element(s), when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception. See MPEP 2106.05. The analysis proceeds to step 2B.
Step 2B
Step 2B requires that any additional elements determined to be insignificant extra-solution activity in Step 2A be re-evaluated to determine whether they amount to more than what is well-understood, routine, and conventional in the field, which would result in the claim reciting an inventive concept (in other words, "significantly more" than the abstract idea). Such elements in the claims being considered are listed below, each followed by the support showing that the element is well-understood, routine, and conventional in the field (see MPEP 2106.05(d)(II) for more details):
wherein the scanned on-site reference features of the current location is stored as a latest frame of historical scans for generating the map when the robot traverses to a next location from the current location: storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
Thus, the claims fail to recite anything sufficient to amount to significantly more than the judicial exception.
Conclusion
Based on the analysis above, Examiner determines that claims 1, 2, 4-7, 9, 11-14, 16-21, 23 and 24 do not qualify as eligible subject matter, and the claims are rejected under 35 USC 101.
Examiner notes that claim 25 recites additional elements that do result in the claim integrating the abstract idea into a practical application of the exception and amounting to an inventive concept (i.e., "significantly more"). Amending the independent claims to include the subject matter of claim 25 and any intervening claims, if applicable, would be sufficient to overcome the rejection under 35 USC 101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 USC 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 13 and 14 are rejected under 35 USC 102(a)(1) as being anticipated by “Robust LiDAR-based Localization in Architectural Floor Plans,” 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), DOI:10.1109/IROS37595.2017, 24-28 Sept. 2017 (“Boniardi”).
Claim 1
Boniardi discloses a method of localising a robot using a scale plan of a designated environment (abstract), comprising:
identifying reference features from the scale plan (p. 3319 col. 1 floor plan image is processed to extract useful features, namely room plates and corners in doorways and passages, which are used as landmarks for localization);
generating a predicted pose of the robot's current location on the scale plan based on the identified reference features (p. 3320 col. 1 Assuming that a stable pose-graph anchored to the CAD drawing is available, we can adapt the MAP estimation in Eq. 1 to track the current robot pose without altering the underlying pose-graph and thus keeping the computational complexity bounded. Given the relative measurements (Δxt,tj)j at time t with respect to some previous poses in the trajectory (xtj)j, we can estimate the current robot pose);
scanning on-site reference features of the current location (p. 3319 Second, we introduce a method for scan-to-map-matching to obtain pose information relative to the floor plan... The goal of this work is to track the pose of a robot in an architectural floor plan using a 2D LiDAR sensor. Given a coarse estimate of the starting pose x0 in the reference frame of the floor plan and the current LiDAR reading Zt, we want to estimate the 2D pose xt of the robot in that reference frame.); and
generating a map for the current location to localise the robot, based on the scanned on-site reference features of the current location and last N frames of historical scans of previous on-site reference features of a plurality of locations in the designated environment that the robot traversed through before reaching the current location (p. 3319 Second, we introduce a method for scan-to-map-matching to obtain pose information relative to the floor plan... The goal of this work is to track the pose of a robot in an architectural floor plan using a 2D LiDAR sensor. Given a coarse estimate of the starting pose x0 in the reference frame of the floor plan and the current LiDAR reading Zt, we want to estimate the 2D pose xt of the robot in that reference frame., p. 3320 Given the relative measurements (Δxt,tj)j at time t with respect to some previous poses in the trajectory (xtj)j, we can estimate the current robot pose... In order to decide whether a pure localization or a full pose-graph optimization should be executed, we use the following simple heuristic based on the associations computed by the front-end: pure localization is performed at time t only if enough associations can be obtained in the vicinity of the robot, that is, if the number of relative measurements (Δxt,tj)j exceeds a threshold value Nloc. This approach efficiently maintains the number of vertices and edges of the pose-graph bounded without any complex and computationally expensive pose-graph sparsification procedure. The pseudo-code of the whole system is reported in Algorithm 1.);
the last N frames being fewer than total frames of the historical scans of the previous on-site reference features (p. 3319 Second, we introduce a method for scan-to-map-matching to obtain pose information relative to the floor plan... The goal of this work is to track the pose of a robot in an architectural floor plan using a 2D LiDAR sensor. Given a coarse estimate of the starting pose x0 in the reference frame of the floor plan and the current LiDAR reading Zt, we want to estimate the 2D pose xt of the robot in that reference frame., p. 3320 Given the relative measurements (Δxt,tj)j at time t with respect to some previous poses in the trajectory (xtj)j, we can estimate the current robot pose... In order to decide whether a pure localization or a full pose-graph optimization should be executed, we use the following simple heuristic based on the associations computed by the front-end: pure localization is performed at time t only if enough associations can be obtained in the vicinity of the robot, that is, if the number of relative measurements (Δxt,tj)j exceeds a threshold value Nloc. This approach efficiently maintains the number of vertices and edges of the pose-graph bounded without any complex and computationally expensive pose-graph sparsification procedure. The pseudo-code of the whole system is reported in Algorithm 1.).
Claim 2
The method according to claim 1, further comprising:
calculating a weight for the predicted pose based on the generated map and the scale plan; and/or updating the predicted pose of the robot on the scale plan based on the calculated weight (p. 3320 As already mentioned, the constraints related to the measurements must be balanced with the prior constraints if CAD drawings are metrically inaccurate (see Fig. 5). We can assume that inaccuracies act as bias terms β(xti,xtj)∈R3 in the relative measurements. As a consequence, the relative measurement are now distributed).
Claim 13
Boniardi discloses a system for localising a robot using a scale plan of a designated environment (abstract), comprising:
a sensor module, configured to detect on-site reference features (p. 3319 we adopt SLAM techniques and concepts from relative localization to fit a map onto the floor plan in order to overcome the lack of features in CAD drawings. We also localize with respect to previously acquired LiDAR measurements whenever the map provides a sufficiently complete representation of the environment...The goal of this work is to track the pose of a robot in an architectural floor plan using a 2D LiDAR sensor); and
a processor (p. 3320 As for the optimization problem in Eq. 2, we compute xˆt using a non-linear least squares optimization.).
The steps performed by the processor are identical to those of claim 1. See prior art rejection of claim 1 for details.
Claim 14
Claim 14 recites subject matter similar to that of claim 2 and is rejected under the same grounds.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 USC 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 11, 12 and 23-25 are rejected under 35 USC 103 as being unpatentable over Boniardi in view of US20230010105 (“Ha”).
Claim 11
Boniardi fails to disclose wherein the scanned on-site reference features of the current location is stored as a latest frame of historical scans for generating the map when the robot traverses to a next location from the current location. However, Boniardi does disclose scanning on-site reference features and the robot traversing to a next location. Furthermore, Ha teaches a system of localizing a robot (0001), including:
wherein the scanned on-site reference features of the current location is stored as a latest frame of historical scans for generating the map when the robot traverses to a next location from the current location (0011 a method of initializing a mobile robot includes: capturing, by a camera moving in an environment, a sequence of frames at respective locations within a portion of the environment, the sequence of frames includes a first frame and a second frame; capturing, by an inertial measurement unit, a sequence of inertial odometry data corresponding to the sequence of frames at the respective locations; storing in a queue, for each respective frame in the sequence of frames, a data record comprising information extracted from processing the respective frame and information from the inertial measurement unit; in accordance with a determination that the sequence of inertial odometry data satisfies a first criterion: calculating a first relative pose between the first frame and the second frame; and in accordance with a determination that a difference between the first relative pose and the information extracted from processing the respective frame satisfy a first threshold: generating an initial map of the portion of the environment based on the first data record and the second data record., 0085 The mobile robot also captures (704) by an inertial measurement unit, a sequence of inertial odometry data corresponding to the sequence of frames at the respective locations. The mobile robot then stores (706), in a queue, for each respective frame in the sequence of frames, a data record that includes information extracted from processing the respective frame and information from the inertial measurement unit.).
Boniardi and Ha both disclose systems of localizing a robot in an indoor environment. Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of Applicant's invention to modify the system in Boniardi to include the teaching of Ha with a reasonable expectation of success in order to store the frames in the same order they were captured to ease retrieval of information in the frames in the future.
Claim 12
Boniardi fails to explicitly disclose wherein two neighbouring frames of the last N frames of historical scans share an overlap of on-site reference features. However, Boniardi does disclose robot localization (abstract). Furthermore, Ha discloses a robot localization system (abstract), including:
wherein two neighbouring frames of the last N frames of historical scans share an overlap of on-site reference features (0040 For example, a map point stores information such as a 3D position in world coordinate system, a viewing direction, descriptors, max and min distances where observed, etc. For example, the keyframe 202a is connected to the keyframe 202b by the edge 204 because these two keyframes share certain map points (e.g., map points that correspond to the table 108 that are captured in both keyframes 202a and 202b). In another example, keyframe 202a and keyframe 202d are not directly connected by an edge, indicating that the overlap of the two keyframes is low (e.g., they are captured at locations in the environment 100 that are too far away, or pointing at different portions of the environment, etc.). In some embodiments, the graph 200 is a co-visibility graph. The co-visibility graph is a graph that includes keyframes as nodes and edges between keyframes indicating that which pairs of keyframes have at least a threshold number of common map points between them (e.g., are more likely to capture the same portion of the environment, or capture the same objects corresponding to a respective functionality, etc.).).
See prior art rejection of claim 11 for obviousness and reasons to combine.
Claims 23 and 24
Claims 23 and 24 recite subject matter similar to that of claims 11 and 12, respectively, and are rejected under the same grounds.
Claim 25
Boniardi discloses a robot (abstract), comprising:
a system according to claim 13 (see prior art rejection of claim 13).
Boniardi fails to explicitly disclose a controller configured to control an operation of the robot in a designated environment based on a localisation result provided by the system. However, Boniardi does disclose the robot traveling in an environment (abstract). Furthermore, Ha teaches localization of a robot, including:
a controller configured to control an operation of the robot in a designated environment based on a localisation result provided by the system (0002 Localization, place recognition, and environment understanding are key capabilities to enable a mobile robot to become a fully autonomous or semi-autonomous system in an environment. Simultaneous localization and mapping (SLAM) is a method that builds a map of an environment and simultaneously estimates the pose of a mobile robot (e.g., using the estimated pose of a camera) in the environment. SLAM algorithms allow the mobile robot to map out unknown environments and localize itself in the environment to carry out tasks such as path planning and obstacle avoidance., 0099, 0097).
See prior art rejection of claim 11 for obviousness and reasons to combine.
Allowable Subject Matter
Claims 4-7, 9 and 16-21 are rejected under 35 USC 101, but would be allowable if that rejection were overcome. The closest prior art of record, Boniardi and Ha, both disclose indoor localization systems for a robot, as discussed above. However, the aforementioned claims recite at least the following subject matter: converting the scale plan to a first gradient direction map and converting the generated map to a second gradient direction map for calculating the weight for the predicted pose, wherein the first gradient direction map includes a first gradient direction representing a respective reference normal direction of surface to each reference feature identified on the scale plan and the second gradient direction map includes a second gradient direction representing a respective on-site normal direction of a surface to each on-site reference feature scanned at the current location; and converting the scale plan to a gradient magnitude map for calculating the weight for the predicted pose, wherein the gradient magnitude map includes a gradient of magnitude representing a respective distribution for each reference feature identified on the scale plan, with a centre of each identified reference feature carrying a higher magnitude value than an edge of the respective identified reference feature. While relevant to the claims, the prior art of record does not teach or suggest these limitations, and therefore does not provide a sufficient basis for rejection of the claims under 35 USC 102 or 103. Examiner notes that an amendment to the claims resulting in a change of scope may require an updated search.
Contact Information
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892. Specifically, the following prior art is considered relevant to Applicant’s claims:
US20180111274 - METHOD AND SYSTEM FOR CONTROLLING INDOOR AUTONOMOUS ROBOT; and
US20150131871 - FLOOR PLAN SPACE DETECTION.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Examiner KRISHNAN RAMESH whose telephone number is (571)272-6407. The examiner can normally be reached Monday-Friday 8:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Flynn, can be reached at (571)272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KRISHNAN RAMESH/
Primary Examiner, Art Unit 3663