Prosecution Insights
Last updated: April 19, 2026
Application No. 18/712,674

METHOD FOR ROBOTS TO IMPROVE THE ACCURACY OF OBSTACLE LABELING

Non-Final OA: §101, §103, §112
Filed: May 22, 2024
Examiner: YANOSKA, JOSEPH ANDERSON
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Amicro Semiconductor Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 38% (At Risk)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 38% (grants only 38% of cases; 10 granted / 26 resolved; -13.5% vs TC avg)
Interview Lift: +60.1% among resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 34 currently pending
Career History: 60 total applications across all art units

Statute-Specific Performance

§101: 28.5% (-11.5% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 15.6% (-24.4% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 26 resolved cases
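As a sanity check, the headline examiner figures above reduce to simple ratios. A minimal sketch (variable names are illustrative; the "implied TC average" is back-derived from the stated -13.5% delta, not a published number):

```python
# Sketch: derive the headline examiner stats from the raw counts above.
# The 10/26 counts and the -13.5 point delta come from the cards; the
# implied Tech Center average is back-computed, not an official figure.

granted, resolved = 10, 26
career_allow_rate = granted / resolved            # 0.3846..., shown as 38%

delta_vs_tc = -13.5                               # percentage points (from the card)
implied_tc_avg = career_allow_rate * 100 - delta_vs_tc

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```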

Office Action

Rejections: §101, §103, §112
Detailed Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This is a non-final Office Action on the merits. Claims 1-10 are currently pending and are addressed below.

Priority

Acknowledgment is made of applicant's claim of priority to Chinese Application CN202111382649.0, filed November 22, 2021.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/01/2025 is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

With regard to claim 1, the term “making two positionings” is indefinite for failing to distinctly claim the subject matter the applicant regards as the invention, as it is unclear whether “making” is directed to the moving of the robot, the deciding of where to move, the calculation of a future movement, etc. It is also unclear what is doing the “making” in the claim. Further, the term “coverage area” is indefinite, as it is unclear what the term is directed to. The Examiner was unable to discern whether this is an area physically covered by the robot or an area scanned by the robot, or how the coverage area relates to the robot and the grid map.

With regard to claim 3, the term “point clouds participating in laser positioning” is indefinite, as it is unclear what the term means. For the sake of compact prosecution, the Examiner has construed the term to mean point clouds that are acquired via a laser positioning data-gathering method.

With regard to claim 5, the term “obtaining corresponding coefficients of rows and columns in the grid area according to the distance” is indefinite, as it is unclear what coefficient is being obtained and how a coefficient may relate to a row and a column. Further, the term “pixel value” is indefinite, as it is unclear what the generic definition of a “pixel value” may be.

With regard to claim 6, the term “deviating distances” is indefinite, as it is unclear what the term is intended to mean relating to a distance or how the deviating distances are obtained.

Claims 2-10, which depend from independent claim 1, are also rejected under 112, second paragraph, by virtue of their dependence on rejected claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims' subject matter eligibility follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”).

101 Analysis - With respect to Claim 1

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis - Step 1: Claim 1 is directed to a method, which falls within the statutory category of a process. Therefore, claim 1 is within at least one of the four statutory categories.

101 Analysis - Step 2A, Prong One: Regarding Prong One of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: (a) mathematical concepts, (b) certain methods of organizing human activity, and/or (c) mental processes. Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection.

Claim 1 recites, inter alia: “A method for a robot to improve the accuracy of an obstacle labeling, comprising: S1, making two positionings according to set moments, and then acquiring positioning poses of the two positionings on a grid map respectively at a first moment and a second moment; S2, defining a coverage area of the first moment and a coverage area of the second moment respectively according to positions of the two positionings at the first moment and the second moment, acquiring confidence coefficients of the two positionings, and processing the coverage area of the first moment and the coverage area of the second moment through the confidence coefficients; S3, interpolating the positioning poses at the first moment and the second moment, and constructing a closed graph according to the positioning poses at the first moment and the second moment, the pose interpolation and the processed coverage area of the first moment and the processed coverage area of the second moment; and S4, obtaining a grid occupied by the closed graph on the grid map and modifying the obstacle labeling according to the grid occupied by the closed graph on the grid map and the area of the closed graph.”

The examiner submits that the foregoing emphasized limitations constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, “making”, “defining”, “processing”, “interpolating”, “constructing” and “obtaining”, in the context of this claim, all encompass a person looking at available data and forming a simple judgment (determination, analysis, comparison, etc.) either manually or using pen and paper. Accordingly, the claim recites at least one abstract idea. The examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v.
Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas — the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). As drafted, the above claims, under their broadest reasonable interpretation, cover mental processes performed in the human mind (including an observation, evaluation, judgment, or opinion) that are merely completed via generic computer components. Accordingly, the claims recite an abstract idea.

Step 2A, Prong Two Analysis: Regarding Prong Two of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether each claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application”.

In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the emphasized portions continue to represent the “abstract idea”). Claim 1 recites, inter alia: “A method for a robot to improve the accuracy of an obstacle labeling, comprising: S1, making two positionings according to set moments, and then acquiring positioning poses of the two positionings on a grid map respectively at a first moment and a second moment; S2, defining a coverage area of the first moment and a coverage area of the second moment respectively according to positions of the two positionings at the first moment and the second moment, acquiring confidence coefficients of the two positionings, and processing the coverage area of the first moment and the coverage area of the second moment through the confidence coefficients; S3, interpolating the positioning poses at the first moment and the second moment, and constructing a closed graph according to the positioning poses at the first moment and the second moment, the pose interpolation and the processed coverage area of the first moment and the processed coverage area of the second moment; and S4, obtaining a grid occupied by the closed graph on the grid map and modifying the obstacle labeling according to the grid occupied by the closed graph on the grid map and the area of the closed graph.”

For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of “acquiring positioning poses…” and “acquiring confidence coefficients…”, these limitations merely describe the sending and receiving of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g). Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.
Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Step 2B Analysis: The claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, the additional element of using generic computer components to perform the abstract idea amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the act of collecting and displaying data amounts to no more than merely storing and displaying information and is thus extra-solution activity. The claims are not patent eligible.

Regarding dependent claims 2-10, no claim adds a limitation that introduces a practical application to the claimed invention; the dependent claims merely add further mental processes, mathematical concepts, and post-solution activities and are thus not patent eligible. Therefore, claims 1-10 are ineligible under 35 USC § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over Tomono et al. (JP WO2020035902 A1) in view of Wang et al. (US 20210402601 A1), hereafter referred to as Tomono and Wang, respectively.

Regarding Claim 1, Tomono teaches a method for a robot to improve the accuracy of an obstacle labeling (see at least Tomono [English Translation, pg.13 para.3]: based on the occupied grid map M created by the map creating unit 25, the moving body detecting unit 291 detects the moving body 100 by comparing the old map with the new map. A moving object detecting means for directly detecting the moving object may be provided... the provision of the moving body detecting means for directly detecting the moving body around the robot body 1 allows the moving body to be detected with high accuracy), comprising:

S1, making two positionings according to set moments, and then acquiring positioning poses of the two positionings on a grid map respectively at a first moment and a second moment (see at least Tomono [English Translation pg.3 para.1, pg.5 para.3, pg.5 para.5]: This mobile robot performs self-position estimation and environment map creation by SLAM technology, and the robot body 1 is autonomously driven by the traveling means 3 based on the map and traveling schedule… An occupancy grid map described later is used as this map... the free cell Cf1 by the first scan is shown in white. Similarly, FIGS. 6B and 6C show cell values obtained by the second scan and the third scan... the pose graph is composed of nodes that represent the position of the mobile robot and arcs that represent the relative position between the nodes). The disclosure of Tomono describes a robot moving throughout an environment that takes scans at multiple positionings as well as acquiring poses to be used in an occupancy grid map, which is analogous to making two positionings and acquiring poses on a grid map at a first moment and a second moment.

S2, defining a coverage area of the first moment and a coverage area of the second moment respectively according to positions of the two positionings at the first moment and the second moment, acquiring confidence coefficients of the two positionings, and processing the coverage area of the first moment and the coverage area of the second moment through the confidence coefficients (see at least Tomono [English Translation pg.5 para.3-4]: The cell value of the occupied lattice map M is managed with logarithmic odds, and the logarithmic odds are calculated by summing the cell values of a plurality of scans… The occupancy probability is calculated from this logarithmic odds... FIG. 6A shows the cell value of the occupied lattice map M when the control unit 2 acquires the first scan (360° scan) by the detection unit 4. Here, the free cell Cf1 by the first scan is shown in white. Similarly, FIGS. 6B and 6C show cell values obtained by the second scan and the third scan, and these free cells Cf2 and Cf3 are represented by intermediate gray and dark gray, respectively. In FIG. 6, the occupied cells Co are all shown in black, and the unobserved cells Cn are shown in light gray. In each scan, a plurality of laser beams (for example, 360 laser beams each at 1°) are irradiated, and a cell value is calculated in logarithmic odds as described above for a cell through which each laser beam passes). The disclosure of Tomono describes a robot that finds coverage areas for different positions using multiple scans at different locations and poses; the multiple scans are further used to calculate a probability of each square in the occupancy grid being occupied. The scans and cell values for odds are used together to determine which squares are occupied, which is analogous to the coverage area.

However, Tomono does not explicitly teach S3, interpolating the positioning poses at the first moment and the second moment, and constructing a closed graph according to the positioning poses at the first moment and the second moment, the pose interpolation and the processed coverage area of the first moment and the processed coverage area of the second moment; and S4, obtaining a grid occupied by the closed graph on the grid map and modifying the obstacle labeling according to the grid occupied by the closed graph on the grid map and the area of the closed graph.
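The log-odds bookkeeping Tomono is cited for above can be sketched as follows. This is the standard occupancy-grid formulation rather than code from the reference, and the per-scan increments are illustrative values:

```python
import math

def update_log_odds(cell: float, observed_occupied: bool) -> float:
    """Add one scan's evidence to a cell's accumulated log odds.
    The +0.85 / -0.4 increments are illustrative, not from Tomono."""
    return cell + (0.85 if observed_occupied else -0.4)

def occupancy_probability(cell: float) -> float:
    """Recover the occupancy probability from the summed log odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(cell))

cell = 0.0                       # unobserved cell: probability 0.5
for hit in (True, True, False):  # three scans, as in Tomono's FIGS. 6A-6C
    cell = update_log_odds(cell, hit)

print(round(occupancy_probability(cell), 3))  # net evidence leans occupied
```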
Wang, in the same field of endeavor, teaches S3, interpolating the positioning poses at the first moment and the second moment, and constructing a closed graph according to the positioning poses at the first moment and the second moment, the pose interpolation and the processed coverage area of the first moment and the processed coverage area of the second moment (see at least Wang [¶ 98-102, 54]: regionalizing a map to generate a global grid map… marking occupancies of grids in a global grid map based on a position of an obstacle, to determine an available grid… finding a shortest path from a grid corresponding to an initial position to a grid corresponding to a target position in the available grid using a graph search technique… interpolating a point between neighboring points on the shortest path to generate a collision-free global path… the goal for global grid map generation is to construct a graph based on the map). The disclosure of Wang describes a robot using an occupancy grid to generate a collision-free path. The disclosure further describes interpolating between neighboring points on the shortest path, the neighboring points being analogous to two positions at a first and second moment; a collision-free path is then created by connecting all points and interpolated points with a line, and a graph is constructed.

S4, obtaining a grid occupied by the closed graph on the grid map and modifying the obstacle labeling according to the grid occupied by the closed graph on the grid map and the area of the closed graph (see at least Wang [¶ 85, 102]: comparing the scanned point with an occupancy map to obtain a position of an obstacle… each grid is examined based on all pixels inside the grid. If any of the pixels is in “occupied” or “unknown” status, then the grid will be marked as “unavailable”. If all pixels are in “free” status, the grid will be marked as “available”. The global grid map updates with an obstacle detection loop. Detected obstacle points are mapped to corresponding grids. Each obstacle grid is set to “unavailable” status, and is held for a short period to ensure robustness).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Tomono to contain a system for interpolating the positioning poses at the first moment and the second moment, and constructing a closed graph according to the positioning poses at the first moment and the second moment, the pose interpolation and the processed coverage area of the first moment and the processed coverage area of the second moment, and obtaining a grid occupied by the closed graph on the grid map and modifying the obstacle labeling according to the grid occupied by the closed graph on the grid map and the area of the closed graph, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the speed at which a path without obstacles can be found, as described in Wang (see at least Wang [¶ 102]: The benefit to generate the global grid map is that it can speed up graph search to find a feasible global path).

Regarding Claim 2, Tomono in view of Wang teaches all limitations of Claim 1 as set forth above. However, Tomono does not explicitly teach, in step S2, that the step of defining a coverage area of the first moment and a coverage area of the second moment respectively according to positions of the two positionings at the first moment and the second moment comprises: obtaining the positions of the two positionings on the grid map, and then respectively using the positions of the two positionings as circle centers, with a radius defined by a set value, to make circles to obtain the coverage area of the first moment and the coverage area of the second moment.
Wang, in the same field of endeavor, teaches obtaining the positions of the two positionings on the grid map, and then respectively using the positions of the two positionings as circle centers, with a radius defined by a set value, to make circles to obtain the coverage area of the first moment and the coverage area of the second moment (see at least Wang [¶ 93, 108, and Claim 10]: determining an annular region with a guide robot as a center of a circle and a length of a rigid object as a radius for use as a candidate position region of a user… the acquiring a state of a user comprises: determining an annular region with the guide robot as a center of a circle and the length of the rigid object as a radius for use as a candidate position region of the user).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Tomono to contain a system for obtaining the positions of the two positionings on the grid map, and then respectively using the positions of the two positionings as circle centers, with a radius defined by a set value, to make circles to obtain the coverage area of the first moment and the coverage area of the second moment, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of more accurately understanding the area occupied by the robot by defining its area geometrically.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Tomono et al. (JP WO2020035902 A1) in view of Wang et al. (US 20210402601 A1), Van der Merwe et al. (US 20210278851 A1), and Zhang et al. (CN 111461245 A), hereafter referred to as Tomono, Wang, Van der Merwe, and Zhang, respectively.

Regarding Claim 3, Tomono in view of Wang teaches all limitations of Claim 1 as set forth above.
However, Tomono does not explicitly teach, in step S2, that the step of acquiring confidence coefficients of the two positionings comprises the steps of: A1: obtaining point clouds participating in a laser positioning, randomly selecting one of the point clouds, and defining a grid area based on the position of the one point cloud on the grid map; A2: calculating a probability value of the one point cloud on the grid map based upon the information of the grid area, repeating steps A1 and A2 until the probability values of all point clouds participating in the laser positioning on the grid map are obtained; and A3: obtaining detected distances of all point clouds participating in the laser positioning, and then filtering the point clouds to obtain a quantity value of the filtered point clouds.

Van der Merwe, in the same field of endeavor, teaches A1: obtaining point clouds participating in a laser positioning, randomly selecting one of the point clouds, and defining a grid area based on the position of the one point cloud on the grid map (see at least Van der Merwe [¶ 124, 127, 133]: creating processed point cloud data 10135 can include filtering voxels. To reduce the number of points that will be subject to future processing, in some configurations, the centroid of each voxel in the dataset can be used to approximate the points in the voxel, and all points except the centroid can be eliminated from the point cloud data. In some configurations, the center of the voxel can be used to approximate the points in the voxel. Other methods to reduce the size of filtered segments 10251 (FIG. 1G) can be used such as, for example, but not limited to, taking random point subsamples so that a fixed number of points, selected uniformly at random, can be eliminated from filtered segments 10251 (FIG. 1G)... segmented point cloud data 10137 (FIG. 1D) can be used to generate 10161 (FIG. 1D) polygons 10759... The set of polygons 10778, including the labeled features, can be subjected to further simplification to reduce the number of possible path points, and the possible path points can be provided to device controller 10111 (FIG. 1A) in the form of annotated point data 10379 (FIG. 5B), which can be used to populate the occupancy grid);

A2: calculating a probability value of the one point cloud on the grid map based upon the information of the grid area, repeating steps A1 and A2 until the probability values of all point clouds participating in the laser positioning on the grid map are obtained (see at least Van der Merwe [¶ 37]: A method for managing a global occupancy grid for an autonomous device, the global occupancy grid including global occupancy grid cells, the global occupancy grid cells being associated with occupied probability, the method comprising: receiving sensor data from sensors associated with the autonomous device; creating a local occupancy grid based at least on the sensor data); and

A3: obtaining detected distances of all point clouds participating in the laser positioning, and then filtering the point clouds to obtain a quantity value of the filtered point clouds (see at least Van der Merwe [¶ 135, 127-129]: long-range sensor assembly 20400 is mounted on top of the cargo-container to provide improved view of the environment surrounding the AV… Long-range sensor assembly 20400 provides information about the environment around the AV from a minimum distance out to a maximum range... Resulting polygons 10759 can be based at least on the size of the neighborhood, the maximum acceptable distance for a point to be considered… the point clusters can be filtered according to a relationship between the orientation of the point clusters and the reference plane... To be considered closely packed, the point must lie within a pre-selected distance from a candidate point. In some configurations, a scaling factor for the pre-selected distance can be empirically or dynamically determined. In some configurations, the scaling factor can be in the range of about 0.1 to 1.0.).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Tomono to contain a system for obtaining point clouds participating in a laser positioning, randomly selecting one of the point clouds, and defining a grid area based on the position of the one point cloud on the grid map; calculating a probability value of the one point cloud on the grid map based upon the information of the grid area, repeating steps A1 and A2 until the probability values of all point clouds participating in the laser positioning on the grid map are obtained; and obtaining detected distances of all point clouds participating in the laser positioning, and then filtering the point clouds to obtain a quantity value of the filtered point clouds, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving both complex terrain traversal and decision making, as discussed by Van der Merwe (see at least Van der Merwe [¶ 6, 8]: a probability that the space is occupied can improve decision-making with respect to the space… The modes can enable complex terrain traversal, among other benefits. A combination of the map (surface classification, for example), the sensor data (sensing features surrounding the AV), the occupancy grid (probability that the upcoming path point is occupied), and the mode (ready to traverse difficult terrain or not) can be used to identify the direction, configuration, and speed of the AV).
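Step A3 of claim 3 (filtering the positioning point clouds by detected distance and keeping a count of the survivors) might look like this sketch; the range bounds are illustrative assumptions, not values from any cited reference:

```python
import math

def filter_points_by_distance(points, min_d=0.15, max_d=8.0):
    """Keep laser points whose detected range lies within [min_d, max_d]
    meters and return (filtered_points, quantity_value). The bounds are
    illustrative; claim 3 does not recite specific limits."""
    filtered = [p for p in points if min_d <= math.hypot(p[0], p[1]) <= max_d]
    return filtered, len(filtered)

scan = [(0.05, 0.0), (1.2, 0.9), (4.0, 3.0), (9.5, 1.0)]  # (x, y) in meters
kept, quantity = filter_points_by_distance(scan)
print(quantity)  # the too-close and too-far returns are dropped
```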
Further, Tomono does not explicitly teach A4: obtaining a probability weighted average value through the probability values and detected distances of all point clouds participating in the laser positioning on the grid map; and A5: obtaining the confidence coefficient of a current positioning based on the probability weighted average value, the quantity value of all point clouds participating in the laser positioning, the quantity value of the filtered point clouds, and the quantity value of point clouds set to participate in the laser positioning. Zhang, in the same field as the endeavor teaches A4: obtaining a probability weighted average value through the probability values and detected distances of all point clouds participating in the laser positioning on the grid map (see at least Zhang [English Translation pg.3 para.5, pg.3 para.11] S2 specifically comprises: dividing the two-dimensional point cloud according to whether the geometric distance between the laser scanning points at the same horizontal plane is greater than the threshold value; at the same time, distinguishing the two-part point cloud outside the camera field of view; directly using the point cloud outside the view field for laser SLAM, fusing the point cloud in the field of view with the information of the image and then for SLAM....S4.2: setting the sub-graph size and blending the corresponding number of point cloud; after constructing the sub-graph, calculating the semantic type of each grid average confidence degree) and A5: obtaining the confidence coefficient of a current positioning based on the probability weighted average value, the quantity value of all point clouds participating in the laser positioning, the quantity value of the filtered point clouds, and the quantity value of point clouds set to participate in the laser positioning (see at least Zhang [English Translation pg.4 para.1, pg.7 para.10] a semantic extracting module for extracting the surrounding box for identifying the 
position of the marked object and the corresponding semantic type and confidence degree from the image read by the single-phase machine by the target detection convolutional neural network;...Specifically, based on the laser SLAM algorithm Cartographer constructing two-dimensional grid map. for each grid in the semantic laser observation range, updating the grid occupancy probability, at the same time, accumulating each type of confidence and increasing the updating times. The confidence threshold value is set according to the precision of the used neural network...calculating the semantic type with the maximum average confidence level in each grid and storing, wherein the occupation probability represents the probability that the grid is occupied by the barrier). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention to have modified the system set forth in Tomono to contain a system for obtaining a probability weighted average value through the probability values and detected distances of all point clouds participating in the laser positioning on the grid map and obtaining the confidence coefficient of a current positioning based on the probability weighted average value, the quantity value of all point clouds participating in the laser positioning, the quantity value of the filtered point clouds, and the quantity value of point clouds set to participate in the laser positioning with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving point cloud data for indoor navigation as discussed in Zhang (see at least Zhang [English Translation pg.5 para.8] providing a wheel-type robot semantic mapping method for melting point cloud and image; it has real-time and simple device, the map information is abundant, and the purpose is to realize the indoor mobile robot intelligent navigation). 
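As a purely illustrative sketch of limitations A4–A5: the weighting scheme and the way the three quantity values are combined below are assumptions (the actual formulas are defined in the specification, not in this Office Action):

```python
def probability_weighted_average(probs, dists):
    """A4: average the per-point probability values, weighted here by
    each point's detected distance (the weighting choice is assumed)."""
    total_w = sum(dists)
    return sum(p * w for p, w in zip(probs, dists)) / total_w

def confidence_coefficient(pwa, n_participating, n_filtered, n_set):
    """A5: one plausible combination of the four recited quantities --
    the weighted average scaled by the filtered-point ratio and the
    participation ratio. Assumed form, for illustration only."""
    return pwa * (n_filtered / n_participating) * (n_participating / n_set)
```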
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Tomono et al (JP WO2020035902 A1) in view of Wang et al (US 20210402601 A1) and Englard et al (US 20190113927 A1). Hereafter referred to as Tomono, Wang, and Englard respectively. Regarding Claim 4, Tomono in view of Wang teaches all limitations of Claim 1 as set forth above. However, Tomono does not explicitly teach obtaining the position of the one point cloud on the grid map, and then finding a grid intersection on the grid map closest to the position of the one point cloud and defining a grid area having N*N grids on the grid map with the grid intersection as the center; wherein, N is a positive integer. Englard, in the same field of endeavor, teaches obtaining the position of the one point cloud on the grid map, and then finding a grid intersection on the grid map closest to the position of the one point cloud (see at least Englard [¶ 133, 93, 124] the perception signals 208 include data representing “occupancy grids” (e.g., one grid per T milliseconds), with each occupancy grid indicating object positions (and possibly object boundaries, orientations, etc.) within an overhead view of the autonomous vehicle's environment. Within the occupancy grid, each “cell” (e.g., pixel) may be associated with a particular class as determined by the classification module 214…the prediction signals 222 may include, for each such grid generated by the perception component 206, one or more “future occupancy grids” that indicate predicted object positions, boundaries and/or orientations at one or more future times (e.g., 1, 2 and 5 seconds ahead...the vehicle controller 422 receives point cloud data from the sensor heads 412 via the link 420 and analyzes the received point cloud data, using any one or more of the aggregate or individual SDCAs disclosed herein, to sense or identify targets 330 (see FIG. 
5) and their respective locations) and defining a grid area having N*N grids on the grid map with the grid intersection as the center; wherein, N is a positive integer (see at least Englard [¶ 133] The occupancy grid may cover an area that does not exceed the range of at least one sensor (e.g., lidar device and/or camera) of the autonomous vehicle. The resolution, or real-world distance represented by a single cell, of the occupancy grid may vary depending on the embodiment (and possibly also based on the scenario). In one embodiment, for example, the occupancy grid represents roughly a 200 m×200 m area, with each cell representing roughly a 0.5 m×0.5 m area such that the grid includes 160,000 cells). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention to have modified the system set forth in Tomono to contain a system for obtaining the position of the one point cloud on the grid map, and then finding a grid intersection on the grid map closest to the position of the one point cloud and defining a grid area having N*N grids on the grid map with the grid intersection as the center; wherein, N is a positive integer with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the system’s ability to identify obstacles in its surroundings as discussed in Englard (see at least Englard [¶ 113, 124] The field of regard of the lidar system 300 can overlap, encompass, or enclose at least a portion of the target 330, which may include all or part of an object that is moving or stationary relative to lidar system 300…the vehicle controller 422 receives point cloud data from the sensor heads 412 via the link 420 and analyzes the received point cloud data, using any one or more of the aggregate or individual SDCAs disclosed herein, to sense or identify targets 330 (see FIG. 5) and their respective locations). 
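For illustration, the Claim 4 limitation (nearest grid intersection, then an N*N grid area centered on it) might be sketched as follows; the cell size and indexing convention are assumptions:

```python
def nearest_intersection(x, y, cell=1.0):
    """Snap a point-cloud position to the closest grid intersection
    of a grid map with spacing `cell`."""
    return round(x / cell) * cell, round(y / cell) * cell

def grid_area(x, y, n, cell=1.0):
    """Return the N*N block of grid cells centered on the intersection
    closest to (x, y); cells are identified by (col, row) indices."""
    cx, cy = nearest_intersection(x, y, cell)
    col0, row0 = int(cx / cell), int(cy / cell)
    half = n // 2
    return [(col0 + dc, row0 + dr)
            for dr in range(-half, n - half)
            for dc in range(-half, n - half)]
```

For N = 4 this yields the 4×4 neighborhood that a bicubic interpolation (as recited in dependent Claim 5) would consume.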
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Tomono et al (JP WO2020035902 A1) in view of Wang et al (US 20210402601 A1), Englard et al (US 20190113927 A1), You et al (CN 111066064 A), and Hu et al (CN 111536964 A). Hereafter referred to as Tomono, Wang, Englard, You, and Hu respectively. Regarding Claim 5, Tomono in view of Wang and Englard teaches all limitations of Claim 4 as set forth above. However, Tomono does not explicitly teach wherein the step of calculating the probability value of the one point cloud on the grid map based upon the information of the grid area adopts a bicubic interpolation method, which comprises: obtaining a distance between each grid in the grid area and the one point cloud, and then obtaining corresponding coefficients of rows and columns in the grid area according to the distance; obtaining a corresponding weight of each grid through the corresponding coefficients of the rows and columns, and then obtaining a pixel value of the one point cloud through the weight value and by using a summation formula, and then obtaining the probability value corresponding to the pixel value. You, in the same field of endeavor, teaches wherein the step of calculating the probability value of the one point cloud on the grid map based upon the information of the grid area adopts a bicubic interpolation method (see at least You [English Translation pg.2 para.7] using bicubic interpolation diagram of calculating the occupation probability of the grid unit). Hu, in the same field of endeavor, teaches obtaining a distance between each grid in the grid area and the one point cloud (see at least Hu [English Translation pg.9 para.11] in the above steps S400 to S402 based on the real-time positioning and establishing map algorithm (SLAM) establishing the current warehouse environment map, generating probability grid map (M). 
traversing the whole probability grid map, calculating the distance of each grid in the map closest to the obstacle point in the map to generate a distance map (D), and the grid distance value at the barrier is directly set to be zero). Englard, in the same field of endeavor, teaches obtaining corresponding coefficients of rows and columns in the grid area according to the distance; obtaining a corresponding weight of each grid through the corresponding coefficients of the rows and columns, and then obtaining a pixel value of the one point cloud through the weight value and by using a summation formula, and then obtaining the probability value corresponding to the pixel value (see at least Englard [Abstract and ¶ 40, 213, 109, 111, 93] a cost map generation component is configured to generate, based on the observed occupancy grid, the predicted occupancy grid(s), and the navigation data, cost maps that each specify numerical values representing a cost, at a respective instance of time, of occupying certain cells in a two-dimensional representation of the environment...the cells of a cost map may specify numerical values representing a “cost” of the autonomous vehicle occupying certain positions at a given point in time...the decision arbiter may perform mathematical operations (e.g., calculate a geometric mean, arithmetic mean, median, or weighted average) on operational parameters (e.g., speed, acceleration, steering, braking, etc.) 
that are output by different SDCAs, and use the results to control the autonomous vehicle...the numerical value of a cell may be determined from a sum of multiple values corresponding to multiple respective deviations from multiple respective target locations…A collection of pixels captured in succession (which may be referred to as a depth map, a point cloud, or a point cloud frame) may be rendered as an image or may be analyzed to identify or detect objects or to determine a shape or distance of objects within the field of regard. For example, a depth map may cover a field of regard that extends 60° horizontally and 15° vertically, and the depth map may include a frame of 100-2000 pixels in the horizontal direction by 4-400 pixels in the vertical direction...Within the occupancy grid, each “cell” (e.g., pixel) may be associated with a particular class as determined by the classification module 214, possibly with an “unknown” class for certain pixels that were not successfully classified. Similarly, the prediction signals 222 may include, for each such grid generated by the perception component 206, one or more “future occupancy grids” that indicate predicted object positions, boundaries and/or orientations at one or more future times (e.g., 1, 2 and 5 seconds ahead). 
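For reference, a bicubic interpolation of the kind Claim 5 recites (row/column coefficients from distances, per-grid weights, weighted summation) is conventionally computed as below. This is the textbook Catmull-Rom form of bicubic interpolation, offered only as a sketch of the general technique, not as the applicant's or any cited reference's specific implementation:

```python
import math

def cubic_kernel(t, a=-0.5):
    """Standard bicubic (Catmull-Rom) coefficient for a row/column
    offset distance t."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def bicubic_probability(grid, x, y):
    """Interpolate the point's probability from the 4x4 grid area
    around (x, y): a coefficient per row and per column from the
    distances, a weight per cell as their product, then a summation."""
    ix, iy = math.floor(x), math.floor(y)
    value = 0.0
    for r in range(-1, 3):                   # rows of the 4x4 area
        wy = cubic_kernel(y - (iy + r))      # row coefficient
        for c in range(-1, 3):               # columns of the 4x4 area
            wx = cubic_kernel(x - (ix + c))  # column coefficient
            value += grid[iy + r][ix + c] * wx * wy
    return value
```

Because the Catmull-Rom weights sum to one, a uniform probability field is reproduced exactly.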
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention to have modified the system set forth in Tomono to contain a system wherein the step of calculating the probability value of the one point cloud on the grid map based upon the information of the grid area adopts a bicubic interpolation method, which comprises: obtaining a distance between each grid in the grid area and the one point cloud, and then obtaining corresponding coefficients of rows and columns in the grid area according to the distance; obtaining a corresponding weight of each grid through the corresponding coefficients of the rows and columns, and then obtaining a pixel value of the one point cloud through the weight value and by using a summation formula, and then obtaining the probability value corresponding to the pixel value with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the system’s ability to identify obstacles in its surroundings as discussed in Englard (see at least Englard [¶ 113, 124] The field of regard of the lidar system 300 can overlap, encompass, or enclose at least a portion of the target 330, which may include all or part of an object that is moving or stationary relative to lidar system 300…the vehicle controller 422 receives point cloud data from the sensor heads 412 via the link 420 and analyzes the received point cloud data, using any one or more of the aggregate or individual SDCAs disclosed herein, to sense or identify targets 330 (see FIG. 5) and their respective locations). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Tomono et al (JP WO2020035902 A1) in view of Wang et al (US 20210402601 A1), Englard et al (US 20190113927 A1), and Wheeler (CN 110832417 A). Hereafter referred to as Tomono, Wang, Englard, and Wheeler respectively. 
Regarding Claim 6, Tomono in view of Wang and Englard teaches all limitations of Claim 4 as set forth above. However, Tomono does not explicitly teach wherein the step of processing the coverage area of the first moment and the coverage area of the second moment through the confidence coefficients, comprises: obtaining deviating distances that are negatively correlated with the confidence coefficients according to the confidence coefficients of the two positionings, and then having a comparison on the deviating distances of the two positionings; obtaining a maximum deviating distance in the two positionings, and then shrinking the coverage area of the first moment and the coverage area of the second moment uniformly inward by the maximum deviating distance. Englard, in the same field of endeavor, teaches obtaining deviating distances that are negatively correlated with the confidence coefficients according to the confidence coefficients of the two positionings, and then having a comparison on the deviating distances of the two positionings (see at least Englard [¶ 213] cost maps are generated based on the observed occupancy grid, the predicted occupancy grid(s), and the navigation data. Each cost map specifies numerical values representing a cost, at a respective instance of time, of occupying certain cells in a two-dimensional representation of the environment (e.g., in an overhead view). The numerical value, or “cost,” for a given cell of the cost map grid (for a cost map corresponding to time t) may represent a risk associated with the autonomous vehicle being in the area of the environment represented by that cell at time t. In some embodiments, the value/cost may also represent a deviation from some desired “target” location (e.g., from a waypoint along the intended route of the vehicle). The deviation may correspond to a distance from the target location, and the value/cost may increase with distance from the target location. 
In some embodiments, the value/cost may represent multiple deviations from multiple respective target locations). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention to have modified the system set forth in Tomono to contain a system for obtaining deviating distances that are negatively correlated with the confidence coefficients according to the confidence coefficients of the two positionings, and then having a comparison on the deviating distances of the two positionings with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the system’s ability to identify obstacles in its surroundings as discussed in Englard (see at least Englard [¶ 113, 124] The field of regard of the lidar system 300 can overlap, encompass, or enclose at least a portion of the target 330, which may include all or part of an object that is moving or stationary relative to lidar system 300…the vehicle controller 422 receives point cloud data from the sensor heads 412 via the link 420 and analyzes the received point cloud data, using any one or more of the aggregate or individual SDCAs disclosed herein, to sense or identify targets 330 (see FIG. 5) and their respective locations). Wheeler, in the same field of endeavor, teaches obtaining a maximum deviating distance in the two positionings, and then shrinking the coverage area of the first moment and the coverage area of the second moment uniformly inward by the maximum deviating distance (see at least Wheeler [English Translation pg.3 para.2, pg.19 para.3] the error can be calculated as the maximum deviation between route high resolution and low resolution route. when the final potential route comprises a final destination or end point (e.g., potential complete route), this may be an error of the lane element route of high resolution limit. 
able to discard the error exceeds the limit of other possible potential route. If the remaining potential route with error greater than the upper limit, then the search is completed (i.e., from the starting point to final point of the final route). If some incomplete potential route with error is lower than the upper limit, then the system can move along those potential route to continue to search until reaching the destination lane element or error exceeds the upper limit. Thus, the system searching for the best possible complete route from the error measurement value (the maximum distance between e.g., route low resolution and high resolution route)....part of the route generating module 1530 represents a modification of the portion of route point and low resolution route of the corresponding point of the maximum vertical distance between error and indication definition map may deviate from the maximum distance of the low resolution line threshold value. threshold if the partial route generation module 1530 determines the error is less than the partial route generation module 1530 adding the modified part way to the queue data structure, or portion of the route generating module 1530 eliminates part of route-modified, then no further considerations). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention to have modified the system set forth in Tomono to contain a system for obtaining a maximum deviating distance in the two positionings, and then shrinking the coverage area of the first moment and the coverage area of the second moment uniformly inward by the maximum deviating distance with reasonable expectation of success. 
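The Claim 6 step at issue (deviating distances negatively correlated with confidence, compared, with both coverage areas shrunk inward by the maximum) could be sketched as follows. The linear deviation model and the circular coverage-area representation are assumptions made only for the sketch:

```python
def deviating_distance(confidence, k=1.0):
    """Deviating distance negatively correlated with the confidence
    coefficient: higher confidence, smaller assumed positioning drift
    (the linear form is an assumption)."""
    return k * (1.0 - confidence)

def shrink_coverage(cov1, cov2, conf1, conf2):
    """Compare the two positionings' deviating distances, take the
    maximum, and shrink both coverage areas uniformly inward by it.
    Coverage areas are modelled here as (center, radius) circles."""
    d = max(deviating_distance(conf1), deviating_distance(conf2))
    (c1, r1), (c2, r2) = cov1, cov2
    return (c1, max(r1 - d, 0.0)), (c2, max(r2 - d, 0.0))
```

Shrinking by the worse of the two uncertainties keeps only the region the robot's body provably covered at both moments.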
One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the safety and accuracy of the vehicle’s navigation as discussed in Wheeler (see at least Wheeler [English Translation pg.5 para.5-6] the HD map is accurate and comprises a latest road condition for safe navigation…the vehicle determined in the HD map of the current location, determining the feature on the road relative to the position of the vehicle, based on physical constraints and legal constraints to determine whether it can safely moving vehicles, and the like. Examples of physical constraints include physical barriers such as walls, legal constraints of example comprises a lane allowed on the law of driving direction, speed limit, the line is stopped). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Tomono et al (JP WO2020035902 A1) in view of Wang et al (US 20210402601 A1) and Murray et al (US 20200398428 A1). Hereafter referred to as Tomono, Wang, and Murray respectively. Regarding Claim 7, Tomono in view of Wang teaches all limitations of Claim 1 as set forth above. However, Tomono does not explicitly teach wherein the step of interpolating the positioning poses at the first moment and the second moment, comprises: inserting an intermediate pose between the positioning poses at the first moment and the second moment. 
Murray, in the same field of endeavor, teaches inserting an intermediate pose between the positioning poses at the first moment and the second moment (see at least Murray [¶ 109] As used in this specification and the appended claims, the terms determine, determining and determined when used in the context of whether a collision will occur or result, mean that an assessment or prediction is made as to whether a given pose or movement between two poses via a number of intermediate poses will result in a collision between a portion of a robot and some object (e.g., another portion of the robot, a portion of another robot, a persistent obstacle, a transient obstacle, for instance a person)). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention to have modified the system set forth in Tomono to contain a system for inserting an intermediate pose between the positioning poses at the first moment and the second moment with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the collision prediction of the traveling robot, which in turn improves the safety, as discussed in Murray (see at least Murray [¶ 8] The structures and algorithms described herein enable high degree of freedom robots to avoid collisions and continue working in a changing, shared environment). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Tomono et al (JP WO2020035902 A1) in view of Wang et al (US 20210402601 A1), Murray et al (US 20200398428 A1), and Zevenbergen et al (US 20190193629 A1). Hereafter referred to as Tomono, Wang, Murray, and Zevenbergen respectively. Regarding Claim 8, Tomono in view of Wang and Murray teaches all limitations of Claim 7 as set forth above. 
However, Tomono does not explicitly teach wherein in step S3, the step of constructing a closed graph according to the positioning poses at the first moment and the second moment, the pose interpolation, and the processed coverage area of the first moment and the processed coverage area of the second moment, comprises making a straight line perpendicular to the right front of the robot at the position of the positioning of the first moment, such a straight line having two intersection points with the edge of the processed coverage area of the first moment, and obtaining a first line segment of distance with the two intersection points as endpoints; making a straight line perpendicular to the right front of the robot at the position of the positioning of the second moment, such a straight line having two intersection points with the edge of the processed coverage area of the second moment, and obtaining a second line segment of distance with the two intersection points as endpoints; making a straight line perpendicular to the right front of the robot at the position of the intermediate pose on the grid map, and then obtaining a third line segment for distance according to the first line segment of distance or the second line segment of distance and connecting the endpoints of the first line segment of distance, the second line segment of distance and the third line segment of distance with the edges of the processed coverage area of the first moment and the processed coverage area of the second moment, so as to obtain the closed graph that is a figure having a largest area; wherein, the positioning pose includes a right frontal orientation of the robot at a position of a current positioning. 
Zevenbergen, in the same field of endeavor, teaches making a straight line perpendicular to the right front of the robot at the position of the positioning of the first moment, such a straight line having two intersection points with the edge of the processed coverage area of the first moment, and obtaining a first line segment of distance with the two intersection points as endpoints; making a straight line perpendicular to the right front of the robot at the position of the positioning of the second moment, such a straight line having two intersection points with the edge of the processed coverage area of the second moment, and obtaining a second line segment of distance with the two intersection points as endpoints; making a straight line perpendicular to the right front of the robot at the position of the intermediate pose on the grid map, and then obtaining a third line segment for distance according to the first line segment of distance or the second line segment of distance and connecting the endpoints of the first line segment of distance, the second line segment of distance and the third line segment of distance with the edges of the processed coverage area of the first moment and the processed coverage area of the second moment, so as to obtain the closed graph that is a figure having a largest area (see at least Zevenbergen [¶ 98, 100, 106, Fig. 4C] the map may be represented as an occupancy grid that includes a number of cells that represent corresponding areas in the environment. Each cell may be assigned a state that indicates the status of the area represented by the cell. Particularly, a cell may be assigned as having an obstacle, free space, or unknown. Cells with obstacles may represent physical features within the environment, including fixed, movable, and mobile objects. 
Cells with free space may be traversable by the vehicle without striking objects in the environment....Planned operating region 436 may be determined by fitting a boundary to a union of footprints 413-421, as illustrated in FIG. 4C. Planned operating region 436 may be defined by an area enclosed by the boundary and may represent regions likely to be occupied by vehicle 400 within a future time period. A caution region may be detected by determining whether planned operating region 436 intersects with threshold areas around any obstacles within the environment...caution regions 452 and 456 may be smaller, larger, and/or may have a different shape than shown in FIG. 4F. For example, caution regions 452 and 456 may span the same area as intersections 448 and 446…Caution regions 452 and 456 may be indicated in the occupancy grid by assigning to corresponding cells of the occupancy grid a “caution” state) wherein, the positioning pose includes a right frontal orientation of the robot at a position of a current positioning (see at least Zevenbergen [¶ 99] The pose (i.e., position and orientation) of footprints 413-421 may be determined based on the physical size of vehicle 400 (e.g., mass and volume), as well as the steering angles and velocities planned to be commanded to vehicle 400 to cause vehicle 400 to follow path 401. In some embodiments, each footprint, in addition to representing the area in the environment expected to be occupied by the vehicle, may include a buffer region around the area. For example, each footprint might be 10% larger than an actual physical size of vehicle 400 to account for errors in sensing, vehicle positioning, and simulation, among others. In some embodiments, the density of positions 410-420, and thus the density of footprints 413-421, may be greater or smaller than that shown in FIGS. 4A-4H). 
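The closed-graph construction recited in Claim 8 (perpendicular cross-segments at each pose, endpoints connected into an enclosing figure) might be sketched as below. The circular coverage model and the left/right stitching order are assumptions for illustration, not the claimed geometry verbatim:

```python
import math

def cross_segment(pos, heading, radius):
    """Chord through the coverage circle at `pos`, perpendicular to
    the robot's right-frontal direction `heading` (radians): returns
    the segment's two endpoints."""
    px, py = pos
    nx, ny = -math.sin(heading), math.cos(heading)  # perpendicular unit vector
    return ((px - radius * nx, py - radius * ny),
            (px + radius * nx, py + radius * ny))

def closed_graph(poses, radius):
    """Build one cross-segment per pose (first moment, intermediate
    pose(s), second moment), then connect left endpoints forward and
    right endpoints backward so the segments enclose one polygon."""
    segs = [cross_segment((x, y), th, radius) for x, y, th in poses]
    left = [s[0] for s in segs]
    right = [s[1] for s in segs]
    return left + right[::-1]
```

For a robot moving straight along the x-axis this yields the expected rectangle swept between the two moments.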
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention to have modified the system set forth in Tomono to contain a system for geometrically calculating the coverage area of the vehicle using geometric representations of the robot’s shape connected with lines with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving collision avoidance of the vehicle by accurately knowing how much geometric space it takes up within the occupancy grid as discussed in Zevenbergen (see at least Zevenbergen [¶ 106, 130] Caution regions 452 and 456 may be indicated in the occupancy grid by assigning to corresponding cells of the occupancy grid a “caution” state…This may allow vehicle 400 to slow down in anticipation of a potential collision with the object, thus making operation of vehicle 400 safer). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Tomono et al (JP WO2020035902 A1) in view of Wang et al (US 20210402601 A1) and Zevenbergen et al (US 20190193629 A1). Hereafter referred to as Tomono, Wang, and Zevenbergen respectively. Regarding Claim 9, Tomono in view of Wang teaches all limitations of Claim 1 as set forth above. However, Tomono does not explicitly teach wherein in step S4, the step of modifying the obstacle labeling according to the grid occupied by the closed graph on the grid map and the area of the closed graph comprises: obtaining the grid occupied by the closed graph on the grid map and the area of the closed graph, obtaining an intersection area between the grid occupied by each closed graph on the grid map and the area of the closed graph, and deleting the obstacle labeling if the intersection area is greater than a set threshold and there is an obstacle labeling in the grid. 
Zevenbergen, in the same field of endeavor, teaches obtaining the grid occupied by the closed graph on the grid map and the area of the closed graph, obtaining an intersection area between the grid occupied by each closed graph on the grid map and the area of the closed graph (see at least Zevenbergen [¶ 98-100, 106] the map may be represented as an occupancy grid that includes a number of cells that represent corresponding areas in the environment. Each cell may be assigned a state that indicates the status of the area represented by the cell. Particularly, a cell may be assigned as having an obstacle, free space, or unknown. Cells with obstacles may represent physical features within the environment, including fixed, movable, and mobile objects. Cells with free space may be traversable by the vehicle without striking objects in the environment…The control system (e.g., local or remote) may periodically update and adjust the occupancy grid based on new measurements of the environment from sensors coupled to one or more vehicles navigating the environment…Planned operating region 436 may be determined by fitting a boundary to a union of footprints 413-421, as illustrated in FIG. 4C. Planned operating region 436 may be defined by an area enclosed by the boundary and may represent regions likely to be occupied by vehicle 400 within a future time period. A caution region may be detected by determining whether planned operating region 436 intersects with threshold areas around any obstacles within the environment....caution regions 452 and 456 may be smaller, larger, and/or may have a different shape than shown in FIG. 4F. 
For example, caution regions 452 and 456 may span the same area as intersections 448 and 446…caution regions 452 and 456 may include circles circumscribed around intersections 448 and 446….the shape, size, and other aspects of the caution regions may be defined by the safety standard…Caution regions 452 and 456 may be indicated in the occupancy grid by assigning to corresponding cells of the occupancy grid a “caution” state) and deleting the obstacle labeling if the intersection area is greater than a set threshold and there is an obstacle labeling in the grid (see at least Zevenbergen [¶ 38, 98] Determining caution regions may allow vehicles to more safely operate in tight areas and maneuver closely to objects because the visual indications ensure that such areas are likely to be free of occupants….Unknown cells may require additional sensor data to determine whether the area includes an obstacle or not (i.e., has free space). The control system (e.g., local or remote) may periodically update and adjust the occupancy grid based on new measurements of the environment from sensors coupled to one or more vehicles navigating the environment). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention to have modified the system set forth in Tomono to contain a system for obtaining the grid occupied by the closed graph on the grid map and the area of the closed graph, obtaining an intersection area between the grid occupied by each closed graph on the grid map and the area of the closed graph, and deleting the obstacle labeling if the intersection area is greater than a set threshold and there is an obstacle labeling in the grid with reasonable expectation of success. 
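The Claim 9 deletion rule (drop an obstacle label when the cell's intersection with the closed graph exceeds a threshold) reduces to a simple per-cell test. The dictionary-based grid map and label names below are assumptions for the sketch:

```python
def prune_obstacle_labels(grid_map, closed_cells, intersection_area, threshold):
    """For each grid cell occupied by the closed graph, delete the
    obstacle label when the cell's intersection area with the closed
    graph exceeds the set threshold -- the robot demonstrably passed
    through that cell, so the label was spurious."""
    for cell in closed_cells:
        if intersection_area[cell] > threshold and grid_map.get(cell) == "obstacle":
            grid_map[cell] = "free"   # delete the obstacle labeling
    return grid_map
```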
One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving collision avoidance of the vehicle by accurately knowing how much geometric space it takes up within the occupancy grid, as discussed in Zevenbergen (see at least Zevenbergen [¶ 106, 130]: Caution regions 452 and 456 may be indicated in the occupancy grid by assigning to corresponding cells of the occupancy grid a “caution” state… This may allow vehicle 400 to slow down in anticipation of a potential collision with the object, thus making operation of vehicle 400 safer).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Tomono et al. (JP WO2020035902 A1) in view of Wang et al. (US 20210402601 A1), Zevenbergen et al. (US 20190193629 A1), and Miller et al. (US 20200200547 A1), hereafter referred to as Tomono, Wang, Zevenbergen, and Miller, respectively.

Regarding Claim 10, Tomono in view of Wang and Zevenbergen teaches all limitations of Claim 9 as set forth above. However, Tomono does not explicitly teach wherein the step of obtaining the intersection area between the grid occupied by each closed graph on the grid map and the area of the closed graph comprises: obtaining the area of each grid, and then obtaining positions of edges of the closed graph on each grid to identify a figure located in the closed graph and composed of the edges of the closed graph and the edges of the grid; dividing the figure located in the closed graph and composed of the edges of the closed graph and the edges of the grid into several quadrilaterals, and obtaining the area of each of the quadrilaterals and totaling the area of each of the quadrilaterals to get the area of the figure located in the closed graph and composed of the edges of the closed graph and the edges of the grid.
Miller, in the same field of endeavor, teaches obtaining the area of each grid, and then obtaining positions of edges of the closed graph on each grid to identify a figure located in the closed graph and composed of the edges of the closed graph and the edges of the grid (see at least Miller [¶ 59]: the occupancy map 530 is represented using a 3D volumetric grid of cells at 5-10 cm resolution. Each cell indicates whether or not a surface exists at that cell, and if the surface exists, a direction along which the surface is oriented); dividing the figure located in the closed graph and composed of the edges of the closed graph and the edges of the grid into several quadrilaterals, and obtaining the area of each of the quadrilaterals (see at least Miller [¶ 62-64]: The online HD map system 110 divides a physical area into geographical regions and stores a separate representation of each geographical region. Each geographical region represents a continuous physical area bounded by a geometric shape, for example, a square, a rectangle, a quadrilateral or a general polygon… the online HD map system 110 represents a geographical region using an object or a data record that comprises various attributes including a unique identifier for the geographical region, a unique name for the geographical region, a description of the boundary of the geographical region, for example, using a bounding box of latitude and longitude coordinates, and a collection of landmark features and occupancy grid data); and totaling the area of each of the quadrilaterals to get the area of the figure located in the closed graph and composed of the edges of the closed graph and the edges of the grid (see at least Miller [¶ 62]: Examples of data required to represent the region include but are not limited to a geometric area encompassed by the region).
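The claim-10 step of intersecting a closed graph with individual grid cells and summing sub-areas is closely related to standard polygon clipping. The sketch below is a minimal Python illustration, not the claimed method: Sutherland-Hodgman clipping plus the shoelace formula stands in for the claimed quadrilateral subdivision (which the claim does not specify at this level of detail), and the function names are hypothetical:

```python
def clip(poly, inside, intersect):
    """One pass of Sutherland-Hodgman clipping of polygon `poly`
    (a list of (x, y) vertices) against a single half-plane."""
    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]  # wraps to the last vertex when i == 0
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))  # edge enters the half-plane
            out.append(cur)
        elif inside(prev):
            out.append(intersect(prev, cur))      # edge leaves the half-plane
    return out

def cell_intersection_area(poly, x0, y0, x1, y1):
    """Area of the part of `poly` lying inside the axis-aligned grid cell
    [x0, x1] x [y0, y1]: clip against the four cell edges, then apply the
    shoelace formula to the clipped figure."""
    def lerp(p, q, t):
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    for axis, bound, keep_ge in ((0, x0, True), (0, x1, False),
                                 (1, y0, True), (1, y1, False)):
        def inside(p, axis=axis, bound=bound, keep_ge=keep_ge):
            return p[axis] >= bound if keep_ge else p[axis] <= bound
        def intersect(p, q, axis=axis, bound=bound):
            t = (bound - p[axis]) / (q[axis] - p[axis])
            return lerp(p, q, t)
        poly = clip(poly, inside, intersect)
        if not poly:          # closed graph misses this cell entirely
            return 0.0
    area = 0.0                # shoelace formula on the clipped polygon
    for i, (x, y) in enumerate(poly):
        xn, yn = poly[(i + 1) % len(poly)]
        area += x * yn - xn * y
    return abs(area) / 2.0
```

Summing `cell_intersection_area` over every cell the closed graph touches recovers the figure's total area, which is the role the claim assigns to totaling the quadrilateral sub-areas.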
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system set forth in Tomono to contain a system for obtaining the area of each grid, and then obtaining positions of edges of the closed graph on each grid to identify a figure located in the closed graph and composed of the edges of the closed graph and the edges of the grid; dividing the figure located in the closed graph and composed of the edges of the closed graph and the edges of the grid into several quadrilaterals, and obtaining the area of each of the quadrilaterals and totaling the area of each of the quadrilaterals to get the area of the figure located in the closed graph and composed of the edges of the closed graph and the edges of the grid, with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make such a modification for the benefit of improving the safety of the traveling vehicle by determining which surfaces are traversable or not, as discussed in Miller (see at least Miller [Abstract]: the system determines a navigable surface corresponding to a physical area over which a vehicle may safely navigate and navigable surface boundaries surrounding that area).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH A YANOSKA, whose telephone number is (703) 756-5891. The examiner can normally be reached M-F 9:00am to 5:00pm (Pacific Time). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rachid Bendidi, can be reached at (571) 272-4896.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH ANDERSON YANOSKA/
Examiner, Art Unit 3664

/RACHID BENDIDI/
Supervisory Patent Examiner, Art Unit 3664

Prosecution Timeline

May 22, 2024
Application Filed
Mar 28, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600502
NEURAL NETWORK-GUIDED PASSIVE SENSOR DRONE INSPECTION SYSTEM
2y 5m to grant · Granted Apr 14, 2026
Patent 12548454
CONTROLLING DRONE NOISE BASED UPON HEIGHT
2y 5m to grant · Granted Feb 10, 2026
Patent 12530031
VIRTUAL OFF-ROADING GUIDE
2y 5m to grant · Granted Jan 20, 2026
Patent 12447969
LIMITED USE DRIVING OPERATIONS FOR VEHICLES
2y 5m to grant · Granted Oct 21, 2025
Patent 12366859
TROLLING MOTOR AND SONAR DEVICE DIRECTIONAL CONTROL
2y 5m to grant · Granted Jul 22, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
38%
Grant Probability
99%
With Interview (+60.1%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
