Prosecution Insights
Last updated: April 18, 2026
Application No. 18/592,153

TWO-DIMENSIONAL MAP GENERATION METHOD, DEVICE, TERMINAL DEVICE AND STORAGE MEDIUM

Non-Final OA §103
Filed: Feb 29, 2024
Examiner: SARKAR, SHIVANGI
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Orbbec Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, with vs. without an interview, across resolved cases with an interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 7 total applications across all art units, 7 currently pending

Statute-Specific Performance

§101: 15.0% (-25.0% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-30 are currently pending in the application filed February 29, 2024.

Information Disclosure Statement

An information disclosure statement (IDS) was submitted on February 29, 2024.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 on September 24, 2021.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the features of claims 2-5, 9-12, and 16-19 must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.

Claims 2, 9, and 16 mention acquiring pixel information and mapping; however, the drawings do not indicate first or second pixel information. Claims 3, 10, and 17 mention acquiring pose information and mapping; however, the drawings do not indicate pose information. Claims 4, 11, and 18 mention conversion of the depth image into 3D point cloud data and color identifiers; however, the drawings indicate neither the conversion to 3D point cloud data nor the color identifiers. Claims 5, 12, and 19 mention maximum and minimum vertical and horizontal coordinate points; however, the drawings do not indicate the coordinates.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are further objected to because, in Figure 1 (cited in claim 1), step S200 reads "corresponding to the depth image according to the depth image"; one instance of "depth image" should be removed. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application, as set forth above. The objection to the drawings will not be held in abeyance.

Abstract

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.

Claim Objections

Claim 19 is objected to because of the following informalities: “Point C” should be “point cloud”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 4, 7-9, 11, 14-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Yan (CN 111598916 A) in view of Qiao (CN 106355647 A).

Regarding claim 1, Yan teaches: a method for generating a two-dimensional map (Yan, [0017]; “The two-dimensional occupancy grid”; Yan, [0016]; “the present invention provide[s] a method for preparing an indoor occupancy grid map based on RGB-D”);
obtaining a depth image and a color image of a target scene (Yan, [0032]; “A single-frame color image and depth map are acquired simultaneously.”); and obtaining position label information (Yan, [0012]; “Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle.”) in each region in the target map box, and according to the position label information and the mapping relationship, correlating the position label information to first pose information (Yan, [0008]; “combined with camera pose, camera model parameters and depth information acquired by the depth camera, the spatial position information of the point cloud is calculated”) of a first color camera corresponding to a color image in each region to obtain the two-dimensional map (Yan, [0058]; “generate a two-dimensional occupied grid map”).

Yan fails to teach: obtaining a two-dimensional point cloud image according to the depth image, and determining a target map box corresponding to the two-dimensional point cloud image.

Qiao teaches: obtaining a two-dimensional point cloud image according to the depth image (Qiao, [Page 1, Line 58]; “A color depth image acquisition unit for acquiring depth point cloud data of the current scene and a corresponding two-dimensional color image”), and determining a target map box (Qiao, [Page 3, Line 23]; “current scene”) corresponding to the two-dimensional point cloud image (Qiao, [Page 1, Line 59]; “a corresponding two-dimensional color image”; [Page 3, Line 22]; “a corresponding two-dimensional color image of the current scene”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to map the relationship between the depth and color images to obtain the 2D point cloud image and determine the target map box. (Qiao, [Page 2, Line 19]; “Scan a current scene to acquire second-level point cloud data and a corresponding two-dimensional color image, perform point cloud registration on the second-level point cloud data and the corresponding two-dimensional color image to improve the three-dimensional map”)

Regarding claim 2, the combination of Yan and Qiao teaches: wherein the obtaining a depth image and a color image of a target scene, and determining a mapping relationship between the depth image and the color image comprises (Yan, [0008]; “Using a Kinect camera to simultaneously acquire color image and depth image sequences of an indoor scene”): in response to obtaining one frame of the depth image of the target scene, obtaining one frame of the color image (Yan, [0008]; “for a single pair of color images and depth images…generating a local single-frame point cloud”); obtaining first pixel information of the color image and second pixel information of the depth image (Yan, [0032]; “By traversing all pixels and finding the corresponding depth information”); and determining the mapping relationship (Yan, [0032]; “A single-frame color image and depth map are acquired simultaneously.”) between the depth image and the color image according to the first pixel information and the second pixel information.

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to have a mapping relationship between the depth and color images, including pixel information for both color and depth. (Yan, [0033]; “A single-frame color image and depth map are acquired simultaneously. By traversing all pixels and finding the corresponding depth information, a local single-frame point cloud can be generated. Using mature SLAM technology, the coordinate transformation matrix…”)
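The mapping relationship at issue in claim 2 amounts to registering each depth pixel to a color pixel. A minimal sketch of such a registration, assuming a pinhole camera model with known intrinsics and a fixed depth-to-color extrinsic transform (all names are illustrative, not taken from the application or the cited references):

```python
import numpy as np

def map_depth_pixel_to_color(u, v, d, K_depth, K_color, R, t):
    """Map one depth pixel (u, v) with metric depth d to color-image
    coordinates. K_depth/K_color are 3x3 intrinsic matrices; (R, t) is
    the assumed rigid depth-to-color extrinsic transform."""
    # Back-project the depth pixel to a 3D point in the depth camera frame.
    p_depth = d * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    # Move the point into the color camera frame.
    p_color = R @ p_depth + t
    # Reproject into the color image to get the corresponding pixel.
    proj = K_color @ p_color
    return proj[0] / proj[2], proj[1] / proj[2]
```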
Regarding claim 4, the combination of Yan and Qiao teaches: wherein the obtaining a two-dimensional point cloud image according to the depth image, and determining a target map box (Qiao, [Page 3, Line 23]; “current scene”) corresponding to the two-dimensional point cloud image (Qiao, [Page 3, Line 22]; “a corresponding two-dimensional color image of the current scene”) comprises: converting the depth image into three-dimensional point cloud data (Yan, [0030]; “Based on the camera intrinsic and extrinsic parameter model parameters, the spatial location information of the three-dimensional point cloud is calculated.”), wherein the three-dimensional point cloud data carries different color identifiers (Yan, [0015]; “based on RGB-D information”); obtaining the two-dimensional point cloud image by using the three-dimensional point cloud data (Yan, [0050]; “The three-dimensional non-ground point cloud represents the indoor environment information. Multiple height levels of non-ground point cloud are extracted in the height direction, and projected onto a two-dimensional plane”); and determining the target map box according to the two-dimensional point cloud image (Yan, [0051]; “The non-ground point cloud is projected onto a two-dimensional plane according to the normal equation of the ground. The two-dimensional plane is divided into grids of a certain size to store the grid occupancy state”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to convert the depth image into 3D point cloud data and determine the target map box. (Yan, [0012]; “Ground point clouds are projected onto a two-dimensional plane using the normal equations. Each grid cell is scanned and queried to determine if it contains a point cloud. Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle”)
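Yan [0030] invokes the standard correspondence between image pixel coordinates (u, v, d) and spatial coordinates (x, y, z). A minimal sketch of that back-projection and the subsequent projection onto a plane, assuming pinhole intrinsics fx, fy, cx, cy (variable names are ours, not Yan's):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, meters) into an N x 3 point cloud
    using the pinhole relations x = (u - cx) * d / fx, y = (v - cy) * d / fy,
    z = d. Pixels without depth (d == 0) are dropped."""
    v, u = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

def to_2d_point_cloud(points):
    """Project onto the horizontal plane by discarding the height axis
    (here assumed to be the camera y axis)."""
    return points[:, [0, 2]]
```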
Regarding claim 7, the combination of Yan and Qiao teaches: wherein the obtaining position label information (Yan, [0012]; “Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle.”) in each region in the target map box (Qiao, [Page 3, Line 23]; “current scene”) comprises: performing a gridding operation on the target map box to obtain each region in the target map box (Yan, [0051]; “The two-dimensional plane is divided into grids of a certain size to store the grid occupancy state”); obtaining boundary information of each region, position information of each track point in each region, and a projection relationship between each track point in each region and the three-dimensional point cloud data (Yan, [0039]; “The indoor navigation area is delineated according to the boundary coordinates of the indoor scene, and the unnecessary outdoor area point cloud is directly filtered out by the pass-through filter.”); and using the boundary information of each region, the position information of each track point in each region, and the projection relationship between each track point in each region and the three-dimensional point cloud data as the position label information (Yan, [0033]; “The newly generated single-frame point cloud is aligned to the same world coordinate system, completing the stitching generation of the global 3D point cloud map.”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to perform a gridding operation and to obtain and apply the boundary information, the position information of each track point in each region, and the projection relationship. (Yan, [0012]; “Ground point clouds are projected onto a two-dimensional plane using the normal equations. Each grid cell is scanned and queried to determine if it contains a point cloud. Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle”)
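Claim 7's gridding operation parallels the 1/0 occupancy assignment quoted from Yan [0012]. A minimal sketch of gridding a 2D point cloud into occupied and free cells (the cell size and all names are illustrative assumptions):

```python
import numpy as np

def occupancy_grid(points_2d, cell_size=0.05):
    """Divide the plane into square cells and mark a cell 1 if any point
    falls inside it, 0 otherwise. Returns the grid and its origin so cell
    indices can be mapped back to world coordinates."""
    origin = points_2d.min(axis=0)
    idx = np.floor((points_2d - origin) / cell_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1]] = 1
    return grid, origin
```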
Regarding claim 8, Yan teaches: a terminal device (Yan, [0003]; “data acquisition terminal”) for generating a two-dimensional map, comprising a memory, a processor, and a program stored in the memory and executable on the processor (Yan, [0003]; “various point cloud processing algorithms”), wherein the processor, when executing the program, performs operations comprising: obtaining a depth image and a color image of a target scene (Yan, [0032]; “A single-frame color image and depth map are acquired simultaneously.”); and obtaining position label information (Yan, [0012]; “Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle.”) in each region in the target map box, and according to the position label information and the mapping relationship, correlating the position label information to first pose information (Yan, [0008]; “combined with camera pose, camera model parameters and depth information acquired by the depth camera, the spatial position information of the point cloud is calculated”) of a first color camera corresponding to a color image in each region to obtain the two-dimensional map (Yan, [0058]; “generate a two-dimensional occupied grid map”).

Yan fails to teach: obtaining a two-dimensional point cloud image according to the depth image, and determining a target map box corresponding to the two-dimensional point cloud image.

Qiao teaches: obtaining a two-dimensional point cloud image according to the depth image (Qiao, [Page 1, Line 58]; “A color depth image acquisition unit for acquiring depth point cloud data of the current scene and a corresponding two-dimensional color image”), and determining a target map box (Qiao, [Page 3, Line 23]; “current scene”) corresponding to the two-dimensional point cloud image (Qiao, [Page 1, Line 59]; “a corresponding two-dimensional color image”; [Page 3, Line 22]; “a corresponding two-dimensional color image of the current scene”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to map the relationship between the depth and color images to obtain the 2D point cloud image and determine the target map box. (Qiao, [Page 2, Line 19]; “Scan a current scene to acquire second-level point cloud data and a corresponding two-dimensional color image, perform point cloud registration on the second-level point cloud data and the corresponding two-dimensional color image to improve the three-dimensional map”)

Regarding claim 9, the combination of Yan and Qiao teaches: wherein the obtaining a depth image and a color image of a target scene, and determining a mapping relationship between the depth image and the color image comprises (Yan, [0008]; “Using a Kinect camera to simultaneously acquire color image and depth image sequences of an indoor scene”): in response to obtaining one frame of the depth image of the target scene, obtaining one frame of the color image (Yan, [0008]; “for a single pair of color images and depth images…generating a local single-frame point cloud”); obtaining first pixel information of the color image and second pixel information of the depth image (Yan, [0032]; “By traversing all pixels and finding the corresponding depth information”); and determining the mapping relationship (Yan, [0032]; “A single-frame color image and depth map are acquired simultaneously.”) between the depth image and the color image according to the first pixel information and the second pixel information.

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to have a mapping relationship between the depth and color images, including pixel information for both color and depth. (Yan, [0033]; “A single-frame color image and depth map are acquired simultaneously. By traversing all pixels and finding the corresponding depth information, a local single-frame point cloud can be generated. Using mature SLAM technology, the coordinate transformation matrix…”)
Regarding claim 11, the combination of Yan and Qiao teaches: wherein the obtaining a two-dimensional point cloud image according to the depth image, and determining a target map box (Qiao, [Page 3, Line 23]; “current scene”) corresponding to the two-dimensional point cloud image (Qiao, [Page 3, Line 22]; “a corresponding two-dimensional color image of the current scene”) comprises: converting the depth image into three-dimensional point cloud data (Yan, [0030]; “Based on the camera intrinsic and extrinsic parameter model parameters, the spatial location information of the three-dimensional point cloud is calculated.”), wherein the three-dimensional point cloud data carries different color identifiers (Yan, [0015]; “based on RGB-D information”); obtaining the two-dimensional point cloud image by using the three-dimensional point cloud data (Yan, [0050]; “The three-dimensional non-ground point cloud represents the indoor environment information. Multiple height levels of non-ground point cloud are extracted in the height direction, and projected onto a two-dimensional plane”); and determining the target map box according to the two-dimensional point cloud image (Yan, [0051]; “The non-ground point cloud is projected onto a two-dimensional plane according to the normal equation of the ground. The two-dimensional plane is divided into grids of a certain size to store the grid occupancy state”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to convert the depth image into 3D point cloud data and determine the target map box. (Yan, [0012]; “Ground point clouds are projected onto a two-dimensional plane using the normal equations. Each grid cell is scanned and queried to determine if it contains a point cloud. Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle”)

Regarding claim 14, the combination of Yan and Qiao teaches: wherein the obtaining position label information (Yan, [0012]; “Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle.”) in each region in the target map box (Qiao, [Page 3, Line 23]; “current scene”) comprises: performing a gridding operation on the target map box to obtain each region in the target map box (Yan, [0051]; “The two-dimensional plane is divided into grids of a certain size to store the grid occupancy state”); obtaining boundary information of each region, position information of each track point in each region, and a projection relationship between each track point in each region and the three-dimensional point cloud data (Yan, [0039]; “The indoor navigation area is delineated according to the boundary coordinates of the indoor scene, and the unnecessary outdoor area point cloud is directly filtered out by the pass-through filter.”); and using the boundary information of each region, the position information of each track point in each region, and the projection relationship between each track point in each region and the three-dimensional point cloud data as the position label information (Yan, [0033]; “The newly generated single-frame point cloud is aligned to the same world coordinate system, completing the stitching generation of the global 3D point cloud map.”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao.
The motivation for the combination is to be able to perform a gridding operation and to obtain and apply the boundary information, the position information of each track point in each region, and the projection relationship. (Yan, [0012]; “Ground point clouds are projected onto a two-dimensional plane using the normal equations. Each grid cell is scanned and queried to determine if it contains a point cloud. Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle”)

Regarding claim 15, Yan teaches: a non-transitory computer-readable storage medium (Yan, [0003]; “information storage”), storing a program, wherein the program, when executed by a processor, causes the processor to perform operations comprising: obtaining a depth image and a color image of a target scene (Yan, [0032]; “A single-frame color image and depth map are acquired simultaneously.”); and obtaining position label information (Yan, [0012]; “Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle.”) in each region in the target map box, and according to the position label information and the mapping relationship, correlating the position label information to first pose information (Yan, [0008]; “combined with camera pose, camera model parameters and depth information acquired by the depth camera, the spatial position information of the point cloud is calculated”) of a first color camera corresponding to a color image in each region to obtain the two-dimensional map (Yan, [0058]; “generate a two-dimensional occupied grid map”).

Yan fails to teach: obtaining a two-dimensional point cloud image according to the depth image, and determining a target map box corresponding to the two-dimensional point cloud image.

Qiao teaches: obtaining a two-dimensional point cloud image according to the depth image (Qiao, [Page 1, Line 58]; “A color depth image acquisition unit for acquiring depth point cloud data of the current scene and a corresponding two-dimensional color image”), and determining a target map box (Qiao, [Page 3, Line 23]; “current scene”) corresponding to the two-dimensional point cloud image (Qiao, [Page 1, Line 59]; “a corresponding two-dimensional color image”; [Page 3, Line 22]; “a corresponding two-dimensional color image of the current scene”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to map the relationship between the depth and color images to obtain the 2D point cloud image and determine the target map box.
(Qiao, [Page 2, Line 19]; “Scan a current scene to acquire second-level point cloud data and a corresponding two-dimensional color image, perform point cloud registration on the second-level point cloud data and the corresponding two-dimensional color image to improve the three-dimensional map”)

Regarding claim 16, the combination of Yan and Qiao teaches: wherein the obtaining a depth image and a color image of a target scene, and determining a mapping relationship between the depth image and the color image comprises (Yan, [0008]; “Using a Kinect camera to simultaneously acquire color image and depth image sequences of an indoor scene”): in response to obtaining one frame of the depth image of the target scene, obtaining one frame of the color image (Yan, [0008]; “for a single pair of color images and depth images…generating a local single-frame point cloud”); obtaining first pixel information of the color image and second pixel information of the depth image (Yan, [0032]; “By traversing all pixels and finding the corresponding depth information”); and determining the mapping relationship (Yan, [0032]; “A single-frame color image and depth map are acquired simultaneously.”) between the depth image and the color image according to the first pixel information and the second pixel information.

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to have a mapping relationship between the depth and color images, including pixel information for both color and depth. (Yan, [0033]; “A single-frame color image and depth map are acquired simultaneously. By traversing all pixels and finding the corresponding depth information, a local single-frame point cloud can be generated. Using mature SLAM technology, the coordinate transformation matrix…”)

Regarding claim 18, the combination of Yan and Qiao teaches: wherein the obtaining a two-dimensional point cloud image according to the depth image, and determining a target map box (Qiao, [Page 3, Line 23]; “current scene”) corresponding to the two-dimensional point cloud image (Qiao, [Page 3, Line 22]; “a corresponding two-dimensional color image of the current scene”) comprises: converting the depth image into three-dimensional point cloud data (Yan, [0030]; “Based on the camera intrinsic and extrinsic parameter model parameters, the spatial location information of the three-dimensional point cloud is calculated.”), wherein the three-dimensional point cloud data carries different color identifiers (Yan, [0015]; “based on RGB-D information”); obtaining the two-dimensional point cloud image by using the three-dimensional point cloud data (Yan, [0050]; “The three-dimensional non-ground point cloud represents the indoor environment information. Multiple height levels of non-ground point cloud are extracted in the height direction, and projected onto a two-dimensional plane”); and determining the target map box according to the two-dimensional point cloud image (Yan, [0051]; “The non-ground point cloud is projected onto a two-dimensional plane according to the normal equation of the ground. The two-dimensional plane is divided into grids of a certain size to store the grid occupancy state”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan with Qiao. The motivation for the combination is to be able to convert the depth image into 3D point cloud data and determine the target map box.
(Yan, [0012]; “Ground point clouds are projected onto a two-dimensional plane using the normal equations. Each grid cell is scanned and queried to determine if it contains a point cloud. Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle”)

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Yan (CN 111598916 A) and Qiao (CN 106355647 A), further in view of Li (CN 111190981 B).

Regarding claim 3, the combination of Yan and Qiao fails to teach: obtaining second pose information of a second color camera corresponding to the color image and third pose information of a depth camera corresponding to the depth image; and associating the second pose information of the second color camera corresponding to the color image with the third pose information of the depth camera corresponding to the depth image according to the mapping relationship.

Li teaches: obtaining second pose information (Li, [Page 2, Line 95]; “color image sequence”) of a second color camera corresponding to the color image and third pose information (Li, [Page 2, Line 95]; “depth image sequence”) of a depth camera corresponding to the depth image; and associating the second pose information of the second color camera corresponding to the color image with the third pose information of the depth camera corresponding to the depth image according to the mapping relationship (Li, [Page 3, Line 122]; “the environment image collection includes a color image sequence and a depth image sequence”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan and Qiao with Li. The motivation for the combination is to be able to have a mapping relationship between depth and color along with corresponding pose information. (Li, [Page 3, Line 174]; “This application combines the color image sequence and the depth image sequence as the input of the semantic segmentation model.”)
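Claims 3, 10, and 17 tie the color camera's pose to the depth camera's pose through the mapping relationship. One common way to realize such an association, sketched under the assumption of a rigidly mounted RGB-D pair with a known fixed extrinsic transform (4x4 homogeneous matrices; all names are hypothetical, not the application's method):

```python
import numpy as np

def depth_pose_from_color_pose(T_world_color, T_color_depth):
    """Given the color camera's world pose and the fixed color-to-depth
    extrinsic transform, return the depth camera's world pose for the
    same frame. Both inputs are 4x4 homogeneous transforms."""
    return T_world_color @ T_color_depth

# Example: identity color pose and an assumed 5 cm baseline between sensors.
T_color_depth = np.eye(4)
T_color_depth[0, 3] = 0.05
print(depth_pose_from_color_pose(np.eye(4), T_color_depth))
```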
Regarding claim 10, the combination of Yan and Qiao fails to teach: obtaining second pose information of a second color camera corresponding to the color image and third pose information of a depth camera corresponding to the depth image; and associating the second pose information of the second color camera corresponding to the color image with the third pose information of the depth camera corresponding to the depth image according to the mapping relationship.

Li teaches: obtaining second pose information (Li, [Page 2, Line 95]; “color image sequence”) of a second color camera corresponding to the color image and third pose information (Li, [Page 2, Line 95]; “depth image sequence”) of a depth camera corresponding to the depth image; and associating the second pose information of the second color camera corresponding to the color image with the third pose information of the depth camera corresponding to the depth image according to the mapping relationship (Li, [Page 3, Line 122]; “the environment image collection includes a color image sequence and a depth image sequence”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan and Qiao with Li. The motivation for the combination is to be able to have a mapping relationship between depth and color along with corresponding pose information. (Li, [Page 3, Line 174]; “This application combines the color image sequence and the depth image sequence as the input of the semantic segmentation model.”)

Regarding claim 17, the combination of Yan and Qiao fails to teach: obtaining second pose information of a second color camera corresponding to the color image and third pose information of a depth camera corresponding to the depth image; and associating the second pose information of the second color camera corresponding to the color image with the third pose information of the depth camera corresponding to the depth image according to the mapping relationship.

Li teaches: obtaining second pose information (Li, [Page 2, Line 95]; “color image sequence”) of a second color camera corresponding to the color image and third pose information (Li, [Page 2, Line 95]; “depth image sequence”) of a depth camera corresponding to the depth image; and associating the second pose information of the second color camera corresponding to the color image with the third pose information of the depth camera corresponding to the depth image according to the mapping relationship (Li, [Page 3, Line 122]; “the environment image collection includes a color image sequence and a depth image sequence”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan and Qiao with Li. The motivation for the combination is to be able to have a mapping relationship between depth and color along with corresponding pose information. (Li, [Page 3, Line 174]; “This application combines the color image sequence and the depth image sequence as the input of the semantic segmentation model.”)

Claims 5-6, 12-13, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yan (CN 111598916 A) and Qiao (CN 106355647 A), further in view of Thrun (“Learning metric-topological maps for indoor mobile robot navigation”).

Regarding claim 5, the combination of Yan and Qiao teaches: determining, according to the two-dimensional point cloud image, coordinate information of each track point in the two-dimensional point cloud image (Yan, [0008]; “the spatial position information of the point cloud is calculated”).

The combination of Yan and Qiao fails to teach: determining a maximum horizontal coordinate point, a minimum horizontal coordinate point, a maximum vertical coordinate point, and a minimum vertical coordinate point based on the coordinate information of each track point; and determining a rectangular bounding box according to the maximum horizontal coordinate point, the minimum horizontal coordinate point, the maximum vertical coordinate point, and the minimum vertical coordinate point, and using the rectangular bounding box as the target map box.

Thrun teaches: determining a maximum horizontal coordinate point, a minimum horizontal coordinate point, a maximum vertical coordinate point, and a minimum vertical coordinate point based on the coordinate information of each track point (Thrun, [Page 39, Paragraph (v)]; “[xmin, xmax] x [ymin, ymax]”); and determining a rectangular bounding box according to the maximum horizontal coordinate point, the minimum horizontal coordinate point, the maximum vertical coordinate point, and the minimum vertical coordinate point, and using the rectangular bounding box as the target map box (Thrun, [Page 39, Paragraph (v)]; “a rectangular bounding box [xmin, xmax] x [ymin, ymax] is maintained that contains all grid cells in which Vx,y may change.”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan and Qiao with Thrun. The motivation for the combination is to be able to determine the physical characteristics of the bounding box. (Thrun, [Page 38, Paragraph (iii)]; “Determine motion direction.”)
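The bounding box of claims 5, 12, and 19 is the axis-aligned rectangle spanned by the extreme track-point coordinates, as in the Thrun passage quoted above. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def rectangular_bounding_box(points_2d):
    """Return (xmin, xmax, ymin, ymax) for an N x 2 array of track points:
    the axis-aligned rectangle [xmin, xmax] x [ymin, ymax] used as the
    target map box in this sketch."""
    xmin, ymin = points_2d.min(axis=0)
    xmax, ymax = points_2d.max(axis=0)
    return xmin, xmax, ymin, ymax
```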
Regarding claim 6, the combination of Yan, Qiao, and Thrun teaches: modifying the target map box by aligning the target map box with a coordinate system corresponding to the coordinate information (Yan, [0030]; “Based on the camera intrinsic and extrinsic parameter model parameters, the spatial location information of the three-dimensional point cloud is calculated. The correspondence between the spatial coordinates (x, y, z) of a single point and the corresponding image pixel coordinates (u, v, d) (d is the depth) is as follows”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan and Qiao with Thrun. The motivation for the combination is to be able to align the target map box with respect to the coordinate information. (Yan, [0032]; “The newly generated single-frame point cloud is aligned to the same world coordinate system”)

Regarding claim 12, the combination of Yan and Qiao teaches: determining, according to the two-dimensional point cloud image, coordinate information of each track point in the two-dimensional point cloud image (Yan, [0008]; “the spatial position information of the point cloud is calculated”).

The combination of Yan and Qiao fails to teach: determining a maximum horizontal coordinate point, a minimum horizontal coordinate point, a maximum vertical coordinate point, and a minimum vertical coordinate point based on the coordinate information of each track point; and determining a rectangular bounding box according to the maximum horizontal coordinate point, the minimum horizontal coordinate point, the maximum vertical coordinate point, and the minimum vertical coordinate point, and using the rectangular bounding box as the target map box.

Thrun teaches: determining a maximum horizontal coordinate point, a minimum horizontal coordinate point, a maximum vertical coordinate point, and a minimum vertical coordinate point based on the coordinate information of each track point (Thrun, [Page 39, Paragraph (v)]; “[xmin, xmax] x [ymin, ymax]”); and determining a rectangular bounding box according to the maximum horizontal coordinate point, the minimum horizontal coordinate point, the maximum vertical coordinate point, and the minimum vertical coordinate point, and using the rectangular bounding box as the target map box (Thrun, [Page 39, Paragraph (v)]; “a rectangular bounding box [xmin, xmax] x [ymin, ymax] is maintained that contains all grid cells in which Vx,y may change.”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan and Qiao with Thrun. The motivation for the combination is to be able to determine the physical characteristics of the bounding box. (Thrun, [Page 38, Paragraph (iii)]; “Determine motion direction.”)

Regarding claim 13, the combination of Yan, Qiao, and Thrun teaches: modifying the target map box by aligning the target map box with a coordinate system corresponding to the coordinate information. (Yan, [0030]; “Based on the camera intrinsic and extrinsic parameter model parameters, the spatial location information of the three-dimensional point cloud is calculated.
The correspondence between the spatial coordinates (x, y, z) of a single point and the corresponding image pixel coordinates (u, v, d) (d is the depth) is as follows”)

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan and Qiao with Thrun. The motivation for the combination is to be able to align the target map box with respect to the coordinate information. (Yan, [0032]; “The newly generated single-frame point cloud is aligned to the same world coordinate system”)

Regarding claim 19, the combination of Yan and Qiao teaches: determining, according to the two-dimensional point cloud image, coordinate information of each track point in the two-dimensional point cloud image (Yan, [0008]; “the spatial position information of the point cloud is calculated”).

The combination of Yan and Qiao fails to teach: determining a maximum horizontal coordinate point, a minimum horizontal coordinate point, a maximum vertical coordinate point, and a minimum vertical coordinate point based on the coordinate information of each track point; and determining a rectangular bounding box according to the maximum horizontal coordinate point, the minimum horizontal coordinate point, the maximum vertical coordinate point, and the minimum vertical coordinate point, and using the rectangular bounding box as the target map box.

Thrun teaches: determining a maximum horizontal coordinate point, a minimum horizontal coordinate point, a maximum vertical coordinate point, and a minimum vertical coordinate point based on the coordinate information of each track point (Thrun, [Page 39, Paragraph (v)]; “[xmin, xmax] x [ymin, ymax]”); and determining a rectangular bounding box according to the maximum horizontal coordinate point, the minimum horizontal coordinate point, the maximum vertical coordinate point, and the minimum vertical coordinate point, and using the rectangular bounding box as the target map box (Thrun, [Page 39, Paragraph (v)]; “a rectangular bounding box [xmin, xmax] x [ymin, ymax] is maintained that contains all grid cells in which Vx,y may change.”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan and Qiao with Thrun. The motivation for the combination is to be able to determine the physical characteristics of the bounding box. (Thrun, [Page 38, Paragraph (iii)]; “Determine motion direction.”)
Regarding claim 20, the combination of Yan, Qiao, and Thrun teaches: wherein the obtaining position label information (Yan, [0012]; “Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle.”) in each region in the target map box (Qiao, [Page 3, Line 23]; “current scene”) comprises: performing a gridding operation on the target map box to obtain each region in the target map box (Yan, [0051]; “The two-dimensional plane is divided into grids of a certain size to store the grid occupancy state”); obtaining boundary information of each region, position information of each track point in each region, and a projection relationship between each track point in each region and the three-dimensional point cloud data (Yan, [0039]; “The indoor navigation area is delineated according to the boundary coordinates of the indoor scene, and the unnecessary outdoor area point cloud is directly filtered out by the pass-through filter.”); and using the boundary information of each region, the position information of each track point in each region, and the projection relationship between each track point in each region and the three-dimensional point cloud data as the position label information (Yan, [0033]; “The newly generated single-frame point cloud is aligned to the same world coordinate system, completing the stitching generation of the global 3D point cloud map.”).

Before the time of filing, it would have been obvious to one of ordinary skill in the art to combine Yan and Qiao with Thrun. The motivation for the combination is to be able to perform a gridding operation and to obtain and apply the boundary information, the position information of each track point in each region, and the projection relationship. (Yan, [0012]; “Ground point clouds are projected onto a two-dimensional plane using the normal equations. Each grid cell is scanned and queried to determine if it contains a point cloud. Values of 1 and 0 are assigned to indicate whether the cell is occupied by an obstacle”)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANGI SARKAR, whose telephone number is (571) 272-7262. The examiner can normally be reached M-F, 7:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIVANGI SARKAR/
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Feb 29, 2024: Application Filed
Apr 01, 2026: Non-Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
