DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/8/2025 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in response to the amendment filed on 10/8/2025 for application 18/158,309. Claims 1 – 20 are pending and have been examined.
Claims 1 and 18 are amended.
Response to Amendment
Applicant’s amendment filed on 10/8/2025 has been entered.
The rejection of the claims under 35 U.S.C. 101 has been withdrawn in light of the amendment.
Response to Arguments
Applicant’s arguments with respect to claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1 – 2 and 18 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ibrahim et al. (hereinafter Ibrahim), “BIM Driven Mission Planning and Navigation for Automatic Indoor Construction Progress Detection”, in view of da Silva et al. (hereinafter da Silva), US20220244741, and Wise et al. (hereinafter Wise), US20170252926.
Regarding Claim 1, Ibrahim discloses: A method of inspecting a building construction site using a mobile robotic system, the mobile robotic system comprising a mobile platform and a sensor system mounted on the mobile platform and configured to generate one or more types of sensor data (Fig. 1, the indoor robotic collection platform having sensors), the method comprising:
receiving, by an object identification information receiving module of a system, object (in-progress element) identification information identifying at least one building object to be inspected by the mobile robotic system in the building construction site (page 183, “A visual quality feedback during data collection planning assures good visual coverage to the in-progress elements”; page 184, “on-board computer”; i.e., the system logic is performed by an onboard computer with software (object identification information receiving module) that records/receives the information of the elements that need to be inspected);
determining, by a goal point determining module of the system, at least one goal point in the robot navigation map for the at least one building object, each goal point (waypoint) being a position in the robot navigation map for the mobile robotic system to navigate autonomously to for inspecting corresponding one or more building objects of the at least one building object (page 185, “Model-driven data collection planning is utilized to define waypoints that visually observe the locations of expected changes … color coding the elements with expected progress and creating navigation waypoints close to these elements”, “transform the data collection plan to the navigation map’s coordinates”; page 182, “automatically navigate waypoints … to collect the data”; refer to the mapping above, the onboard computer with software transforms waypoints to the map coordinates, thus the goal point determining module),
wherein determining the at least one goal point in the robot navigation map comprises:
generating, by a path planner module of the system, a path from a current position of the mobile platform in the robot navigation map to a first goal point of the at least one goal point (refer to the mapping above & page 185, “A path planner is used to define a navigable path between the robot’s current configuration and each waypoint successively using A* algorithm”);
performing, by at least one processor of the mobile robotic system, a navigation command to drive the mobile platform to move along the path to the first goal point; obtaining, by the imaging sensor, an image of corresponding one or more first building objects of the at least one building object at the first goal point (page 184 & Fig. 1, “the rover navigates the data collection path automatically using pre-defined way-points”; page 185, “Model-driven data collection planning is utilized to define waypoints that visually observe the locations of expected changes”);
Ibrahim does not explicitly disclose:
obtaining, by a robot navigation map obtaining module of the system, a robot navigation map covering the at least one building object based on a building information model for the building construction site
said each goal point is determined by the goal point determining module based on geometric information associated with the corresponding one or more building objects extracted from the building information model and geometric information associated with an imaging sensor of the sensor system for optimizing coverage of the corresponding one or more building objects by the imaging sensor.
subsequent to obtaining the image of the corresponding one or more first building objects at the first goal point, determining a second goal point of the at least one goal point in the robot navigation map for corresponding one or more second building objects of the at least one building object.
da Silva, in the same field of endeavor, explicitly teaches:
obtaining, by a robot navigation map obtaining module of the system, a robot navigation map covering the at least one building object based on a building information model for the building construction site (0004, “receiving a building information model (BIM) for the building environment. The BIM includes semantic information identifying one or more permanent objects within the building environment. The operations include generating a plurality of localization candidates for a localization map (navigation map) of the building environment”, “The localization map is configured to localize the robot within the building environment”, “each localization candidate, the operations include determining whether the respective feature corresponding to the respective localization candidate is a permanent object in the building environment identified by the semantic information of the BIM”; 0009 – 0010, “the localization map autonomously guides the robot through the building environment”, “the operations further include receiving, from an operator of the robot, an authored task for the robot to perform within the building environment and autonomously navigating through the building environment to perform the authored task using the localization map.”; i.e., the navigation map is generated using the semantic information of the BIM and is used to perform the assigned task, in this case, the survey/inspection of in-progress field elements (Ibrahim, pages 185 – 186))
said each goal point is determined by the goal point determining module based on geometric information associated with the corresponding one or more building objects extracted from the building information model (refer to the mapping above) and geometric information associated with an imaging sensor of the sensor system for optimizing coverage of the corresponding one or more building objects by the imaging sensor (refer to the mapping above & da Silva, 0023, “A semantic model is a virtual model of a site that contains semantic information regarding geometry and/or data needed to support construction, fabrication, and/or other procurement activities that occur at the site”; 0046, “semantic model 30 (e.g., a BIM) that includes semantic information”; 0028 – 0029, “When surveying a field of view Fv with a sensor 132 (geometric information with an image sensor), the sensor system 130 generates sensor data 134 (e.g., image data) corresponding to the field of view”; i.e., the goal point for surveying an object has to consider the field of view (sensor geometry) that covers the size of the observed object (object geometry))
Ibrahim and da Silva both teach robot applications at a construction site using a building information model (BIM) and are analogous art. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further include the use of object information stored in the BIM as taught by da Silva in Ibrahim’s application to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification to “ensure reliable localization and navigation” (da Silva, 0021).
The combination of Ibrahim and da Silva does not explicitly teach:
subsequent to obtaining the image of the corresponding one or more first building objects at the first goal point, determining a second goal point of the at least one goal point in the robot navigation map for corresponding one or more second building objects of the at least one building object.
Wise, in the same field of endeavor, explicitly teaches:
subsequent to obtaining the image of the corresponding one or more first building objects at the first goal point, determining a second goal point of the at least one goal point in the robot navigation map for corresponding one or more second building objects of the at least one building object (Wise, fig. 4 & 0086 – 0090, “the first task server waits for a period of time … the waiting period can end upon receipt of an instruction to proceed … For example, an instruction to proceed is generated when a robot completes a task”, “In step 420, the first task server checks if there is a robot available to perform the task”, “In step 430, the first task server assigns the queued task to a best robot that is available.”; 0020 – 0022, “For example, the task comprises surveying a site … For example, the task comprises navigating to a particular destination”; Wise teaches dynamic task queuing for managing a plurality of robots. When (subsequent to) an individual robot completes a task (in this case, surveying a specific construction object at a specific location using a camera), the individual robot receives the next task (in this case, surveying another construction object at another location). The combination of Ibrahim, da Silva and Wise renders the limitation obvious).
Ibrahim (in view of da Silva) and Wise both teach managing autonomous robot applications and are analogous art. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further include the load balancing and queueing technique among a plurality of robots taught by Wise in Ibrahim (in view of da Silva)’s application to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification so that “the task can efficiently be performed” (Wise, 0004).
Regarding Claim 2, the Ibrahim, da Silva and Wise combination renders obvious all the limitations of Claim 1. The combination further teaches: the geometric information associated with the corresponding one or more building objects comprises, for each of the corresponding one or more building objects, a location (refer to the mapping in Claim 1, the waypoints are selected based on the locations of elements), a dimension (Ibrahim, page 187, “data collection execution to assure complete visual coverage of in-progress elements”; i.e., if the dimension of the element is large, the waypoint has to be further away) and a surface normal vector of the building object (Ibrahim, page 187, “the quality of the reconstructed point cloud model with a mean density of 6.85 points per m2 and mean accuracy of 0.51 m. It was also observed that locations with higher point cloud density are associated with improved accuracy.” Examiner notes that the surface normal vector relates to the direction of the surface. The point cloud model as shown in fig. 7 is based on the surfaces of building elements. When the surface of the building element is perpendicular or near perpendicular to the field of view of the image sensor, the collected data has low-density points. Ibrahim teaches to consider the density of the collected points as part of the data quality that relates to accuracy.), and the geometric information associated with the imaging sensor comprises a height and a field of view of the imaging sensor (refer to the mapping in Claim 1 & da Silva, fig. 1 & 0027 – 0028, “The robot 100 further has a pose P based on the CM relative to the vertical gravitational axis A2 … Movement by the legs 120 relative to the body 110 alters the pose P of the robot 100 (i.e., the combination of the position of the CM of the robot and the attitude or orientation of the robot 100).”, “the sensor 132 has a corresponding field(s) of view Fv defining a sensing range or region corresponding to the sensor 132”; as illustrated in fig. 1, the field of view of the sensor corresponds to the height of the sensor, and the height of the sensor is considered/adjusted by the pose).
Regarding Claim 18, the Ibrahim, da Silva and Wise combination renders obvious all the limitations of Claim 1. Ibrahim further teaches: the mobile robotic system comprises at least one memory and at least one processor communicatively coupled to the at least one memory, the at least one processor being configured to control the mobile platform to navigate autonomously in the building construction site based on a robot operating system (ROS) (Ibrahim, page 185, “Robot Operating System ROS … on-board computer … Micro Controller MC unit for mission execution”). Claim 18 is rejected for the same reasons.
Regarding Claim 19, Claim 19 is a system claim corresponding to Claim 1. Ibrahim further teaches: at least one memory; and at least one processor communicatively coupled to the at least one memory and configured to perform the method of inspecting the building construction site (Ibrahim, page 185, “Robot Operating System ROS … on-board computer … Micro Controller MC unit for mission execution”). Claim 19 is rejected for the same reasons.
Regarding Claim 20, Claim 20 is a non-transitory computer readable storage medium claim corresponding to Claim 1. Claim 20 is rejected for the same reasons.
Claim(s) 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ibrahim et al. (hereinafter Ibrahim), “BIM Driven Mission Planning and Navigation for Automatic Indoor Construction Progress Detection”, in view of da Silva et al. (hereinafter da Silva), US20220244741, and Wise et al. (hereinafter Wise), US20170252926, as applied to Claim 1, and further in view of Moselhi et al. (hereinafter Moselhi), US20030023404.
Regarding Claim 3, the Ibrahim, da Silva and Wise combination renders obvious all the limitations of Claim 1. The combination does not explicitly teach: the at least one building object comprises a plurality of building objects, and
said determining the at least one goal point for the at least one building object comprises:
determining whether the plurality of building objects satisfy a proximity condition and a surface angle condition; and
determining one goal point for the plurality of building objects collectively if the plurality of building objects are determined to satisfy the proximity condition and the surface angle condition.
Moselhi, in the same field of endeavor, explicitly teaches:
the at least one building object comprises a plurality of building objects (Moselhi, 0021 – 0027, “detected defects using image analysis techniques, artificial intelligence (AI) … for the classification of defects in … joint displacements, reduction of cross-sectional area”; a joint and a cross-sectional area involve more than one building object; the observation needs to consider the view of the scene involving more than one building object), and
said determining the at least one goal point for the at least one building object comprises: determining whether the plurality of building objects satisfy a proximity condition and a surface angle condition; and determining one goal point for the plurality of building objects collectively if the plurality of building objects are determined to satisfy the proximity condition and the surface angle condition (refer to the mapping above and Claims 1 – 2; Moselhi teaches to retrieve an image of the concerned multiple objects in a scene. The Ibrahim, da Silva and Wise combination teaches the consideration of the observation position, the field of view, the visual coverage of the scene (proximity condition of building objects) and the building element surface angle. The combination renders the limitation obvious).
Ibrahim (in view of da Silva and Wise) and Moselhi both teach building inspection using image data and are analogous art. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further include the combined object consideration of Moselhi’s teaching in Ibrahim (in view of da Silva and Wise)’s waypoint decision to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification in order to have a clear view of the issues between multiple building elements.
Claim(s) 4 – 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ibrahim et al. (hereinafter Ibrahim), “BIM Driven Mission Planning and Navigation for Automatic Indoor Construction Progress Detection”, in view of da Silva et al. (hereinafter da Silva), US20220244741, and Wise et al. (hereinafter Wise), US20170252926, as applied to Claim 1, and further in view of Ding et al. (hereinafter Ding), CN110889464.
Regarding Claim 4, the Ibrahim, da Silva and Wise combination renders obvious all the limitations of Claim 1. The combination further teaches:
for said each goal point determined: the mobile robotic system is configured to navigate to the goal point for obtaining an image of the corresponding one or more building objects (refer to the mapping of Claim 1; the robot navigates to each of the waypoints to capture images).
The Ibrahim, da Silva and Wise combination does not explicitly teach:
determining a state of each of the corresponding one or more building objects using a convolutional neural network (CNN)-based object detector based on an image of the corresponding one or more building objects obtained by the mobile robotic system and the building information model, the CNN-based object detector comprising one or more detection models, each detection model being trained to detect a corresponding type of state of building objects.
Ding, in the same field of endeavor, explicitly teaches:
determining a state of each of the corresponding one or more building objects using a convolutional neural network (CNN)-based object detector based on the image of the corresponding one or more building objects obtained and the building information model, the CNN-based object detector comprising one or more detection models, each detection model being trained to detect a corresponding type of state of building objects (Ding, translation page 5, “Using the neural network to perform feature extraction on the sample image according to the convolution matrix corresponding to each depth feature map, to obtain a sample feature map corresponding to the sample image; Determine, according to the sample feature map corresponding to the sample image, the existence (a type of state) probability of the target object corresponding to each of the target detection regions in the sample image corresponding to each feature point in the sample feature map;” i.e., a CNN-based object detector that has one model to detect the existence of the expected object).
Ibrahim (in view of da Silva and Wise) and Ding both teach object detection and classification using image data and are analogous art. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further combine Ding’s teaching of using a convolutional neural network for object classification with Ibrahim (in view of da Silva and Wise)’s teaching of applying object classification on a construction site to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification to improve the accuracy of the detection (Ding, translation pages 9, 15, 31).
Regarding Claim 5, the Ibrahim, da Silva, Wise and Ding combination renders obvious all the limitations of Claim 4. The combination further teaches:
the type of state of building objects is one of a building component installation completion type, a building component defect type and a building material presence type (refer to the mapping in Claim 4, detecting the existence of an object (building material presence type)).
Regarding Claim 6, the Ibrahim, da Silva, Wise and Ding combination renders obvious all the limitations of Claim 4. The combination further teaches:
said determining the state of each of the corresponding one or more building objects comprises, for each corresponding building object: detecting the corresponding building object in the image based on the CNN-based object detector to obtain a detection result (refer to the mapping in Claim 4, the detection model is CNN based);
localizing the detected corresponding building object in the image in a coordinate frame of the building information model; determining geometric information of the detected corresponding building object; determining whether the geometric information of the detected corresponding building object determined and corresponding geometric information associated with the detected corresponding building object extracted from the building information model satisfy a matching condition; and filtering the detection result of the corresponding building object based on whether the geometric information of the detected corresponding building object determined and the corresponding geometric information associated with the detected corresponding building object extracted from the building information model satisfy the matching condition (refer to the mapping in Claim 1 & da Silva, 0008 – 0009, “The localization map is configured to localize the robot within the building environment when the robot moves throughout the building environment. For each localization candidate, the operations include determining whether the respective feature corresponding to the respective localization candidate is a permanent object in the building environment identified by the semantic information of the BIM”; Fig. 2B – 2C & 0048, “the semantic planner 200 may perform a matching (filtering) process that matches features from the sensor data 134 to features in the semantic model 30 in order to align the semantic model 30 and the sensor data 134. In either approach, the localizer 220 then determines whether the respective location in the semantic model 30 that matches the location of the perceived object from the sensor data 134 corresponds to a permanent object PO in the semantic model 30.”; 0042, “the localization map 202 is formed by determining features (e.g., geometric shapes) of objects in the environment”; i.e., geometric shapes are the features of objects to be matched).
Regarding Claim 7, the Ibrahim, da Silva, Wise and Ding combination renders obvious all the limitations of Claim 6. The combination further teaches:
the geometric information of the detected corresponding building object determined comprises at least one of a location, a dimension and an orientation of the detected corresponding building object, and the geometric information associated with the detected corresponding building object extracted from the building information model comprises at least one of a location, a dimension and an orientation of the detected corresponding building object (refer to the mapping of Claim 6 & fig. 2B – 2C; the dimension relates to the geometric shape and is used to match the sensor data to the BIM semantic data).
Regarding Claim 16, the Ibrahim, da Silva, Wise and Ding combination renders obvious all the limitations of Claim 4. The combination further teaches:
generating an inspection report comprising the determined state of each of the at least one building object (Ibrahim, page 182, “construction progress reporting requires … construction progress”; Examiner notes that the limitation recites the intended use of the claimed invention or a field of use and thus does not bear patentable weight).
Claim(s) 8 is rejected under 35 U.S.C. 103 as being unpatentable over Ibrahim et al. (hereinafter Ibrahim), “BIM Driven Mission Planning and Navigation for Automatic Indoor Construction Progress Detection”, da Silva et al. (hereinafter da Silva), US20220244741, Wise et al. (hereinafter Wise), US20170252926, and Ding et al. (hereinafter Ding), CN110889464, as applied to Claim 6, and further in view of Ahmed, US20250014279.
Regarding Claim 8, the Ibrahim, da Silva, Wise and Ding combination renders obvious all the limitations of Claim 6. The combination further teaches:
said localizing the detected corresponding building object in the image in the coordinate frame of the building information model comprises: converting two-dimensional (2D) image points of the image in a coordinate frame of the image to three-dimensional (3D) image points in a coordinate frame of the imaging sensor; and transforming the 3D image points in the coordinate frame of the imaging sensor into 3D image points in the coordinate frame of the building information model (refer to the mapping in Claim 6 & Ahmed, 0038, “a pose of an object may be defined within three-dimensional geometric space, where the three dimensions have corresponding orthogonal axes (typically x, y, z) within the geometric space”; 0043, “use mapping systems. A mapping system is any system that is capable of constructing a three-dimensional map of an environment based on sensor data.”; 0048, “An image typically comprises a two-dimensional array structure”; 0047, “a camera may include a reference to any light-based sensing technology including event cameras and LIDAR sensors (i.e. laser-based distance sensors)”; i.e., the 2D image from the camera can be converted into a 3D point cloud to be matched to the 3D BIM model).
Ibrahim (in view of da Silva, Wise and Ding) and Ahmed both teach inspection of a construction site using image data and are analogous art. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further include the mapping localization technique of Ahmed’s teaching in the inspection system for the construction site of Ibrahim (in view of da Silva, Wise and Ding)’s teaching to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification to reduce the time and cost of inspection/verification on a construction site (Ahmed, 0002 – 0007).
Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over Ibrahim et al. (hereinafter Ibrahim), “BIM Driven Mission Planning and Navigation for Automatic Indoor Construction Progress Detection”, da Silva et al. (hereinafter da Silva), US20220244741, Wise et al. (hereinafter Wise), US20170252926, Ding et al. (hereinafter Ding), CN110889464, and Ahmed, US20250014279, as applied to Claim 8, and further in view of DiffBot, “Combining Camera and Lidar”.
Regarding Claim 9, the Ibrahim, da Silva, Wise, Ding and Ahmed combination renders obvious all the limitations of Claim 8. The combination further teaches:
the 2D image points of the image in the coordinate frame of the image are converted to the 3D image points in the coordinate frame of the imaging sensor based on a distance between the detected corresponding building object and the imaging sensor obtained from a distance sensor of the sensor system (refer to the mapping of Claim 8 & Ahmed, 0047, “a camera may include a reference to any light-based sensing technology including event cameras and LIDAR sensors (i.e. laser-based distance sensors).”).
The combination does not explicitly teach:
the 3D image points in the coordinate frame of the imaging sensor are transformed into 3D image points in the coordinate frame of the building information model based on a series of homogeneous transformation matrices.
DiffBot, in the same field of endeavor, explicitly teaches:
the 3D image points in the coordinate frame of the imaging sensor are transformed into 3D image points in the coordinate frame of the building information model based on a series of homogeneous transformation matrices (DiffBot, page 1, “combine the tracked feature points within the camera images with the 3D Lidar points. For this we use homogeneous coordinates and the transformation matrices related to cameras to geometrically project the LiDAR points into the camera in such a way that we know the position of each 3D Lidar point on the image sensor”; i.e., using transformation matrices to merge both sensors’ data into a homogeneous 3D coordinate frame).
Ibrahim (in view of da Silva, Wise, Ding and Ahmed) and DiffBot both teach fusion of camera and Lidar data and are analogous art. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further include the homogeneous transformation of DiffBot’s teaching in the inspection system for the construction site of Ibrahim (in view of da Silva, Wise, Ding and Ahmed)’s teaching to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification to merge the sensor data of the camera and lidar.
Claims 10 – 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ibrahim et al. (hereinafter Ibrahim), “BIM Driven Mission Planning and Navigation for Automatic Indoor Construction Progress Detection”, and da Silva et al. (hereinafter da Silva), US20220244741, in view of Wise et al. (hereinafter Wise), US20170252926, and Ding et al. (hereinafter Ding), CN110889464, as applied to Claim 4, and further in view of Lee et al. (hereinafter Lee), KR20100104391.
Regarding Claim 10, the combination of Ibrahim, da Silva, Wise and Ding renders obvious all the limitations of Claim 4. The combination does not explicitly teach:
for each of one or more of said at least one goal point determined: rotating the imaging sensor based on a reference point in the image of the corresponding one or more building objects obtained and a reference point for one or more bounding boxes of the corresponding one or more building objects detected in the image.
Lee, in the same field of endeavor, explicitly teaches:
for each of one or more of said at least one goal point determined: rotating the imaging sensor based on a reference point in the image of the corresponding one or more building objects obtained and a reference point for one or more bounding boxes of the corresponding one or more building objects detected in the image (Lee, translation page 5, “the position and alignment criteria of the center of gravity cell (reference point) of the calculated object … automatically controlling the rotatable camera 50 at the calculated horizontal /vertical angle, the object is aligned in the center of the image of the rotatable camera 50, and the object is positioned in the image using the calculated zoom magnification value.” Translation page 12 & figs. 5A – 5B, “the actual size (width, height) of the object appearing in the image”; the object’s width and height form a bounding box of the object as shown in figs. 5A & 5B; the system moves the camera so that the center of the bounding box (reference point for the bounding box) is at the center of the captured image (reference point in the image)).
Ibrahim (in view of da Silva, Wise and Ding) and Lee both teach the gathering of image data and are therefore analogous art. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further incorporate the camera rotating and zooming technique taught by Lee into the construction-site inspection system taught by Ibrahim (in view of da Silva, Wise and Ding) to achieve the claimed invention. One of ordinary skill in the art would have been motivated to make this modification to achieve a better viewing angle and a more accurate measurement.
Regarding Claim 11, the combination of Ibrahim, da Silva, Wise, Ding and Lee renders obvious all the limitations of Claim 10. The combination further teaches:
the imaging sensor is rotated by an amount based on a distance between the reference point in the image and the reference point for the one or more bounding boxes (refer to the mapping in Claim 10; the system moves the camera so that the bounding box is in the center of the image. Thus, the rotation amount is based on the difference/distance between the two reference points).
Regarding Claim 12, the combination of Ibrahim, da Silva, Wise, Ding and Lee renders obvious all the limitations of Claim 10. The combination further teaches:
the reference point in the image is a center point thereof, and the reference point of the one or more bounding boxes is determined based on a center point of each of the one or more bounding boxes (refer to the mapping in Claim 10; one reference point is the center of the captured image and the other reference point is the center of the bounding box).
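Purely for illustration (not part of the record, and not taken from Lee or any other cited reference), the rotation-by-pixel-offset idea underlying Claims 10 – 12 can be sketched with a simple pinhole model; the function and parameter names below are hypothetical:

```python
import math

def pan_tilt_correction(img_w, img_h, box_cx, box_cy, focal_px):
    """Pan/tilt angles (radians) that would rotate a camera so the
    bounding-box center (box_cx, box_cy) moves to the image center,
    assuming a pinhole camera with focal length focal_px in pixels."""
    dx = box_cx - img_w / 2.0          # horizontal pixel offset
    dy = box_cy - img_h / 2.0          # vertical pixel offset
    pan = math.atan2(dx, focal_px)     # larger offset -> larger rotation
    tilt = math.atan2(dy, focal_px)
    return pan, tilt

# Box already centered in a 640x480 image: no rotation is needed.
pan, tilt = pan_tilt_correction(640, 480, 320, 240, 500.0)
# -> (0.0, 0.0)
```

The rotation amount grows with the pixel distance between the two reference points, which is the relationship recited in Claim 11.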
Regarding Claim 13, the combination of Ibrahim, da Silva, Wise, Ding and Lee renders obvious all the limitations of Claim 10. The combination further teaches:
refining the goal point determined by adjusting a distance between the mobile robotic system and the building object based on a dimension of the object and a dimension of an anchor box for detecting the corresponding one or more building objects in the image (refer to the mappings of Claims 1 & 10; da Silva teaches moving the robot so that the target is in the field of view (anchor box); Ibrahim teaches assuring complete visual coverage of building elements. The combination renders it obvious that when the waypoint is too close and the captured image cannot cover the entire building element, the waypoint (goal point) should be moved farther away so that the field of view can cover the entire building element).
Regarding Claim 14, the combination of Ibrahim, da Silva, Wise, Ding and Lee renders obvious all the limitations of Claim 13. The combination further teaches:
the distance is adjusted based on a difference between the dimension of the object and the dimension of the anchor box (refer to the mapping in Claim 13; the moving distance is determined based on the building element’s size and the region that the field of view covers).
Regarding Claim 15, the combination of Ibrahim, da Silva, Wise, Ding and Lee renders obvious all the limitations of Claim 14. The combination further teaches:
the dimension of the object is a height thereof, and the dimension of the anchor box for detecting the object is a height thereof (refer to the mappings in Claims 13 & 14 and Lee, fig. 5B: the bounding box has a height and a width; da Silva, fig. 1A: the field of view has a limited width and height; i.e., the waypoint should be moved so that the height dimension of the field of view can cover the height of the building element).
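Purely for illustration (not part of the record, and not taken from any cited reference), the standoff-distance refinement underlying Claims 13 – 15 can be sketched with the pinhole relation that apparent size scales inversely with distance; the function name and values below are hypothetical:

```python
def refine_standoff(distance, object_h_px, anchor_h_px):
    """New camera-to-object distance chosen so the object's apparent
    height shrinks (or grows) to match the anchor-box height.
    Pinhole model: apparent height in pixels is proportional to 1/distance,
    so scaling the distance by (object_h_px / anchor_h_px) makes the
    object just fill the anchor box vertically."""
    return distance * (object_h_px / anchor_h_px)

# Object appears twice as tall as the anchor box at 2 m: back off to 4 m.
d_new = refine_standoff(2.0, object_h_px=800, anchor_h_px=400)
# -> 4.0
```

The adjustment is driven by the mismatch between the object's dimension and the anchor box's dimension, which is the relationship recited in Claims 14 and 15.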
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Ibrahim et al. (hereinafter Ibrahim), “BIM Driven Mission Planning and Navigation for Automatic Indoor Construction Progress Detection”, and da Silva et al. (hereinafter da Silva), US20220244741, in view of Wise et al. (hereinafter Wise), US20170252926, as applied to Claim 1, and further in view of Chaillan, US20190136508.
Regarding Claim 17, the combination of Ibrahim, da Silva and Wise renders obvious all the limitations of Claim 1. The combination does not explicitly teach: the building construction site is a prefabricated prefinished volumetric construction (PPVC) site.
Chaillan, in the same field of endeavor, explicitly teaches:
the building construction site is a prefabricated prefinished volumetric construction (PPVC) site (Chaillan, figs. 2B and 2E illustrate a PPVC site. The Examiner notes that this limitation recites an intended use of the claimed invention or a field of use and thus does not bear patentable weight).
Ibrahim (in view of da Silva and Wise) and Chaillan both teach a building construction site and are therefore analogous art. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further apply the automatic building inspection taught by Ibrahim (in view of da Silva and Wise) to the PPVC construction site taught by Chaillan. One of ordinary skill in the art would have been motivated to make this modification in order to reduce the overall cost by automatic mission execution (Ibrahim, abs.).
Conclusion
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIEN MING CHOU whose telephone number is (571)272-9354. The examiner can normally be reached Monday- Friday 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HELAL ALGAHAIM can be reached on (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIEN MING CHOU/Examiner, Art Unit 3666
/HELAL A ALGAHAIM/SPE, Art Unit 3666