DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/24/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Status of Claims
Claims 2-7 and 18-28, filed on 11/19/2025, are presently examined. Claims 1 and 8-17 are cancelled. Claims 2 and 5-7 are amended. Claims 18-28 are new.
Response to Arguments
Regarding 35 USC 101, Applicant’s arguments are unpersuasive. Nothing in the claims explicitly shows an improvement over the human mind and/or the use of pen and paper; the human mind is capable of performing the recited limitations. Nothing in the claims recites how the computer is explicitly superior, and the generalized use of neural networks is insufficient to be significantly more. The features of neural networks argued by the Applicant are not present in the claims, and nothing in the claims specifically recites how the claimed method results in fewer errors than the human mind. Examiner recommends reciting direct control of the robot to overcome the 101 rejection, as the human mind cannot control a robot. Examiner will fully consider any arguments presented in a filed response.
Regarding 35 USC 112, Applicant’s amended claim set filed 11/19/2025 results in this rejection being withdrawn.
Regarding 35 USC 102, Applicant’s arguments are moot. The amendments changed the scope of the invention, and a new reference, Choi, is applied. The 102 rejection is withdrawn and replaced with a 103 rejection.
Regarding the previous 103 rejection, Applicant’s arguments are moot because they depend on claim 2 being rejected over Kueny alone. Claim 2 is now rejected under 103 over Kueny in view of Choi.
Claim Objections
Claims 2, 24, and 28 are objected to because of the following informalities: “generating … each recognizer output corresponds a respective sensor of the plurality of sensors” should instead read: “each recognizer output corresponds to a respective sensor of the plurality of sensors”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 2-7 and 18-28 are rejected under 35 USC § 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claims 2-7 and 18-28 are directed to a method. Therefore, claims 2-7 and 18-28 are within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 2 recites limitations similar to those of independent claims 24 and 28 and will be used as the representative claim.
Claim 2 is recited below and limitations that recite an abstract idea are emphasized in bolding below:
A method for detecting features in an environment, comprising:
receiving, by a processor, a collection of sensor data from a plurality of sensors deployed on a robot traversing a pipeline, each of the plurality of sensors associated with a sensor type,
tracking, by the processor and based on data received from a position sensor of the robot, a position of the robot within the pipeline;
generating, by the processor and using a plurality of deep learning algorithms, a plurality of recognizer outputs, wherein each recognizer output corresponds a respective sensor of the plurality of sensors.
recognizing, by the processor and based on a weighted combination of the plurality of recognizer outputs, a feature;
mapping the feature relative to the position of the robot; and
outputting an identification of the feature and a location of the feature within in the pipeline.
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. The bolded limitations in the context of this claim encompass a person mentally tracking the position of a robot traveling in a pipeline, recognizing features in said environment based on a weighting step, mapping the features relative to a position of the robot, and outputting an identification of the feature and its location within the pipeline. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
A method for detecting features in an environment, comprising:
receiving, by a processor, a collection of sensor data from a plurality of sensors deployed on a robot traversing a pipeline, each of the plurality of sensors associated with a sensor type,
tracking, by the processor and based on data received from a position sensor of the robot, a position of the robot within the pipeline;
generating, by the processor and using a plurality of deep learning algorithms, a plurality of recognizer outputs, wherein each recognizer output corresponds a respective sensor of the plurality of sensors.
recognizing, by the processor and based on a weighted combination of the plurality of recognizer outputs, a feature;
mapping the feature relative to the position of the robot; and
outputting an identification of the feature and a location of the feature within in the pipeline.
For the following reason(s), the examiner submits that the above underlined additional limitations do not integrate the above-noted abstract idea into a practical application.
The examiner submits that these additional limitations merely use sensors to perform the insignificant extra-solution activity of data gathering and use a computer (a processor, generic computer components, sensors) to perform otherwise mental judgements; this is not sufficient to integrate the abstract idea into a practical application.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 2 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a processor or generic computer components such as sensors to gather data and perform the otherwise mental judgements amount to nothing more than applying the exception using generic computer components. Generally applying an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application, merely use generic computer components in their ordinary capacity to perform an otherwise mental process or judgement, and do not amount to significantly more. The courts have recognized receiving or transmitting data over a network, e.g., using the Internet to gather data, as well-understood, routine and conventional activity. See Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
Dependent claims 3-7, 18-23, and 25-27 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application, merely use generic computer components in their ordinary capacity to perform an otherwise mental process or judgement or data gathering, and do not amount to significantly more.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2, 4-5, 24, and 28 are rejected under 35 U.S.C. 103 as unpatentable over Kueny et al. (US 11029257 B2) in view of Choi et al. (KR 20190131207 A), hereinafter referred to as Kueny and Choi, respectively.
Regarding claims 2, 24, and 28, Kueny discloses A method for detecting features in an environment, comprising:
receiving, by a processor, a collection of sensor data from a plurality of sensors deployed on a robot traversing a pipeline, each of the plurality of sensors associated with a sensor type ([column 1, lines 52-57] “a multi-sensor pipe inspection robot that traverses through the interior of a pipe and obtains two or more sets of condition assessment data for the interior of the pipe during a single pass through the pipe interior; the multi-sensor pipe inspection robot comprising a first sensor type and a second sensor type”),
tracking, by the processor and based on data received from a position sensor of the robot, a position of the robot within the pipeline ([column 6, lines 17-21] “if a sensor data type, e.g., structured laser light data, indicates that a crack is present in the wall of the pipe at a particular location, the corresponding LIDAR data for this location may be obtained” [column 4, lines 46-60] “obtain data points related to the distance between the sensors 220, 230 and the interior wall surface of the pipe 200 … measure distance between the sensor 220 and the interior surface of the pipe wall.”);
recognizing, by the processor and based on a weighted combination of the plurality of recognizer outputs, a feature ([column 5 lines 30-31] “One of the sensors reports data that may be used to identify a feature, as indicated at 302.” [column 5, lines 31-40] “a laser sensor that collects structured laser light data may indicate that a feature of interest is present, e.g., a crack in the wall of the pipe. The identification of the feature may correspond, for example, to a data signature that is identified in real-time or near real-time. The data signature for the feature may be learned. The data signature may be learned in a variety of ways. For example, the data signature may be matched using a statistical analysis, or may be classified using a classification scheme, for example a machine learning algorithm.”);
mapping the feature relative to the position of the robot ([column 4, lines 46-64] “To obtain data points related to the distance between the sensors 220, 230 and the interior wall surface of the pipe 200, a time for a sensor output (e.g., laser light) that travels at a known speed to return to a detector is used, such that an image may be constructed from the distance data … in order to measure distance between the sensor 220 and the interior surface of the pipe wall. The sensors 220, 230 may thus operate to calculate distance using time of flight, i.e., time to reflect to a detector, in order to build an image of the interior of the pipe 200.”); and
outputting an identification of the feature and a location of the feature within in the pipeline ([column 5, lines 43-48] “Once the feature is identified at 302, which may include classification of an image or image feature by matching the image or image feature to a predetermined feature among a predetermined feature set, an embodiment determines if there is an image processing technique associated with the feature, as indicated at 303.” [column 3, lines 49-53] “identify a pipe feature and selects an image processing method applied to another sensor data type based on the pipe feature type. This leverages the fact that different statistical methods are more appropriate for forming images of certain features.” [column 6 lines 33-42] “when viewing the image using a single data pass of LIDAR data, the area of the pipe 200 including the crack 240 may appear as a small depression. However, if the second sensor type 230 is, e.g., a structured laser light sensor, it may have less noise or error when detecting a feature such as crack 240. Therefore, when sensor type 230 passes by the area of the pipe 200 including crack 240, the pipe feature, in this case crack 240, may be detected based on a feature signature in the laser scan data and noted. Such identification or notation may then be used to improve the LIDAR data”).
Kueny discloses generating, by the processor, a plurality of recognizer outputs, wherein each recognizer output corresponds a respective sensor of the plurality of sensors ([column 5, lines 31-40] “a laser sensor that collects structured laser light data may indicate that a feature of interest is present, e.g., a crack in the wall of the pipe. The identification of the feature may correspond, for example, to a data signature that is identified in real-time or near real-time. The data signature for the feature may be learned. The data signature may be learned in a variety of ways. For example, the data signature may be matched using a statistical analysis, or may be classified using a classification scheme, for example a machine learning algorithm.” [column 6, lines 28-46] “If the first sensor type 220 is a LIDAR unit, the crack 240 may go undetected or result in a low-resolution image of the crack 240 using a standard image processing technique, … if the second sensor type 230 is, e.g., a structured laser light sensor, it may have less noise or error when detecting a feature such as crack 240. … Such identification or notation may then be used to improve the LIDAR data by changing (which may include initially selecting) the image processing technique applied to the LIDAR data associated with the region of interest, e.g., within a predetermined distance of the crack 240.” [column 3, lines 49-53] “identify a pipe feature and selects an image processing method applied to another sensor data type based on the pipe feature type. This leverages the fact that different statistical methods are more appropriate for forming images of certain features.” It’s not entirely clear whether Kueny is using multiple deep learning networks. However, Kueny is certainly outputting recognized features in the pipeline based on different sensor types’ strengths and weaknesses.).
Kueny fails to explicitly disclose generating, by the processor and using a plurality of deep learning algorithms, a plurality of recognizer outputs, wherein each recognizer output corresponds a respective sensor of the plurality of sensors.
However, Choi teaches generating, by the processor and using a plurality of deep learning algorithms, a plurality of recognizer outputs, wherein each recognizer output corresponds a respective sensor of the plurality of sensors ([See FIG. 4 on page 14, FIG. 5 on page 15] two different deep learning algorithms for RGB camera and LIDAR. They both output a feature map. The feature maps get weighted and combined into an output. [0023] “a feature map acquisition unit that acquires a first feature map and a second feature map by inputting different data into respective deep neural networks; a fusion unit that fuses the acquired first feature map and the second feature map through a fusion network; and a detection unit that detects an object based on a new feature map fused through the fusion network” [0049] “the RGB image of the camera (hereinafter, camera image) (111) and the lidar image (122) of the lidar can be input into their respective deep neural networks.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Choi’s teaching of using two different deep neural networks for different types of sensor data to recognize objects and their locations. One would be motivated, with reasonable expectation of success, to use two (i.e., a plurality of) separate deep neural networks in order to improve the reliability of the system and show good performance (Choi [0050] “robustness can be determined to determine which data between the camera and the lidar is more reliable.” [0005] “sensor fusion technology based on deep learning, intermediate fusion techniques that pass each sensor signal through a separate CNN, combine them in the middle, and process the combined feature values in the final stage through a CNN are showing good performance”).
Regarding claim 4, Kueny discloses The method of claim 3 wherein the sensor is a light detection and ranging (LIDAR) sensor and further comprising creating a three-dimensional image of the environment and wherein the feature is mapped onto the three-dimensional image ([column 4 lines 57-63] “sensor 220, may take time of flight measurements (e.g., LIDAR measurements), again in order to measure distance between the sensor 220 and the interior surface of the pipe wall. The sensors 220, 230 may thus operate to calculate distance using time of flight, i.e., time to reflect to a detector, in order to build an image of the interior of the pipe 200.” [claim 6] “the image of the interior of the pipe comprises a three-dimensional image.”).
Regarding claim 5, Kueny discloses The method of claim 4 wherein the pipeline is a first pipeline and the feature is a second pipeline laterally intersecting the first pipeline ([FIG. 2] circle 250, indicating laterally intersecting pipe mapped by the robot [column 5, line 15] “a manhole opening, as indicated at 250, etc.”).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Kueny in view of Choi as applied to claim 2, further in view of Vicenti (US 9630319 B2), hereinafter referred to as Vicenti.
Regarding claim 3, Kueny fails to disclose The method of claim 2 wherein the position of the robot is determined based on a reading from an encoder associated with a wheel of the robot.
However, Vicenti teaches the position of the robot is determined based on a reading from an encoder associated with a wheel of the robot ([column 8, line 12] “wheel encoders 112a-b.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Vicenti’s teaching of using an encoder associated with a wheel. One would be motivated, with reasonable expectation of success, to determine the distance travelled ([column 10, lines 29-31] “the controller 190 can use the encoders 112a-b to track the distance traveled by the robot 100.”).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kueny in view of Choi as applied to claim 4, further in view of Liang et al. (US 11500099 B2), hereinafter referred to as Liang.
Regarding claim 6, Kueny fails to explicitly disclose The method of claim 4 wherein the feature is associated with a second sensor type and wherein the second sensor type is a two-dimensional camera and wherein the output is predicted by combining data from the two-dimensional camera and the three-dimensional image ([column 4, lines 18-20] “MSI pipe inspection robot 110a may include imaging sensors including at least LIDAR units, sonar units, and visible light cameras.” [column 6, lines 12-17] “if a pipe feature is encountered that is associated with an image processing technique, as determined at 303, then an image processing technique may be selected based on the pipe feature, as indicated at 305, and use of the second image processing technique may be implemented at 306.”).
However, Liang teaches the feature is associated with a second sensor type and wherein the second sensor type is a two-dimensional camera and wherein the output is predicted by combining data from the two-dimensional camera and the three-dimensional image ([FIGs 13-15] [column 3, lines 12-25] “receiving, by the computing system, one or more target data points associated with the image data. The computer-implemented method includes extracting, by the computing system and for each target data point, a plurality of source data points associated with the LIDAR point cloud data based on a distance of each source data point to the target data point. The computer-implemented method also includes fusing, by the computing system, information from the plurality of source data points in the one or more fusion layers to generate an output feature at each target data point. The computer-implemented method also includes generating, by the computing system, a feature map comprising the output feature at each of the one or more target data points.” [column 36, lines 54-58] “In response to receiving the feature map, the machine-learned detector model can be trained to generate as output a plurality of detections corresponding to identified objects of interest within the feature map.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Liang’s teaching of combining visual camera and LIDAR data to identify objects of interest in a generated feature map of the two data streams. One would be motivated, with reasonable expectation of success, to fuse camera and LIDAR data to identify objects of interest in order to improve object detection and localization within three-dimensional space ([column 13, lines 62-67] “an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle.”).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kueny in view of Choi and Liang as applied to claim 6, further in view of Huang (US 10726579 B1), hereinafter referred to as Huang.
Regarding claim 7, Kueny fails to disclose The method of claim 6 wherein there is a first frame of reference associated with a position of the two-dimensional camera with respect to the robot and a second frame of reference associated with a position of the LIDAR sensor with respect to the robot and the combining step is performed based on the difference between the first frame of reference and the second frame of reference.
However Huang teaches there is a first frame of reference associated with a position of the two-dimensional camera with respect to the robot and a second frame of reference associated with a position of the LIDAR sensor with respect to the robot and the combining step is performed based on the difference between the first frame of reference and the second frame of reference ([column 2 line 67 through column 3 line 6] “a LiDAR sensor and a camera associated with the LiDAR sensor to acquire Point Cloud Data (PCD) frames and image frames, respectively, of a target (e.g., a calibration pattern) from different viewpoints. At each viewpoint, the target is arranged at a particular viewing angle, a scale, or an orientation in a common field-of-view (FOV) of the LiDAR sensor and the camera.” [column 13 lines 36-43] “compute a transform between the extracted first normal and the extracted second normal to determine final values of the extrinsic calibration parameters for the sensor system 106. Based on the final values of the extrinsic calibration parameters, a correspondence may be established between every point of a PCD frame from the LiDAR sensor 108 and respective pixels of an image frame from the camera 110.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Huang’s teaching of calibrating visual camera and LIDAR data which are collected from different viewpoints by transforming the data. One would be motivated, with reasonable expectation of success, to fuse camera and LIDAR data in a calibrated manner taking into account their different viewpoints in order to improve the representation of the environment and the objects of interest ([column 3 lines 45-49] “The fused sensor data may be a multimodal representation of the surrounding environment and may provide an enriched description of the surrounding environment and/or object(s) of interest in the surrounding environment.”).
Claims 18 and 25 are rejected under 35 U.S.C. 103 as unpatentable over Kueny in view of Choi as applied to claims 2 and 24, further in view of Warren (US 20150114507 A1), hereinafter referred to as Warren.
Regarding claims 18 and 25, Kueny fails to explicitly disclose The method of claim 2, wherein outputting the identification of the feature and the location of the feature within the pipeline is performed for a post-lining scan of the pipeline, and wherein the method further comprises:
determining, by the processor and based on a comparison between the post-lining scan of the pipeline with a pre-lining scan of the pipeline, whether to perform a corrective action at the location of the feature.
However, Warren teaches outputting the identification of the feature and the location of the feature within the pipeline is performed for a post-lining scan of the pipeline, and wherein the method further comprises: determining, by the processor and based on a comparison between the post-lining scan of the pipeline with a pre-lining scan of the pipeline, whether to perform a corrective action at the location of the feature ([0028] “the first and second spheres 12, 14 scan the condition of the pipeline and feed the data to the computer 20 via the umbilical line 20. The computer 20 using the data determines the thickness of the required coating to be applied in order to fill cracks or voids in the pipeline.” [0031] “Based on the data collected about the condition of the pipeline, it can be determined whether further remediation of the pipeline beyond the application of a coating is needed.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Warren’s teaching of a pipeline repair robot which scans the pipeline before laying down the repairing material and collects information after to determine if any further repair is necessary. One would be motivated, with reasonable expectation of success, to scan, repair, and determine if further repair is necessary with a robot in order to solve this known problem in the art (Warren [0033] “evaluating the interior surface and exterior wall conditions of a pipeline while also dynamically installing a repair coating in a pipeline … instant invention is believed to represent a significant advancement in the art.”).
Claims 19-22 and 27 are rejected under 35 U.S.C. 103 as unpatentable over Kueny in view of Choi as applied to claims 2 and 24, further in view of Wehlin et al. (US 20210402609 A1), hereinafter referred to as Wehlin.
Regarding claims 19 and 27, Kueny fails to explicitly disclose The method of claim 2, wherein the position sensor comprises a motor encoder associated with a wheel of the robot and an inertial measurement unit (IMU), and wherein tracking the position of the robot comprises: determining, by the processor, a disparity between a first distance measurement from the motor encoder and a second distance measurement from the IMU; identifying a wheel slippage event based on the disparity; and updating the position of the robot in response to the wheel slippage event.
However, Wehlin teaches The method of claim 2, wherein the position sensor comprises a motor encoder associated with a wheel of the robot and an inertial measurement unit (IMU), and wherein tracking the position of the robot comprises: determining, by the processor, a disparity between a first distance measurement from the motor encoder and a second distance measurement from the IMU; identifying a wheel slippage event based on the disparity; and updating the position of the robot in response to the wheel slippage event ([0175] “Even with low precision, however, IMU 520 can be used to detect significant radial movement (e.g., slipping) if all wheels 110 start to spin in place on pipe 10 because robotic apparatus is stuck on an obstacle.” [0193] “Changes in one or more of the distance measurements may, in some cases, be indicative of radial slip since the distance between the sensors 1010 (which are attached to robotic apparatus 100) and pipe 10 may increase or decrease depending on the direction robotic apparatus is slipping.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Wehlin’s teaching of a similar pipe inspection robot with wheel slippage detection. One would be motivated, with reasonable expectation of success, to detect when wheels slip on a pipe inspection robot in order to correct the position of the robot in the pipe when wheels slipping cause the robot to remain in place despite encoders reporting distance travelled (Wehlin [0063] “automatically detecting and correcting for radial sip, circumferential slip, longitudinal slip, or any combination thereof.” [0150] “a location of robotic apparatus 100 on pipe 10 relative to intended location to develop commands for driving robotic apparatus along a predetermined path and correcting for any deviations therefrom”).
Regarding claim 20, Kueny fails to explicitly disclose The method of claim 19, wherein identifying the wheel slippage event comprises determining that the first distance measurement from the motor encoder indicates movement of the robot while the second distance measurement from the IMU indicates that the robot is stationary.
However, Wehlin teaches identifying the wheel slippage event comprises determining that the first distance measurement from the motor encoder indicates movement of the robot while the second distance measurement from the IMU indicates that the robot is stationary ([0175] “Even with low precision, however, IMU 520 can be used to detect significant radial movement (e.g., slipping) if all wheels 110 start to spin in place on pipe 10 because robotic apparatus is stuck on an obstacle.” [0193] “Changes in one or more of the distance measurements may, in some cases, be indicative of radial slip since the distance between the sensors 1010 (which are attached to robotic apparatus 100) and pipe 10 may increase or decrease depending on the direction robotic apparatus is slipping.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Wehlin’s teaching of a similar pipe inspection robot with wheel slippage detection. One would be motivated, with reasonable expectation of success, to detect when wheels slip on a pipe inspection robot in order to correct the position of the robot in the pipe when wheels slipping cause the robot to remain in place despite encoders reporting distance travelled (Wehlin [0063] “automatically detecting and correcting for radial sip, circumferential slip, longitudinal slip, or any combination thereof.” [0150] “a location of robotic apparatus 100 on pipe 10 relative to intended location to develop commands for driving robotic apparatus along a predetermined path and correcting for any deviations therefrom”).
Regarding claim 21, Kueny fails to explicitly disclose The method of claim 19, wherein updating the position comprises: updating, by the processor, the position of the robot to harmonize the first distance measurement with the second distance measurement in response to the wheel slippage event.
However, Wehlin teaches The method of claim 19, wherein updating the position comprises: updating, by the processor, the position of the robot to harmonize the first distance measurement with the second distance measurement in response to the wheel slippage event ([0175] “Even with low precision, however, IMU 520 can be used to detect significant radial movement (e.g., slipping) if all wheels 110 start to spin in place on pipe 10 because robotic apparatus is stuck on an obstacle.” [0193] “Changes in one or more of the distance measurements may, in some cases, be indicative of radial slip since the distance between the sensors 1010 (which are attached to robotic apparatus 100) and pipe 10 may increase or decrease depending on the direction robotic apparatus is slipping.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Wehlin’s teaching of a similar pipe inspection robot with wheel slippage detection. One would be motivated, with reasonable expectation of success, to detect when wheels slip on a pipe inspection robot in order to correct the position of the robot in the pipe when wheels slipping cause the robot to remain in place despite encoders reporting distance travelled (Wehlin [0063] “automatically detecting and correcting for radial sip, circumferential slip, longitudinal slip, or any combination thereof.” [0150] “a location of robotic apparatus 100 on pipe 10 relative to intended location to develop commands for driving robotic apparatus along a predetermined path and correcting for any deviations therefrom”).
Regarding claim 22, Kueny fails to disclose The method of claim 19, further comprising: correcting the position of the robot by cross-referencing the wheel slippage event with the feature.
However, Wehlin teaches The method of claim 19, further comprising: correcting the position of the robot by cross-referencing the wheel slippage event with the feature ([0175] “Even with low precision, however, IMU 520 can be used to detect significant radial movement (e.g., slipping) if all wheels 110 start to spin in place on pipe 10 because robotic apparatus is stuck on an obstacle.” [0193] “Changes in one or more of the distance measurements may, in some cases, be indicative of radial slip since the distance between the sensors 1010 (which are attached to robotic apparatus 100) and pipe 10 may increase or decrease depending on the direction robotic apparatus is slipping.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Wehlin’s teaching of a similar pipe inspection robot with wheel slippage detection. One would be motivated, with reasonable expectation of success, to detect when wheels slip on a pipe inspection robot in order to correct the position of the robot in the pipe when wheels slipping cause the robot to remain in place despite encoders reporting distance travelled (Wehlin [0063] “automatically detecting and correcting for radial sip, circumferential slip, longitudinal slip, or any combination thereof.” [0150] “a location of robotic apparatus 100 on pipe 10 relative to intended location to develop commands for driving robotic apparatus along a predetermined path and correcting for any deviations therefrom”).
Claims 23 and 26 are rejected under 35 U.S.C. 103 as unpatentable over Kueny in view of Choi as applied to claims 2 and 24, further in view of Erbts et al. (US 20250174016 A1), hereinafter referred to as Erbts.
Regarding claims 23 and 26, Kueny fails to disclose the method of claim 2, wherein outputting comprises generating a digital twin of the environment, and
wherein the weighted combination correlated the collection of sensor data to resolve disparities between the sensor types.
However, Erbts teaches outputting comprises generating a digital twin of the environment ([FIG. 1a] [0005] “capture of the robot environment by the at least one sensor of the robot to obtain the captured environment data; classification of the object in the robot environment in at least one object type by means of a neural network on the basis of the environment data, wherein the neural network is configured to recognize object types, wherein the neural network is executed by a processor; and linking of the classified object type with position information which indicates a position of the object to create the digital twin.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Erbts’ teaching of a digital twin. One would be motivated, with reasonable expectation of success, to scan the environment to form a digital twin using a robot with a plurality of sensors in order to reduce costs and enable functions that are possible because of the digital twin (Erbts [0080] “The entire working process can be automated without the need for any intervention or processing of data by humans. The system can therefore be encapsulated and a plurality of functions can be performed simultaneously, such as the live mapping of the environment and the related digital services such as, for example, a live inventory, localization and simulation of the real environment. Costs and processing times can thus be reduced and data can be collected that is absolutely necessary for an advanced digitization by means of artificial intelligence.”).
Kueny fails to explicitly disclose wherein the weighted combination correlated the collection of sensor data to resolve disparities between the sensor types.
However, Choi teaches wherein the weighted combination correlated the collection of sensor data to resolve disparities between the sensor types ([0050] “After determining the quality of each feature map, weights are assigned to one or more of the feature maps, and then each feature map can be fused by passing it through a fusion network” [0052] “since the fusion network (GFU) combines each feature map by assigning appropriate weights to them, optimal sensor fusion is achieved by adjusting the contribution of feature maps with degraded quality.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kueny with Choi’s teaching of weighting the combination of sensor data from camera and LIDAR after being output from their own neural networks. One would be motivated, with reasonable expectation of success, to use a weighting system after neural network output of the plurality of sensor recognition data in order to increase robustness and reliability (Choi [0050] “robustness can be determined to determine which data between the camera and the lidar is more reliable.”).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK R HEIM whose telephone number is (571)270-0120. The examiner can normally be reached M-F 9-6 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fadey Jabr can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.R.H./Examiner, Art Unit 3668
/Fadey S. Jabr/Supervisory Patent Examiner, Art Unit 3668