DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The amendments filed 2/11/2026 have been entered. Claims 1-20 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-15 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Grabner et al. (US Publication No. 2019/0147221) in view of Boman et al. (US Patent No. 9,064,161) and Colin et al. (US Publication No. 2016/0264262).
Grabner teaches:
Re claim 1. A method performed by a computing system, the method comprising:
receiving a multi-dimensional representation of a candidate object upon which a robot is to perform an operation (input images 102, Fig. 1; and paragraph [0003]);
extracting a first feature from the multi-dimensional representation associated with the candidate object by at least inputting first input data from the multi-dimensional representation into a first neural network (pose estimation system 108, Figs. 1 and 2; Real Domain CNN 220, Fig. 2A; and paragraph [0078]: “A real domain convolutional neural network (CNN) 220 can be used to jointly predict the airplane object's 3D dimensions and the 2D projections of the 3D bounding box corners (the dots on each corner of the bounding box surrounding the airplane).”);
comparing the first feature with a second feature associated with a target object by at least inputting second input data […], the second input data based on output data from the first neural network (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]: “The descriptors can be used by the descriptor matching engine 114 to determine an affinity between the region of the input image corresponding to the target object and each of the candidate 3D models. The affinity between the target object and the candidate 3D models can be estimated based on a comparison between the descriptor of the target object and the set of descriptors computed for the depth maps of the candidate 3D models. In some examples, as noted above, the descriptor matching engine 114 can perform the comparison using an optimization problem. One illustrative example of an optimization problem that can be used by the descriptor matching engine 114 is a nearest-neighbor search.” “The nearest neighbor search can be performed by the descriptor matching engine 114 to find the descriptor from the set of 3D model descriptors that is a closest match to (or is most similar to) the descriptor computed for the target object…. the 2D vector values of the target object descriptor is compared to the 2D vector values of each the 3D model depth maps.” Paragraph [0074]: “Other suitable comparison techniques can also be used to compare the target object and the candidate 3D models of the target object to determine a best-matching 3D model.”);
determining whether the candidate object is the target object based on the comparison (paragraphs [0071-0072]: “The descriptors can be used by the descriptor matching engine 114 to determine an affinity between the region of the input image corresponding to the target object and each of the candidate 3D models.”).
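For illustration only, the descriptor comparison Grabner attributes to descriptor matching engine 114 amounts to a nearest-neighbor search over model descriptors. The following sketch is not taken from any cited reference; the descriptor dimensionality, model names, and Euclidean metric are assumptions made solely to make the mapping above concrete.

```python
# Hypothetical sketch of nearest-neighbor descriptor matching in the style of
# Grabner's descriptor matching engine 114. Descriptor size, model names, and
# the distance metric are illustrative assumptions, not details from the reference.
import numpy as np

def match_descriptor(target_desc, model_descs):
    """Return the name of the candidate 3D model whose descriptor is closest."""
    best_name, best_dist = None, float("inf")
    for name, desc in model_descs.items():
        dist = np.linalg.norm(target_desc - desc)  # Euclidean distance between descriptors
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Usage: a descriptor computed for the imaged target object is compared against
# descriptors computed for depth maps of the candidate 3D models.
rng = np.random.default_rng(0)
models = {f"model_{i}": rng.normal(size=16) for i in range(5)}
target = models["model_2"] + rng.normal(scale=0.01, size=16)  # noisy view of model_2
print(match_descriptor(target, models))  # -> "model_2"
```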
Grabner fails to specifically teach: (re claim 1) comparing the first feature with a second feature associated with a target object by at least inputting second input data into a second neural network, the second input data based on output data from the first neural network.
Boman teaches, at Fig. 8; column 11, line 64 through column 12, line 11; and column 12, line 54 through column 13, line 6, that comparisons of appearance parameters to determine whether an imaged candidate matches a known model may be performed using either nearest-neighbor classifiers, as taught in Grabner, or neural networks. That is, neural networks are an art-recognized functional equivalent for comparing appearance parameters of a candidate object.
In view of Boman’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the method as taught by Grabner, (re claim 1) comparing the first feature with a second feature associated with a target object by at least inputting second input data into a second neural network, the second input data based on output data from the first neural network, with a reasonable expectation of success, since Boman teaches that such comparisons may be performed using either nearest-neighbor classifiers, as taught in Grabner, or neural networks; that is, neural networks are an art-recognized functional equivalent for comparing appearance parameters of a candidate object.
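For illustration of the substitution the combination proposes, a second neural network can score whether the first network’s output descriptor matches the stored target feature. The architecture, layer sizes, and threshold below are hypothetical; neither Grabner nor Boman specifies this implementation.

```python
# Hypothetical sketch of a second-network comparison replacing the nearest-neighbor
# classifier: the comparator consumes the first network's output descriptor together
# with the stored target descriptor. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ComparisonNet(nn.Module):
    def __init__(self, desc_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * desc_dim, 32),  # concatenated candidate + target descriptors
            nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid(),                 # match probability in [0, 1]
        )

    def forward(self, candidate_desc, target_desc):
        return self.net(torch.cat([candidate_desc, target_desc], dim=-1))

comparator = ComparisonNet()
first_net_output = torch.randn(1, 16)  # stands in for output data from the first network
target_feature = torch.randn(1, 16)    # stands in for the second feature of the target object
is_target = comparator(first_net_output, target_feature).item() > 0.5
```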
Grabner fails to specifically teach: (re claim 1) determining a path from a location of the robot to the target object based at least in part on the determining of whether the candidate object is the target object; and causing the robot to traverse the path to the target object.
Colin teaches, at paragraphs [0017-0022, 0087, 0095-0096, and 0101], that a robot uses the shape of an aircraft to identify the aircraft it is to inspect. The robot then moves in close to the aircraft, along a path that allows it to view the aircraft to be inspected while avoiding obstacles, based on the robot’s position relative to the aircraft. This allows such robots to recognize an aircraft to be inspected and move to inspect it autonomously.
In view of Colin’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the method as taught by Grabner, (re claim 1) determining a path from a location of the robot to the target object based at least in part on the determining of whether the candidate object is the target object; and causing the robot to traverse the path to the target object, with a reasonable expectation of success, since Colin teaches that a robot uses the shape of an aircraft to identify the aircraft it is to inspect and then moves in close to the aircraft, along a path that allows it to view the aircraft while avoiding obstacles, based on the robot’s position relative to the aircraft. This allows such robots to recognize an aircraft to be inspected and move to inspect it autonomously.
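For illustration only, the path-determination limitation the combination reaches can be realized with any conventional planner; Colin describes the behavior functionally rather than algorithmically. The occupancy grid, the choice of A*, and all names below are assumptions of this sketch.

```python
# Illustrative grid-based A* planner for "determining a path from a location of
# the robot to the target object" while avoiding obstacles. A* is one conventional
# technique, chosen here only for illustration; it is not taken from the references.
import heapq

def plan_path(grid, start, goal):
    """A* over a 4-connected occupancy grid; 1 = obstacle. Returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[cur] + 1
                if (nr, nc) not in cost or new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    # Manhattan-distance heuristic keeps the search goal-directed.
                    priority = new_cost + abs(goal[0] - nr) + abs(goal[1] - nc)
                    heapq.heappush(frontier, (priority, (nr, nc)))
                    came_from[(nr, nc)] = cur
    return None  # no obstacle-free path exists

grid = [[0, 0, 0],
        [1, 1, 0],   # an obstacle row the path must route around
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
```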
Grabner further teaches:
Re claim 2. Wherein the first neural network is trained to extract the first feature from the multi-dimensional representation (pose estimation system 108, Figs. 1 and 2; Real Domain CNN 220, Fig. 2A; and paragraph [0078]: “A real domain convolutional neural network (CNN) 220 can be used to jointly predict the airplane object's 3D dimensions and the 2D projections of the 3D bounding box corners (the dots on each corner of the bounding box surrounding the airplane).”).
Re claim 3. Wherein the second neural network is trained to compare the first feature with the second feature to determine whether the candidate object is the target object (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]: “The descriptors can be used by the descriptor matching engine 114 to determine an affinity between the region of the input image corresponding to the target object and each of the candidate 3D models. The affinity between the target object and the candidate 3D models can be estimated based on a comparison between the descriptor of the target object and the set of descriptors computed for the depth maps of the candidate 3D models. In some examples, as noted above, the descriptor matching engine 114 can perform the comparison using an optimization problem. One illustrative example of an optimization problem that can be used by the descriptor matching engine 114 is a nearest-neighbor search.” “The nearest neighbor search can be performed by the descriptor matching engine 114 to find the descriptor from the set of 3D model descriptors that is a closest match to (or is most similar to) the descriptor computed for the target object…. the 2D vector values of the target object descriptor is compared to the 2D vector values of each the 3D model depth maps.” Paragraph [0074]: “Other suitable comparison techniques can also be used to compare the target object and the candidate 3D models of the target object to determine a best-matching 3D model.”).
Re claim 4. Wherein the multi-dimensional representation further comprises a second candidate object (paragraph [0060]: “process each of the input images 102 to detect one or more target objects in the input images 102.”), and wherein the method further comprises:
extracting a third feature from the multi-dimensional representation, the third feature being associated with the second candidate object (paragraph [0060]: “process each of the input images 102 to detect one or more target objects in the input images 102.”);
comparing the third feature with the second feature (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]); and
determining that the candidate object is the target object based on the comparison of the third feature with the second feature and the comparison of the first feature with the second feature (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]).
Re claim 5. Wherein the target object is an aircraft (retrieved 3D model 216, Fig. 2A).
Grabner fails to specifically teach: (re claim 6) further comprising:
receiving a second multi-dimensional representation while the robot is traversing the path to the target object;
extracting a fourth feature from the multi-dimensional representation, the fourth feature being associated with the target object;
comparing the fourth feature with the second feature;
determining a second path from a second location of the robot to the target object based on the comparison; and
causing the robot to traverse the second path from the second location to the target object.
Colin teaches, at paragraphs [0017-0019, 0087-0089, 0096, and 0101], obtaining additional visual data on an aircraft to be inspected to ensure that the correct aircraft will be inspected and to navigate the robot relative to recognized characteristic shapes or subassemblies of the aircraft. This gives the system a greater degree of confidence that the correct aircraft will be inspected from the correct locations, even when anomalies or inconsistencies are detected.
In view of Colin’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the method as taught by Grabner, (re claim 6) further comprising: receiving a second multi-dimensional representation while the robot is traversing the path to the target object; extracting a fourth feature from the multi-dimensional representation, the fourth feature being associated with the target object; comparing the fourth feature with the second feature; determining a second path from a second location of the robot to the target object based on the comparison; and causing the robot to traverse the second path from the second location to the target object, with a reasonable expectation of success, since Colin teaches obtaining additional visual data on an aircraft to be inspected to ensure that the correct aircraft will be inspected and to navigate the robot relative to recognized characteristic shapes or subassemblies of the aircraft. This gives the system a greater degree of confidence that the correct aircraft will be inspected from the correct locations, even when anomalies or inconsistencies are detected.
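For illustration only, the claim 6 behavior the combination reaches is a replanning loop: re-observe while traversing, re-confirm the match, and plan a new path from the updated location. Every helper below is an invented stand-in for the mechanisms discussed above, not an implementation from the references.

```python
# Hypothetical sketch of the claim 6 replanning loop: while the robot traverses
# its path, a fresh observation is compared against the stored target feature and
# the path is recomputed from the robot's updated location.
import numpy as np

def extract_feature(observation):            # stands in for the first neural network
    return np.asarray(observation, dtype=float)

def features_match(a, b, tol=0.5):           # stands in for the second-network comparison
    return float(np.linalg.norm(a - b)) < tol

def replan(location, target_location):       # stands in for a real path planner
    return [location, target_location]       # trivial straight-line "path"

target_feature = np.array([1.0, 2.0])        # stored second feature of the target
location, target_location = (0.0, 0.0), (5.0, 5.0)
path = replan(location, target_location)
for location in list(path):                  # traverse the current path
    observation = [1.0, 2.0]                 # simulated second multi-dimensional representation
    fourth_feature = extract_feature(observation)
    if features_match(fourth_feature, target_feature):
        path = replan(location, target_location)  # second path from the second location
```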
Grabner fails to specifically teach: (re claim 7) further comprising causing the robot to perform the operation on the target object.
Colin teaches, at paragraph [0094], carrying out the visual inspection operations on the identified aircraft to be inspected. This allows such robots to inspect aircraft autonomously, reducing the workload of human inspectors.
In view of Colin’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the method as taught by Grabner, (re claim 7) further comprising causing the robot to perform the operation on the target object, with a reasonable expectation of success, since Colin teaches carrying out the visual inspection operations on the identified aircraft to be inspected. This allows such robots to inspect aircraft autonomously, reducing the workload of human inspectors.
Grabner further teaches:
Re claim 8. A robot (paragraph [0003]), comprising:
a processor (paragraph [0010]); and
a computer-readable medium including instructions that, when executed by the processor, cause the robot to perform operations comprising (paragraph [0011]):
obtaining a multi-dimensional representation of a candidate object upon which a robot is to perform an operation (input images 102, Fig. 1);
extracting a first feature from the multi-dimensional representation associated with the candidate object by at least inputting first input data from the multi-dimensional representation into a first neural network (pose estimation system 108, Figs. 1 and 2; Real Domain CNN 220, Fig. 2A; and paragraph [0078]: “A real domain convolutional neural network (CNN) 220 can be used to jointly predict the airplane object's 3D dimensions and the 2D projections of the 3D bounding box corners (the dots on each corner of the bounding box surrounding the airplane).”);
comparing the first feature with a second feature associated with a target object by at least inputting second input data […], the second input data based on output data from the first neural network (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]: “The descriptors can be used by the descriptor matching engine 114 to determine an affinity between the region of the input image corresponding to the target object and each of the candidate 3D models. The affinity between the target object and the candidate 3D models can be estimated based on a comparison between the descriptor of the target object and the set of descriptors computed for the depth maps of the candidate 3D models. In some examples, as noted above, the descriptor matching engine 114 can perform the comparison using an optimization problem. One illustrative example of an optimization problem that can be used by the descriptor matching engine 114 is a nearest-neighbor search.” “The nearest neighbor search can be performed by the descriptor matching engine 114 to find the descriptor from the set of 3D model descriptors that is a closest match to (or is most similar to) the descriptor computed for the target object…. the 2D vector values of the target object descriptor is compared to the 2D vector values of each the 3D model depth maps.” Paragraph [0074]: “Other suitable comparison techniques can also be used to compare the target object and the candidate 3D models of the target object to determine a best-matching 3D model.”);
determining whether the candidate object is the target object based on the comparison (paragraphs [0071-0072]: “The descriptors can be used by the descriptor matching engine 114 to determine an affinity between the region of the input image corresponding to the target object and each of the candidate 3D models.”).
Grabner fails to specifically teach: (re claim 8) comparing the first feature with a second feature associated with a target object by at least inputting second input data into a second neural network, the second input data based on output data from the first neural network.
Boman teaches, at Fig. 8; column 11, line 64 through column 12, line 11; and column 12, line 54 through column 13, line 6, that comparisons of appearance parameters to determine whether an imaged candidate matches a known model may be performed using either nearest-neighbor classifiers, as taught in Grabner, or neural networks. That is, neural networks are an art-recognized functional equivalent for comparing appearance parameters of a candidate object.
In view of Boman’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the apparatus as taught by Grabner, (re claim 8) comparing the first feature with a second feature associated with a target object by at least inputting second input data into a second neural network, the second input data based on output data from the first neural network, with a reasonable expectation of success, since Boman teaches that such comparisons may be performed using either nearest-neighbor classifiers, as taught in Grabner, or neural networks; that is, neural networks are an art-recognized functional equivalent for comparing appearance parameters of a candidate object.
Grabner fails to specifically teach: (re claim 8) determining a path from a location of the robot to the target object based at least in part on the determining of whether the candidate object is the target object; and traversing the path to the target object.
Colin teaches, at paragraphs [0017-0022, 0087, 0095-0096, and 0101], that a robot uses the shape of an aircraft to identify the aircraft it is to inspect. The robot then moves in close to the aircraft, along a path that allows it to view the aircraft to be inspected while avoiding obstacles, based on the robot’s position relative to the aircraft. This allows such robots to recognize an aircraft to be inspected and move to inspect it autonomously.
In view of Colin’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the apparatus as taught by Grabner, (re claim 8) determining a path from a location of the robot to the target object based at least in part on the determining of whether the candidate object is the target object; and traversing the path to the target object, with a reasonable expectation of success, since Colin teaches that a robot uses the shape of an aircraft to identify the aircraft it is to inspect and then moves in close to the aircraft, along a path that allows it to view the aircraft while avoiding obstacles, based on the robot’s position relative to the aircraft. This allows such robots to recognize an aircraft to be inspected and move to inspect it autonomously.
Grabner further teaches:
Re claim 9. Further comprising a vision sensor, and wherein obtaining the multi-dimensional representation comprises receiving a light-based signal from the candidate object using the vision sensor (paragraph [0058]).
Re claim 10. Wherein the multi-dimensional representation further comprises a second candidate object (paragraph [0060]: “process each of the input images 102 to detect one or more target objects in the input images 102.”), and wherein the computer-readable medium includes further instructions that, when executed by the processor, cause the robot to perform further operations comprising:
extracting a third feature from the multi-dimensional representation, the third feature being associated with the second candidate object (paragraph [0060]: “process each of the input images 102 to detect one or more target objects in the input images 102.”);
comparing the third feature with the second feature (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]); and
determining that the candidate object is the target object based on the comparison (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]).
Re claim 11. Wherein the first neural network is trained to extract the first feature from the multi-dimensional representation (pose estimation system 108, Figs. 1 and 2; Real Domain CNN 220, Fig. 2A; and paragraph [0078]: “A real domain convolutional neural network (CNN) 220 can be used to jointly predict the airplane object's 3D dimensions and the 2D projections of the 3D bounding box corners (the dots on each corner of the bounding box surrounding the airplane).”).
Re claim 12. Wherein the second neural network is trained to compare the first feature with the second feature to determine whether the candidate object is the target object (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]: “The descriptors can be used by the descriptor matching engine 114 to determine an affinity between the region of the input image corresponding to the target object and each of the candidate 3D models. The affinity between the target object and the candidate 3D models can be estimated based on a comparison between the descriptor of the target object and the set of descriptors computed for the depth maps of the candidate 3D models. In some examples, as noted above, the descriptor matching engine 114 can perform the comparison using an optimization problem. One illustrative example of an optimization problem that can be used by the descriptor matching engine 114 is a nearest-neighbor search.” “The nearest neighbor search can be performed by the descriptor matching engine 114 to find the descriptor from the set of 3D model descriptors that is a closest match to (or is most similar to) the descriptor computed for the target object…. the 2D vector values of the target object descriptor is compared to the 2D vector values of each the 3D model depth maps.” Paragraph [0074]: “Other suitable comparison techniques can also be used to compare the target object and the candidate 3D models of the target object to determine a best-matching 3D model.”).
Re claim 13. Wherein the multi-dimensional representation further comprises a second candidate object (paragraph [0060]: “process each of the input images 102 to detect one or more target objects in the input images 102.”), and wherein the computer-readable medium includes further instructions that, when executed by the processor, cause the robot to perform further operations comprising:
extracting a third feature from the multi-dimensional representation, the third feature being associated with the second candidate object (paragraph [0060]: “process each of the input images 102 to detect one or more target objects in the input images 102.”);
comparing the third feature with the first feature (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]); and
determining that the candidate object and the second candidate object are a same object type based on the comparison (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]).
Grabner fails to specifically teach: (re claim 13)
receiving image data of the candidate object based on the determination, wherein the image data includes an object identifier;
receiving a second object identifier of the target object;
comparing the object identifier and the second object identifier; and
determining that the candidate object is the target object based on the comparison of the object identifier and the second object identifier.
Colin teaches, at paragraph [0087], identifying a target aircraft from a plurality of aircraft based on the detected shape of the aircraft and the detected registration number of the aircraft. The system checks that both the shape and the registration number correspond to the target aircraft, ensuring the desired aircraft is inspected.
In view of Colin’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the apparatus as taught by Grabner, (re claim 13) receiving image data of the candidate object based on the determination, wherein the image data includes an object identifier; receiving a second object identifier of the target object; comparing the object identifier and the second object identifier; and determining that the candidate object is the target object based on the comparison of the object identifier and the second object identifier, with a reasonable expectation of success, since Colin teaches identifying a target aircraft from a plurality of aircraft based on the detected shape of the aircraft and the detected registration number of the aircraft. The system checks that both the shape and the registration number correspond to the target aircraft, ensuring the desired aircraft is inspected.
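For illustration only, Colin’s two-factor confirmation maps onto a simple conjunctive check on shape and registration number. The function name and example values below are invented for this sketch; they are not taken from the reference.

```python
# Hypothetical sketch of the claim 13 check: the candidate is treated as the
# target only if both its detected shape and its detected registration number
# (the object identifier) match the target's. Values are illustrative only.
def is_target_aircraft(candidate_shape_id: str, candidate_registration: str,
                       target_shape_id: str, target_registration: str) -> bool:
    shape_ok = candidate_shape_id == target_shape_id                # shape/model comparison
    identifier_ok = candidate_registration == target_registration  # e.g., read tail number
    return shape_ok and identifier_ok

print(is_target_aircraft("A320", "F-GKXA", "A320", "F-GKXA"))  # True: both match
print(is_target_aircraft("A320", "F-GKXB", "A320", "F-GKXA"))  # False: identifier differs
```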
Grabner fails to specifically teach: (re claim 14) wherein the computer-readable medium stores further instructions that, when executed by the processor, cause the robot to perform further operations comprising:
receiving a second multi-dimensional representation while the robot is traversing the path to the target object;
extracting a fourth feature from the multi-dimensional representation, the fourth feature being associated with the target object;
comparing the fourth feature with the second feature;
determining a second path from a second location of the robot to the target object based on the comparison; and
causing the robot to traverse the second path from the second location to the target object.
Colin teaches, at paragraphs [0017-0019, 0087-0089, 0096, and 0101], obtaining additional visual data on an aircraft to be inspected to ensure that the correct aircraft will be inspected and to navigate the robot relative to recognized characteristic shapes or subassemblies of the aircraft. This gives the system a greater degree of confidence that the correct aircraft will be inspected from the correct locations, even when anomalies or inconsistencies are detected.
In view of Colin’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the apparatus as taught by Grabner, (re claim 14) wherein the computer-readable medium stores further instructions that, when executed by the processor, cause the robot to perform further operations comprising: receiving a second multi-dimensional representation while the robot is traversing the path to the target object; extracting a fourth feature from the multi-dimensional representation, the fourth feature being associated with the target object; comparing the fourth feature with the second feature; determining a second path from a second location of the robot to the target object based on the comparison; and causing the robot to traverse the second path from the second location to the target object, with a reasonable expectation of success, since Colin teaches obtaining additional visual data on an aircraft to be inspected to ensure that the correct aircraft will be inspected and to navigate the robot relative to recognized characteristic shapes or subassemblies of the aircraft. This gives the system a greater degree of confidence that the correct aircraft will be inspected from the correct locations, even when anomalies or inconsistencies are detected.
Grabner fails to specifically teach: (re claim 15) wherein the robot further comprises an odometry system, and wherein determining the second path comprises determining a second location of the robot along the path using the odometry system, wherein the second path is from the second location to the target object, and wherein the second path diverges from the path.
Colin teaches, at paragraphs [0096 and 0101], using odometry techniques to localize a robot while updating the robot’s path to avoid obstacles. This allows the system to use feedback based on the robot’s position and allows the robot to avoid colliding with obstacles.
In view of Colin’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the apparatus as taught by Grabner, (re claim 15) wherein the robot further comprises an odometry system, and wherein determining the second path comprises determining a second location of the robot along the path using the odometry system, wherein the second path is from the second location to the target object, and wherein the second path diverges from the path, with a reasonable expectation of success, since Colin teaches using odometry techniques to localize a robot while updating the robot’s path to avoid obstacles. This allows the system to use feedback based on the robot’s position and allows the robot to avoid colliding with obstacles.
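For illustration only, the kind of dead-reckoning update an odometry system performs to supply the claimed “second location” can be sketched as follows. Colin describes odometry only functionally; the differential-drive model, wheel geometry, and encoder values below are assumptions of this sketch.

```python
# Hypothetical dead-reckoning pose update for a differential-drive robot: wheel
# encoder distances are integrated into an (x, y, heading) estimate along the path.
import math

def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    """Advance the pose estimate from left/right wheel travel distances (meters)."""
    d_center = (d_left + d_right) / 2.0        # forward travel of the robot center
    d_theta = (d_right - d_left) / wheel_base  # heading change from wheel difference
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Usage: integrate encoder readings while traversing to obtain the second location.
pose = (0.0, 0.0, 0.0)
for d_l, d_r in [(0.10, 0.10), (0.10, 0.12), (0.10, 0.10)]:
    pose = odometry_update(*pose, d_l, d_r, wheel_base=0.5)
print(pose)  # estimated second location of the robot along the path
```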
Re claim 18. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform operations comprising (paragraphs [0010-0011]):
receiving a multi-dimensional representation of a candidate object upon which a robot is to perform an operation (input images 102, Fig. 1; and paragraph [0003]);
extracting a first feature from the multi-dimensional representation associated with the candidate object by at least inputting first input data from the multi-dimensional representation into a first neural network (pose estimation system 108, Figs. 1 and 2; Real Domain CNN 220, Fig. 2A; and paragraph [0078]: “A real domain convolutional neural network (CNN) 220 can be used to jointly predict the airplane object's 3D dimensions and the 2D projections of the 3D bounding box corners (the dots on each corner of the bounding box surrounding the airplane).”);
comparing the first feature with a second feature associated with a target object by at least inputting second input data […], the second input data based on output data from the first neural network (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]: “The descriptors can be used by the descriptor matching engine 114 to determine an affinity between the region of the input image corresponding to the target object and each of the candidate 3D models. The affinity between the target object and the candidate 3D models can be estimated based on a comparison between the descriptor of the target object and the set of descriptors computed for the depth maps of the candidate 3D models. In some examples, as noted above, the descriptor matching engine 114 can perform the comparison using an optimization problem. One illustrative example of an optimization problem that can be used by the descriptor matching engine 114 is a nearest-neighbor search.” “The nearest neighbor search can be performed by the descriptor matching engine 114 to find the descriptor from the set of 3D model descriptors that is a closest match to (or is most similar to) the descriptor computed for the target object…. the 2D vector values of the target object descriptor is compared to the 2D vector values of each the 3D model depth maps.” Paragraph [0074]: “Other suitable comparison techniques can also be used to compare the target object and the candidate 3D models of the target object to determine a best-matching 3D model.”);
determining whether the candidate object is the target object based on the comparison (paragraphs [0071-0072]: “The descriptors can be used by the descriptor matching engine 114 to determine an affinity between the region of the input image corresponding to the target object and each of the candidate 3D models.”).
Grabner fails to specifically teach: (re claim 18) comparing the first feature with a second feature associated with a target object by at least inputting second input data into a second neural network, the second input data based on output data from the first neural network.
Boman teaches, at Fig. 8; column 11, line 64 through column 12, line 11; and column 12, line 54 through column 13, line 6, that comparisons of appearance parameters to determine whether an imaged candidate matches a known model may be performed using either nearest-neighbor classifiers, as taught in Grabner, or neural networks. That is, neural networks are an art-recognized functional equivalent for comparing appearance parameters of a candidate object.
In view of Boman’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the computer-readable medium as taught by Grabner, (re claim 18) comparing the first feature with a second feature associated with a target object by at least inputting second input data into a second neural network, the second input data based on output data from the first neural network, with a reasonable expectation of success, since Boman teaches that such comparisons may be performed using either nearest-neighbor classifiers, as taught in Grabner, or neural networks; that is, neural networks are an art-recognized functional equivalent for comparing appearance parameters of a candidate object.
Grabner fails to specifically teach: (re claim 18) determining a path from a location of the robot to the target object based at least in part on the determining of whether the candidate object is the target object; and causing the robot to traverse the path to the target object.
Colin teaches, at paragraphs [0017-0022, 0087, 0095-0096, and 0101], that a robot uses the shape of an aircraft to identify the aircraft it is to inspect. The robot then moves in close to the aircraft, along a path that allows it to view the aircraft to be inspected while avoiding obstacles, based on the robot’s position relative to the aircraft. This allows such robots to recognize an aircraft to be inspected and move to inspect it autonomously.
In view of Colin’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the computer-readable medium as taught by Grabner, (re claim 18) determining a path from a location of the robot to the target object based at least in part on the determining of whether the candidate object is the target object; and causing the robot to traverse the path to the target object, with a reasonable expectation of success, since Colin teaches that a robot uses the shape of an aircraft to identify the aircraft it is to inspect and then moves in close to the aircraft, along a path that allows it to view the aircraft while avoiding obstacles, based on the robot’s position relative to the aircraft. This allows such robots to recognize an aircraft to be inspected and move to inspect it autonomously.
Grabner further teaches:
Re claim 19. Wherein the first neural network is trained to extract the first feature from the multi-dimensional representation (pose estimation system 108, Figs. 1 and 2; Real Domain CNN 220, Fig. 2A; and paragraph [0078]: “A real domain convolutional neural network (CNN) 220 can be used to jointly predict the airplane object's 3D dimensions and the 2D projections of the 3D bounding box corners (the dots on each corner of the bounding box surrounding the airplane).”).
Re claim 20. Wherein the second neural network is trained to compare the first feature with the second feature to determine whether the candidate object is the target object (descriptor matching engine 114, Fig. 2A; and paragraphs [0071-0072]: “The descriptors can be used by the descriptor matching engine 114 to determine an affinity between the region of the input image corresponding to the target object and each of the candidate 3D models. The affinity between the target object and the candidate 3D models can be estimated based on a comparison between the descriptor of the target object and the set of descriptors computed for the depth maps of the candidate 3D models. In some examples, as noted above, the descriptor matching engine 114 can perform the comparison using an optimization problem. One illustrative example of an optimization problem that can be used by the descriptor matching engine 114 is a nearest-neighbor search.” “The nearest neighbor search can be performed by the descriptor matching engine 114 to find the descriptor from the set of 3D model descriptors that is a closest match to (or is most similar to) the descriptor computed for the target object…. the 2D vector values of the target object descriptor is compared to the 2D vector values of each the 3D model depth maps.” Paragraph [0074]: “Other suitable comparison techniques can also be used to compare the target object and the candidate 3D models of the target object to determine a best-matching 3D model.”).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Grabner et al. (US Publication No. 2019/0147221) as modified by Boman et al. (US Patent No. 9,064,161) and Colin et al. (US Publication No. 2016/0264262) as applied to claim 8 above, and further in view of Sugaki et al. (US Publication No. 2019/0314990).
The teachings of Grabner have been discussed above. Grabner fails to specifically teach: (re claim 16) wherein the robot comprises a robot arm, and wherein the computer-readable medium stores further instructions that, when executed by the processor, cause the robot to perform further operations comprising: detecting an obstacle along the path based on the multi-dimensional representation; and determining a set of robot arm motions for the robot to perform with the robot arm to avoid colliding with the obstacle while traversing the path to the target object.
Sugaki teaches, at paragraphs [0108-0109], that such unmanned vehicles may be equipped with manipulator arms that perform an obstacle-avoiding motion during movement of the vehicle based on sensed obstacles. This eliminates or minimizes collision accidents between the arm and obstacles.
In view of Sugaki’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the apparatus as taught by Grabner, (re claim 16) wherein the robot comprises a robot arm, and wherein the computer-readable medium stores further instructions that, when executed by the processor, cause the robot to perform further operations comprising: detecting an obstacle along the path based on the multi-dimensional representation; and determining a set of robot arm motions for the robot to perform with the robot arm to avoid colliding with the obstacle while traversing the path to the target object, with a reasonable expectation of success, since Sugaki teaches that such unmanned vehicles may be equipped with manipulator arms that perform an obstacle-avoiding motion during movement of the vehicle based on sensed obstacles. This eliminates or minimizes collision accidents between the arm and obstacles.
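For illustration only, the obstacle-avoiding arm motion the combination reaches can be sketched as a clearance-based choice of arm pose per path segment. The tuck pose, clearance threshold, and names below are invented for this sketch; Sugaki describes the behavior only functionally.

```python
# Hypothetical sketch of the claim 16 behavior: when an obstacle is detected
# along the path, select arm motions that keep the arm clear while the base
# continues traversing. Joint angles and threshold are illustrative assumptions.
TUCKED_POSE = [0.0, -1.2, 2.0]   # hypothetical joint angles that pull the arm in
EXTENDED_POSE = [0.0, 0.3, 0.5]  # hypothetical working pose used in open space

def arm_motions_for(obstacle_distances_m, clearance_m=1.0):
    """Return one target arm pose per path segment: tuck when near an obstacle."""
    return [TUCKED_POSE if d < clearance_m else EXTENDED_POSE
            for d in obstacle_distances_m]

print(arm_motions_for([3.0, 0.6, 2.5]))  # tucks only for the 0.6 m segment
```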
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Grabner et al. (US Publication No. 2019/0147221) as modified by Boman et al. (US Patent No. 9,064,161) and Colin et al. (US Publication No. 2016/0264262) as applied to claim 8 above, and further in view of Avnery et al. (US Publication No. 2004/0245481).
The teachings of Grabner have been discussed above. Grabner fails to specifically teach: (re claim 17) wherein the robot comprises a robot arm, and wherein the computer-readable media stores further instructions that, when executed by the processor, cause the robot to perform further operations comprising, upon reaching the target object, decontaminating the target object.
Avnery teaches, at Fig. 10 and paragraph [0046], that such mobile robots may be equipped with a maneuverable arm to perform decontamination on target surfaces. This allows such robots to perform decontamination, reducing the need for humans to do so.
In view of Avnery’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include, with the apparatus as taught by Grabner, (re claim 17) wherein the robot comprises a robot arm, and wherein the computer-readable media stores further instructions that, when executed by the processor, cause the robot to perform further operations comprising, upon reaching the target object, decontaminating the target object, with a reasonable expectation of success, since Avnery teaches that such mobile robots may be equipped with a maneuverable arm to perform decontamination on target surfaces. This allows such robots to perform decontamination, reducing the need for humans to do so.
Response to Arguments
Applicant’s arguments, see page 8, filed 2/11/2026, with respect to the claim objections, double patenting rejection, and 35 USC § 101 rejection have been fully considered and are persuasive. The claim objections, double patenting rejection, and 35 USC § 101 rejection have been withdrawn.
Applicant’s arguments, see page 9, filed 2/11/2026, with respect to the rejection of claim 1 under 35 USC § 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made over Grabner et al. (US Publication No. 2019/0147221) in view of Boman et al. (US Patent No. 9,064,161) and Colin et al. (US Publication No. 2016/0264262), as discussed above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SPENCER D PATTON whose telephone number is (571)270-5771. The examiner can normally be reached Monday to Friday 9:00-5:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SPENCER D PATTON/ Primary Examiner, Art Unit 3656