Prosecution Insights
Last updated: April 19, 2026
Application No. 18/110,563

REMOTE VISUAL INSPECTION GUIDANCE

Status: Non-Final OA (§103), Round 3
Filed: Feb 16, 2023
Examiner: ABDI, AMARA
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Roke Manor Research Limited

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability with Interview: 76%

Examiner Intelligence

Grants 83% — above average

Career Allow Rate: 83% (677 granted / 816 resolved; +21.0% vs TC avg)
Interview Lift: -7.5% (rounded to "-8%" in the summary card; a minimal negative lift, comparing this examiner's resolved cases with vs. without an interview)
Typical Timeline: 2y 7m average prosecution; 33 applications currently pending
Career History: 849 total applications across all art units
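The headline figures appear to be simple derivations from the career counts above. A quick check of that arithmetic (my reconstruction, not the tool's documented formula):

```python
# Reconstructing the dashboard arithmetic (an assumption, not a documented formula).
granted, resolved = 677, 816
allow_rate = round(100 * granted / resolved)   # 82.97 -> 83 ("Career Allow Rate")
interview_lift = -7.5                          # percentage points, per the chart
with_interview = allow_rate + interview_lift   # 75.5, displayed as "76% With Interview"
print(allow_rate, with_interview)              # 83 75.5
```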

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101       9.8%     -30.2%
§103      60.7%     +20.7%
§102      10.2%     -29.8%
§112      10.0%     -30.0%

Tech Center average estimate was shown as the black line in the original chart. Based on career data from 816 resolved cases.

Office Action

Non-Final Rejection under §103, mailed Mar 21, 2026
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 6, 2026 has been entered.

Response to Amendment

Applicant's response to the last Office action, filed March 6, 2026, has been entered and made of record. Claims 1, 9, 16, and 21 have been amended. Claims 1-21 are pending in this application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7-12, 15, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Oetiker et al. (US-PGPUB 20210310962) in view of Ma (US-PGPUB 20210023703), and further in view of Kim et al. (US Patent 11,687,086).

Regarding claim 1, Oetiker discloses a method of guiding a remote visual inspection device along an inspection path through a machine defined by a pre-stored inspection path-defining image group, from a current position to a target position, and collecting an inspection image of an object within the machine from the target position (this limitation has not been given any weight, as it occurs in the preamble), the method comprising:

capturing the live image feed from the remote visual inspection device from the current position during an inspection of an object (see at least Par. 0018, 0022, and 0066: the climbing robot may include a video camera for capturing images of the interior surfaces of the confined space, based on sensed pose (i.e., position and orientation) data indicative of the current position and orientation of the robot, and the control device 34 implicitly receives the captured images);

generating guidance instructions, based on localization data (see at least Fig. 6, steps S31-S35, and Par. 0090-0094: stored inspection data including the localization data may be used in step S32 to plan feasible or optimal paths for robotic or other inspection systems inside confined spaces, and the method may be used to automatically or semi-automatically guide a robot, such as the climbing robot 12, or any other actuated tool inside a confined space along a specific path or trajectory [i.e., the control device 34 implicitly generates guidance instructions for robotic or other inspection systems based on localization data]); and

outputting the guidance instructions to enable the device to be moved along the inspection path to the target position (see at least Par. 0091: output 3D visualization of the data to the operator on a monitor or other display device, including showing the operator the sensor's field of view in a three-dimensional view and visualizing the data in the form of markers or data visualizations, like textures, attached to the asset visualization, to automatically or semi-automatically guide a robot, such as the climbing robot 12, or any other actuated tool inside a confined space along a specific path or trajectory [i.e., implicitly outputting the guidance instructions, "3D visualization of the data including markers", to enable the device to be moved along the inspection path, "guide a robot along a specific path or trajectory", to the determined exact position, "target position", based on the localization data]).

Oetiker does not expressly disclose matching features of the live captured image to image features in the pre-stored inspection path-defining image group; identifying a key-frame image which is the next closest image in the path-defining image group corresponding to the target position; estimating a transform between the live captured image and the next key-frame image, using a transformation estimation method; and generating guidance instructions based on the transform.

However, Ma also discloses capturing the live image feed from the remote visual inspection device from the current position (see at least Par. 0022-0026 and 0047-0048: a vision output may be provided to the operator via a video feed); matching features of the live captured image to image features in the pre-stored inspection path-defining image group (see at least Par. 0069-0070: keyframes captured along the route are used as the set of keyframes for the navigation task, and the robot compares the current image to the set of keyframes to identify its current location [i.e., implicitly matching features of the live captured image, "current image 302", to image features in the pre-stored inspection path-defining image group, "keyframes captured along the route"]); identifying a key-frame image which is the next closest image in the path-defining image group corresponding to the target position (see at least Par. 0062: the robot may determine its current location based on the keyframe and use the current location to navigate on a path to another location, "target position"; also Par. 0069-0070: the robot captures a keyframe at certain points along the route, from start to finish, and compares the current image to the set of keyframes to identify its current location [i.e., implicitly identifying a key-frame image, "keyframe 308", from the set of keyframes captured along the route, corresponding to the target position]); estimating a transform between the live captured image and the next key-frame image, using a transformation estimation method (see at least Par. 0064-0067: a pose delta, i.e., a transformation, may be determined based on the comparison of the current image and the keyframe, where the delta refers to the change in the robot's pose from the keyframe to the current image [i.e., estimating a transform, "pose delta", between the live captured image and the next key-frame image using a transformation estimation method]); and generating guidance instructions based on the transform (see at least Par. 0067: the pose delta (e.g., relative transformation) between one or more number values of the pixel descriptors of the pixels may be used to update the parameters of the behavior, such as the one or more positions to be executed; and from Par. 0062, a behavior associated with the matched keyframes may be executed, such that the robot may determine its current location based on the keyframe and use the current location to navigate on a path to another location. That is, when the behavior is an instruction to navigate the robot to the next location, the control device technically uses the pose delta (e.g., relative transformation) to navigate the robot to the new location, the target position).

Oetiker and Ma are combinable because they are both concerned with robot navigation. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Oetiker to use the pose delta transformation method, as taught by Ma, in order to update the position to be executed based on the transformation (Ma, Par. 0067), and thereby navigate the robot to the updated position.

Although Oetiker discloses collecting the inspection image from the remote visual inspection device (see at least Par. 0022-0026, 0047-0048, and 0069-0070), the combined teaching of Oetiker and Ma as a whole does not expressly disclose that the image is collected at the target position. However, Kim et al. discloses collecting the image from the remote visual inspection device at the target position (see at least Fig. 5 and col. 16, lines 4-27: processors of the robot 120, such as the one executing the planner 250, control 520 the robot 120 to the target location 474 along a path 470, and upon reaching the target location, the robot 120 performs the action specified by the input command, such as taking a picture of the inventory at the target location [i.e., collecting the inspection image from the remote visual inspection device, implicit in taking a picture of the inventory at the storage site 110 with the image sensor 210, at the target position, "the target location of the robot 120"]).

Oetiker, Ma, and Kim et al. are combinable because they are all concerned with robot navigation. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Oetiker and Ma to control the robot to the target location, as taught by Kim et al., in order to capture images of the object at the target location (Kim, col. 16, lines 18-20).
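To make the navigation loop the rejection describes concrete (live image matched against pre-stored keyframes, then a relative transform estimated from the matched features), here is a minimal sketch. It uses OpenCV ORB features and RANSAC homography estimation as stand-ins; none of this code comes from the claims or the cited references, and the function names and thresholds are illustrative assumptions only.

```python
# Illustrative sketch only: keyframe matching + transform estimation of the
# kind recited in claim 1. ORB/RANSAC are assumed stand-ins, not anything
# taken from Oetiker, Ma, or Kim.
import cv2
import numpy as np

ORB = cv2.ORB_create(nfeatures=1000)
MATCHER = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_to_keyframes(live_gray, keyframes):
    """Match a live grayscale frame against pre-stored keyframe images and
    return (best keyframe index, estimated 3x3 homography) or (None, None)."""
    kp_live, des_live = ORB.detectAndCompute(live_gray, None)
    if des_live is None:
        return None, None
    best = (None, None, 0)  # (index, homography, inlier count)
    for i, kf_gray in enumerate(keyframes):
        kp_kf, des_kf = ORB.detectAndCompute(kf_gray, None)
        if des_kf is None:
            continue
        matches = MATCHER.match(des_live, des_kf)
        if len(matches) < 10:
            continue
        src = np.float32([kp_live[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_kf[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # RANSAC rejects mismatched features while estimating the transform,
        # which plays the role of the "pose delta" in the rejection's reading of Ma
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        inliers = int(mask.sum()) if mask is not None else 0
        if inliers > best[2]:
            best = (i, H, inliers)
    return best[0], best[1]
```

The returned transform could then drive guidance output (e.g., on-screen markers or motion commands), which is the step the rejection maps to Oetiker's 3D visualization and Ma's behavior update.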
Regarding claim 2, the combined teaching of Oetiker, Ma, and Kim as a whole discloses the limitations of claim 1. Ma further discloses extracting all feature data from the live captured image before matching it (see at least Par. 0027-0028: the task is performed based on features extracted from an image of a current view of the robot, where the features refer to distinct characteristics of an image, e.g., corners, edges, high contrast areas, low contrast areas [i.e., the feature data corresponds to characteristics of the image]).

Regarding claim 3, the combined teaching of Oetiker, Ma, and Kim as a whole discloses the limitations of claim 1. Oetiker further discloses repeating the steps until the remote visual inspection device reaches the target position (see at least Par. 0013: repeatedly and continuously localizing a current position of the robot system while navigating the robot system along the function path using the sensor data and the stored map; and Par. 0085: procedure 47 may further include a prediction step S22 in which the poses of the particles are updated based on the received sensed pose data [i.e., repeating the steps, implicitly until the robot system reaches its target position]).

Regarding claim 4, the combined teaching of Oetiker, Ma, and Kim as a whole discloses the limitations of claim 1. Oetiker further discloses generation of guidance markings on a display of the remote visual inspection device (see at least Par. 0091: output 3D visualization of the data to the operator on a monitor or other display device, including showing the operator the sensor's field of view in a three-dimensional view and visualizing the data in the form of markers or data visualizations, like textures, attached to the asset visualization, to automatically or semi-automatically guide a robot, such as the climbing robot 12, or any other actuated tool inside a confined space along a specific path or trajectory [i.e., generating guidance markings on the display of the remote visual inspection device]).

Regarding claim 7, the combined teaching of Oetiker, Ma, and Kim as a whole discloses the limitations of claim 1. Oetiker further discloses an autonomous device driver for moving the remote visual inspection device along the inspection path (see at least Par. 0004: climbing robots may carry an inspection camera that can capture images of the wall or structure, which can be used to detect defects in the wall; and Par. 0032: automatically or semi-automatically guide a robot or actuated tool along a specific path or trajectory [i.e., an autonomous device driver, "climbing robot", for moving the remote visual inspection device along the inspection path]).

Regarding claim 8, the combined teaching of Oetiker, Ma, and Kim as a whole discloses the limitations of claim 1. Oetiker further discloses loading the inspection path-defining image group captured during a previous inspection (see at least Par. 0032: moving the robot or other tool inside confined spaces by presenting the robot or tool and the camera's or sensor's field of view in a three-dimensional view, which may be used to automatically move a robot along a path recorded during previous missions [i.e., implicitly loading the robot with a pre-stored path recorded during previous missions]).

Regarding claim 9, claim 9 recites substantially similar limitations as set forth in claim 1. As such, claim 9 is rejected for at least a similar rationale. The Examiner further acknowledges the following additional limitation: "remote visual inspection device". However, Oetiker discloses the "remote visual inspection device" (see at least Par. 0015: including magnetic crawler robots and cameras, i.e., a mobile remote inspection device).

Regarding claim 10, claim 10 recites substantially similar limitations as set forth in claim 2. As such, claim 10 is rejected for at least a similar rationale.

Regarding claim 11, claim 11 recites substantially similar limitations as set forth in claim 3. As such, claim 11 is rejected for at least a similar rationale.

Regarding claim 12, the combined teaching of Oetiker, Ma, and Kim as a whole discloses the limitations of claim 9. Oetiker further discloses a display, and a guidance marker arranged to generate guidance markings on the display of the remote visual inspection device (see at least Par. 0091: output 3D visualization of the data to the operator on a monitor or other display device, "implicitly the display", including showing the operator the sensor's field of view in a three-dimensional view and visualizing the data in the form of markers or data visualizations, like textures, attached to the asset visualization, to automatically or semi-automatically guide a robot along a specific path or trajectory [i.e., generating guidance markings on the display of the remote visual inspection device]).

Regarding claim 15, claim 15 recites substantially similar limitations as set forth in claim 7. As such, claim 15 is rejected for at least a similar rationale.

Regarding claim 21, claim 21 recites substantially similar limitations as set forth in claim 1. As such, claim 21 is rejected for at least a similar rationale. The Examiner further acknowledges the following additional limitation: "a non-transitory computer-readable medium storing instructions executable by a remote visual inspection device, wherein the instructions, when executed, cause to the remote visual inspection device to …. the method of claim 1". However, Oetiker discloses this limitation (see at least Par. 0060: memory 38 is any memory or storage arranged to store program and data and may be, among others, a RAM, ROM, PROM, EPROM, EEPROM, and combinations thereof).

Regarding claim 22, the combined teaching of Oetiker, Ma, and Kim as a whole discloses the limitations of claim 1. Kim further discloses wherein there are a plurality of target positions along the inspection path (see at least col. 4, lines 40-42: the computing server 150 may direct the robot 120 to scan and capture pictures of inventory stored at various locations at the storage site 110 [i.e., the various locations correspond to the plurality of target positions along the inspection path]).

Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Oetiker, Ma, and Kim, as applied to claims 4 and 12 above, and further in view of Hato et al. (US-PGPUB 20210372810).

Regarding claim 5, the combined teaching of Oetiker, Ma, and Kim as a whole discloses the limitations of claim 4. Oetiker further discloses guidance markings oriented to indicate a direction the device must be moved (see at least Par. 0091: visualizing the data in the form of markers or data visualizations, like textures, attached to the asset visualization, to automatically guide a robot, such as the climbing robot 12, along a specific path or trajectory, implicitly indicating the direction the device must be moved). The combined teaching of Oetiker, Ma, and Kim as a whole does not expressly disclose that the guidance markings are on-screen lines. Hato discloses guidance markings that are on-screen lines (see at least Par. 0091: the guide lane marking line Pg1 informs the driver of the planned route of the own vehicle, based on the route information, by the two-line display shape extending in the traveling direction along the road surface). Oetiker, Ma, Kim, and Hato are combinable because they are all concerned with navigation methods. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Oetiker, Ma, and Kim to use the guide lane marking line Pg1 on a screen, as taught by Hato, in order to present the planned route via the two-line display shape extending in the traveling direction along the road surface (Hato, Par. 0091).

Regarding claim 13, claim 13 recites substantially similar limitations as set forth in claim 5. As such, claim 13 is rejected for at least a similar rationale.
Claims 16-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ma (US-PGPUB 20210023703) in view of Fagg et al. (US-PGPUB 20200250837), further in view of Kim et al. (US Patent 11,687,086), and further in view of Spiegel et al. (US-PGPUB 20200401617).

Regarding claim 16, Ma discloses a method of creating an inspection path through a machine for inspecting an object within the machine using a remote visual inspection device (this limitation has not been given any weight, as it occurs in the preamble), comprising:

capturing a video stream of a series of images of the object in an initial inspection using the remote visual inspection device (see at least Par. 0022-0026 and 0047-0048: a vision output may be provided to the operator via a video feed);

extracting features of the object shown in the images from the video stream (see at least Par. 0027-0028: the task is performed based on features extracted from an image of a current view of the robot, where the features refer to distinct characteristics of an image, e.g., corners, edges, high contrast areas, low contrast areas);

matching the extracted features from the images with the extracted features from others of the images (see at least Par. 0069-0070: keyframes captured along the route are used as the set of keyframes for the navigation task, and the robot compares the current image to the set of keyframes to identify its current location); and

estimating a transform between one image and the next image in the series, using a transformation estimation method operating on the matched features of those images (see at least Par. 0064-0067: a pose delta, i.e., a transformation, may be determined based on the comparison of the current image and the keyframe, where the delta refers to the change in the robot's pose from the keyframe to the current image, implicitly operating on the matched features of those images).

Ma does not expressly disclose selecting a subset of images from the series of images which include features of the object which are present in both the previous and subsequent images, the subset of images defining an inspection path-defining image group of key-frames. Fagg et al. discloses this limitation (see at least Par. 0031: the flow computation unit can extract a set of image features for each image frame (e.g., the current image frame, the plurality of previous image frames) and determine the matched-feature data based on one or more matching image features from the plurality of sets of image features, and the flow computation unit can determine one or more pixels in each of a plurality of image frames that represent the same object [i.e., selecting a subset of images from the series which include features of the object present in both the previous and subsequent images]; further, Par. 0034: the observation formulation unit can determine a Cartesian velocity associated with a portion of a scene depicted by a current image frame based on a comparison between the current image frame and a plurality of previous image frames [i.e., the subset of images defining an inspection path-defining image group of key-frames]). Ma and Fagg are combinable because they are both concerned with object-based path guidance. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Ma to use the observation formulation unit, as taught by Fagg, in order to determine a Cartesian velocity associated with a portion of a scene depicted by a current image frame (Fagg, Par. 0034).

The combined teaching of Ma and Fagg as a whole does not expressly disclose collecting the series of images from the remote visual inspection device located at the target position. However, Kim et al. discloses this (see at least Fig. 5 and col. 16, lines 4-27: processors of the robot 120, such as the one executing the planner 250, control 520 the robot 120 to the target location 474 along a path 470; as the robot 120 moves to the target location 474, it captures 530 images of the storage site 110 using the image sensor 210, where the captured images may be in a sequence [i.e., collecting a series of images of the object from the remote visual inspection device located at the target position]). Ma, Fagg, and Kim et al. are combinable because they are all concerned with robot navigation. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Ma and Fagg to control the robot to the target location, as taught by Kim et al., in order to capture images of the object at the target location (Kim, col. 16, lines 18-20).

The combined teaching of Ma, Fagg, and Kim et al. as a whole does not expressly disclose marking an image from the series of images as a target image to be used to inspect the object. However, Spiegel discloses marking an image from the series of images as a target image to be used to inspect the object (see at least Par. 0075: using "Scene recognition" to find, in an image sequence, the image having the closest scene to the reference image, and then using the "locate Point of Interest" process to find and mark the POI in the image [i.e., marking an image from the series of images as the target image]; note that the limitation "to be used to inspect the object" is an intended use in the claim). Ma, Fagg, Kim, and Spiegel are combinable because they are all concerned with processing acquired images. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Ma, Fagg, and Kim to use the "locate Point of Interest" process, as taught by Spiegel, in order to find and mark the image in an image sequence having the closest scene to the reference image (Spiegel, Par. 0075).
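As a rough illustration of the path-creation step in claim 16 (selecting a keyframe subset whose features persist across neighboring frames), one common approach is to promote a frame to a keyframe whenever feature overlap with the last accepted keyframe drops below a threshold. The sketch below assumes that approach; the overlap metric and threshold are my assumptions, not anything from Ma, Fagg, Kim, or Spiegel.

```python
# Illustrative sketch only: select a keyframe subset from a video stream by
# keeping frames whose feature overlap with the previous keyframe has dropped
# below a threshold. MIN_SHARED is a hypothetical tuning value.
import cv2

ORB = cv2.ORB_create(nfeatures=1000)
MATCHER = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
MIN_SHARED = 40  # hypothetical minimum shared features before a new keyframe

def select_keyframes(frames):
    """frames: iterable of grayscale images; returns indices of chosen keyframes."""
    keyframes = []
    last_des = None
    for i, frame in enumerate(frames):
        kp, des = ORB.detectAndCompute(frame, None)
        if des is None:
            continue
        if last_des is None:
            keyframes.append(i)        # first usable frame starts the path
            last_des = des
            continue
        shared = len(MATCHER.match(des, last_des))
        if shared < MIN_SHARED:        # overlap fell: record a new keyframe
            keyframes.append(i)
            last_des = des
    return keyframes
```

A scheme like this also hints at why the "minimum number of key-frames" limitation of claim 18 (indicated allowable below) is a distinct design constraint: the threshold controls density, not minimality.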
Regarding claim 17, the combined teaching of Ma, Fagg, Kim, and Spiegel as a whole discloses the limitations of claim 16. Ma further discloses removing mismatched features (see at least Par. 0070-0071: implicit in determining a matching confidence of the current image against a particular keyframe, which technically removes mismatched features when the matching confidence is smaller than a threshold).

Regarding claim 20, claim 20 recites substantially similar limitations as set forth in claim 16. As such, claim 20 is rejected for at least a similar rationale. The Examiner further acknowledges the following additional limitation: "a remote visual inspection device". However, Ma discloses the "remote visual inspection device" (see at least Par. 0005, "robot device").

Claims 23 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Oetiker, Ma, and Kim, as applied to claims 5 and 13 above, and further in view of Leen et al. (US-PGPUB 20220107239).

Regarding claim 23, the combined teaching of Oetiker, Ma, and Kim as a whole discloses the limitations of claim 5 but does not expressly disclose where the lines are arrows. Leen discloses that the mobile compute device 120 presents the route data to a non-human investigator, such as a robot or autonomous vehicle, and may display one or more directional arrows indicative of a direction to travel along the route (Par. 0169-0170). Oetiker, Ma, Kim, and Leen are combinable because they are all concerned with navigation methods. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Oetiker, Ma, and Kim to display one or more directional arrows, as taught by Leen, in order to indicate a direction in which to travel along the route (Leen, Par. 0170).

Regarding claim 25, claim 25 recites substantially similar limitations as set forth in claim 23. As such, claim 25 is rejected for at least a similar rationale.

Allowable Subject Matter

Claims 6, 14, 18, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With respect to claim 6, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation, in consideration of the claim as a whole: "wherein the lines extend between feature points in the key-frame and corresponding feature points in the live captured image".

The relevant prior art of record, Oetiker (US-PGPUB 20210310962), discloses a method for guiding a remote visual inspection device (climbing robot) along an inspection path defined by a pre-stored inspection path-defining image group, from a current position to a target position (see at least Figs. 3-4 and Par. 0032: automatically move a robot along a path recorded during previous missions to a target position), the method comprising: capturing the live image feed from the remote visual inspection device from the current position during an inspection of an object (see at least Par. 0018, 0022, and 0066); generating guidance instructions based on localization data (see at least Fig. 6, steps S31-S35, and Par. 0090-0094); and outputting the guidance instructions to enable the device to be moved along the inspection path to the target position (see at least Par. 0091), as detailed in the rejection of claim 1 above. However, Oetiker fails to teach or suggest, either alone or in combination with the other cited references, wherein the lines extend between feature points in the key-frame and corresponding feature points in the live captured image.

A further prior art of record, Hato (US-PGPUB 20210372810), discloses guidance markings that are on-screen lines (see at least Par. 0091: the guide lane marking line Pg1 informs the driver of the planned route of the own vehicle, based on the route information, by the two-line display shape extending in the traveling direction along the road surface), but fails to teach or suggest, either alone or in combination with the other cited references, wherein the lines extend between feature points in the key-frame and corresponding feature points in the live captured image.
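For context on what the allowable claim 6 feature looks like in practice, drawing lines between matched feature points in a key-frame and a live frame is essentially what OpenCV's match visualization produces. A hypothetical sketch, not drawn from the application or any cited reference:

```python
# Illustrative sketch only: draw lines between matched feature points in a
# live frame and a key-frame, roughly the on-screen visualization claim 6 recites.
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def draw_guidance_lines(live_gray, keyframe_gray):
    kp1, des1 = orb.detectAndCompute(live_gray, None)
    kp2, des2 = orb.detectAndCompute(keyframe_gray, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:30]
    # drawMatches places the two images side by side and connects matched points
    return cv2.drawMatches(live_gray, kp1, keyframe_gray, kp2, matches, None)
```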
Regarding claim 14, claim 14 recites substantially similar limitations as set forth in claim 6. As such, claim 14 is in condition for allowance for at least similar reasons, as stated above.

With respect to claim 18, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation, in consideration of the claim as a whole: "wherein the inspection path-defining image group are the minimum number of key-frames required to allow a repeat navigation using the technique of this invention".

The relevant prior art of record, Ma (US-PGPUB 20210023703), discloses a method of creating an inspection path for inspecting an object using a remote visual inspection device (see at least Figs. 4-5), comprising: capturing a video stream of a series of images of the object in an initial inspection using the remote visual inspection device (see at least Par. 0022-0026 and 0047-0048); extracting features of the object shown in the images from the video stream (see at least Par. 0027-0028); matching the extracted features from the images with the extracted features from others of the images (see at least Par. 0069-0070); and estimating a transform between one image and the next image in the series, using a transformation estimation method operating on the matched features of those images (see at least Par. 0064-0067), as detailed in the rejection of claim 16 above. However, Ma fails to teach or suggest, either alone or in combination with the other cited references, wherein the inspection path-defining image group are the minimum number of key-frames required to allow a repeat navigation using the technique of this invention.

A further prior art of record, Fagg et al. (US-PGPUB 20200250837), discloses selecting a subset of images from the series of images which include features of the object which are present in both the previous and subsequent images, the subset defining an inspection path-defining image group of key-frames (see at least Par. 0031 and 0034, as detailed in the rejection of claim 16 above). However, Fagg et al. fails to teach or suggest, either alone or in combination with the other cited references, wherein the inspection path-defining image group are the minimum number of key-frames required to allow a repeat navigation using the technique of this invention.

With respect to claim 19, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation, in consideration of the claim as a whole: "identifying additional images displaced from the inspection path and generating a recovery path to assist a user following the inspection path back to it if they drift from it".

The prior art of record, Ma (US-PGPUB 20210023703), as stated above with respect to claim 18, applies similarly to claim 19. However, Ma fails to teach or suggest, either alone or in combination with the other cited references, generating a recovery path to assist a user following the inspection path back to it if they drift from it. A further prior art of record, Naithani et al. (US Patent 11,022,982), discloses identifying additional images displaced from the inspection path (see at least col. 2, lines 26-40: identifying a misaligned segment of the route based on one or more differences between the one or more images and the benchmark visual profile), but fails to teach or suggest, either alone or in combination with the other cited references, generating a recovery path to assist a user following the inspection path back to it if they drift from it.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI, whose telephone number is (571) 272-0273. The examiner can normally be reached 9:00am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMARA ABDI/
Primary Examiner, Art Unit 2668
03/21/2026

Prosecution Timeline

Feb 16, 2023: Application Filed
Mar 22, 2023: Response after Non-Final Action
May 17, 2025: Non-Final Rejection — §103
Aug 21, 2025: Response Filed
Nov 04, 2025: Final Rejection — §103
Mar 06, 2026: Request for Continued Examination
Mar 09, 2026: Response after Non-Final Action
Mar 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602822: METHOD DEVICE AND STORAGE MEDIUM FOR BACK-END OPTIMIZATION OF SIMULTANEOUS LOCALIZATION AND MAPPING (2y 5m to grant; granted Apr 14, 2026)
Patent 12597252: METHOD OF TRACKING OBJECTS (2y 5m to grant; granted Apr 07, 2026)
Patent 12576595: SYSTEMS AND METHODS FOR IMPROVED VOLUMETRIC ADDITIVE MANUFACTURING (2y 5m to grant; granted Mar 17, 2026)
Patent 12574469: VIDEO SURVEILLANCE SYSTEM, VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM (2y 5m to grant; granted Mar 10, 2026)
Patent 12563154: VIDEO SURVEILLANCE SYSTEM, VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM (2y 5m to grant; granted Feb 24, 2026)
Based on this examiner's 5 most recent grants in similar technology; study what changed in each case to get past this examiner.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 76% (-7.5%)
Median Time to Grant: 2y 7m
PTA Risk: High

Based on 816 resolved cases by this examiner. Grant probability derived from career allow rate.
