Prosecution Insights
Last updated: April 19, 2026
Application No. 18/232,766

MODULE FOR UNDERWATER REMOTELY OPERATED VEHICLES

Status: Non-Final OA (§102, §112)
Filed: Aug 10, 2023
Examiner: STARCK, ERIC ANTHONY
Art Unit: 3615
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: TidalX AI Inc.
OA Round: 1 (Non-Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71%, above average (12 granted / 17 resolved; +18.6% vs TC avg)
Interview Lift: a strong +33.3% across resolved cases with interview
Typical Timeline: 3y 5m average prosecution; 28 applications currently pending
Career History: 45 total applications across all art units
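
The figures in this panel are simple ratios over the examiner's resolved docket. A minimal sketch of how they could be reproduced (Python; the ~52% Tech Center average is back-solved from the +18.6% delta shown above rather than stated anywhere on this page):

    from dataclasses import dataclass

    @dataclass
    class ExaminerStats:
        granted: int           # cases granted by this examiner
        resolved: int          # granted + abandoned (resolved cases only)
        tc_avg: float = 0.52   # assumed TC average, back-solved from +18.6%

        @property
        def allow_rate(self) -> float:
            """Career allow rate: share of resolved cases that granted."""
            return self.granted / self.resolved

    stats = ExaminerStats(granted=12, resolved=17)
    print(f"Career Allow Rate: {stats.allow_rate:.0%}")          # 71%
    print(f"vs TC avg: {stats.allow_rate - stats.tc_avg:+.1%}")  # +18.6%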

Statute-Specific Performance

§103: 33.8% (-6.2% vs TC avg)
§102: 23.5% (-16.5% vs TC avg)
§112: 41.7% (+1.7% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 17 resolved cases
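
Subtracting each delta from its rate gives 40.0% in all three rows, which suggests the Tech Center estimate is a single reference value. A small sketch under that assumption (the 40% figure is inferred, not stated on the page):

    # Rates from the panel above; a single 40% TC baseline is assumed,
    # back-solved from the deltas (e.g., 33.8% - (-6.2%) = 40.0%).
    TC_BASELINE = 0.40

    rates = {"§103": 0.338, "§102": 0.235, "§112": 0.417}
    for statute, rate in rates.items():
        print(f"{statute}: {rate:.1%} ({rate - TC_BASELINE:+.1%} vs TC avg)")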

Office Action

Grounds of rejection: §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the application filed on 10 August 2023. Claims 1-20 are presently pending and are presented for examination.

Priority

Acknowledgment is made of applicant’s claim of priority to provisional Application No. 63/415587, filed on 12 October 2022.

Claim Objections

Claim 3 is objected to because of the following informalities: Claim 3 recites “…a machine learning engine trained generate a result…” and should instead recite “…a machine learning engine trained to generate a result…”. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are as follows {in the original action, underline and bold mark the generic placeholder and italics mark the functional language; that formatting is not reproduced here}:

“An apparatus configured to attach to a remotely operated vehicle (ROV)…” in the claim 1 preamble. Interpreted to include “mounting hardware” as found in para [0004].

“…the one or more sensors configured to generate sensor data that is associated with an underwater task…” in claim 1. Interpreted as “…cameras, lights… robotic arms…” as found in para [0002]; “(e.g., cameras, magnetic sensors, sonar, etc.)” as found in para [0009]; “…a camera sensor, a sonar sensor, a magnetometer, a radar sensor, or a combination of these…” as found in para [0023]; “e.g., cameras, sonars, radars, magnetic sensors, etc.” as found in para [0027]; or “sensors (e.g., oil sensor, turbidity sensor, and salinity sensor), etc.” as found in para [0040].

“…the one or more processors configured to: receive the sensor data from the one or more sensors; generate a navigation plan for the ROV using the sensor data; determine, using the navigation plan, control instructions configured to control the ROV to perform the underwater task; and provide the control instructions to an interface of the ROV configured to communicate with the apparatus.” in claim 1. The one or more processors are interpreted as a generic computer and the control instructions as a computer program, as found in at least paras [0050] to [0054].

“…graphics processing unit configured to process the sensor data using a machine learning algorithm.” in claim 2. Interpreted as “an on-board graphics processing unit (GPU)” as found in para [0008].

“…a communication engine configured to communicate with a surface vessel or another ROV…” in claim 7. Interpreted as either “send the data for the underwater task to a computer through wireless or wired communication techniques” as found in para [0034] or “e.g., an acoustic communication engine” as found in para [0045].

“…one or more processors are configured to generate the navigation plan for the ROV using the sensor data by fusing the sensor data obtained from multiple sensors…” in claim 8.

“…the apparatus is configured to attach to a remotely operated vehicle (ROV)…” in claim 10.

“…one or more sensors is in a watertight housing and is configured to generate sensor data that is associated with an underwater task…” in claim 10.

“…control instructions configured to control the ROV to perform the underwater task…” in claim 10.

“…an interface of the ROV configured to communicate with the apparatus…” in claim 10. Interpreted as found in para [0032], “…the same interface 124 that usually connects to the surface vessel 108 via an umbilical cable 110…”, which is a connection port (plug or socket) supplied on the ROV.

“…communication engine configured to communicate with a surface vessel or another ROV…” in claim 16.

“…the apparatus is configured to attach to a remotely operated vehicle (ROV)…” in claim 19.

“…one or more sensors is in a watertight housing and is configured to generate sensor data that is associated with an underwater task…” in claim 19.

“…control instructions configured to control the ROV to perform the underwater task…” in claim 19.

“…an interface of the ROV configured to communicate with the apparatus…” in claim 19.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites the limitation “one or more computer vision tasks” in line 2. Claim 1, line 6, recites “an underwater task”. It is not clear whether a computer vision task is the same as or different from an underwater task. The Examiner notes: para [0038] recites “generate a result of object detection tasks to detect fish in the water” as an example of a computer vision task, while para [0015] recites “used to count types and numbers of fish in an underwater region” as an example of an underwater task. To the best of the Examiner’s understanding, an underwater task is a computer vision task or vice versa. There is insufficient antecedent basis for this limitation in the claim.
Claim 12 recites the limitation “one or more computer vision tasks” in line 2. Claim 10, line 5, recites “an underwater task”. It is not clear whether a computer vision task is the same as or different from an underwater task. The Examiner notes: para [0038] recites “generate a result of object detection tasks to detect fish in the water” as an example of a computer vision task, while para [0015] recites “used to count types and numbers of fish in an underwater region” as an example of an underwater task. To the best of the Examiner’s understanding, an underwater task is a computer vision task or vice versa. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being clearly anticipated by Willners et al. (NPL: “From market-ready ROVs to low-cost AUVs”): WILLNERS J. S. et al. From market-ready ROVs to low-cost AUVs [article online]. IEEE Xplore, 2021 [retrieved on 2025-11-20]. Retrieved from the Internet: <URL: https://ieeexplore.ieee.org/document/9705798>.

Regarding claim 1, Willners et al. discloses an apparatus (the platform; see at least fig. 2, right-side embodiment, and page 2, III, first para: “…To enable the platform to operate reliably in an autonomous fashion, additional hardware is often desired. In this section, we describe the necessary hardware modifications done on the BlueROV2 vehicle…”) configured to attach to a remotely operated vehicle (ROV) (see at least fig. 2, right-side embodiment, which has a frame bolted to an ROV), the apparatus comprising:

a watertight housing (see at least fig. 2, which shows on its right side “an integrated system with cameras and all additional electronics enclosed in a single housing”, where page 3, III.B, second para states “Fig. 2 (right) shows our vision system designed according to this approach. This requires a dedicated underwater housing, which is difficult to design and produce, as well as much more expensive than typical cylinder-shaped enclosures.”, where an underwater housing is interpreted as watertight, and page 3, III.C, first para indicates it is known to use watertight enclosures with electronics: “Additionally, with some care during the mounting stage the aluminium water tight enclosures from the BlueRobotics family can be utilised to provide efficient cooling to the processing units.”);

a mounting hardware (see at least fig. 2, which shows a frame mounted to the ROV and the “integrated vision system” mounted to the frame) that attaches the watertight housing to the ROV (see at least fig. 2);

one or more sensors (Doppler Velocity Log (DVL), stereo cameras, custom-made sensor; see at least page 2, III.A, second para: “…To enable a full 6 DoF pose estimation we equipped the vehicle with a DVL…”; page 3, III.B, first para: “We equipped the platform developed in this paper with a set of stereo cameras with a custom-made sensor…”; and second para: “…the design of a single housing to incorporate all cameras.”) in the watertight housing (see at least fig. 2), the one or more sensors configured to generate sensor data that is associated with an underwater task (mapping, inspection, planning, and payload tasks; see at least page 5, V.A: “how mapping can be performed with the developed system. The vehicle has been used with stereo vision ORB-SLAM [34] with online extrinsic calibration of a DVL, to incorporate the sensor measurements (velocity, depth and orientation) into the visual pose estimation.”, which discusses how the sensors are using data to perform mapping tasks; the other tasks are discussed on pages 5-6, sections V.A through V.B); and

one or more processors (computers; see at least page 3, III.C, first para: “…embedded computers from the Nvidia Jetson family. Thanks to the included GPU, this boards are extremely effective for testing and implementing different vision algorithms in real-time...”) in the watertight housing, the one or more processors configured to (see at least fig. 3, which shows the navigation stack flow chart): receive the sensor data from the one or more sensors (see at least fig. 3); generate a navigation plan (see at least fig. 3; page 4, IV.B: “Pose Estimation System… To enable autonomous navigation, the robot needs the ability to estimate its own position…”; page 5, IV.D: “…The pilot serves as the interface that all high-level planning algorithms will use to control the robot…”; and page 5, V.A.1.a: “…the high-level task planner [36] generates a plan—sequence of actions that leads the robot from the initial to the final state where all goals are achieved—that allows the mapping of a structure amongst other actions. The plan actions are dispatched to the low-level system, including the hardware and software components previously discussed in this paper, for execution. Our system combines goal-based mission planning, based on a temporal planner [38], and a knowledge-based framework to achieve plans for dynamic problems. Therefore, this framework can adapt the initial plan to maintain robot operability when unexpected changes (not considered in the initial plan) occur…”) for the ROV using the sensor data (see at least fig. 3); determine, using the navigation plan, control instructions (PID-controller, waypoint pilot; see at least fig. 3; page 5, IV.C, first para: “…However, if the vehicle’s frontseat is not endowed with this capability, the position can be controlled from the backseat using e.g., a cascaded PID-controller [28]…”; and page 5, IV.D, first para: “…The pilot serves as the interface that all high-level planning algorithms will use to control the robot…”) configured to control the ROV to perform the underwater task; and provide the control instructions to an interface (interpreter node; see at least fig. 3 and page 5, IV.A, first para: “…The backseat driver is able to use the data from the ROV’s sensors and command the ROV through an interpreter node, which enables user-defined software to take control over the ROV. The backseat driver can be deployed either as software on the frontseat’s dedicated computer or as a separate computer connected to the frontseat…”) of the ROV configured to communicate with the apparatus.

Regarding claim 2, Willners et al. discloses all the limitations of claim 1 as noted above. Additionally, Willners et al. discloses wherein the one or more processors comprise a graphics processing unit (GPU; see at least page 3, III.C, first para: “…embedded computers from the Nvidia Jetson family. Thanks to the included GPU, this boards are extremely effective for testing and implementing different vision algorithms in real-time...”) configured to process the sensor data using a machine learning algorithm (see at least fig. 3, where sensor data are inputs to the navigation stack, and pages 4-5, IV.B: “…In our navigation stack we use the Robot Localization Package from ROS which fuses the sensor data through an Extended Kalman Filter (EKF) [20] to perform DR. DR is however based on the integration of data containing potential noise and biases, hence the error and uncertainty can therefore grow without bound. An alternative to DR is to use natural features as references for estimating the pose using e.g., visual [21] or acoustic [22] simultaneous localisation and mapping (SLAM)… we use additional odometry from a visual SLAM node fused with the DR generated by the frontseat, to improve the position estimate, further described in section V-A...”, where both the EKF and SLAM are interpreted as a machine learning algorithm).

Regarding claim 3, Willners et al. discloses all the limitations of claim 1 as noted above. Additionally, Willners et al. discloses comprising a machine learning engine (robust underwater SLAM system, high-level task planner; see at least fig. 3; page 4, IV.B, first para: “…An alternative to DR is to use natural features as references for estimating the pose using e.g., visual [21] or acoustic [22] simultaneous localisation and mapping (SLAM)…”; page 5, V.A.1: “…Autonomous Robust Inspection: An integrated SLAM with active relocalisation for map-merging/loop-closure was deployed to test a robust underwater SLAM system [35]. The approach combines task-planning [36] and viewpoint generation with the SLAM system to endow the system with a map-merging procedure when visual tracking is lost…”; and page 5, V.A.1.a: “…AI planning solutions have shown promising results while solving complex missions in the underwater domain [37], including environment’s inspection. For the use case we present in this paper, the high-level task planner [36] generates a plan…”) trained generate a result of one or more computer vision tasks (see at least the citations above for “machine learning engine”, where mapping, map-merging, viewpoint generation, and visual tracking can each be either an underwater task or a computer vision task) based on input indicative of the sensor data (see at least fig. 3).

Regarding claim 4, Willners et al. discloses all the limitations of claim 1 as noted above. Additionally, Willners et al. discloses wherein the mounting hardware comprises at least one of a clamping system (see at least fig. 2, which shows clamp hangers with resilient material and fasteners attaching the integrated system to the frame), a screwing system (see at least fig. 2, where the clamp hangers and frame use screwing methods such as fasteners to make the frame and connect the hanger to the frame), or a magnetic system (not disclosed). {Examiner note: See the additional NPL “EDF deploys unique underwater drone” for more views of what appears to be the same platform as shown in Willners et al. fig. 2.}

Regarding claim 5, Willners et al. discloses all the limitations of claim 1 as noted above. Additionally, Willners et al. discloses wherein the ROV is untethered (see at least pages 3-4, III.C, starting in the last para of page 3: “to fully become an autonomous vehicle the tether should preferably be removed completely… we removed the fixed tether connection present in the ROV, and replaced it with an underwater connector from suburban marine [18]… In this way, the vehicle can perform tasks autonomously by disconnecting the tether completely…”) to any surface vessel (support ship; see at least page 1, I, first para: “…continuous operation of ROVs can be costly as they require constant monitoring from an operator, who is connected with a tether to the ROV from a support ship [4]…”; as the researchers untethered the ROV to make it autonomous, it is interpreted to be untethered also from the support ship described in the introduction).

Regarding claim 6, Willners et al. discloses all the limitations of claim 1 as noted above. Additionally, Willners et al. discloses wherein the one or more sensors are customized (stereo cameras, custom-made sensor; see at least page 3, III.B, first para: “We equipped the platform developed in this paper with a set of stereo cameras with a custom-made sensor. Additionally, we include in the enclosures the required computational capabilities for processing the visual information….”) for the apparatus in accordance with the underwater task (see at least page 5, V.A.1, Autonomous Robust Inspection, for the underwater task of mapping and relocalising when visual tracking is lost).

Regarding claim 7, Willners et al. discloses all the limitations of claim 1 as noted above. Additionally, Willners et al. discloses wherein the apparatus comprises a communication engine (acoustic communication; see at least III.D, second para, found on pages 3-4) configured to communicate with a surface vessel (see at least page 1, I, first para: “…continuous operation of ROVs can be costly as they require constant monitoring from an operator, who is connected with a tether to the ROV from a support ship [4].”; as the researchers untethered the ROV to make it autonomous, it is interpreted that the support ship and the autonomous ROV remain in communication via the disclosed acoustic communication) or another ROV (not disclosed). {Examiner note: See the additional prior art of Vagata et al. (US 20220145756 A1) for a teaching of communication between ROVs.}

Regarding claim 8, Willners et al. discloses all the limitations of claim 1 as noted above. Additionally, Willners et al. discloses wherein the one or more processors are configured to generate the navigation plan (see at least fig. 3 and page 4, IV.B: “Pose Estimation System… To enable autonomous navigation, the robot needs the ability to estimate its own position.… In our navigation stack we use the Robot Localization Package from ROS which fuses the sensor data through an Extended Kalman Filter (EKF) [20] to perform DR…”, along with the sections previously cited for “navigation plan” above) for the ROV using the sensor data by fusing the sensor data obtained from multiple sensors (see at least fig. 3 and page 2, II.A: “Navigation Sensors… With this set of sensors on board the ROV, sensor fusion can then be used for pose estimation…”).

Regarding claim 9, Willners et al. discloses all the limitations of claim 1 as noted above. Additionally, Willners et al. discloses wherein the interface of the ROV comprises an application programming interface (API) (see at least fig. 3 and page 4, IV.A: “Interpreter node… Robot Operating System (ROS) is being used in many robotics applications as the de facto standard for handling internal communication, offering an easy approach to a modular software system [19]. Guided by this, we designed our system to leverage ROS communications…”) through which the ROV receives control instructions (see at least fig. 3 and page 4, IV.A, third para: “…communication between the front and backseat…”).

Regarding claim 10, Willners et al. discloses a computer-implemented method (see at least fig. 3, a navigation stack, and all of pages 4-7, Section IV “Software” and Section V.A “Stereo Visual SLAM and Autonomous Inspection”), comprising: {Examiner notes: The structure and claim language for the rest of claim 10 are similar to those of claim 1, and therefore all the structure cited in the claim 1 rejection above applies here to claim 10. Instead of re-reciting the structure, the Examiner will, to the best of their understanding, explain using fig. 3 the claimed steps listed in this method as they apply to the Applicant’s fig. 2. See the copy of Willners et al. fig. 3 reproduced below with a comparison to the Applicant’s fig., modified by the Examiner.}

[Image: Willners et al. fig. 3, reproduced with the Examiner’s annotations comparing it to the Applicant’s fig. 2]

receiving sensor data (fig. 3 shows that the pose estimation receives sensor data from the frontseat sensors and the additional sensors) from one or more sensors included in an apparatus, wherein the apparatus is configured to attach to a remotely operated vehicle (ROV), wherein the one or more sensors is in a watertight housing and is configured to generate sensor data that is associated with an underwater task, wherein a mounting hardware attaches the watertight housing to the ROV; generating a navigation plan (as best understood by the Examiner, the fig. 3 pose estimation, pilot, and applications/additional sensors are used for generating a navigation plan) for the ROV using the sensor data; determining, using the navigation plan, control instructions (as best understood by the Examiner, the fig. 3 pilot and PID-controller are used for determining control instructions) configured to control the ROV to perform the underwater task; and providing the control instructions (as best understood by the Examiner, the fig. 3 interpreter is used for providing the control instructions) to an interface of the ROV configured to communicate with the apparatus.

Therefore, claim 10 is rejected for at least the same reasoning as applied to claim 1 above, along with the additional explanation using the Examiner-modified fig. 3.

Regarding claim 11, Willners et al. discloses all the limitations of claim 10 as noted above. Additionally, Willners et al. discloses comprising: processing, by a graphics processing unit (GPU; see at least page 3, III.C, first para: “…embedded computers from the Nvidia Jetson family. Thanks to the included GPU, this boards are extremely effective for testing and implementing different vision algorithms in real-time...”), the sensor data using a machine learning algorithm (see at least fig. 3, where sensor data are inputs to the navigation stack’s “pose estimation”, and pages 4-5, IV.B: “…In our navigation stack we use the Robot Localization Package from ROS which fuses the sensor data through an Extended Kalman Filter (EKF) [20] to perform DR. DR is however based on the integration of data containing potential noise and biases, hence the error and uncertainty can therefore grow without bound. An alternative to DR is to use natural features as references for estimating the pose using e.g., visual [21] or acoustic [22] simultaneous localisation and mapping (SLAM)… we use additional odometry from a visual SLAM node fused with the DR generated by the frontseat, to improve the position estimate, further described in section V-A...”, where both the EKF and SLAM are interpreted as a machine learning algorithm).

Regarding claim 12, Willners et al. discloses all the limitations of claim 10 as noted above. Additionally, Willners et al. discloses comprising: generating, by a machine learning engine (robust underwater SLAM system, high-level task planner; see at least fig. 3; page 4, IV.B, first para: “…An alternative to DR is to use natural features as references for estimating the pose using e.g., visual [21] or acoustic [22] simultaneous localisation and mapping (SLAM)…”; page 5, V.A.1: “…Autonomous Robust Inspection: An integrated SLAM with active relocalisation for map-merging/loop-closure was deployed to test a robust underwater SLAM system [35]. The approach combines task-planning [36] and viewpoint generation with the SLAM system to endow the system with a map-merging procedure when visual tracking is lost…”; and page 5, V.A.1.a: “…AI planning solutions have shown promising results while solving complex missions in the underwater domain [37], including environment’s inspection. For the use case we present in this paper, the high-level task planner [36] generates a plan…”), a result of one or more computer vision tasks (see at least the citations above for “machine learning engine”, where mapping, map-merging, viewpoint generation, and visual tracking can each be either an underwater task or a computer vision task) based on input indicative of the sensor data (see at least fig. 3).

Regarding claim 13, Willners et al. discloses all the limitations of claim 10 as noted above. Additionally, Willners et al. discloses wherein the mounting hardware comprises at least one of a clamping system (see at least fig. 2, which shows clamp hangers with resilient material and fasteners attaching the integrated system to the frame), a screwing system (see at least fig. 2, where the clamp hangers and frame use screwing methods such as fasteners to make the frame and connect the hanger to the frame), or a magnetic system (not disclosed).

Regarding claim 14, Willners et al. discloses all the limitations of claim 10 as noted above. Additionally, Willners et al. discloses wherein the ROV is untethered (see at least pages 3-4, III.C, starting in the last para of page 3: “to fully become an autonomous vehicle the tether should preferably be removed completely… we removed the fixed tether connection present in the ROV, and replaced it with an underwater connector from suburban marine [18]… In this way, the vehicle can perform tasks autonomously by disconnecting the tether completely…”) to any surface vessel (support ship; see at least page 1, I, first para: “…continuous operation of ROVs can be costly as they require constant monitoring from an operator, who is connected with a tether to the ROV from a support ship [4]…”; as the researchers untethered the ROV to make it autonomous, it is interpreted to be untethered also from the support ship described in the introduction).

Regarding claim 15, Willners et al. discloses all the limitations of claim 10 as noted above. Additionally, Willners et al. discloses wherein the one or more sensors are customized (stereo cameras, custom-made sensor; see at least page 3, III.B, first para: “We equipped the platform developed in this paper with a set of stereo cameras with a custom-made sensor. Additionally, we include in the enclosures the required computational capabilities for processing the visual information….”) for the apparatus in accordance with the underwater task (see at least page 5, V.A.1, Autonomous Robust Inspection, for the underwater task of mapping and relocalising when visual tracking is lost).

Regarding claim 16, Willners et al. discloses all the limitations of claim 10 as noted above. Additionally, Willners et al. discloses wherein the apparatus comprises a communication engine (acoustic communication; see at least III.D, second para, found on pages 3-4) configured to communicate with a surface vessel (see at least page 1, I, first para: “…continuous operation of ROVs can be costly as they require constant monitoring from an operator, who is connected with a tether to the ROV from a support ship [4].”; as the researchers untethered the ROV to make it autonomous, it is interpreted that the support ship and the autonomous ROV remain in communication via the disclosed acoustic communication) or another ROV (not disclosed).

Regarding claim 17, Willners et al. discloses all the limitations of claim 10 as noted above. Additionally, Willners et al. discloses wherein generating the navigation plan (see at least fig. 3 and page 4, IV.B: “Pose Estimation System… To enable autonomous navigation, the robot needs the ability to estimate its own position.… In our navigation stack we use the Robot Localization Package from ROS which fuses the sensor data through an Extended Kalman Filter (EKF) [20] to perform DR…”, along with the sections previously cited for “navigation plan” above) for the ROV using the sensor data comprises fusing the sensor data obtained from multiple sensors (see at least fig. 3, “Pose Estimation”, and page 2, II.A: “Navigation Sensors… With this set of sensors on board the ROV, sensor fusion can then be used for pose estimation…”) to generate the navigation plan.

Regarding claim 18, Willners et al. discloses all the limitations of claim 10 as noted above. Additionally, Willners et al. discloses wherein the interface of the ROV comprises an application programming interface (API) (see at least fig. 3 and page 4, IV.A: “Interpreter node… Robot Operating System (ROS) is being used in many robotics applications as the de facto standard for handling internal communication, offering an easy approach to a modular software system [19]. Guided by this, we designed our system to leverage ROS communications…”) through which the ROV receives control instructions (see at least fig. 3, “Interpreter”, and page 4, IV.A, third para: “…communication between the front and backseat…”).

Regarding claim 19, Willners et al. discloses a system comprising one or more computers (computers; see at least page 2, II, second para: “…the BlueRov2 has two computers on-board, the Pixhawk flight controller [14] and a Raspberry Pi called a companion…”, and page 3, III.C, first para: “…embedded computers from the Nvidia Jetson family. Thanks to the included GPU, this boards are extremely effective for testing and implementing different vision algorithms in real-time...”) and one or more storage devices (Micro-SD card, eMMC flash storage; see at least the NPL technical sheets for the Raspberry Pi 4 and NVIDIA Jetson TX2 that were available at the time Willners et al. disclosed their use) storing instructions (software; see at least fig. 3, a navigation stack, and all of pages 4-7, Section IV “Software” and Section V.A “Stereo Visual SLAM and Autonomous Inspection”) that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: {Examiner notes: The structure and claim language for the rest of claim 19 are similar to those of claim 10, and therefore all the structure cited in the claim 1 and 10 rejections above applies here to claim 19.} receiving sensor data from one or more sensors included in an apparatus, wherein the apparatus is configured to attach to a remotely operated vehicle (ROV), wherein the one or more sensors is in a watertight housing and is configured to generate sensor data that is associated with an underwater task, wherein a mounting hardware attaches the watertight housing to the ROV; generating a navigation plan for the ROV using the sensor data; determining, using the navigation plan, control instructions configured to control the ROV to perform the underwater task; and providing the control instructions to an interface of the ROV configured to communicate with the apparatus.

Therefore, claim 19 is rejected for at least the same reasoning as applied to claims 1 and 10 above.

Regarding claim 20, Willners et al. discloses all the limitations of claim 19 as noted above. Additionally, Willners et al. discloses wherein the operations comprise: processing, by a graphics processing unit (GPU; see at least page 3, III.C, first para: “…embedded computers from the Nvidia Jetson family. Thanks to the included GPU, this boards are extremely effective for testing and implementing different vision algorithms in real-time...”), the sensor data using a machine learning algorithm (see at least fig. 3, where sensor data are inputs to the navigation stack’s “pose estimation”, and pages 4-5, IV.B: “…In our navigation stack we use the Robot Localization Package from ROS which fuses the sensor data through an Extended Kalman Filter (EKF) [20] to perform DR. DR is however based on the integration of data containing potential noise and biases, hence the error and uncertainty can therefore grow without bound. An alternative to DR is to use natural features as references for estimating the pose using e.g., visual [21] or acoustic [22] simultaneous localisation and mapping (SLAM)… we use additional odometry from a visual SLAM node fused with the DR generated by the frontseat, to improve the position estimate, further described in section V-A...”, where both the EKF and SLAM are interpreted as a machine learning algorithm).

Additional Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure and may be found in the accompanying PTO-892 Notice of References Cited:

EDF deploys unique underwater drone to carry out first ever autonomous robotic inspection of wind farm foundations [article online]. www.edfenergy.com, posted May 23, 2022 [retrieved on 2025-11-24]. Retrieved from the Internet: <URL: https://www.edfenergy.com/energywise/edf-deploys-unique-underwater-drone-carry-out-first-ever-autonomous-robotic-inspection-wind>. This art relates to the disclosure of Willners et al., as the images in this article match fig. 2 of Willners et al. used in the rejection above, and it provides more images of the ROV with the attached platform along with the underwater task of the article.

Raspberry Pi 4 Tech Sheet [online]. www.raspberrypi.com, archived November 27, 2021 [retrieved on 2025-11-25]. Retrieved from the Internet: <URL: https://web.archive.org/web/20211127173321/https://www.raspberrypi.com/products/raspberry-pi-4-model-b/specifications/>. Relates to claim 19.

NVIDIA Jetson TX2 Tech Sheet [online]. www.nvidia.com, archived September 07, 2020 [retrieved on 2025-11-25]. Retrieved from the Internet: <URL: https://web.archive.org/web/20200907063659/https://developer.nvidia.com/embedded/jetson-tx2-4gb>. Relates to claim 19.

Vagata et al. (US 20220145756 A1). Relates to at least claims 7 and 16; teaches a Digital Underwater Communication Network 100 that ensures links between the subsea vehicles and the Surface Mission Control Center (see at least para [0037] and fig. 1).

A Natural Language Interface with Relayed Acoustic Communications for Improved Command and Control of AUVs [online]. ieeexplore.ieee.org, 2018 [retrieved on 2025-11-26]. Retrieved from the Internet: <URL: https://ieeexplore.ieee.org/abstract/document/8729778>. Relates to at least claims 7 and 16; this is Willners et al. reference [4].

PIXHAWK: A System for Autonomous Flight using Onboard Computer Vision [online]. ieeexplore.ieee.org, 2011 [retrieved on 2025-11-25]. Retrieved from the Internet: <URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5980229>. This is Willners et al. reference [14].

Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain [online]. ieeexplore.ieee.org, 2019 [retrieved on 2025-11-25]. Retrieved from the Internet: <URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8968049>. This is Willners et al. reference [21]; relates to claims reciting machine learning algorithms/engines and computer vision/underwater tasks.

Robust underwater SLAM using autonomous relocalisation [online]. www.sciencedirect.com, 2021 [retrieved on 2025-11-25]. Retrieved from the Internet: <URL: https://www.sciencedirect.com/science/article/pii/S2405896321015068>. This is Willners et al. reference [35]; relates to claims reciting machine learning algorithms/engines and computer vision/underwater tasks.

Situation-Aware Task Planning for Robust AUV Exploration in Extreme Environments [online]. umass.edu, 2021 [retrieved on 2025-11-25]. Retrieved from the Internet: <URL: http://rbr.cs.umass.edu/r2aw/papers/R2AW_paper_14.pdf>. This is Willners et al. reference [36]; relates to claims reciting a navigation plan and underwater tasks.

Temporal planning with preferences and time-dependent continuous costs [online]. ojs.aaai.org, 2012 [retrieved on 2025-11-25]. Retrieved from the Internet: <URL: https://ojs.aaai.org/index.php/ICAPS/article/view/13509/13358>. This is Willners et al. reference [38]; relates to claims reciting a navigation plan and underwater tasks.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC ANTHONY STARCK, whose telephone number is (571) 272-6651.
The examiner can normally be reached Monday through Friday, 9:00 am - 5:00 pm Eastern Standard Time (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SAMUEL J MORANO, can be reached at (571) 272-6684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/E.A.S./ Examiner, Art Unit 3615
/S. Joseph Morano/ Supervisory Patent Examiner, Art Unit 3615
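
For engineers following the §102 mapping above: the rejection reads claim 1's processing chain (receive sensor data, fuse it into a pose estimate, generate a plan, derive control commands, hand them to the ROV's interface) onto the backseat-driver stack of Willners et al. The sketch below is only a toy illustration of that chain; every name in it is hypothetical, and it is not code from Willners et al., from ROS, or from the application.

    import numpy as np

    # Toy illustration of the claimed pipeline; all names are hypothetical.

    def fuse_pose(sensor_data: dict) -> np.ndarray:
        """Stand-in for the EKF/visual-SLAM fusion the OA cites: simply
        average two redundant position estimates."""
        return np.mean([sensor_data["dvl"], sensor_data["visual_slam"]], axis=0)

    def plan_waypoints(pose: np.ndarray, goal: np.ndarray, n: int = 5) -> list:
        """Toy navigation plan: straight-line waypoints from pose to goal."""
        return [pose + (goal - pose) * t for t in np.linspace(1 / n, 1.0, n)]

    def pid_step(error: np.ndarray, kp: float = 0.8) -> np.ndarray:
        """Proportional-only stand-in for the cascaded PID controller."""
        return kp * error

    def control_rov(sensor_data: dict, goal: np.ndarray, send_to_interface) -> None:
        pose = fuse_pose(sensor_data)               # receive + fuse sensor data
        for wp in plan_waypoints(pose, goal):       # generate navigation plan
            send_to_interface(pid_step(wp - pose))  # determine + provide control
            pose = wp                               # assume each waypoint is reached

    control_rov(
        {"dvl": np.zeros(3), "visual_slam": np.array([0.1, 0.0, -0.1])},
        goal=np.array([5.0, 2.0, -3.0]),
        send_to_interface=lambda cmd: print("to ROV interface:", np.round(cmd, 2)),
    )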

Prosecution Timeline

Aug 10, 2023
Application Filed
Nov 26, 2025
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600437: WEIGHT RELEASE DEVICE (2y 5m to grant; granted Apr 14, 2026)
Patent 12595038: Propulsion Unit for a Marine Vessel (2y 5m to grant; granted Apr 07, 2026)
Patent 12583570: FUEL CELL SHIP (2y 5m to grant; granted Mar 24, 2026)
Patent 12570373: PLEASURE CRAFT HAVING AN IMPROVED DECK CONSTRUCTION (2y 5m to grant; granted Mar 10, 2026)
Patent 12553573: STORAGE TANK (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 99% (+33.3% lift)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 17 resolved cases by this examiner. Grant probability derived from career allow rate.
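
The "with interview" figure is consistent with adding the +33.3-point lift to the 71% base and capping the result; the page does not state its formula, so the one-liner below is an assumed model, not the product's actual computation.

    def with_interview(base: float, lift: float, cap: float = 0.99) -> float:
        """Assumed model: add the interview lift in percentage points to the
        base grant probability, capped at 99%."""
        return min(base + lift, cap)

    print(f"{with_interview(base=0.71, lift=0.333):.0%}")  # -> 99%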

Free tier: 3 strategy analyses per month