Prosecution Insights
Last updated: April 19, 2026
Application No. 19/090,629

ENCODING & DECODING USING GENERATIVE AI FOR COMPRESSION OF VIDEO STREAM WITH DEHAZING CAPABILITIES

Status: Non-Final OA (§103)
Filed: Mar 26, 2025
Examiner: PICON-FELICIANO, ANA J
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Oceaneering International Inc.
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
Projected OA Rounds: 1-2
Estimated Time to Grant: 2y 11m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 69%, above average (294 granted / 428 resolved; +10.7% vs TC avg)
Interview Lift: +21.8% on resolved cases with interview (strong)
Typical Timeline: 2y 11m avg prosecution; 31 applications currently pending
Career History: 459 total applications across all art units
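The headline figures in the cards above can be cross-checked against the raw counts shown. A quick sketch (note the 90% with-interview figure is the tool's own model output, so treating it as career allow rate plus interview lift is only an approximation that happens to line up):

```python
granted, resolved = 294, 428          # career totals from the card above

allow_rate = granted / resolved       # rounds to the 69% headline
tc_avg = allow_rate - 0.107           # card reports +10.7% vs the TC average
with_interview = allow_rate + 0.218   # career rate plus the +21.8% interview lift

print(f"Career allow rate:  {allow_rate:.1%}")     # 68.7%
print(f"Implied TC average: {tc_avg:.1%}")         # 58.0%
print(f"With interview:     {with_interview:.1%}") # ~90%, consistent with the card
```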

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)

Tech Center average is an estimate. Based on career data from 428 resolved cases.
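One sanity check on this table: each statute's rate and its delta should reconstruct the Tech Center average it was measured against. All four rows imply the same estimate, 40.0%, which suggests the deltas are taken against a single overall TC average rather than per-statute baselines (an inference from the numbers shown, not something the tool states):

```python
# (allowance rate, delta vs TC average) per statute, in percent
rows = {
    "§101": (4.3, -35.7),
    "§103": (60.1, +20.1),
    "§102": (12.7, -27.3),
    "§112": (11.2, -28.8),
}

for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta  # recover the baseline the delta was measured against
    print(f"{statute}: implied TC average = {tc_avg:.1f}%")
```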

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Office Action is sent in response to Applicant's Communication received on March 26, 2025 for application 19/090,629. This Office hereby acknowledges receipt of the following, which have been placed of record in the file: Specification, Drawings, Abstract and Claims.

3. Claims 1-17 are presented for examination.

Oath/Declaration

4. An inventor's oath or declaration has not been received.

Priority

5. Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. IN202411023741, filed on March 26, 2024.

Claim Rejections - 35 USC § 103

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

8. Claims 1, 3-4, 6-7, 10 and 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over HABIBIAN et al. (US 2023/0336754 A1) (hereinafter Habibian) in view of Salman et al. (US 2022/0262104 A1) (hereinafter Salman) in further view of DOYLE et al. (US 2020/0320776 A1) (hereinafter Doyle). Regarding claim 1, Habibian discloses a system for encoding and decoding using generative AI for compression of video stream [See Habibian: at least Figs. 4A-4B, par. 37, 71, 74 and 80 regarding FIG. 4A illustrates an example autoencoder 400 that trains neural networks to compress and decompress video content. Autoencoder 400 may thus train the encoder 462 of transmitting device 460 and the decoder 472 of receiving device 470 illustrated in FIG. 4B…To train encoders and decoders that compress and decompress content using deep generative models, a deep neural network, such as a variational autoencoder, can learn a latent variable model in which latent variables capture important information to be transmitted (and preserved) in a compressed version of an input.], comprising: a) a camera [See Habibian: at least par. 17-19, 131, 136, 138 regarding The fixed environment may include, for example, a fixed ambient scene in the video content (e.g., in security camera footage, where building features may remain static across frames) or a fixed vantage point for captured video content (e.g., for dashcam footage, where the environment may change across frames, but each frame is captured from a fixed camera with a fixed angle of view)…]; b) an encoder operatively in communication with the camera [See Habibian: at least par.
131-132 regarding At block 904, the system encodes the received video content into a latent code space through an encoder implemented by a first artificial neural network. The first artificial neural network may be trained against data from a fixed environment from which the video content was captured. The fixed environment may include, for example, a fixed ambient scene in the video content (e.g., in security camera footage, where building features may remain static across frames) or a fixed vantage point for captured video content (e.g., for dashcam footage, where the environment may change across frames, but each frame is captured from a fixed camera with a fixed angle of view)… At block 906, the system generates a compressed version of the encoded video through a probabilistic model implemented by a second neural network…] and configured to compress video data into compressed video data at a rate sufficient to allow the compressed video data to be transmitted [See Habibian: at least Figs. 4A-4B, par. 73-74 regarding Encoder 402 may be refined using a decoder 406 that decompresses code z to obtain a reconstruction x̂ of the first training video. Generally, the reconstruction x̂ may be an approximation of the uncompressed first training video and need not be an exact copy of the first training video x. Encoder 402 may compare x and x̂ to determine a distance vector or other difference value between the first training video and the reconstructed first training video. Based on the determined distance vector or other difference value, encoder 402 may adjust mappings between received video content (e.g., on a per-frame basis) and the latent code space to reduce the distance between an input uncompressed video and an encoded video generated as output by encoder 402. Encoder 402 may repeat this process using, for example, stochastic gradient descent techniques to minimize or otherwise reduce differences between an input video x and a reconstructed video x̂ resulting from decoding of a generated code z. Code z may be compressed through a code model 408, which uses a probabilistic function p(z) to compress the generated code z into a bitstream for output to a decoder 406…] over high latency data rates [See Habibian: at least par. 67 regarding Because uncompressed video content may result in large files that may involve sizable memory for physical storage and considerable bandwidth for transmission, techniques may be utilized to compress such video content. For example, consider the delivery of video content over wireless networks. It is projected that video content will comprise the majority of consumer internet traffic, with over half of that video content being delivered to mobile devices over wireless networks (e.g., via LTE, LTE-Advanced, New Radio (NR), or other wireless networks). Despite advances in the amount of available bandwidth in wireless networks, it may still be desirable to reduce the amount of bandwidth used to deliver video content in these networks…], the encoder comprising generative artificial intelligence (AI) software operative in the encoder to encode video data [See Habibian: at least par. 80 regarding To train encoders and decoders that compress and decompress content using deep generative models, a deep neural network, such as a variational autoencoder, can learn a latent variable model in which latent variables capture important information to be transmitted (and preserved) in a compressed version of an input…]; c) a transmitter operatively in communication with the encoder [See Habibian: at least Figs. 4A-4B and par.
70, 71, 85, 92, 94 regarding The encoder and decoder may be deployed on different devices (e.g., the encoder on a transmitting device, and the decoder on a receiving device) so that a transmitting device can transmit compressed video and a receiving device can decompress the received compressed video for output (e.g., to a display)… a system 450 including a transmitting device 460 that compresses video content and transmits the compressed video content to a receiving device 470 for decompression and output on receiving device 470 and/or video output devices connected to receiving device 470.]; d) a receiver operatively in communication with the transmitter [See Habibian: at least Figs. 4A-4B and par. 70, 71, 85, 92, 94 regarding The encoder and decoder may be deployed on different devices (e.g., the encoder on a transmitting device, and the decoder on a receiving device) so that a transmitting device can transmit compressed video and a receiving device can decompress the received compressed video for output (e.g., to a display)… a system 450 including a transmitting device 460 that compresses video content and transmits the compressed video content to a receiving device 470 for decompression and output on receiving device 470 and/or video output devices connected to receiving device 470…]; e) a processor operatively in communication with the receiver [See Habibian: at least Fig. 6 and par. 103 regarding FIG. 6 illustrates example operations 600 for decompressing encoded video (e.g., a received bitstream) into video content in a deep neural network according to aspects described herein. Operations 600 may be performed by a system with one or more processors (e.g., CPU, DSP, GPU, etc.) implementing the deep neural network. For example, the system may be receiving device 420.]; f) a decoder operatively in communication with the processor [See Habibian: at least Figs. 4A-4B, 6 and par. 
70-74, 80-85, 88, 93-94, 103-109 regarding Code z may be compressed through a code model 408, which uses a probabilistic function p(z) to compress the generated code z into a bitstream for output to a decoder 406… FIG. 6 illustrates example operations 600 for decompressing encoded video (e.g., a received bitstream) into video content in a deep neural network according to aspects described herein. Operations 600 may be performed by a system with one or more processors (e.g., CPU, DSP, GPU, etc.) implementing the deep neural network…]; and g) a visual display operatively in communication with the decoder [See Habibian: at least Figs. 4A-4B, 6, par. 70, 94, 107 regarding The encoder and decoder may be deployed on different devices (e.g., the encoder on a transmitting device, and the decoder on a receiving device) so that a transmitting device can transmit compressed video and a receiving device can decompress the received compressed video for output (e.g., to a display)…]. Habibian does not explicitly disclose a) a camera adapted to be disposed subsea; b) an encoder configured to compress the video data that still support active control of a subsea structure by a remote controller; and e) a processor operatively in communication with the receiver at a low latency data rate. However, Salman teaches a) a camera adapted to be disposed subsea [See Salman: at least Fig. 1 and par. 74-75 regarding As to the subsea surveillance equipment, it can include a wave glider 121, a glider 122, a remotely operated vehicle (ROV) 123, an autonomous underwater vehicle (AUV) 124, and/or a manned underwater vehicle (MUV) 125. As an example, one or more of the subsea surveillance equipment 121, 122, 123, 124 and 125 may be operatively coupled to one or more other pieces of equipment, for example, for control and/or data transmission. As an example, the subsea surveillance equipment 121, 122, 123, 124 and 125 can include data acquisition equipment, which can include one or more imaging devices that can acquire image data in a subsea environment. Such image data may be visual, sonic, infrared, laser, or another type of image data.]; and b) an encoder configured to compress the video data [See Salman: at least par. 71, 113, 169-173, 191-193, 199, 203, 218, 233-236 regarding As an example, a video compression process may perform various actions such as partitioning that partitions a frame into macroblocks, transformation (e.g., using a DCT or wavelet transform), quantization and entropy encoding.] that still support active control of a subsea structure by a remote controller [See Salman: at least Fig. 3 and par. 79 regarding FIG. 3 shows an example of a surveillance vehicle 310 and an image 320 of a portion of a subsea environment as acquired using the surveillance vehicle 310. In such an example, an inspection tool may desire output of one or more features, what a scene entails, why a scene appears as it does and what action may be suitable given a scene (e.g., control of the surveillance vehicle 310, issuance of an alarm, triggering further training of a machine learning model, etc.)]. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Habibian with Salman teachings by including “a) a camera adapted to be disposed subsea; and b) an encoder configured to compress the video data that still support active control of a subsea structure by a remote controller” because this combination has the benefit of incorporating a subsea camera and encoder for control and/or data transmission of a subsea structure [See Salman: at least Figs. 1 and 3 and par. 74, 79]. Habibian and Salman do not explicitly disclose e) a processor operatively in communication with the receiver at a low latency data rate.
However, Doyle teaches e) a processor operatively in communication with the receiver at a low latency data rate [See Doyle: at least Fig. 30B and par. 300-301 regarding FIG. 30B illustrates an exemplary inferencing system on a chip (SOC) 3100 suitable for performing inferencing using a trained model. The SOC 3100 can integrate processing components including a media processor 3102, a vision processor 3104, a GPGPU 3106 and a multi-core processor 3108. The SOC 3100 can additionally include on-chip memory 3105 that can enable a shared on-chip data pool that is accessible by each of the processing components. The processing components can be optimized for low power operation to enable deployment to a variety of machine learning platforms, including autonomous vehicles and autonomous robots. For example, one implementation of the SOC 3100 can be used as a portion of the main control system for an autonomous vehicle. Where the SOC 3100 is configured for use in autonomous vehicles the SOC is designed and configured for compliance with the relevant functional safety standards of the deployment jurisdiction. During operation, the media processor 3102 and vision processor 3104 can work in concert to accelerate computer vision operations. The media processor 3102 can enable low latency decode of multiple high-resolution (e.g., 4K, 8K) video streams…]. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Habibian and Salman with Doyle teachings by including “e) a processor operatively in communication with the receiver at a low latency data rate” because this combination has the benefit of optimizing the system for low power operation to enable deployment to a variety of machine learning platforms [See Doyle: at least Fig. 30B and par. 300-301].
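For technical context on the Habibian pipeline that anchors this rejection (an encoder maps frames to a latent code z, a code model with probability p(z) entropy-codes z into a bitstream, and a decoder reconstructs an approximation x̂), here is a toy sketch of that dataflow. The quantizing "encoder", the entropy estimate standing in for arithmetic coding, and all the numbers are illustrative placeholders, not Habibian's trained networks:

```python
import math
import random

# Toy stand-ins for the learned components in Habibian Figs. 4A-4B.
# Real systems use trained neural networks; these placeholders only
# illustrate the dataflow: encode -> code model -> decode.

def encoder(frame):                         # "encoder 402/462": frame -> latent code z
    return [round(v / 16) for v in frame]   # crude quantization as "compression"

def decoder(z):                             # "decoder 406/472": z -> reconstruction x_hat
    return [v * 16 for v in z]

def code_model_bits(z):                     # "code model 408": p(z) -> bitstream length
    # The entropy of the latent symbols approximates the arithmetic-coded size.
    counts = {}
    for s in z:
        counts[s] = counts.get(s, 0) + 1
    n = len(z)
    return sum(-c * math.log2(c / n) for c in counts.values())

random.seed(0)
frame = [random.randrange(256) for _ in range(1024)]   # fake 8-bit pixel values

z = encoder(frame)
x_hat = decoder(z)

raw_bits = 8 * len(frame)
coded_bits = code_model_bits(z)
distortion = sum(abs(a - b) for a, b in zip(frame, x_hat)) / len(frame)
print(f"{raw_bits} raw bits -> ~{coded_bits:.0f} coded bits, mean |error| = {distortion:.1f}")
```

As in the cited paragraphs, the reconstruction x̂ need not be an exact copy of the input x; training trades bitstream size (the code model term) against that reconstruction error.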
Regarding claim 3, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Salman teaches or suggests wherein the subsea structure comprises a subsea vehicle or a stationary subsea structure [See Salman: at least par. 86 regarding As an example, a system may provide for surveillance with an autonomous underwater vehicle (AUV) of one or more subsea assets (e.g., as to integrity, etc.) where an inference (e.g., resulting from active learning of a ML tool) can occur in real-time in an embedded mode (e.g., an edge device) to compress a mission duration whilst maximizing detection likelihood of one or more features of interest. In such an example, a method can include controlling one or more parameters of operation of the AUV (e.g., itself and/or instrument(s) thereof). For example, consider controlling speed of the AUV based on features present in a field of view (FOV) and/or one or more features ahead, which may be further below a path of the AUV (e.g., optionally detected by one or more other sensors).]. Regarding claim 4, Habibian, Salman and Doyle teach all of the limitations of claim 3, and are analyzed as previously discussed with respect to that claim. Further on, Salman teaches or suggests wherein the subsea vehicle comprises a remotely operated vehicle (ROV) or an autonomous underwater vehicle (AUV) [See Salman: at least par. 86 regarding As an example, a system may provide for surveillance with an autonomous underwater vehicle (AUV) of one or more subsea assets (e.g., as to integrity, etc.) where an inference (e.g., resulting from active learning of a ML tool) can occur in real-time in an embedded mode (e.g., an edge device) to compress a mission duration whilst maximizing detection likelihood of one or more features of interest. In such an example, a method can include controlling one or more parameters of operation of the AUV (e.g., itself and/or instrument(s) thereof). For example, consider controlling speed of the AUV based on features present in a field of view (FOV) and/or one or more features ahead, which may be further below a path of the AUV (e.g., optionally detected by one or more other sensors).]. Regarding claim 6, Habibian, Salman and Doyle teach all of the limitations of claim 3, and are analyzed as previously discussed with respect to that claim. Further on, Salman teaches or suggests wherein the camera is disposed in the subsea vehicle or positioned subsea in the subsea structure [See Salman: at least Fig. 1 and par. 74-75 regarding As to the subsea surveillance equipment, it can include a wave glider 121, a glider 122, a remotely operated vehicle (ROV) 123, an autonomous underwater vehicle (AUV) 124, and/or a manned underwater vehicle (MUV) 125. As an example, one or more of the subsea surveillance equipment 121, 122, 123, 124 and 125 may be operatively coupled to one or more other pieces of equipment, for example, for control and/or data transmission. As an example, the subsea surveillance equipment 121, 122, 123, 124 and 125 can include data acquisition equipment, which can include one or more imaging devices that can acquire image data in a subsea environment. Such image data may be visual, sonic, infrared, laser, or another type of image data.]. Regarding claim 7, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Habibian teaches or suggests wherein the encoder is operatively in communication with the transmitter via a wired connection, an optical connection, a wireless connection, or a combination thereof [See Habibian: at least Fig. 4B and par.
71, 92 regarding Autoencoder 400 may thus train the encoder 462 of transmitting device 460 and the decoder 472 of receiving device 470 illustrated in FIG. 4B…At receiving device 470, the bitstream generated by arithmetic coder 466 and transmitted from transmitting device 460 may be received by receiving device 470. Transmission between transmitting device 460 and receiving device 470 may occur via any of various suitable wired or wireless communication technologies. Communication between transmitting device 460 and receiving device 470 may be direct or may be performed through one or more network infrastructure components (e.g., base stations, relay stations, mobile stations, network hubs, etc.).]. Regarding claim 10, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Habibian teaches or suggests wherein the camera and the encoder are co-located or operatively in communication but not co-located [See Habibian: at least par. 131-132 regarding At block 904, the system encodes the received video content into a latent code space through an encoder implemented by a first artificial neural network. The first artificial neural network may be trained against data from a fixed environment from which the video content was captured. The fixed environment may include, for example, a fixed ambient scene in the video content (e.g., in security camera footage, where building features may remain static across frames) or a fixed vantage point for captured video content (e.g., for dashcam footage, where the environment may change across frames, but each frame is captured from a fixed camera with a fixed angle of view).]. Regarding claim 13, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim.
Further on, Doyle teaches or suggests wherein data from the receiver are provided to the processor over a transmission path at low latency data rates of up to several gigabits per second [See Doyle: at least par. 300-301, 316 regarding During operation, the media processor 3102 and vision processor 3104 can work in concert to accelerate computer vision operations. The media processor 3102 can enable low latency decode of multiple high-resolution (e.g., 4K, 8K) video streams…In addition, as described above, a distributed approach to denoising may be employed in which the GPU 3105 is in a computing device coupled to other computing devices over a network or high speed interconnect. In this embodiment, the interconnected computing devices share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications… (Transmitting data at a higher rate reduces communication latency.)]. Regarding claim 14, Habibian, Salman and Doyle teach all of the limitations of claim 13, and are analyzed as previously discussed with respect to that claim. Further on, Habibian teaches or suggests wherein the transmission path comprises one or more of a wired transmission path, a wireless transmission path, an optical transmission path, an acoustic transmission path, or a combination thereof [See Habibian: at least par. 92 regarding At receiving device 470, the bitstream generated by arithmetic coder 466 and transmitted from transmitting device 460 may be received by receiving device 470. Transmission between transmitting device 460 and receiving device 470 may occur via any of various suitable wired or wireless communication technologies.
Communication between transmitting device 460 and receiving device 470 may be direct or may be performed through one or more network infrastructure components (e.g., base stations, relay stations, mobile stations, network hubs, etc.).]. Regarding claim 15, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Habibian teaches or suggests wherein the processor is located proximate the receiver or at a distant location [See Habibian: at least par. 92-93 regarding At receiving device 470, the bitstream generated by arithmetic coder 466 and transmitted from transmitting device 460 may be received by receiving device 470. Transmission between transmitting device 460 and receiving device 470 may occur via any of various suitable wired or wireless communication technologies. Communication between transmitting device 460 and receiving device 470 may be direct or may be performed through one or more network infrastructure components (e.g., base stations, relay stations, mobile stations, network hubs, etc.). As illustrated, receiving device 470 may include an arithmetic coder 476, a code model 474, and a decoder 472. Decoder 472 may be trained by autoencoder 400 using the same or a similar data set used to train encoder 462 so that decoder 472, for a given input, can reconstruct an approximation of an input encoded by encoder 462.]. Regarding claim 16, Habibian, Salman and Doyle teach all of the limitations of claim 15, and are analyzed as previously discussed with respect to that claim. Further on, Salman teaches or suggests wherein the distant location comprises an onshore location, a surface vessel, or a rig [See Salman: at least par. 73 regarding the surface equipment can include a floating production, storage and offloading (FPSO) vessel 112, which may be coupled to a tanker 115 and various other equipment.
For example, the FPSO vessel 112 may be in fluid communication with the subsea fluid equipment 131 where the FPSO vessel 112 may receive hydrocarbon fluid that may be pumped to the tanker 115. The surface equipment can include an offshore platform 113, which may be in contact with a seafloor (e.g., a seabed) and in fluid communication with the subsea fluid equipment 132. The surface equipment can include an offshore platform 114, which may be a tension leg type of platform and in fluid communication with the subsea fluid equipment 133.] Regarding claim 17, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Habibian teaches or suggests wherein video data from the camera are provided directly to the visual display via a direct, normal video path, through a video path from the decoder after applying compression steps, or via both a direct, normal video path and through a video path from the decoder after applying compression steps [See Habibian: at least Fig. 12 and par. 146-147 regarding FIG. 12 is a block diagram illustrating an exemplary software architecture 1200 that may modularize artificial intelligence (AI) functions. Using architecture 1200, applications may be designed that may cause various processing blocks of an SOC 1220 (for example a CPU 1222, a DSP 1224, a GPU 1226, and/or an NPU 1228) to support video compression and/or decompression using deep generative models, according to aspects of the present disclosure. The AI application 1202 may be configured to call functions defined in a user space 1204 that may, for example, compress and/or decompress video signals (or encoded versions thereof) using deep generative models and adaptively compress and/or decompress video signals based on the content included in a video signal.]. 9. Claim 2 is rejected under 35 U.S.C. 
103 as being unpatentable over HABIBIAN et al. (US 2023/0336754 A1) (hereinafter Habibian) in view of Salman et al. (US 2022/0262104 A1) (hereinafter Salman) in further view of DOYLE et al. (US 2020/0320776 A1) (hereinafter Doyle) and in further view of Grilikhes (US 10,924,682 B2) (hereinafter Grilikhes). Regarding claim 2, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Habibian, Salman and Doyle do not explicitly disclose further comprising a dehazer operatively in communication with the camera and with the encoder and operative to process video data from the camera into dehazed video and provide the dehazed video to the encoder. However, Grilikhes teaches further comprising a dehazer [See Grilikhes: at least Fig. 1 and col. 3 line 63 - col. 4 line 22 regarding system 100 for providing haze removal in video. As shown in FIG. 1, system 100 receives input video 111 including a sequence of input frames as shown with respect to input frame 120 and system 100 generates output video 112 including a sequence of de-hazed output frames as shown with respect to output frame 130. Input video 111 and output video 112 may include any suitable video frames, video pictures, sequence of video frames, pictures, groups of pictures, video data, or the like in any suitable resolution.] operatively in communication with the camera [See Grilikhes: at least col. 4 lines 23-37 regarding As discussed, haze is an atmospheric phenomenon in which water, dust, or other atmospheric aerosol particles reflect some amount of light towards a capturing camera.] and with the encoder and operative to process video data from the camera into dehazed video and provide the dehazed video to the encoder [See Grilikhes: at least col. 19 lines 40-43 regarding Processing continues at operation 806, where the de-hazed second frame in the first color space is transmitted for presentation to a user, video encode, computer vision processing, etc.]. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Habibian, Salman and Doyle with Grilikhes teachings by including “further comprising a dehazer operatively in communication with the camera and with the encoder and operative to process video data from the camera into dehazed video and provide the dehazed video to the encoder” because this combination has the benefit of improving quality of the captured video frames by providing a haze removal operation to captured video frames [See Grilikhes: at least col. 1 lines 6-36, col. 4 lines 23-37]. 9. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over HABIBIAN et al. (US 2023/0336754 A1) (hereinafter Habibian) in view of Salman et al. (US 2022/0262104 A1) (hereinafter Salman) in further view of DOYLE et al. (US 2020/0320776 A1) (hereinafter Doyle) and in further view of Pugh et al. (US 8,548,742 B2) (hereinafter Pugh). Regarding claim 5, Habibian, Salman and Doyle teach all of the limitations of claim 3, and are analyzed as previously discussed with respect to that claim. Habibian, Salman and Doyle do not explicitly disclose wherein stationary structure comprises a blowout preventer (BOP). However, Pugh teaches wherein stationary structure comprises a blowout preventer (BOP) [See Pugh: at least Fig. 1B and col. 6 line 54 - col. 7 line 2 regarding FIG. 1B shows an alternate placement of the non-contact measurement system 100, with the digital camera 115, lens system 134, and light source 125 in the sensor head 140, mounted above the blowout preventer 160…]. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Habibian, Salman and Doyle with Pugh teachings by including “wherein stationary structure comprises a blowout preventer (BOP)” because this combination has the benefit of providing an alternate component for the image and video capturing control [See Pugh: at least Fig. 1B and col. 6 line 54 - col. 7 line 2, col. 12 line 59 - col. 13 line 41]. 10. Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over HABIBIAN et al. (US 2023/0336754 A1) (hereinafter Habibian) in view of Salman et al. (US 2022/0262104 A1) (hereinafter Salman) in further view of DOYLE et al. (US 2020/0320776 A1) (hereinafter Doyle) and in further view of Beye et al. (US 2024/0062507 A1) (hereinafter Beye). Regarding claim 8, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Habibian, Salman and Doyle do not explicitly disclose wherein the compression is at least 1000:1. However, Beye teaches wherein the compression is at least 1000:1 [See Beye: at least Fig. 19, par. 194, 250, 286, 291 regarding FIG. 19 is a flowchart showing an example of the processing procedure performed by the transmission-side device 30. FIG. 19 shows an example of the processing procedure on a single image when the transmission-side device 30 sends to the reception-side device 40 multiple images (frames in the case of a video image), such as video images or continuous still images. The transmission-side device 30 repeats the processing of FIG. 19 for each image… In the information processing system 1 or the information processing system 2, the setting of the processing performed by the transmission-side device may be dynamically updated, such as by dynamically changing the compression ratio of the communication data.
At that time, the setting of the processing performed by the reception-side device may also be dynamically updated…]. Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Habibian, Salman and Doyle with Beye's teachings by including "wherein the compression is at least 1000:1" because this combination has the benefit of dynamically adjusting the compression ratio at the transmitter side [See Beye: at least par. 194, 286, 291].

Regarding claim 9, Habibian, Salman, Doyle and Beye teach all of the limitations of claim 8, and are analyzed as previously discussed with respect to that claim. Further, Beye teaches or suggests where the compression is 1090:1 with a compression rate up to around 97.39% of space saving [See Beye: at least Fig. 19, par. 194, 250, 286, 291 regarding FIG. 19 is a flowchart showing an example of the processing procedure performed by the transmission-side device 30. FIG. 19 shows an example of the processing procedure on a single image when the transmission-side device 30 sends to the reception-side device 40 multiple images (frames in the case of a video image), such as video images or continuous still images. The transmission-side device 30 repeats the processing of FIG. 19 for each image… In the information processing system 1 or the information processing system 2, the setting of the processing performed by the transmission-side device may be dynamically updated, such as by dynamically changing the compression ratio of the communication data.
At that time, the setting of the processing performed by the reception-side device may also be dynamically updated… As a result, it is expected that processing settings such as the compression ratio of communication data can be dynamically changed, and that the reception-side device 52 can restore feature data with high accuracy. (It is noted that by adjusting the compression ratio at the transmitter side to a higher ratio, it will require lower resources for transmission of the data)].

11. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over HABIBIAN et al. (US 2023/0336754 A1) (hereinafter Habibian) in view of Salman et al. (US 2022/0262104 A1) (hereinafter Salman), in further view of DOYLE et al. (US 2020/0320776 A1) (hereinafter Doyle), and in further view of Embry et al. (US 2021/0382171 A1) (hereinafter Embry).

Regarding claim 11, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Habibian, Salman and Doyle do not explicitly disclose wherein: a) the receiver comprises a first acoustic modem that is operational subsea; and b) the transmitter comprises a second acoustic modem operational subsea and configured to transmit data to the receiver subsea. However, Embry teaches wherein: a) the receiver comprises a first acoustic modem that is operational subsea [See Embry: at least Figs. 5A-6, par. 47 and 68 regarding An acoustic transponder 228 can additionally implement an acoustical modem function to send, relay, or receive information encoded on acoustic carrier frequencies… the metrology system 202 can itself comprise a subsea system with a platform with numerous selectable functions. In embodiments in which the metrology system 202 includes a support structure or frame 316 that holds multiple lidar devices 308, the lidar devices 308 and acoustic transceiver or transceivers 310 can be precisely located on the single structure so they create a single referenced point cloud.
By mounting the lidar devices 308 on pan and tilt heads 312, they can provide hemispherical coverage. Cameras and lights 328 can be mounted on the support structure 316 or the pan and tilt heads 312 to enable the acquisition of visual data along with the lidar data…]; and b) the transmitter comprises a second acoustic modem operational subsea and configured to transmit data to the receiver subsea [See Embry: at least par. 50 regarding As can also be appreciated by one of skill in the art after consideration of the present disclosure, an acoustic transceiver 310 is an acoustic system that can include active and passive acoustic components. The active components can provide an acoustic signal that identifies the associated metrology system 202, provides information that allows an acoustic transceiver provided as part of another instrument or device to determine a relative range and bearing to the emitting acoustic transceiver 310, provides interrogation signals to specific acoustic transponders 228, performs an acoustic modem function, for example to transmit location information to an acoustic transponder 228, and/or the like.].

Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Habibian, Salman and Doyle with Embry's teachings by including "wherein: a) the receiver comprises a first acoustic modem that is operational subsea; and b) the transmitter comprises a second acoustic modem operational subsea and configured to transmit data to the receiver subsea" because this combination has the benefit of allowing the use of acoustic carrier frequencies for communications [See Embry: at least par. 68].

12. Claim 12 is rejected under 35 U.S.C.
103 as being unpatentable over HABIBIAN et al. (US 2023/0336754 A1) (hereinafter Habibian) in view of Salman et al. (US 2022/0262104 A1) (hereinafter Salman), in further view of DOYLE et al. (US 2020/0320776 A1) (hereinafter Doyle), and in further view of Kimpe (US 2007/0183493 A1) (hereinafter Kimpe).

Regarding claim 12, Habibian, Salman and Doyle teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Habibian, Salman and Doyle do not explicitly disclose wherein data communication between the transmitter and the receiver is high latency. However, Kimpe teaches wherein data communication between the transmitter and the receiver is high latency [See Kimpe: at least par. 6 regarding It is an object of the present invention to provide a method and device to solve problems that currently exist with transmission of image- and video transmission over bandwidth-limited networks, in particular high-quality or medical image- and video transmission. Such problems are e.g., but not limited thereto, low overall image/video quality when only a low bandwidth channel is available, severe image degradation due to introduction of bit errors on the channel and the problem of latency when communication is bidirectional. The present invention will disclose a solution based on automatic selection of the best codec type depending on spatial location in the image, methods to improve security and priority signaling in a multi-user environment, methods to make maximum use of available calculation power by automatically reconfiguring calculation blocks, methods to reduce and hide high latency problems, and methods to reduce power consumption for portable battery-operated devices.].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Habibian, Salman and Doyle with Kimpe's teachings by including "wherein data communication between the transmitter and the receiver is high latency" because this combination has the benefit of allowing video transmission over bandwidth-limited networks [See Kimpe: at least par. 6].

Conclusion

13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANA J PICON-FELICIANO, whose telephone number is (571) 272-5252. The examiner can normally be reached Monday-Friday, 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Ana Picon-Feliciano/
Examiner, Art Unit 2482

/CHRISTOPHER S KELLEY/
Supervisory Patent Examiner, Art Unit 2482
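The claim 8/9 rejection above cites compression both as a ratio (1000:1, 1090:1) and as a space-saving percentage (around 97.39%). These are two expressions of the same quantity, related by a simple identity; a minimal Python sketch of the conversion (the helper function names here are ours, for illustration only, and are not part of the Office Action or the cited references):

```python
def space_saving(ratio: float) -> float:
    """Fraction of space saved when compressing at ratio:1 (e.g. 1000:1)."""
    return 1.0 - 1.0 / ratio

def ratio_for_saving(saving: float) -> float:
    """Compression ratio (expressed as ratio:1) implied by a fractional space saving."""
    return 1.0 / (1.0 - saving)

# A 1000:1 ratio saves 1 - 1/1000 of the original size;
# conversely, a 97.39% saving implies a ratio of 1/(1 - 0.9739):1.
print(f"{space_saving(1000):.2%}")        # space saved at 1000:1
print(f"{ratio_for_saving(0.9739):.1f}")  # ratio implied by 97.39% saving
```

Note that the two scales diverge quickly at high compression: percent saving approaches 100% asymptotically, so large ratio differences correspond to tiny percentage differences.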

Prosecution Timeline

Mar 26, 2025
Application Filed
Mar 15, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598287
DISPLAY DEVICE, METHOD, COMPUTER PROGRAM CODE, AND APPARATUS FOR PROVIDING A CORRECTION MAP FOR A DISPLAY DEVICE, METHOD AND COMPUTER PROGRAM CODE FOR OPERATING A DISPLAY DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12593021
ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THEREOF
2y 5m to grant Granted Mar 31, 2026
Patent 12567163
IMAGING SYSTEM AND OBJECT DEPTH ESTIMATION METHOD
2y 5m to grant Granted Mar 03, 2026
Patent 12561788
FLUORESCENCE MICROSCOPY METROLOGY SYSTEM AND METHOD OF OPERATING FLUORESCENCE MICROSCOPY METROLOGY SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12554122
TECHNIQUES FOR PRODUCING IMAGERY IN A VISUAL EFFECTS SYSTEM
2y 5m to grant Granted Feb 17, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
90%
With Interview (+21.8%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
