Prosecution Insights
Last updated: April 19, 2026
Application No. 18/870,152

A MARINE SURROUND SENSING SYSTEM

Non-Final OA (§101, §103)
Filed
Nov 27, 2024
Examiner
GOODBODY, JOAN T
Art Unit
3664
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Cpac Systems AB
OA Round
1 (Non-Final)
Grant Probability: 49% (Moderate)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 49% (98 granted / 199 resolved; -2.8% vs TC avg)
Interview Lift: +39.7% (strong, ~+40%; measured over resolved cases with interview)
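The headline figures above are simple ratios; as an illustration (assuming, as the 89% with-interview figure implies, that the lift is an additive percentage-point adjustment to the career allow rate):

```python
# Examiner allowance statistics taken from the panel above.
granted, resolved = 98, 199
interview_lift_pts = 39.7  # percentage-point lift for cases with an interview

career_allow_rate = 100 * granted / resolved             # ~49.2%, shown as 49%
with_interview = career_allow_rate + interview_lift_pts  # ~88.9%, shown as 89%

print(f"Career allow rate: {career_allow_rate:.1f}%")
print(f"Estimated allow rate with interview: {with_interview:.1f}%")
```

This reproduces both rounded figures shown in the report (49% and 89%).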
Avg Prosecution: 3y 5m (typical timeline; 28 currently pending)
Total Applications: 227 (career history, across all art units)

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 56.6% (+16.6% vs TC avg)
§102: 6.6% (-33.4% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 199 resolved cases
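Each per-statute delta is measured against the Tech Center average estimate (the black line); that baseline can be recovered from the reported figures with simple arithmetic:

```python
# (examiner overcome rate %, delta vs TC average %) per statute, from the table above
stats = {"101": (17.0, -23.0), "103": (56.6, 16.6),
         "102": (6.6, -33.4), "112": (15.6, -24.4)}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # TC-average estimate implied by the delta
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg}%")
```

Notably, every delta here implies the same 40.0% baseline, suggesting the black-line estimate is a single TC-wide figure rather than a per-statute one.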

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-8, 14, 17, and 19-20 are amended. Claims 9 and 18 are cancelled. Claims 1-8, 10-17, and 19-20 are pending.

Priority

Acknowledgment is made of applicant’s claim for priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application SE2250688-5, filed on 06/08/2022.

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation (BRI) using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. Under BRI, words of the claim must be given their plain meaning, unless such meaning is inconsistent with the specification. The plain meaning of a term is the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time. The ordinary and customary meaning of a term may be evidenced by a variety of sources, including the words of the claims themselves, the specification, drawings, and prior art. However, the best source for determining the meaning of a claim term is the specification; the greatest clarity is obtained when the specification serves as a glossary for the claim terms. See MPEP § 2111.01(I). See also In re Marosi, 710 F.2d 799, 802, 218 USPQ 289, 292 (Fed. Cir. 1983) ("[C]laims are not to be read in a vacuum, and limitations therein are to be interpreted in light of the specification in giving them their 'broadest reasonable interpretation.'"); MPEP § 2111.01(II).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “control unit” in claims 1 and 10. The “control unit” is recited as performing only calculations, which is software. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim recites “A computer program product comprising program code for performing, when executed by a processing circuitry”, which is software only and not tangible. In order to overcome this rejection, the Office recommends amending the claims so that they recite only non-transitory media/medium. The Office recommends the use of the phrase “non-transitory media/medium comprising program code for performing…”.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-8, 10-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. [US20200074239, hereinafter Park], in view of Suresh et al. [US20200050893, hereinafter Suresh], further in view of Minear et al. [US20100207936, hereinafter Minear].

Claim 1

Park discloses a marine surround sensing system for controlling a marine vessel [see at least Park, ¶ 0052 (“According to another aspect of the present invention, an autonomous navigation method of a ship”)] comprises: Light Detection And Ranging, LiDAR, sensors mounted around the marine vessel for registering surroundings of the marine vessel [see at least Park, ¶ 0194-0195 (“The object information is not limited to object information obtained through image segmentation, and object information obtained through another sensor, such as a radar or a LiDAR, may also be an input to the obstacle map update operation. It is also possible to combine all or some of the pieces of object information. [0195] The obstacle map refers to a means for presenting object information. As an example, the obstacle map may be a grid map.
In the grid map, the space may be divided into unit regions, and object information may be displayed according to each unit region. As another example, the obstacle map may be a vector map. The obstacle map is not limited to being two dimensional and may be a 3D obstacle map. Meanwhile, the obstacle map may be a global map which presents all zones related to ship sailing from a starting point to a destination or a local map which presents certain zones around the ship.”)], a control unit with a neural network to process information about the registered surroundings which has been registered by the LiDAR sensors, wherein the information registered by the LiDAR sensors is in the form of a 3D point cloud, wherein the processing [see at least Park, ¶ 0042 (“According to an aspect of the present invention, a method for learning a neural network performed by a computing means wherein the neural network receives a marine image and outputs information about a type and a distance of at least one object included in the marine image, the method may comprises: obtaining a marine image including a sea and an obstacle; obtaining a labelling data including a plurality of labelling values generated based on the marine image, wherein the plurality of labelling values includes at least one first labelling value corresponding to the sea and at least one second labelling value corresponding to the obstacle and determined by reflecting a type information and a distance information of the obstacle; obtaining an output data including a plurality of output values by using a neural network, wherein the neural network receives the marine image and outputs the output data, and the plurality of output values includes a first output value corresponding to the first labelling value and a second output value corresponding to the second labelling value; calculating an error value by using the labelling data and the output data, wherein the error value is calculated by considering the difference 
between the first labelling value and the first output value and the difference between the second labelling value and the second output value; and updating the neural network based on the error value; wherein the plurality of labelling values and the plurality of output values are selected from a plurality of identification values, and at least one identification value of the plurality of identification values is determined by a combination of information about a type and a distance of an object.”)] comprises; a first projection, by the control unit, of the 3D point cloud into one or two 2D maps; segmentation, by the neural network in the control unit, of the one or two 2D maps, wherein an output of the segmentation is a segmented 2D map with class information for each point in the one or two 2D maps and a second projection, by the control unit of the segmented 2D map back to the 3D point cloud [see at least Park, ¶ 0048 (“According to another aspect of the present invention, a method for situation awareness of a ship using a neural network performed by a computing means, the method may comprise: obtaining a marine image including a sailable region and an obstacle by a camera installed on a ship, wherein the marine image includes a first pixel corresponding to the sailable region and a second pixel corresponding to the obstacle; obtaining an output data by using a neural network performing image segmentation, wherein the neural network receives the marine image and outputs the output data, and the output data includes a first output value corresponding to the first pixel and a second output value corresponding to the second pixel and determined by reflecting a type information of the obstacle and a first distance information of the second pixel; obtaining a second distance information of the second pixel based on a location of the second pixel on the marine image and a position information of the camera, wherein the position information of the camera is calculated 
by considering a position information of the ship; and updating the second output value by reflecting a final distance information, wherein the final distance information may be calculated based on the first distance information and the second distance information.”); 0003 (“The present invention relates to a situation awareness method and device using image segmentation and more particularly, to a situation awareness method and device using a neural network which performs image segmentation.”); 0203 (“When obstacles are presented as weights, the weights may be categorized into a plurality of numeric ranges. The respective numeric ranges may correspond to the sailable region, the unsailable region, and the unknown region. FIG. 20 is a set of diagrams relating to an obstacle map according to an exemplary embodiment of the present invention. FIG. 20 shows examples of a 2D grid map. Referring to (a) of FIG. 20, whether there is an obstacle in each grid and/or unit region or whether each grid and/or unit region is suitable for sailing is presented using an integer of 0 to 255 as a weight. A smaller value represents that the probability of presence of an obstacle is high or the unit region is unsuitable for sailing, and a unit region without a value has a weight of 255. Unsailable regions may correspond to a weight of 0 to a, unknown regions may correspond to a weight of a+1 to b, and sailable regions may correspond to a weight of b+1 to 255. a and b are integers satisfying 0≤a<b≤254. For example, unsailable regions may correspond to a weight of 0, sailable regions may correspond to a weight of 255, and unknown regions may correspond to a weight of 1 to 254. Alternatively, unsailable regions may correspond to a weight of 0 to 50, sailable regions may correspond to a weight of 51 to 200, and unknown regions may correspond to a weight of 201 to 255. Referring to (b) of FIG. 20, the obstacle map presented with weights may be visualized. 
For example, the obstacle map may be presented by adjusting the brightness of an achromatic color. In (b) of FIG. 20, a smaller weight is presented with lower brightness. A detailed method of allocating a weight to a unit region will be described below.”)]; and, where the control unit is programmed to visualize the registered surroundings based on LiDAR data enriched by the neural network displaying an image or map representing the 3D point cloud from the second projection, and wherein the enrichment comprises classification of the registered information into class objects in order to distinct between different types of objects in the surroundings, wherein the control unit is arranged to make decisions adapted to the visualized objects nearby the marine vessel depending on their class objects [see at least Park, ¶ 0103 (“A neural network which performs image segmentation to sense surroundings may receive an image and output object information. Referring to FIGS. 2 and 3, training data and input data may be in the form of an image, which may include a plurality of pixels. Output data and labeling data may be object information. Additionally, it is possible to visually deliver information by visualizing output data and labeling data.”); 0196 (“The obstacle map may include a plurality of unit regions. The plurality of unit regions may be presented in various ways according to classification criteria. As an example, the obstacle map may include a sailable region, an unsailable region, and an unknown region which may not be determined to be sailable or not. Specifically, the sailable region may be a region in which there is no obstacle or any obstacle is highly unlikely to exist, and the unsailable region may be a region in which there is an obstacle or an obstacle is highly likely to exist. Alternatively, the sailable region may correspond to a suitable zone for sailing, and the unsailable region may correspond to an unsuitable zone for sailing. 
The unknown region may be a region excluding the sailable region and the unsailable region.”); 0208 (“An update region may be determined according to the location of a ship, heading of the ship, and the like. Alternatively, an update region may be determined according to an obstacle detection sensor, such as a camera, a radar, or a LiDAR. For example, an update region may be determined by the angle of view and/or the maximum observation distance of a camera. An update angle may be determined by the angle of view of a camera, and an update distance may be determined by the maximum observation distance. Alternatively, an update region may be a certain region around a ship. An update region may be larger than a minimum region for detecting and then avoiding an obstacle.”)]. Park does not disclose some of the aspects of a marine vessel that Suresh does teach about a marine vessel [see at least Suresh ¶ 0017 (“The objects can include a seashore, a watercraft, an iceberg, a static far object, a moving far object, or plain sea. The watercraft can include a personal non-powered vessel, recreational powered vessel, sailing yacht, cargo ship, cruise ship, coast guard boat, naval vessel, barge, tugboat, fishing vessel, workboat, under-powered vessel, or anchored vessel.”)]; mounted LiDAR sensors [see at least Suresh ¶ 0066 (“LiDAR’s”)]; surroundings analysis [see at least Suresh, ¶ 0014 (“A method is provided in a first embodiment. The method comprises training, using a processor, an object detection network with training images to identify and classify objects in images from a sensor system disposed on a maritime vessel. Objects in the images are identified and classified using the processor in an offline mode. Heat maps are generated in the offline mode. Instructions regarding operation of the maritime vessel are sent using the processor based on the objects that are identified. 
The instructions include a speed or a heading.”)]; and positional information using a trained neural network [see at least Suresh, ¶ 0073 (“In an instance, images can be received by a trained neural network, such as an object detection network, image segmentation network, and object identification network. The neural network can be trained on the cloud. However, once trained, the neural network binary can be deployed to an offline system to identify objects across particular categories or classify the entire image. The neural network can provide a bounding box (e.g., x cross y) or other graphical shape (two-dimensional or three-dimensional) around an object in an image, which can size the object in the image using various methods, which may include using the diagonal length of the shape to infer size. The neural network also can provide a classification of the object in the bounding box with a confidence score.”)].

Therefore, it would be obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the situation awareness techniques of Park with the object detection network that identifies and classifies objects in images of Suresh, providing a more effective, efficient, and safer technique to help a vessel avoid obstacles and other hazards at sea and in ports.

Neither Park nor Suresh specifically discloses/teaches, but Minear more specifically teaches, techniques for combining 2D images with 3D point clouds so that the resulting image distinctly shows the obstacles [see at least Minear, Abstract (“Method and system for combining a 2D image with a 3D point cloud for improved visualization of a common scene as well as interpretation of the success of the registration process. The resulting fused data contains the combined information from the original 3D point cloud and the information from the 2D image.
The original 3D point cloud data is color coded in accordance with a color map tagging process. By fusing data from different sensors, the resulting scene has several useful attributes relating to battle space awareness, target identification, change detection within a rendered scene, and determination of registration success.”); 0002 (“The inventive arrangements concern registration of two-dimensional and three dimensional image data, and more particularly methods for visual interpretation of registration performance of 2D and 3D image data. This technique is used as a metric to determine registration success.”); 0008 (“The invention concerns a method and system for combining a 2D image with a 3D point cloud for improved visualization of a common scene as well as interpretation of the success of the registration process. The resulting fused data contains the combined information from the original 3D point cloud and the information from the 2D image. The original 3D point cloud data is color coded in accordance with a color map tagging process. By fusing data from different sensors, the resulting scene has several useful attributes relating to battle space awareness, target identification, change detection within a rendered scene, and determination of registration success.”)].

Therefore, it would be obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the situation awareness techniques of Park with the object detection network that identifies and classifies objects in images of Suresh, further with Minear's ability to combine 3D point cloud data and 2D data to provide maps, providing a more effective, efficient, and safer technique to help a vessel avoid obstacles and other hazards at sea and in ports.

Claim 2

Park, Suresh and Minear disclose/teach the system of Claim 1.
Park further discloses a helm station to visualize the registered surroundings and to provide input for manually controlling a driveline of the marine vessel [see at least Park, ¶ 0054; 0243 (“The control signal includes a speed control signal and a heading control signal. The speed control signal may be a signal for adjusting the rotational speed of a propeller. Alternatively, the speed control signal may be a signal for adjusting revolution per unit time of the propeller. The heading control signal may be a signal for adjusting the rudder. Alternatively, the heading control signal may be a signal for adjusting the wheel or helm.”); 0248 (“Even in autonomous navigation, information may be required for a person to monitor the ship. Such information may be transferred to a person through visualization. Results used or generated in the above-described situation awareness, path-planning, and path-following operations may be processed and visualized.”)]. Park does disclose this limitation but Suresh also teaches this limitation with more clarity [see at least Suresh, ¶ 0082 (“Additionally, instructions may include suggestions to a pilot, helmsman, or captain to make any of these adjustments.”)].

Therefore, it would be obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the situation awareness techniques of Park with the object detection network that identifies and classifies objects in images of Suresh, providing a more effective, efficient, and safer technique to help a vessel avoid obstacles and other hazards at sea and in ports.

Claim 3

Park, Suresh and Minear disclose/teach the system of Claim 1.
Park further discloses classified information from the classification is disclosed as a three dimensional, 3D, point cloud visualization with positional information and class information [see at least Park, ¶ 0195; 0196 (“The obstacle map may include a plurality of unit regions. The plurality of unit regions may be presented in various ways according to classification criteria. As an example, the obstacle map may include a sailable region, an unsailable region, and an unknown region which may not be determined to be sailable or not. Specifically, the sailable region may be a region in which there is no obstacle or any obstacle is highly unlikely to exist, and the unsailable region may be a region in which there is an obstacle or an obstacle is highly likely to exist. Alternatively, the sailable region may correspond to a suitable zone for sailing, and the unsailable region may correspond to an unsuitable zone for sailing. The unknown region may be a region excluding the sailable region and the unsailable region.”)]. Park does disclose this limitation but Suresh also teaches this limitation with more clarity [see at least Suresh, ¶ 0111 (“FIG. 6 is a flowchart of an example situational awareness system 600 in accordance with an embodiment. At 601, a GPS position is obtained. At 602, a stored nautical chart for 10×10 nautical miles around the GPS position is obtained. At 603, static objects and depth information from the nautical chart may be used to populate the map. At 604, AIS receiver data for 10×10 nautical miles around the GPS position is obtained. At 605, the AIS receiver data is populated on the map. If internet access is available, at 606, the map is compared with the local marine traffic authority's map and missing information is added to the map. At 607, LIDAR data is obtained and used to populate the map. At 608, RADAR data is obtained and used to populate the map. At 609, objects on the map are annotated based on camera data. 
At 610, duplicate objects are removed and error checks are run. At 611, a consolidated two-dimensional map is displayed on the monitors.”)].

Therefore, it would be obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the situation awareness techniques of Park with the object detection network that identifies and classifies objects in images of Suresh, providing a more effective, efficient, and safer technique to help a vessel avoid obstacles and other hazards at sea and in ports.

Claim 4

Park, Suresh and Minear disclose/teach the system of Claim 1. Park further discloses classified information from the classification is disclosed as a probability map [see at least Park, ¶ 0203-0204 (“When obstacles are presented as weights, the weights may be categorized into a plurality of numeric ranges. The respective numeric ranges may correspond to the sailable region, the unsailable region, and the unknown region. FIG. 20 is a set of diagrams relating to an obstacle map according to an exemplary embodiment of the present invention. FIG. 20 shows examples of a 2D grid map. Referring to (a) of FIG. 20, whether there is an obstacle in each grid and/or unit region or whether each grid and/or unit region is suitable for sailing is presented using an integer of 0 to 255 as a weight. A smaller value represents that the probability of presence of an obstacle is high or the unit region is unsuitable for sailing, and a unit region without a value has a weight of 255. Unsailable regions may correspond to a weight of 0 to a, unknown regions may correspond to a weight of a+1 to b, and sailable regions may correspond to a weight of b+1 to 255. a and b are integers satisfying 0≤a<b≤254. For example, unsailable regions may correspond to a weight of 0, sailable regions may correspond to a weight of 255, and unknown regions may correspond to a weight of 1 to 254.
Alternatively, unsailable regions may correspond to a weight of 0 to 50, sailable regions may correspond to a weight of 51 to 200, and unknown regions may correspond to a weight of 201 to 255. Referring to (b) of FIG. 20, the obstacle map presented with weights may be visualized. For example, the obstacle map may be presented by adjusting the brightness of an achromatic color. In (b) of FIG. 20, a smaller weight is presented with lower brightness. A detailed method of allocating a weight to a unit region will be described below. [0204] Object information for updating an obstacle map may be information based on an attribute of a means for obtaining the object information. For example, location information of an obstacle obtained through image segmentation may be location information relative to a camera which has captured an image input for the image segmentation. The relative location information may be converted into location information on the obstacle map so as to be applied to the obstacle map. One or both of the relative location information and the location information on the obstacle map may be used to update the obstacle map. Hereinafter, coordinates on an obstacle map are referred to as absolute coordinates.”)]. Park does disclose this limitation but Suresh also teaches this limitation with more clarity [see at least Suresh, ¶ 0073 (“In an instance, images can be received by a trained neural network, such as an object detection network, image segmentation network, and object identification network. The neural network can be trained on the cloud. However, once trained, the neural network binary can be deployed to an offline system to identify objects across particular categories or classify the entire image.
The neural network can provide a bounding box (e.g., x cross y) or other graphical shape (two-dimensional or three-dimensional) around an object in an image, which can size the object in the image using various methods, which may include using the diagonal length of the shape to infer size. The neural network also can provide a classification of the object in the bounding box with a confidence score.”)]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the situation awareness techniques of Park with the object detection network of Suresh, which uses images to identify and classify objects, thereby providing a more effective, efficient, and safer technique to help a vessel avoid obstacles and other hazards at sea and in ports.

Claim 5
Park, Suresh and Minear disclose/teach the system of Claim 4. Park further discloses the probability map is a two dimensional, 2D, point cloud visualization with positional information and class information [see at least Park, ¶ 0195]. Park does disclose this limitation, but Suresh also teaches this limitation with more clarity [see at least Suresh, ¶ 0103 (“FIG. 4 is a diagram of an example situational awareness system 400 in accordance with an embodiment. At 401, a map may display a vessel in proportionate size and show all other objects around it. The map may have zoom capability using two-finger expansion and/or contraction. The map may have a grid overlay function with options for changing grid size in the graphical user interface. The depth of the matter represented in each pixel on the map may be color coded. For example, depth greater than 80 feet may be light blue and gradually progress to darker shades of blue until those areas less than 40 feet in depth are black. Land may be represented as yellow, the vessel navigating may be light green, and other vessels may be red.
As such, a two-dimensional grid map 402 is enabled.”)]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the situation awareness techniques of Park with the object detection network of Suresh, which uses images to identify and classify objects, thereby providing a more effective, efficient, and safer technique to help a vessel avoid obstacles and other hazards at sea and in ports.

Claim 6
Park, Suresh and Minear disclose/teach the system of Claim 4. Park further discloses the probability map is a three dimensional, 3D, point cloud visualization with positional information and class information [see at least Park, ¶ 0195].

Claim 7
Park, Suresh and Minear disclose/teach the system of Claim 1. Park further discloses the classification is done with a projection-based method for semantic classification of a three dimensional, 3D, point cloud [see at least Park, ¶ 0137 (“Obtaining labelling data including distance information will be described in further detail. Distance information may be obtained using a depth camera. The depth camera may be a stereo type, a structured pattern type, a TOF type, or the like or may be a combination of two or more thereof. It is possible to generate one piece of labelling data by obtaining distance information of each pixel in an image from the depth camera. In addition to this, various methods for obtaining distance information may be used.”)].

Claim 8
Park, Suresh and Minear disclose/teach the system of Claim 1. Park further discloses each point in the visualizations is coloured with a colour of a class object [see at least Park, ¶ 0252 (“Also, it is possible to output obstacle characteristics including the distance, speed, danger, size, and collision probability of an obstacle.
Obstacle characteristics may be output using color, which may vary according to the distance, speed, danger, size, and collision probability of an obstacle.”); 0255 (“Alternatively, the black region 610 may present a region with high danger, the grey region 630 may present a region with medium danger, and a white region 650 may present a region with little danger.”)].

9. (Canceled)

Claim 10
Park, Suresh and Minear disclose/teach the system of Claim 1. Park further discloses the control unit is arranged to make decisions in such a way that: a. if there is another marine vessel within a predetermined distance, the control unit automatically lowers the speed of the marine vessel below a predetermined speed to avoid getting too close to the other marine vessel while, b. If instead a dock is registered, the marine vessel is allowed to drive faster than the predetermined speed when approaching the dock, since a dock is not a movable object compared to the other marine vessel [see at least Park, ¶ 0210 (“A weight may vary according to the type information of an object. As an example, when a ship and a buoy are detected as obstacles, the weight of the ship may be set to be smaller than the weight of the buoy. As another example, when a stationary object and a moving object are detected, the weight of the stationary object may be set to be smaller than the weight of the moving object.”); 0214 (“As another example, distance information obtained through image segmentation may include only the minimum distance to an object. When type information of the object is additionally used, it is possible to obtain the maximum distance as well as the minimum distance, and weights may be set on the basis of the distance information.”); 0223; 0235 (“FIG. 29 is a block diagram relating to an obstacle map update operation according to an exemplary embodiment of the present invention. Referring to FIG.
29, it is possible to convert location information of an obstacle into absolute coordinates using the location of the ship and/or the camera calculated with the GPS installed in the ship, the position information of the ship obtained by the IMU, and the camera position information based on the camera position information (S3100). A weight may be set using the converted object information, weather environment information, and sailing regulations (S3300), and a final obstacle map of a current frame may be output (S3500) using the obstacle map of the previous frame and an update region set on the obstacle map (S3600).”); 0242 (“In the path-following operation, a control signal is generated so that the ship may follow the planned path. FIG. 32 is a block diagram relating to a path-following operation according to an exemplary embodiment of the present invention. Referring to FIG. 32, in the path-following operation S5000, a following path, ship status information, weather environment information, sailing regulations, etc. may be input, and a control signal for the ship may be output. The input information may include all or only some of the aforementioned information and may also include information other than the aforementioned information.”)]. Park does disclose this limitation but Suresh also teaches control of a vessel with more clarity [see at least Suresh, Claim 1 (“1. A method comprising: training, using a processor, an object detection network with training images to identify and classify objects in images from a sensor system disposed on a maritime vessel; identifying objects in the images using the processor in an offline mode; classifying the objects in the images using the processor in the offline mode generating heat maps in the offline mode; and sending instructions regarding operation of the maritime vessel, using the processor, based on the objects that are identified, wherein the instructions include a speed or a heading.”)]. 
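The two-branch decision logic recited in Claim 10 (automatically slow below a predetermined speed near another vessel; allow a higher speed when approaching a stationary dock) can be sketched as a minimal example. The function name, object-class labels, and all threshold values below are illustrative assumptions; neither the claim nor the cited references specifies concrete numbers:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # hypothetical class label, e.g. "vessel" or "dock"
    distance_m: float  # distance from own vessel, in metres

def select_speed_limit(objects, cruise_speed, cautious_speed,
                       proximity_m=50.0):
    """Return the allowed speed given nearby classified objects.

    Another (movable) vessel inside the proximity radius forces the
    cautious, predetermined speed; a stationary dock does not, so the
    vessel may keep its cruise speed while approaching it.
    """
    for obj in objects:
        if obj.kind == "vessel" and obj.distance_m < proximity_m:
            return cautious_speed
    return cruise_speed

# e.g. a vessel 30 m away -> cautious speed; a dock 30 m away -> cruise speed
```

In this sketch only movable objects trigger the cautious speed, mirroring the claim's stated rationale that a dock, unlike another marine vessel, is not a movable object.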
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the situation awareness techniques of Park with the object detection network of Suresh, which uses images to identify and classify objects, thereby providing a more effective, efficient, and safer technique to help a vessel avoid obstacles and other hazards at sea and in ports.

Claim 11
Park, Suresh and Minear disclose/teach the system of Claim 1. Park further discloses a marine vessel comprising the marine surround sensing system [see at least Park, ¶ 0052 (“marine image”); 0524 (“a ship-oriented obstacle map, an existing path, and an obstacle-avoiding path may be output in a bird's eye view.”)]. Park does disclose this limitation, but Suresh also teaches a marine vessel [see at least Suresh, Abstract (“maritime vessel”)]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify/combine, with a reasonable expectation of success, the situation awareness techniques of Park with the object detection network of Suresh, which uses images to identify and classify objects, thereby providing a more effective, efficient, and safer technique to help a vessel avoid obstacles and other hazards at sea and in ports.

Claim 12
Claim 12 has similar limitations to claim 1; therefore, claim 12 is rejected with the same rationale as claim 1.

Claim 13
Claim 13 has similar limitations to claim 3; therefore, claim 13 is rejected with the same rationale as claim 3.

Claim 14
Claim 14 has similar limitations to claim 4; therefore, claim 14 is rejected with the same rationale as claim 4.

Claim 15
Claim 15 has similar limitations to claim 5; therefore, claim 15 is rejected with the same rationale as claim 5.

Claim 16
Claim 16 has similar limitations to claim 6; therefore, claim 16 is rejected with the same rationale as claim 6.
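Separately, the weight-to-region partition quoted above from Park ¶ 0203 (an integer weight of 0 to 255 per grid cell, split into unsailable, unknown, and sailable ranges by thresholds a and b) can be sketched as a minimal example. The particular values a=50 and b=200 and the function name are illustrative assumptions, chosen only to satisfy Park's stated constraint 0≤a<b≤254:

```python
def classify_weight(weight: int, a: int = 50, b: int = 200) -> str:
    """Map a grid-cell weight (0..255) to a region label.

    Per the scheme quoted from Park: 0..a is unsailable, a+1..b is
    unknown, and b+1..255 is sailable. A smaller weight means a higher
    probability of an obstacle; a cell without a value defaults to 255.
    """
    if not 0 <= weight <= 255:
        raise ValueError("weight must be an integer in 0..255")
    if weight <= a:
        return "unsailable"
    if weight <= b:
        return "unknown"
    return "sailable"

# e.g. classify_weight(0) -> "unsailable", classify_weight(255) -> "sailable"
```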
Claim 17
Claim 17 has similar limitations to claim 7; therefore, claim 17 is rejected with the same rationale as claim 7.

18. (Canceled)

Claim 19
Claim 19 has similar limitations to claim 1; therefore, claim 19 is rejected with the same rationale as claim 1.

Claim 20
Claim 20 has similar limitations to claim 1; therefore, claim 20 is rejected with the same rationale as claim 1.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOAN T GOODBODY, whose telephone number is (571) 270-7952. The examiner can normally be reached on M-TH 7-3 (US Eastern time). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.html. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VIVEK KOPPIKAR, can be reached at (571) 272-5109. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from the USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.

/JOAN T GOODBODY/
Examiner, Art Unit 3664
(571) 270-7952

Prosecution Timeline

Nov 27, 2024
Application Filed
Feb 11, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595032
SYSTEMS AND METHODS FOR MONITORING BATTERY RANGE FOR AN ELECTRIC MARINE PROPULSION SYSTEM
2y 5m to grant · Granted Apr 07, 2026
Patent 12586461
CLOUD-BASED MODEL DEPLOYMENT AND CONTROL SYSTEM (CMDCS) FOR PROVIDING AUTOMATED DRIVING SERVICES
2y 5m to grant · Granted Mar 24, 2026
Patent 12560444
JOINT ROUTING OF TRANSPORTATION SERVICES FOR AUTONOMOUS VEHICLES
2y 5m to grant · Granted Feb 24, 2026
Patent 12532794
SYSTEM AND METHOD FOR CONTROLLING AN AGRICULTURAL SYSTEM BASED ON SLIP
2y 5m to grant · Granted Jan 27, 2026
Patent 12525134
METHODS OF A MOBILE EDGE COMPUTING (MEC) DEPLOYMENT FOR UNMANNED AERIAL SYSTEM TRAFFIC MANAGEMENT (UTM) SYSTEM APPLICATIONS
2y 5m to grant · Granted Jan 13, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
49%
Grant Probability
89%
With Interview (+39.7%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 199 resolved cases by this examiner. Grant probability derived from career allow rate.
