Prosecution Insights
Last updated: April 19, 2026
Application No. 18/087,500

Device and System to Identify a Water-Based Vessel using Acoustic Signatures

Status: Final Rejection (§103)
Filed: Dec 22, 2022
Examiner: ARMSTRONG, JONATHAN D
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Anno.ai, Inc.
OA Round: 3 (Final)

Grant Probability: 52% (Moderate)
Expected OA Rounds: 4-5
Time to Grant: 3y 9m
With Interview: 54%

Examiner Intelligence

Career Allow Rate: 52% (218 granted / 415 resolved; +0.5% vs TC avg)
Interview Lift: +1.5% (minimal, roughly +2% with vs. without, among resolved cases with interview)
Typical Timeline: 3y 9m avg prosecution (63 currently pending)
Career History: 478 total applications across all art units
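The headline figures above follow directly from the career counts. A minimal sketch of the arithmetic (the dashboard's rounding convention and the assumption that the interview lift is a flat additive bump are inferences, not stated by the tool):

```python
# Career allow rate from the examiner's resolved-case counts shown above.
granted, resolved = 218, 415

allow_rate = granted / resolved * 100  # 52.53%, displayed as 52%
print(f"Career allow rate: {allow_rate:.1f}%")

# Interview lift is reported as +1.5%; treated here as a flat additive bump,
# which reproduces the 54% "with interview" figure in the projections.
interview_lift = 1.5
print(f"With interview: {allow_rate + interview_lift:.1f}%")  # ~54%
```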

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 55.6% (+15.6% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 18.4% (-21.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 415 resolved cases
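Each row's delta above is consistent with a single Tech Center average estimate of 40% (the black-line baseline; the 40% figure is inferred from the table, not stated directly). A quick consistency check:

```python
# Per-statute rates for this examiner, as shown in the table above.
examiner = {"101": 3.5, "103": 55.6, "102": 20.5, "112": 18.4}

# Inferred Tech Center baseline: all four deltas match a 40% average.
TC_AVG = 40.0

for statute, rate in examiner.items():
    delta = rate - TC_AVG
    print(f"§{statute}: {rate}% ({delta:+.1f}% vs TC avg)")
```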

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-14, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Shams (US 2018/0210065 A1), Garnier (US 2009/0207020 A1), and Song (2021, Applied Acoustics).
Regarding claims 1, 10, and 20, Shams teaches a system and a method comprising: a water-based platform [[0048] infrasound microphones 14 in such an embodiment may be infrasonic hydrophones configured to transmit signals wirelessly to the DAS 50, which in turn may be located in a control room on land or on a buoy floating on the surface 30] having an acoustic sensor and a transmitter [[0030] acoustic signals (arrows S1, S2, and S3) are transmitted to the DAS 50 by respective first (1), second (2), and third (3) infrasound microphones 14 in each of the microphone arrays 12A and 12B.], the acoustic sensor structured to capture marine acoustic data indicative of a marine vessel [[0003] common manmade sources of infrasound include aircraft wave vortices, supersonic flight, wind turbine rotation, space vehicle launches, and explosive detonations, as well as the operation of propulsion systems of surface or submerged marine vessels.; [0042] pair of microphones … microphone array], the transmitter structured to transmit a water-based platform data that includes the marine acoustic data [[0010] transmitting an electronic control signal from the DAS to a remote device indicative of the estimated properties]; a data hub configured to: receive a [data] indicative of an identity and location of the marine vessel [[abstract] recognizing the infrasound source using the coherence and a time history of the detected signals; [0024-0025; 0031] the system 10 of FIG. 1 is able not only to detect and track the location of the infrasound source 16, but also to execute preemptive measures; [0032] microphone array 12 may be in communication with a Global Positioning System (GPS) receiver 20, e.g., a weather station GPS receiver located in close proximity to the microphone array 12. The DAS 50 is in communication with the microphone array(s) 12 via network connections 120, e.g., hardwired transfer conductors or wireless communications channels; [0036] part of recognizing the infrasound source 16 within the scope of the disclosure is identifying the geolocation of the infrasound source 16 as closely as possible using the Time History blocks 150 and the signal coherence]; receive the water-based platform data [[0033] calculate a time history of the collected acoustic data from each infrasound microphone 14 at corresponding Time History Blocks 150. Acoustic data may be recorded at a desired sampling rate, e.g., 200-500 samples per second, and then processed in suitable time blocks]; and generate a data driven model based on a labeling of the water-based platform data using the [data] [[0053] Using pattern recognition such as neural networks or simple acoustic signature comparison, the DAS 50 may be used to quickly identify the infrasound source 16 … signatures of aircraft, tornadoes, and other natural or manmade sources of infrasound can be collected over time and used to fine-tune the accuracy of identification of the infrasound source 16; [0030] a server or Data Acquisition System (DAS) 50].

Shams does not explicitly teach, and yet Garnier teaches, satellite data [[0003] sensors … threats … electro-acoustic … a buoy or a satellite … report intelligence data collected from sources; [0029] first part of the data is constant and entered manually, such as: the Maritime Mobile Service Identity (MMSI)—a 9 digits unique identifier of on board RF equipments, IMO number, call sign and name, length and beam, location of position fixing antenna on the ship. A second part of the data is variable input and is collected automatically by the AIS, mostly from Global Navigation Satellite System (GNSS) data: ship's position with accuracy indication and integrity status, position time stamp, course and speed over ground, heading, rate of turn, navigational status; [0036] vessel which may be exactly identified (e.g., name, flag, owner, crew) or identified by only a subset of these characteristics; [0062] this is a means to capture and learn the normal behaviour patterns and compare the actual behaviour of a track against the normal behaviour based on history; [0072] intelligence sources may be quite diverse … satellite images]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to modify the acoustic data as detected by Shams for tracking water vessels with the satellite data, including satellite images and AIS transponders for tracking ships using names and locations, as taught by Garnier, so that the level of threat may be learned, especially when vessels deviate from expected tracks (Garnier) [[abstract]; [0003]; [0004]].

Shams does not explicitly teach, and yet Song teaches, wherein the data hub is configured to label the water-based platform data as a result of the acoustic data of the marine vessel being within a defined signal to noise ratio (SNR) of the water-based platform [[abstract] we classified underwater noise using the support vector machine (SVM) and convolutional neural networks (CNN) methods and verified the results using the original data from five classes of typical underwater noise and noise-added data with different signal-to-noise ratios (SNRs).; sec. 3.3 (classification method) Gaussian white noises of different intensities were generated through simulation and added to the original samples to construct noise-added samples with signal-to-noise ratios (SNRs) of approximately -20 dB to 10 dB, with an interval of 5 dB, to examine the relationship between the SNR and the classification result.; sec. 4 Automatic Identification System (AIS) was used to record the ship distribution. Therefore, according to the mainly different noise sources, the data can be labeled as five classes. A is a fishing boat, but sometimes there are man-made impacts with a duration of 38 s. The remaining four are all nearby cargo ships, but B and C look more alike, and D and E look more alike.]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to modify the acoustic data as detected by Shams for tracking water vessels with the labeling of Automatic Identification System (AIS) identified ships as a function of SNR as taught by Song, because different classifiers were relatively more accurate at different SNRs (Song) [[sec. 4.1] effect of input features on classification]. (Claim 10 is a method form and Claim 19 is a non-transitory computer readable medium form of the system recited in claim 1 and is therefore rejected for similar reasons.)

Regarding claim 2, Shams teaches the system of claim 1, wherein the water-based platform includes a plurality of water-based platforms distributed in a maritime operating area, and wherein the data hub is configured to receive water-based platform data from each of the plurality of water-based platforms [[0025] the system 10 includes a plurality of microphone arrays 12A and 12B (labeled Array 1 and Array 2, respectively); [0032] multiple such arrays].
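For context, the noise-augmentation procedure Song describes (adding simulated Gaussian white noise to reach target SNRs from -20 dB to 10 dB in 5 dB steps) can be sketched as below; the function name and the use of NumPy are illustrative assumptions, not Song's actual code:

```python
import numpy as np

def add_noise_at_snr(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add Gaussian white noise scaled so the result has the target SNR in dB."""
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(signal ** 2)
    # SNR(dB) = 10*log10(P_signal / P_noise)  =>  P_noise = P_signal / 10^(SNR/10)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Song's sweep: SNRs from -20 dB to 10 dB, with an interval of 5 dB.
snrs = np.arange(-20, 11, 5)  # [-20, -15, -10, -5, 0, 5, 10]
```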
Regarding claims 3 and 12, Shams teaches the system and method of claims 2 and 11, wherein the water-based platform data from each of the plurality of water-based platforms also includes a location data of each water-based platform, the location data of each water-based platform paired with the marine acoustic data of each water-based platform [[0031] detect and track the location of the infrasound source 16; [0032] microphone array 12 may be in communication with a Global Positioning System (GPS) receiver 20, e.g., a weather station GPS receiver located in close proximity to the microphone array 12].

Regarding claims 4 and 13, Shams teaches the system and method of claims 2 and 11, wherein the water-based platform data from each of the plurality of water-based platforms also includes a time data, the time data of each water-based platform paired with the marine acoustic data of each water-based platform [[0033] DAS 50 of FIG. 2A may calculate a time history of the collected acoustic data from each infrasound microphone 14 at corresponding Time History Blocks 150].

Regarding claims 5 and 14, Shams teaches the system of claims 4 and 13, wherein the [data] is indicative of an identity and location of a plurality of marine vessels [[0003] surface or submerged marine vessels], and the data hub is configured to receive the satellite data, and wherein the labeling includes labeling acoustic data of the water-based platform data with the plurality of marine vessels [[0007] recognizing and tracking mobile sources of infrasound; [0053] pattern recognition such as neural networks or simple acoustic signature comparison].

Regarding claims 7 and 16, Shams teaches the system and method of claims 4 and 13, wherein the [data] is used to curate the water-based platform data by labeling the water-based platform data when the marine vessel is within a predefined geometric distance of the water-based platform [[0026] because each microphone array 12A and 12B may have an effective infrasonic listening range of up to several hundred kilometers, a relatively extensive amount of geographical territory may be monitored for the presence of the infrasound event 16 using a relatively small number of microphone arrays 12A and 12B; [0036] identifying the geolocation of the infrasound source 16 as closely as possible using the Time History blocks 150 and the signal coherence in each possible pairing of the infrasound microphones 14 via the Coherence Calculation blocks 160].

Regarding claims 8 and 17, Shams teaches the system and method of claims 4 and 13, wherein the data driven model is also generated based on labeling the water-based platform data using derived data from the [data] [[0053] include cataloguing, e.g., in memory (M) of the DAS 50, a library or catalogue of infrasonic signatures from a number of previously-detected and recognized infrasound sources 16 over time. Using pattern recognition such as neural networks or simple acoustic signature comparison, the DAS 50 may be used to quickly identify the infrasound source 16].

Regarding claims 9 and 18, Shams teaches the system and method of claims 8 and 17, wherein the data derived from the [data] includes at least one of (1) distance between the marine vessel and the water-based platform [[0036] identifying the geolocation of the infrasound source 16]; (2) bearing of the marine vessel from the water-based platform [[0053] using the method 52 as set forth above with reference to FIGS. 2A-10B, the direction or heading of such an infrasound source 16]; and (3) orientation of the marine vessel relative to the water-based platform [[abstract] method may include estimating source properties via the DAS, including a magnitude, azimuth angle, and elevation angle, and executing a control action in response to the estimated properties; [0025] microphone arrays 12A and 12B are also shown as being a respective distance D16 and D′16 away from the infrasound source 16. As the infrasound source 16 is expected to move over time, the distances D16 and D′16 will vary over time. The infrasound source 16 is situated at an angle of elevation (θ1, θ2) with respect to each microphone array 12A and 12B, with the angles of elevation (θ1, θ2) described in further detail below with reference to FIG. 3].

Regarding claim 11, Shams teaches the method of claim 10, wherein the capturing includes capturing marine acoustic data of a plurality of water-based platforms in a maritime operating area, and transmitting water-based platform data for each of the plurality of water-based platforms to the data hub [[0025] the system 10 includes a plurality of microphone arrays 12A and 12B (labeled Array 1 and Array 2, respectively); [0030] The DAS 50 may be embodied as one or more computer devices having requisite memory (M) and a processor (P); [0032] multiple such arrays].

Claims 6, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shams (US 2018/0210065 A1), Garnier (US 2009/0207020 A1), and Song (2021, Applied Acoustics) as applied to claims 4, 13, and 19 above, and further in view of Englund (US 2020/0191613 A1).
Regarding claims 6, 15, and 20, Shams does not explicitly teach, and yet Englund teaches, the system, method, and non-transitory computer readable medium of claims 4, 13, and 19, wherein the satellite data is indicative of an image including the location of a plurality of marine vessels [[0020] the non-acoustic sensing system may include at least one of a moving image capturing system, a machine vision system, a satellite imagery system, a closed-circuit television system, and a cellular signal based system], wherein the data hub is configured to receive the satellite data and wherein the labeling includes labeling acoustic data of each of the marine vessels of the plurality of marine vessels using the image [[0015] classifying the acoustic data includes the application of AI or machine learning based algorithms; [abstract] stores the datasets in parallel; [0010] method may include classifying the acoustic data by correlating it with acoustic signatures associated with each of the target classes or types.; [0013] in one aspect, the step of correlating the acoustic data with acoustic signatures includes applying acoustic signature-based filters to detect the acoustic targets.; [0017] method may include processing or representing the datasets together with surveillance data obtained from at least one non-acoustic sensing system; [0018] the method may include generating alert criteria associated with the respective acoustic signatures, and triggering an alarm or warning in the event of the alert criteria being triggered.]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the acoustic signature recognition as taught by Shams with the acoustic signature detection of acoustic targets and satellite imagery system as taught by Englund, so that targets may be classified in a dataset with time and location stamping and can be retrieved later for further processing (Englund) [[abstract]].

Response to Arguments

Applicant's arguments, see pg. 7, filed 10/17/2025, with respect to the rejection(s) of claim(s) 1 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Song (2021, Applied Acoustics).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN D ARMSTRONG, whose telephone number is (571) 270-7339. The examiner can normally be reached M-F, 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Isam Alsomiri, can be reached at 571-272-6970. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN D ARMSTRONG/
Examiner, Art Unit 3645

/ISAM A ALSOMIRI/
Supervisory Patent Examiner, Art Unit 3645

Prosecution Timeline

Dec 22, 2022
Application Filed
Oct 28, 2024
Non-Final Rejection — §103
Mar 04, 2025
Response Filed
Jun 13, 2025
Non-Final Rejection — §103
Oct 17, 2025
Response Filed
Oct 28, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566264
ENHANCED RESOLUTION SPLIT APERTURE USING BEAM SEGMENTATION
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12535001
DOWNHOLE ACOUSTIC SYSTEM FOR DETERMINING A RATE OF PENETRATION OF A DRILL STRING AND RELATED METHODS
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12510644
Ultrasonic Microscope and Carrier for carrying an acoustic Pulse Transducer
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12504525
OBJECT DETECTION DEVICE
Granted Dec 23, 2025 (2y 5m to grant)

Patent 12495789
ULTRASONIC GENERATOR AND METHOD FOR REPELLING MOSQUITO IN VEHICLE USING THE SAME
Granted Dec 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 52%
With Interview: 54% (+1.5%)
Median Time to Grant: 3y 9m
PTA Risk: High
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
