DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-24, 27, and 29-30 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1, 29, and 30 cover a method, system, and CRM for automated passive acoustic monitoring via machine learning. The specification, as originally filed, fails to describe the machine learning model framework in sufficient detail for one of ordinary skill in the art to reasonably conclude that the applicant had possession of the claimed invention.
Specifically, the specification merely recites the intended function and desired results of machine learning sound source classification, but fails to provide sufficient disclosure regarding the machine learning framework and the processing steps performed at each stage of the machine learning model that are required to achieve the intended function. For instance, Applicant’s Spec. at [0023]-[0024] and [0026] merely lists a broad range of potential machine learning model types that may be utilized, and states the desired capability of classifying sound sources belonging to marine mammals, but remains silent as to any concrete machine learning model framework and the processing steps required to achieve the desired result. Further, Applicant’s Spec. at [0033] merely states that the method may utilize any number of types of machine learning models, and that the models may accept feature vectors as input, which may be based on any number of parameters regarding a signal of interest, in order to classify a predicted source, but remains silent as to any concrete machine learning model type, the number of layers/nodes required, and any explicit processing steps performed by each layer to achieve the desired result. Additionally, Applicant’s Spec. at [0040]-[0041] merely states that the machine learning model may be a CNN of any number of configurations regarding its input layers, hidden layers, etc., and may accept an input and apply a weight or bias, which may yield a sound source classification, but remains silent as to any specific machine learning framework and the processing step performed by each layer to yield the desired result. Applicant’s Spec. at [0042]-[0043] then describes how the machine learning models may be trained; however, the training is recited at a high level of generality, and no specific details are provided regarding how training data is analyzed to train the model to achieve the desired result of classifying the sound sources.
Finally, Applicant’s Spec. at [0064]-[0068] describes an embodiment in which the machine learning model may be a CNN and may comprise an input layer for processing data, in which filtering may be applied based on a generically recited numeric weighting and/or biasing, and in which multiple hidden layers may be present to identify more complex patterns within the audio data; however, the specification remains silent as to the processing steps taken to identify such further complex patterns. Whether one of ordinary skill in the art could devise a way to accomplish the function is not relevant to the issue of whether the inventor has shown possession of the claimed invention (see Blackboard, 574 F.3d at 1385, 91 USPQ2d at 1493). Thus, the written description is inadequate for a person of ordinary skill in the art to conclude that the applicant had possession of the claimed invention.
Claims 2-5 lack written description support because there is no specific disclosure regarding what specific filtering processes are performed by the machine learning model.
Claims 6-10 lack written description support because there is no specific disclosure regarding the specific machine learning model framework and how it yields a predicted source.
Claims 11-15 lack written description support because there is no specific disclosure regarding the feature vector extraction/transform process and how it is performed.
Claim 16 lacks written description support because there is no specific disclosure regarding how the presence detection is explicitly carried out by the machine learning model.
Claims 2-24 and 27 are further rejected due to their respective dependency upon a rejected base claim.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 6, and 29-30 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by O’Hara et al. (US 11790936 B1, “O’Hara”).
Regarding claim 1, O’Hara discloses a processor-implemented method for monitoring acoustic data comprising: accessing an acoustic sensor (Fig. 1, (25) and (15) illustrate a hydrophone array and a computing system where the hydrophone array forwards received data to the computing device), wherein the acoustic sensor includes an embedded acoustic controller (Fig. 1, (120)), wherein the embedded acoustic controller hosts a machine learning model, and wherein the acoustic sensor is coupled to one or more hydrophones; deploying, in a body of water, the acoustic sensor, wherein the acoustic sensor is submerged; receiving, by the one or more hydrophones, an underwater audio signal ([column 3, lines 1-6], the computing device may be set up on board a vessel which then tows the hydrophone array in a marine environment in order to detect marine mammals); classifying, by the machine learning model, a predicted source of the underwater audio signal ([column 8, lines 40-43], a CNN may be trained on transformed spectrograms corresponding to received audio data in order to detect and classify sounds); and reporting, to a user by the acoustic sensor, the predicted source of the underwater audio signal, wherein the reporting is accomplished using a communications device ([column 9, lines 38-43], the vessel operator may be notified of the presence of a marine mammal if detected) ([column 13, lines 38-44], the model may be configured to classify the species of mammal detected by allowing the model to learn various marine mammal species through training).
Regarding claim 6, O’Hara discloses the method of claim 1. O’Hara further discloses the predicted source comprises a marine mammal ([column 4, lines 26-29], marine mammal classification models may infer the presence of a marine mammal based on vocalizations received from audio data).
Regarding claim 29, the claim is a CRM claim corresponding to claim 1 and is therefore rejected for the same reasons.
Regarding claim 30, the claim is a system claim corresponding to claim 1 and is therefore rejected for the same reasons.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 2-5, 7, and 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Williams et al. ("Enhancing automated analysis of marine soundscapes using ecoacoustic indices and machine learning," Ecological Indicators 140 (2022): 108986, “Williams”).
Regarding claim 2, O’Hara discloses the method of claim 1.
O’Hara may not explicitly disclose the classifying is based on filtering, by the embedded acoustic controller, the underwater audio signal for a first frequency band, wherein the first frequency band is associated with a first source of interest.
Williams teaches the classifying is based on filtering, by the embedded acoustic controller, the underwater audio signal for a first frequency band, wherein the first frequency band is associated with a first source of interest ([pg. 4], classification is performed based on three different frequency bands with various index values within each. Fish vocalizations were categorized as belonging to one band, whereas snapping shrimp were associated with another band).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara, to include the filtering of Williams with a reasonable expectation of success, with the motivation of using ecoacoustic indices in order to monitor the health of marine habitats [abstract].
Regarding claim 3, O’Hara, as modified in view of Williams teaches the method of claim 2. Williams further teaches the filtering includes a second frequency band, wherein the second frequency band is associated with a second source of interest ([pg. 4], classification is performed based on three different frequency bands with various index values within each. Fish vocalizations were categorized as belonging to one band, whereas snapping shrimp were associated with another band).
Regarding claim 4, O’Hara, as modified in view of Williams teaches the method of claim 3. Williams further teaches the underwater audio signal is filtered for the first frequency band and the second frequency band simultaneously ([pg. 4], classification is performed based on three different frequency bands with various index values within each. Fish vocalizations were categorized as belonging to one band, whereas snapping shrimp were associated with another band).
Regarding claim 5, O’Hara, as modified in view of Williams teaches the method of claim 3. Williams further teaches the classifying includes a first classifying, wherein the first classifying is based on the first frequency band, wherein the classifying includes a second classifying, wherein the second classifying includes the second frequency band, and wherein the first classifying and the second classifying occur simultaneously ([pg. 4], classification is performed based on three different frequency bands with various index values within each. Fish vocalizations were categorized as belonging to one band, whereas snapping shrimp were associated with another band) (it is the examiner’s interpretation that, depending on the content of the received acoustic signals, classification of frequency content is performed simultaneously, where the received signal is assigned to the respective band in which it falls).
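For illustration only, the band-assignment approach described in Williams (a received signal categorized by the frequency band in which its energy falls) can be sketched as follows. The band edges, band names, sample rate, and test signal below are hypothetical values chosen for the sketch, not values taken from the reference:

```python
import math

def band_energy(samples, rate, f_lo, f_hi):
    """Energy of the signal within [f_lo, f_hi) Hz via a direct DFT.

    Illustrative only: a real system would use an FFT and calibrated
    filters; this brute-force DFT keeps the sketch self-contained.
    """
    n = len(samples)
    energy = 0.0
    for k in range(n // 2):
        freq = k * rate / n
        if f_lo <= freq < f_hi:
            re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            energy += re * re + im * im
    return energy

def classify_by_band(samples, rate, bands):
    """Assign the signal to the named band holding the most energy."""
    return max(bands, key=lambda name: band_energy(samples, rate, *bands[name]))

# Hypothetical bands loosely echoing the fish vs. snapping-shrimp split.
bands = {"low (fish)": (50, 800), "high (shrimp)": (2000, 8000)}
rate = 16000
tone = [math.sin(2 * math.pi * 400 * t / rate) for t in range(512)]
print(classify_by_band(tone, rate, bands))  # a 400 Hz tone falls in the low band
```

Under this sketch a signal is classified into whichever band dominates its spectrum, which is one plausible reading of the band-based categorization the reference describes.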
Regarding claim 7, O’Hara, as modified in view of Williams teaches the method of claim 5. O’Hara further teaches the predicted source comprises a species of marine mammal ([column 9, lines 37-43], the model may be trained to further classify species of marine mammal to be detected through vocalization annotation).
Regarding claim 15, O’Hara, as modified in view of Williams teaches the method of claim 7. O’Hara further teaches generating an associated probability score, wherein the associated probability score predicts an accuracy of the classifying ([column 7, lines 5-23], audio data can be reviewed by biostatisticians to provide validation labels which may then be included in the model to assign confidence ratings to data regarding various types of sound sources).
Regarding claim 16, O’Hara, as modified in view of Williams teaches the method of claim 15. O’Hara further teaches alerting a vessel of the species of marine mammal ([column 4, lines 17-22], software aboard the vessel may run automatically and may determine the presence of a marine mammal to flag to the vessel).
Claim(s) 8 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Williams and Gillespie et al. ("Passive acoustic methods for tracking the 3D movements of small cetaceans around marine structures," PLoS One 15.5 (2020): e0229058, “Gillespie”).
Regarding claim 8, O’Hara, as modified in view of Williams teaches the method of claim 7. O’Hara, as modified in view of Williams may not explicitly teach the predicted source further comprises an individual animal within the species of marine mammal.
Gillespie teaches the predicted source further comprises an individual animal within the species of marine mammal (Fig. 8 illustrates localization of an individual porpoise along a track over a period of time)([pg. 4], signals were identified as porpoise clicks if they had a peak frequency within a designated band).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara, as modified in view of Williams, to include the individual animal identification of Gillespie with a reasonable expectation of success, with the motivation of tracking cetacean movements near man-made structures (turbines) [abstract].
Regarding claim 10, O’Hara, as modified in view of Williams and Gillespie teaches the method of claim 8. Gillespie further teaches filtering, by the acoustic embedded controller, the underwater audio signal for a frequency band, wherein the individual animal is associated with the frequency band (Fig. 8 illustrates localization of an individual porpoise along a track over a period of time) ([pg. 4], signals were identified as porpoise clicks if they had a peak frequency within a designated band).
Claim(s) 11-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Williams and Trawicki ("Multispecies discrimination of whales (cetaceans) using Hidden Markov Models (HMMs)," Ecological Informatics 61 (2021): 101223, “Trawicki”).
Regarding claim 11, O’Hara, as modified in view of Williams teaches the method of claim 7. O’Hara, as modified in view of Williams may not explicitly teach the classifying is accomplished using one or more feature vectors, wherein the one or more feature vectors are created by the embedded acoustic controller, and wherein the one or more feature vectors are based on the underwater audio signal.
Trawicki teaches the classifying is accomplished using one or more feature vectors, wherein the one or more feature vectors are created by the embedded acoustic controller, and wherein the one or more feature vectors are based on the underwater audio signal ([pg. 3], MFCCs are the classical features used in parametrization of vocalizations; according to the frame and step size, feature vectors are extracted to take advantage of stationarity; after applying a Hamming window, a Fourier transform is computed in order to extract features that account for non-linearity in frequencies across the audio spectrum in order to more accurately approximate the vocalizations for recognition) ([pg. 5], Table 5 illustrates a comparison of the number of MFCCs utilized and feature vectors extracted along with their correct results and accuracy).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara, as modified in view of Williams to include the feature vector extraction of Trawicki with a reasonable expectation of success, with the motivation of accounting for non-linearity in the frequencies in order to more accurately approximate vocalizations for recognition purposes [pg. 4].
Regarding claim 12, O’Hara, as modified in view of Williams and Trawicki teaches the method of claim 11. Trawicki further teaches the classifying includes transforming the one or more feature vectors, wherein the transforming is based on Mel-Frequency Cepstral Coefficients (MFCCs) ([pg. 3], MFCCs are the classical features used in parametrization of vocalizations; according to the frame and step size, feature vectors are extracted to take advantage of stationarity; after applying a Hamming window, a Fourier transform is computed in order to extract features that account for non-linearity in frequencies across the audio spectrum in order to more accurately approximate the vocalizations for recognition).
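For illustration only, the MFCC pipeline summarized from Trawicki (Hamming window, Fourier transform, mel-scale filterbank, cepstral coefficients) can be sketched in miniature. All parameter values below (frame length, filter count, coefficient count, sample rate) are illustrative assumptions, not values from the reference:

```python
import math

def hz_to_mel(f):  # standard mel scale
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, rate, n_filters=8, n_coeffs=5):
    """Toy MFCC pipeline: Hamming window -> power spectrum ->
    mel filterbank -> log -> DCT. Parameters are hypothetical."""
    n = len(frame)
    windowed = [x * (0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)))
                for i, x in enumerate(frame)]
    # Power spectrum by direct DFT (an FFT would be used in practice).
    power = []
    for k in range(n // 2 + 1):
        re = sum(windowed[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(windowed[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append((re * re + im * im) / n)
    # Triangular filters spaced evenly on the mel scale.
    mel_max = hz_to_mel(rate / 2)
    mel_pts = [mel_to_hz(i * mel_max / (n_filters + 1)) for i in range(n_filters + 2)]
    bins = [int(f * n / rate) for f in mel_pts]
    log_energies = []
    for j in range(1, n_filters + 1):
        e = 0.0
        for k in range(bins[j - 1], bins[j + 1] + 1):
            if k <= bins[j]:
                w = (k - bins[j - 1]) / max(1, bins[j] - bins[j - 1])
            else:
                w = (bins[j + 1] - k) / max(1, bins[j + 1] - bins[j])
            if 0 <= k < len(power):
                e += w * power[k]
        log_energies.append(math.log(e + 1e-10))
    # DCT-II of the log filterbank energies gives the cepstral coefficients.
    return [sum(le * math.cos(math.pi * c * (m + 0.5) / n_filters)
                for m, le in enumerate(log_energies)) for c in range(n_coeffs)]

rate = 8000
frame = [math.sin(2 * math.pi * 440 * t / rate) for t in range(256)]
feats = mfcc(frame, rate)
print(len(feats))  # 5 coefficients per frame
```

The resulting coefficient vectors are the sort of per-frame feature vectors that would feed a downstream classifier.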
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Williams, Trawicki, and Bittle et al. ("A review of current marine mammal detection and classification algorithms for use in automated passive acoustic monitoring," Proceedings of Acoustics, Victor Harbor, SA: Australian Acoustical Society, 2013, “Bittle”).
Regarding claim 13, O’Hara, as modified in view of Williams and Trawicki teaches the method of claim 11. O’Hara, as modified in view of Williams and Trawicki may not explicitly teach the classifying includes transforming the one or more feature vectors, wherein the transforming is based on a Fast Fourier Transform (FFT).
Bittle teaches the classifying includes transforming the one or more feature vectors, wherein the transforming is based on a Fast Fourier Transform (FFT) ([pg. 4], feature extraction can be achieved by employing a click detector that utilizes a 512 point FFT in order to detect sperm whale clicks as part of a real time tracking system).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara, as modified in view of Williams and Trawicki to include the FFT of Bittle with a reasonable expectation of success, with the motivation of accurately detecting clicks made by marine mammals in order to track them in real-time [pg.4].
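For illustration only, a click detector built around a 512-point FFT, of the kind Bittle mentions, can be sketched as follows. The energy threshold and the test signal are hypothetical, and the FFT is a textbook radix-2 implementation rather than anything taken from the reference:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT (len(x) must be a power of two)."""
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def detect_clicks(samples, frame=512, threshold=10.0):
    """Flag frames whose 512-point FFT energy exceeds `threshold`.

    The frame size mirrors the 512-point FFT mentioned in Bittle; the
    threshold is a hypothetical stand-in for a calibrated detector."""
    hits = []
    for i in range(0, len(samples) - frame + 1, frame):
        spectrum = fft([complex(s) for s in samples[i:i + frame]])
        # By Parseval's theorem this equals the time-domain energy.
        energy = sum(abs(c) ** 2 for c in spectrum) / frame
        if energy > threshold:
            hits.append(i)
    return hits

# Quiet background with one loud transient "click" in the second frame.
signal = [0.01] * 512 + [5.0] * 64 + [0.01] * 448
print(detect_clicks(signal))  # the click frame starts at sample 512
```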
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Williams, Trawicki, and Ibrahim et al. ("A new approach for North Atlantic right whale upcall detection," 2016 International Symposium on Computer, Consumer and Control (IS3C), IEEE, 2016, “Ibrahim”).
Regarding claim 14, O’Hara, as modified in view of Williams and Trawicki teaches the method of claim 11. O’Hara, as modified in view of Williams and Trawicki may not explicitly teach the classifying includes transforming the one or more feature vectors, wherein the transforming is based on a wavelet transform.
Ibrahim teaches the classifying includes transforming the one or more feature vectors, wherein the transforming is based on a wavelet transform ([pg. 2], detection algorithm consists of three steps including feature extraction based on a discrete wavelet transform followed by MFCCs calculation in order to detect NARW upcalls).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara, as modified in view of Williams and Trawicki to include the wavelet transform of Ibrahim with a reasonable expectation of success, with the motivation of extracting feature vectors and MFCCs in order to detect calls originating from the North Atlantic Right Whale [pg.2].
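For illustration only, the discrete-wavelet-transform feature extraction step Ibrahim describes can be sketched with the simplest wavelet, the Haar transform. The single-level decomposition and the upsweep test signal below are illustrative assumptions, not the reference's algorithm:

```python
import math

def haar_dwt(samples):
    """One level of the Haar discrete wavelet transform: pairwise
    averages (approximation band) and differences (detail band).
    A toy stand-in for the DWT step described in Ibrahim."""
    approx = [(samples[i] + samples[i + 1]) / math.sqrt(2)
              for i in range(0, len(samples) - 1, 2)]
    detail = [(samples[i] - samples[i + 1]) / math.sqrt(2)
              for i in range(0, len(samples) - 1, 2)]
    return approx, detail

# A slow upsweep concentrates its energy in the approximation band.
sweep = [math.sin(2 * math.pi * (0.01 + 0.0001 * t) * t) for t in range(256)]
approx, detail = haar_dwt(sweep)
e_a = sum(a * a for a in approx)
e_d = sum(d * d for d in detail)
print(e_a > e_d)  # low-frequency content dominates the approximation band
```

The subband energies (or further decomposition levels) would then serve as inputs to MFCC computation or a classifier, in the spirit of the cited detection pipeline.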
Claim(s) 17-18 and 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Toma et al. ("Smart embedded passive acoustic devices for real-time hydroacoustic surveys," Measurement 125 (2018): 592-605, “Toma”).
Regarding claim 17, O’Hara discloses the method of claim 1. O’Hara may not explicitly disclose the acoustic sensor includes a plurality of embedded acoustic controllers.
Toma teaches the acoustic sensor includes a plurality of embedded acoustic controllers ([pg. 2], the A2 hydrophone array consists of 4 slave hydrophones that each have a controller).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara, to include the multiple hydrophones with embedded controllers of Toma with a reasonable expectation of success, with the motivation of providing directional sound source information in hydroacoustic surveys [pg. 3].
Regarding claim 18, O’Hara, as modified in view of Toma teaches the method of claim 17. Toma further teaches each hydrophone in the one or more hydrophones is coupled to a unique embedded acoustic controller in the plurality of embedded acoustic controllers ([pg. 2], the A2 hydrophone array consists of 4 slave hydrophones that each have a controller).
Regarding claim 27, O’Hara discloses the method of claim 1. O’Hara may not explicitly teach the acoustic sensor is integrated with an unmanned underwater vehicle (UUV).
Toma teaches the acoustic sensor is integrated with an unmanned underwater vehicle (UUV) ([pg. 14], The A1 and A2 acoustic systems are designed for mobile platforms such as gliders/AUVs).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara to include the AUV integration of Toma with a reasonable expectation of success, with the motivation of conducting underwater PAM surveys for reasons such as monitoring underwater noise, marine mammal population, detection of fish reproduction areas, detection of greenhouse gases, etc. [pg. 1-2].
Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Premus et al. ("A wave glider-based, towed hydrophone array system for autonomous, real-time, passive acoustic marine mammal monitoring," The Journal of the Acoustical Society of America 152.3 (2022): 1814-1828, “Premus”).
Regarding claim 19, O’Hara discloses the method of claim 1. O’Hara may not explicitly disclose serially coupling at least two hydrophones within the one or more hydrophones, wherein the serially coupling enables one or more customized array configurations.
Premus teaches serially coupling at least two hydrophones within the one or more hydrophones, wherein the serially coupling enables one or more customized array configurations ([pg. 5], a thin cable connects all 32 of the hydrophones comprising the array) ([pg. 8], the towed hydrophone array is linear and straight with each hydrophone being equally spaced; the examiner interprets the linearity and spacing of hydrophones to implicitly indicate that the coupling between hydrophones is serial).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara, to include the serially coupled hydrophones of Premus with a reasonable expectation of success, with the motivation of improving over the ship based arrays and single hydrophone systems for detecting marine mammals that vocalize in the same frequency bands as shipping noise [pg. 14].
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Premus and Lowenhar et al. ("Power Over Ethernet Daisy Chained Acoustic Emission System for Structure Health Monitoring," World Conference on Acoustic Emission, Cham: Springer International Publishing, 2017, “Lowenhar”).
Regarding claim 20, O’Hara, as modified in view of Premus teaches the method of claim 19. Premus further teaches the serially coupling enables distributed beamforming, wherein the distributed beamforming localizes the predicted source ([pg. 8], the towed hydrophone array is linear and straight with each hydrophone being equally spaced; the examiner interprets the linearity and spacing of hydrophones to implicitly indicate that the coupling between hydrophones is serial) ([pg. 10], by beamforming with the hydrophone array, marine mammal vocalizations may be identified and localized despite nearby noise interference).
O’Hara, as modified in view of Premus may not explicitly teach the serially coupling includes a daisy chained power over ethernet protocol, and wherein the serially coupling enables distributed beamforming, wherein the distributed beamforming localizes the predicted source.
Lowenhar teaches the serially coupling includes a daisy chained power over ethernet protocol ([pg. 5], the AE system includes a power over ethernet injector or hub to transfer data between nodes as well as power subsequent nodes) (Fig. 3 illustrates an example of a daisy-chained multiple channel AE system).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara, as modified in view of Premus to include the daisy chained power over ethernet protocol of Lowenhar with a reasonable expectation of success, with the motivation of allowing the network cable to carry electrical power for the operation of sensors without using separated power cords [pg. 4].
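For illustration only, the delay-and-sum beamforming that underlies localization with a linear, equally spaced array (as discussed for Premus) can be sketched as follows. The array geometry, sample rate, source bearing, and sound speed are hypothetical values chosen for the sketch:

```python
import math

def delay_and_sum(signals, rate, spacing, speed, angle_deg):
    """Steer a linear, equally spaced array toward angle_deg by
    advancing each element's signal by its geometric arrival lag and
    summing. Geometry and signal values here are hypothetical."""
    delay = spacing * math.sin(math.radians(angle_deg)) / speed  # s per element
    n = min(len(s) for s in signals)
    out = []
    for t in range(n):
        acc = 0.0
        for m, s in enumerate(signals):
            idx = t + round(m * delay * rate)  # advance to undo the arrival lag
            acc += s[idx] if 0 <= idx < len(s) else 0.0
        out.append(acc / len(signals))
    return out

def steered_power(signals, rate, spacing, speed, angle_deg):
    out = delay_and_sum(signals, rate, spacing, speed, angle_deg)
    return sum(x * x for x in out) / len(out)

# Simulate a 200 Hz plane wave arriving at 30 degrees on a 4-element array.
rate, spacing, speed = 48000, 1.5, 1500.0
lag = spacing * math.sin(math.radians(30)) / speed  # arrival lag per element
tone = lambda t: math.sin(2 * math.pi * 200 * t / rate)
signals = [[tone(t - round(m * lag * rate)) for t in range(2048)] for m in range(4)]
p_on = steered_power(signals, rate, spacing, speed, 30)
p_off = steered_power(signals, rate, spacing, speed, -30)
print(p_on > p_off)  # output power peaks when steered at the true bearing
```

Sweeping the steering angle and taking the bearing of maximum output power is the basic mechanism by which such an array localizes a source.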
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Bannoura et al. ("Acoustic wake-up receivers for home automation control applications," Electronics 5.1 (2016): 4, “Bannoura”).
Regarding claim 21, O’Hara discloses the method of claim 1. O’Hara may not explicitly disclose entering a sleep mode, by the acoustic sensor.
Bannoura teaches entering a sleep mode, by the acoustic sensor ([pg. 2-3], the acoustic wake-up receiver includes a low-power microcontroller that allows the receiver to operate in a low-power (sleep) mode).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara to include the sleep mode of Bannoura with a reasonable expectation of success, with the motivation of allowing sensors to enter a low-power state in order to reduce power consumption [abstract].
Claim(s) 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over O’Hara in view of Baumgartner et al. ("Persistent near real-time passive acoustic monitoring for baleen whales from a moored buoy: System description and evaluation," Methods in Ecology and Evolution 10.9 (2019): 1476-1489, “Baumgartner”).
Regarding claim 24, O’Hara discloses the method of claim 1. O’Hara may not explicitly disclose the acoustic sensor is coupled to a buoy.
Baumgartner teaches the acoustic sensor is coupled to a buoy ([pg. 2], a moored buoy was designed to detect the presence of baleen whales through the use of a passive acoustic monitoring instrument).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art of passive acoustic monitoring, before the effective filing date of the claimed invention, to modify the method of O’Hara to include the passive acoustic monitoring buoy of Baumgartner with a reasonable expectation of success, with the motivation of identifying the presence of baleen whales in the vicinity.
Allowable Subject Matter
Claims 9 and 22-23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, as well as overcoming any relevant 35 U.S.C. 112(a) rejections.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 9, O’Hara, as modified in view of Williams and Gillespie teaches the method of claim 8. O’Hara, as modified in view of Williams and Gillespie may not explicitly teach training the machine learning model, wherein the training is based on one or more underwater audio signals from the individual animal within the species of marine mammal (Gillespie teaches localization of an individual porpoise along a track over a period of time (See Fig. 8), as well as the identification and determination of porpoise clicks if the received signals had a peak frequency within a designated band [pg. 4], however none of O’Hara, Williams, Gillespie, nor any other identified prior art teach the required limitation of training the machine learning model based on signals received from the identified individual marine mammal).
Regarding claim 22, O’Hara, as modified in view of Bannoura teaches the method of claim 21. O’Hara, as modified in view of Bannoura may not explicitly teach waking, from the sleep mode, the acoustic sensor, wherein the waking is based on an acoustic pressure threshold of the underwater audio signal (Bannoura teaches an acoustic wake-up receiver that is woken up from a low power (sleep mode) via a wake up signal sent from a transmitter or speaker [pg. 2-3], however none of O’Hara, Bannoura, nor any other identified prior art teaches the required limitation of the waking-up of the receiver being based on an acoustic pressure threshold of the underwater audio signal).
Regarding claim 23, the claim is indicated as containing allowable subject matter due to its respective dependence upon a claim that has been indicated as containing allowable subject matter.
Conclusion
Prior art made of record though not relied upon in the present basis of rejection is noted in the attached PTO-892 and includes:
Salloum et al. (US 9651649 B1, “Salloum”), which discloses a system and method for passive acoustic detection, tracking, and classification.
Beatty et al. (US 11808570 B2, “Beatty”), which discloses a sensor and telemetry unit for passive acoustic monitoring in a sonobuoy application.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER RICHARD WALKER whose telephone number is (571)272-6136. The examiner can normally be reached Monday - Friday 7:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yuqing Xiao can be reached at 571-270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER RICHARD WALKER/Examiner, Art Unit 3645
/YUQING XIAO/Supervisory Patent Examiner, Art Unit 3645