Prosecution Insights
Last updated: April 19, 2026
Application No. 18/019,161

UNCONFIRMED SOUND EXTRACTION DEVICE, UNCONFIRMED SOUND EXTRACTION SYSTEM, UNCONFIRMED SOUND EXTRACTION METHOD, AND RECORDING MEDIUM

Final Rejection: §101, §103, §112
Filed: Feb 01, 2023
Examiner: SATANOVSKY, ALEXANDER
Art Unit: 2857
Tech Center: 2800 (Semiconductors & Electrical Systems)
Assignee: NEC Corporation
OA Round: 2 (Final)

Grant Probability: 56% (Moderate)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 75%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases; 265 granted / 472 resolved; -11.9% vs TC avg)
Interview Lift: +18.6% for resolved cases with interview (a strong lift, roughly +19%)
Typical Timeline: 4y 0m average prosecution; 53 currently pending
Career History: 525 total applications across all art units

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 3.2% (-36.8% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)

Tech Center averages are estimates; based on career data from 472 resolved cases.
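The headline figures above can be reproduced from the raw counts shown (265 granted of 472 resolved; a +18.6 percentage-point interview lift). A minimal sketch in Python; the exact formulas the dashboard uses are an assumption here, not documented:

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def rate_with_interview(base_rate: float, lift_points: float) -> float:
    """Add the interview lift (in percentage points) to the base rate."""
    return base_rate + lift_points

base = allowance_rate(265, 472)              # ~56.1%, displayed as 56%
projected = rate_with_interview(base, 18.6)  # ~74.7%, displayed as 75%
```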

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 112

Claim 26 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The limitation "outputting the unconfirmed sound information while withholding outputting at least one other portion of the sound data other than that of the unconfirmed sound information" is not described in the Specification.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 2, 4-16, 18, 19, 21, and 25-26 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
With regards to Claims 1 and 18, the limitation "an unconfirmed sound extractor configured to extract, from sound data being acquired by an optical fiber, the sound data being data relating to a sound at locations of the optical fiber, unconfirmed sound information based on determining that the unconfirmed sound information represents unconfirmed sound data being the sound data of the sound an occurrence cause of which is not estimated at a time and in the location of acquisition of the sound data" is indefinite. It is unclear how the extraction of the "unconfirmed sound information" could happen before the determination that the information is really "unconfirmed" ("based on determining that the unconfirmed sound information represents unconfirmed sound data"). For the purpose of compact prosecution, in this Office action the Examiner treated the extraction of "unconfirmed sound information" as being the result of data processing that is not able to match the sound-related data with any source-known sound data, following the disclosure as published ([0040]: "sound data which are not capable of being classified due to an unknown occurrence cause are referred to as 'unconfirmed sound data'").

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 2, 4-16, 18, 19, 21, and 25-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
With regards to representative Claim 1 (and, similarly, Claim 18), the claim recites:

An unconfirmed sound extraction device comprising: an unconfirmed sound extractor configured to extract, from sound data being acquired by an optical fiber, the sound data being data relating to a sound in each of locations of the optical fiber, unconfirmed sound information based on determining that the unconfirmed sound information represents unconfirmed sound data being the sound data of the sound an occurrence cause of which is not estimated at a time and in the location of acquisition of the sound data; and an output configured to output the unconfirmed sound information.

The limitations highlighted in bold above comprise a process (a statutory subject matter category, Step 1, MPEP 2106.03) that, under its broadest reasonable interpretation, falls into the abstract idea exceptions identified by the courts. Specifically, under the 2019 Revised Patent Subject Matter Eligibility Guidance, at Step 2A, Prong One, it falls into the grouping of subject matter that, when recited as such in a claim limitation, covers mental processes: concepts performed in the human mind, including an observation, evaluation, judgment, and/or opinion. These mental steps represent a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind.
For example, "the sound data being data relating to a sound in each of locations of the optical fiber, unconfirmed sound information based on determining that the unconfirmed sound information represents unconfirmed sound data being the sound data of the sound an occurrence cause of which is not estimated at a time and in the location of acquisition of the sound data", in the context of this claim, encompasses the user manually/mentally evaluating obtained sound data at each location and making a corresponding judgment that the cause and type of sound at that location of acquisition is unknown ("unconfirmed") and not similar to any known sound event ("information") that a user is familiar with or able to recognize.

The Examiner concluded that this judicial exception is not integrated into a practical application (Step 2A analysis under Prong Two) based on the following analysis. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. However, in the above claims there are no additional elements to integrate the abstract idea into a practical application, and they do not impose any meaningful limits on practicing the abstract idea. For example, the steps of acquiring sound data by an optical fiber and providing an output configured to output the unconfirmed sound information are examples of insignificant extra-solution activity that does not meaningfully limit the exception. Both of these steps are generically recited and not meaningful. The sound data acquisition step corresponds to mere data gathering; all uses of the judicial exception require such data to execute the abstract idea (MPEP 2106.05(g)).
Accordingly, these additional elements do not integrate the abstract idea into a practical application by imposing any meaningful limits on practicing the abstract idea. Therefore, the above claims are directed to an abstract idea.

Further, under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. For example, acquiring sound data from a fiber optic cable is a well-understood and purely conventional or routine step in the relevant art (please see the prior art of record). According to MPEP 2106.05(d): "If, however, the additional element (or combination of elements) is no more than well-understood, routine, conventional activities previously known to the industry, which is recited at a high level of generality, then this consideration does not favor eligibility". Therefore, the claims are not patent eligible.

With regards to the dependent claims 2, 4-16, 19, 21, 25, and 26, the claims are not patent eligible because they do not transform the abstract idea into a patent-eligible application of the abstract idea. These dependent claims either merely extend the abstract idea of the independent claim(s) or recite further additional elements that do not meaningfully limit the abstract idea, as explained above, and therefore comprise no additional elements that integrate the abstract idea into a practical application (Step 2A) and/or do not include additional elements that are sufficient to amount to significantly more than the judicial exception (Step 2B), for substantially the same reasons as discussed above with regards to Claim 1. For example, the new claim 25 recites further additional elements such as an "optical coupler coupled to the optical fiber and configured to receive the sound data from the optical fiber" and "wherein the optical fiber is a submarine cable that is at least partly underwater, and the each of locations of the optical fiber are underwater".
These additional elements are not meaningful as recited, being stated in generality, and do not qualify as significantly more because they are conventional/well-understood, as evidenced by the prior art of record (Hodgins and Kawazawa). They do not demonstrate a practical application because these elements are only tangentially related to the identified abstract idea (MPEP 2106.05(g): the term "extra-solution activity" can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim). The new claim 26 recites "extracting the unconfirmed sound information comprises determining that the unconfirmed sound information is not a known sound indicated as predetermined by the unconfirmed sound extraction device", which is an abstract idea (a mental step), while the outputting step "outputting the unconfirmed sound information…" corresponds to insignificant extra-solution activity.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4-16, 18, 19, 21, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Mark Andrew Englund (US 20200191613), hereinafter 'Englund', in view of KOJIMA TAKASHI (WO 2020240724), hereinafter 'Takashi'.

With regards to Claim 1, Englund discloses an unconfirmed sound extraction device (a system 100 for use in distributed acoustic sensing (DAS) [0066]) comprising: an unconfirmed sound extractor configured to extract, from sound data being acquired by an optical fiber, the sound data being data relating to a sound at locations of the optical fiber (an optical signal detector [0031]; returning optical signals scattered in a distributed manner over distance along the one or more of optical fibres, the scattering influenced by acoustic disturbances caused by the multiple targets within the observation period; demodulating acoustic data from the optical signals; processing the acoustic data and classifying it in accordance with the target classes or types to generate a plurality of datasets including classification, temporal and location-related data; and storing the datasets in parallel with raw acoustic data which is time and location stamped [0008]; an optical signal detector arrangement for receiving, during an observation period following each of the multiple instants, returning optical signals scattered in a distributed manner over distance along the one or more of optical fibres, the scattering influenced by acoustic disturbances caused by the multiple targets within the observation period [0046]; Steps 206 and 209, Fig. 2A), unconfirmed sound information based on determining that the unconfirmed sound information represents unconfirmed sound data being the sound data of the sound an occurrence cause of which is not estimated at a time and in the location of acquisition of the sound data (returning optical signals scattered in a distributed manner over distance along the one or more of optical fibres, the scattering influenced by acoustic disturbances caused by the multiple targets within the observation period; demodulating acoustic data from the optical signals [0008]; The sound producing targets may include sound producing objects, sound producing events or combinations of sound producing objects and events [0016]; At step 209, raw or unfiltered acoustic data is fed in parallel from demodulation step 206 and stored in the storage unit 215, which may include cloud-based storage 215A. It is similarly time and location stamped, so that it can be retrieved at a later stage to be matched at 213 with symbols stored in a digital symbol index database for allowing additional detail to be extracted where possible to supplement the symbol data [0075]; Figs. 1 and 2A; classification of objects is fed back as correct or incorrect based on other means of detecting the objects (i.e. video and machine vision) … At 210.3 if the comparison is correct the resultant correctly classified symbol is stored in the digital symbol index database at 210.4. If not the classification process is repeated until the image of the object/event and the sound image/event match [0107], i.e. when the comparison is incorrect, the sound data of the target is determined to represent "unconfirmed sound data", emphasis added), and an output configured to output the unconfirmed sound information (output from the reflectometer 102 to the processing unit, Fig. 1).

However, Englund does not specifically disclose the sound data being data relating to a sound in each of locations of the optical fiber.
Takashi discloses relating the sound data to a sound in each of the locations of the optical fiber (the specific unit 23 compares the intensity of the sound detected at the position corresponding to the distance for each distance of the optical fiber 10 from the conversion unit 21, and based on the comparison result, the position where the sound is generated, p.3). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Englund in view of Takashi to relate the sound data to a sound in each of the locations of the optical fiber to improve identification of the sound source/occurrence cause of the sound (the specifying unit 23 can identify the type of sound source of the sound that is the source of the acoustic data by analyzing the dynamic change of the pattern of the acoustic data, Takashi, p.4).

With regards to Claim 2, Englund further discloses that the unconfirmed sound extractor is configured to perform collation with a previously-stored classification condition (a classification condition is present with a strong correlation in the form in FIG. 7. The known sound classification unit 124 executes the determination processing, for example, by calculating a general mutual correlation coefficient [0098]; The unconfirmed sound information processing unit 120 previously stores a classification condition for finding and classifying a known sound from RAW data being input from the acquisition processing unit 101. A classification condition includes, as a detection condition, a feature unique to a known sound [0039]; The sound of a type of interest to a monitoring person is stored in a known sound detection information storage unit 136 [0044]; The data stored in the unconfirmed sound detection information storage unit 137 and the known sound detection information storage unit 136 are transmitted to the output processing unit 125 and are output [0045]), and extracting, as the unconfirmed sound data, the sound data not relevant to a sound of a known type (a data portion not having a possibility of a peculiar sound is excluded and a total data amount is decreased, and thereby a load on the following data processing is reduced [0050]).

With regards to Claim 3, Englund further discloses outputting, from sound data relevant to a sound of the known type, sound data of a previously-determined type together with the type (further processing and matched with the corresponding datasets to provide both real time and historic data [0008]; The surveillance data can relate to real-time acoustic data for monitoring targets. Alternatively or additionally, the surveillance data relates to historic acoustic data for later retrieval and searching. In general, "targets" include any acoustic objects that vibrate and therefore generate detectable acoustic signals, such as vehicles (generating tyre/engine noise), pedestrians (generating footsteps), trains (generating rail track noise), building operations (generating operating noise), and road, track or infrastructure works (generating operating noise). They also include events caused by targets, such as car crashes, gunshots caused by a handgun or an explosion caused by explosives (generating high-pressure sound waves and reverberation) [0061]).
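The collation step quoted above for Claim 2 (matching incoming sound data against previously stored classification conditions, e.g. by "calculating a general mutual correlation coefficient", and treating non-matching data as unconfirmed) can be sketched roughly as follows. This is an illustrative assumption of how such a matcher might look, not code from either reference; the template name and the 0.8 threshold are invented for the example.

```python
import numpy as np

def is_unconfirmed(segment, known_templates, threshold=0.8):
    """Treat a sound segment as 'unconfirmed sound data' when it correlates
    with no stored known-sound template above the threshold (illustrative)."""
    for name, template in known_templates.items():
        # Zero-lag normalized correlation coefficient against the stored condition.
        r = np.corrcoef(segment, template)[0, 1]
        if r >= threshold:
            return False  # matched the known sound type `name`
    return True           # no classification condition matched: unconfirmed

# Hypothetical data: one stored 5 Hz template, probed with two segments.
t = np.linspace(0.0, 1.0, 256)
templates = {"footstep": np.sin(2 * np.pi * 5 * t)}
print(is_unconfirmed(np.sin(2 * np.pi * 5 * t), templates))   # prints False
print(is_unconfirmed(np.cos(2 * np.pi * 40 * t), templates))  # prints True
```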
With regards to Claim 4, Englund further discloses that whether the sound data are relevant to a sound of the known type is determined in the unconfirmed sound extractor based on analogy determination via collation with a previously-stored classification condition, using one or more features as a key (time and location stamped so that it can be retrieved for further processing and matched with the corresponding datasets to provide both real time and historic data [0008]; FIG. 2A can include a number of training sub-steps in which sound objects and events that have been classified at 210.1 are compared with object/event images at 210.2. At 210.3 if the comparison is correct the resultant correctly classified symbol is stored in the digital symbol index database at 210.4. If not the classification process is repeated until the image of the object/event and the sound image/event match [0107]).

With regards to Claims 5 and 6, Englund further discloses classifying types of signals (i.e., "relevance determination for a sound of the known type") using the frequency domain (The series of different software-based correlation filters 14A-14D is provided for each classification type above (each correlation filter is tuned to particular characteristics in the acoustic time series and acoustic frequency domain) and once the output of one of these software based filters reaches a threshold, a detection and classification event is triggered in the system [0077]) as well as temporal characteristics (acoustic method of providing spatial and temporal classification of a range of different types of sound producing targets [0008]), while performing the relevance determination includes at least any one of a frequency, a temporal change of a frequency, and a temporal change of an intensity envelope with respect to a sound (the datasets raw acoustic data which is time and location stamped so that it can be retrieved for further processing and matched with the corresponding datasets [0012]).
Englund also discloses using a plurality of (temporal) bins in the selection of signals [0065]. However, Englund does not specifically disclose that relevance determination for a sound of the known type in the unconfirmed sound extractor is performed after the sound data are divided into a plurality of frequency bands. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Englund in view of Takashi to divide the sound data into a plurality of bins when determining signal relevance, because different sound sources are characterized by different frequency bands, similarly to the use of different time-period bins for temporal classification as discussed above.

With regards to Claim 7, Englund further discloses that the unconfirmed sound extractor is configured to discriminate a sound emitted from a same sound source, from among sounds detected in a plurality of the locations of the optical fiber (the optical data including temporal and location related data; a communications interface and processor for receiving a search request including temporal and location related filters or parameters, and retrieving the optical data based on said parameters for processing it into acoustic data [0046]; A fairly broad pedestrian detection filter may be applied to efficiently locate all pedestrians within an area and then a much more specific set of filters could be applied to classify foot wear type (sole type—rubber, leather, metal), gait of walk by ratio'ing number of steps for given distance along a path to estimate height of person, speed of walk, estimated weight of person from low frequency pressure amplitudes generated by footsteps on pavement. As previously noted these filters are generally initially applied to the acoustic data at the time of collection, so as to enable the storage of symbols representative of object and activity type, though for higher resolution raw acoustic or optical data may be retrieved and reprocessed [0102]; In this way relevant segments of the stored optical data may be extracted and processed in a targeted way, covering areas of interest or those requiring additional coverage by virtue of their location away from the installed fibre optic cable [0100]).

With regards to Claim 8, Englund further discloses that the unconfirmed sound extractor is configured to monitor, by increasing sensitivity in a predetermined direction, sounds detected in a plurality of the locations of the optical fiber, the sounds being used as sensor array output (These beamforming techniques may result in several intersecting narrow scanning beams that may yield direction of the acoustic source and its location relative to the fibre-optic sensing cable in two or three dimensions in order to selectively monitor different zones in the acoustic field with improved array gain range and enhanced detection capabilities [0072]; This plurality of beams may have different spatial positions (i.e. which subset of sensors from the total sensor array are selected corresponding to a different geographical location in the system), angular orientation (which angle or angles relative to the local length axis of the fiber) and/or directivity (aspect ratio of the sensing beams—i.e. how sharp or obtuse are the beam spatial shapes) properties around the system to achieve higher level sensing functions in the system that include long range detection, localization, classification and tracking of acoustic sources in a 2D or 3D coordinate system [0097]; FIGS. 2E and 8 illustrate how stored optical data may be effectively used to generate phased array sensing beams to locate a target/sound source 800 which is spaced from a fibre optic cable 802 [0098]).

With regards to Claim 9, Englund further discloses that the optical fiber is included in an optical cable (the fibre-optic sensing cable [0072]).

With regards to Claims 10 and 11, Englund further discloses that the unconfirmed sound extractor is configured to execute, based on information of an installation construction method relevant to installation of the optical cable, processing of reducing, from the sound data, an influence on sensitivity due to a difference in the installation construction method, and configured to execute, based on information representing a cable type of the optical cable, processing of reducing, from the sound data, an influence on sensitivity due to a difference in the cable type (Alternatively or additionally, as illustrated in FIG. 4B, the optical fibres (405F to 405H) can be installed with zig-zag patterns to provide spatial resolution with fewer but longer optical fibres. In general, the disclosed system and method is expected to achieve about 10 metre resolution or better. This can be achieved by virtue of an existing fibre infrastructure covering most major roads in a city in a first deployment step. As a second step fibre will be deployed at a more granular level over most streets and roads in a city so as to achieve comprehensive coverage in the form of a 2D grid, again with acoustic channels every 10 m on every street and road [0088]).

With regards to Claim 12, Englund discloses the claim limitations as discussed above with regards to Claims 10 and 11.
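The phased-array beamforming Englund describes for Claim 8 (selecting a subset of fibre sensing channels and steering beams toward a direction) is, at its core, delay-and-sum processing. A minimal sketch under invented parameters; the channel count, sample rate, tone frequency, and integer-sample delays are all illustrative assumptions, not taken from the reference:

```python
import numpy as np

def delay_and_sum(channels, delays_s, fs):
    """Steer a sensor array: advance each channel by its arrival delay, then average.

    channels : (n_sensors, n_samples) array of acoustic data from fibre locations
    delays_s : per-sensor arrival delays in seconds for the look direction
    fs       : sample rate in Hz
    """
    out = np.zeros(channels.shape[1])
    for ch, d in zip(channels, delays_s):
        shift = int(round(d * fs))
        out += np.roll(ch, -shift)  # advance; np.roll wraps, fine for steady tones
    return out / len(channels)

# Illustrative: 4 fibre channels see a 100 Hz tone, each delayed one more sample.
fs, n = 8000.0, 1024
t = np.arange(n) / fs
channels = np.stack([np.sin(2 * np.pi * 100 * (t - k / fs)) for k in range(4)])
steered = delay_and_sum(channels, np.array([0, 1, 2, 3]) / fs, fs)
# With matching delays the channels add coherently (array gain toward the source).
```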
Englund further discloses using a reference sound propagating in a wide range of the optical cable, and a degree of a difference in the sound data due to the location where the sound data are acquired (Reference to acoustic data in this disclosure should be read as including any propagating wave or signal that imparts a detectable change in the optical properties of the sensing optical fibre. These propagating signals detected in the system may include signal types in addition to acoustics such as seismic waves, vibrations, and slowly varying signals that induce for example localized strain changes in the optical fibre [0064]; Each fanned out optical fibre can extend into two or more optical fibres to increase spatial resolution as the optical fibres fan further out [0088]) and, based on information of the degree of the difference, changing a difference in sensitivity due to the location where the sound data are acquired, or selecting a location for acquiring the sound data (This plurality of beams may have different spatial positions (i.e. which subset of sensors from the total sensor array are selected corresponding to a different geographical location in the system), angular orientation (which angle or angles relative to the local length axis of the fiber) and/or directivity (aspect ratio of the sensing beams—i.e. how sharp or obtuse are the beam spatial shapes) properties around the system to achieve higher level sensing functions in the system [0097]; This configuration permits determination of an acoustic signal (amplitude, frequency and phase) at every distance along the fibre-optic sensing cable 205 [0118]).

However, Englund does not specifically disclose reducing, from the sound data, a difference in sensitivity due to the location where the sound data are acquired, or selecting a location for acquiring the sound data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Englund in view of Takashi to reduce, from the sound data, a difference in sensitivity due to the location where the sound data are acquired, or to select a location for acquiring the sound data, similarly to increasing sensitivity (increase spatial resolution [0088]) due to the location where the sound data are acquired when the location is further away, because of the benefits of lower sensitivity while still providing sufficient sensitivity (resolution) corresponding to such a location (the lower resolution of acoustic data has considerable advantages in terms of bandwidth and storage requirements [0128]).

With regards to Claim 13, Englund discloses that an optical fiber core wire is divided or a wavelength is divided, whereby the optical cable is shared with another application (The one or more unused spectral channels may include wavelengths outside the wavelength range used in the optical fibres for communications purposes. For example, if all optical fibres in the fibre-optic bundle are lit, and the communications wavelengths in the optical fibres span the C band (between approximately 1530 nm and approximately 1563 nm) and the L band (between approximately 1575 nm and approximately 1610 nm) for communications purposes, one or more unused wavelengths outside the C band or the L band may be utilized for obtaining surveillance information according to the present disclosure. The particular selection of the one or more unused wavelengths may be based on the gain spectrum of any existing erbium-doped fibre amplifiers (EDFAs) deployed in the communications network for extending its reach. Where existing EDFAs are deployed, selecting the one or more unused wavelengths from discrete wavelengths at 1525 nm, 1569 nm and 1615 nm (i.e. just outside the C and L bands) enables amplification without the need for additional EDFAs to extend the reach of interrogation signals. In another arrangement, the network may include a dedicated network for acoustic sensing purposes, operating in conjunction with an established network for fibre-optic communications, to extend the reach of acoustic sensing. The major advantage of using an existing communications network is that no dedicated cables have to be deployed at an additional and very significant cost [0085]; It will be appreciated that the fibre optic network may be made up of a number of different fibre optic cables in which case segments from different cables may be "stitched" together to create a number of virtual dedicated sensing and monitoring networks for each of a number of entities in a typically urban environment where there is a high density of installed fibre optic cable [0114]).

With regards to Claim 14, Englund discloses the claim limitations as discussed above with regards to Claims 8 and 9. Englund additionally discloses the fibre-optic sensing cable 205 [0118].

With regards to Claim 15, Englund discloses that the optical fiber sensing is distributed acoustic sensing (the presently disclosed system and method of distributed acoustic sensing may be used with phased array processing and beam forming techniques [0118]).

With regards to Claim 16, Englund discloses an acquisition processor configured to acquire the sound data by the optical fiber and transmit the acquired sound data to the unconfirmed sound extractor (Fig. 1, processing unit 114; Fig. 2A, Steps 204-206).

With regards to Claims 18 and 19, Englund discloses the claimed limitations as discussed above with regards to Claim 1.
In addition, Englund discloses a recording medium recording an unconfirmed sound extraction program causing a computer to execute a program (The disclosure extends to a computer readable storage medium storing one or more programs, the one or more programs comprising instructions [0030]).

With regards to Claim 20, Englund discloses using the sound data of the sound the occurrence cause of which is classified based on at least any one of the location, the time, and a frequency of the sound (At step 228 stored raw optical data is retrieved from cloud storage using time and location filters [0099]; a particular FIR filter may be used to enhance the frequency components associated with footsteps (e.g. 2-10 Hz), initially focusing only at the shop location [0105]). However, Englund is silent on excluding the sound data of the sound the occurrence cause of which is classified based on at least any one of the location, the time, and a frequency of the sound. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Englund in view of Takashi, while selecting useful/relevant data, to exclude the (remaining) sound data of the sound the occurrence cause of which is classified based on at least any one of the location, the time, and a frequency of the sound, in order to reduce the processing load.

With regards to Claim 21, Englund discloses correcting sound data (provide new acoustic data that can change beamforming performance by adjusting channel spacing and frequency range, for example [0076]). However, Englund does not specifically disclose correcting the sound data based on correction sound data being data relating to a sound separately acquired.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Englund in view of Takashi to correct the sound data, based on correction sound data being data relating to a sound separately acquired such as, for example, a background noise separately acquired (very faint acoustic signatures amongst high noise backgrounds [0106]) or noise from other sources that affect sound of interest as known in the art (the scattering influenced by acoustic disturbances caused by the multiple targets within the observation period [0046]). With regards to Claim 26, Englund discloses extracting the unconfirmed sound information comprises determining that the unconfirmed sound information is not a known sound indicated as predetermined by the unconfirmed sound extraction device and outputting the unconfirmed sound information as discussed in Claim 1. However, Englund does not specifically disclose outputting the unconfirmed sound information while withholding outputting at least one other portion of the sound data other than that of the unconfirmed sound information. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Englund in view of Takashi to output the unconfirmed sound information while withholding outputting at least one other portion of the sound data other than that of the unconfirmed sound information as information not of interest of the invention to save on computer resources/reduce processing load. Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Englund in view of TAKASHI, and further in view of Toshio Kawazawa et al. (US 6377373), hereinafter ‘Kawazawa’. Englund discloses optical fiber configured to receive the sound data from the optical fiber as discussed in Claim 1. 
However, Englund does not specifically disclose an optical coupler coupled to the optical fiber, wherein the optical fiber is a submarine cable that is at least partly underwater, and the each of locations of the optical fiber are underwater. Kawazawa discloses an optical coupler coupled to the optical fiber, wherein the optical fiber is a submarine cable that is at least partly underwater, and the each of locations of the optical fiber are underwater (The upward optical fiber 52c of the optical fiber cable 52 is connected to the input port X0 of the optical coupler 58L, Col.6, Lines 20-21; There are two different optical submarine cable systems for connecting branch stations to trunk lines, namely, the simple double landing system serially connecting main stations, and branch stations and the simple underwater branching system providing one branching apparatus for each branch station in a trunk line, Col.1, Lines 11-16; Fig.34). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Englund in view of Takashi, and Kawazawa to employ an optical coupler coupled to the optical fiber, wherein the optical fiber is a submarine cable that is at least partly underwater, and the each of locations of the optical fiber are underwater as known in the art, while using optical fiber to receive the sound data from the optical fiber as discussed in Englund above. Response to Arguments Applicant's arguments filed 8/12/2025 have been fully considered but they are not persuasive. 35 U.S.C. 101 The Applicant argues (p.9-10): …nothing in the specification suggests that any of those claim features would ever be performed in the human mind, and so, the rejection's assertions exceeded a BRI as the rejection's "mental process" assertions are not consistent with the specification.
It is also questionable whether one of ordinary skill in the art would have ever actually thought that the claim represents anything that might be done in the human mind, which is mentioned in reference to the other requirements that a BRI must conform to according to MPEP 2111 above, but since at least it is clear that the rejection's "mental process" assertions are inconsistent with the specification, those assertions of the rejection should be withdrawn. The Examiner submits that according to the 2019 Guidance on SME: “If a claim, under its broadest reasonable interpretation, covers performance in the mind but for the recitation of generic computer components, then it is still in the mental processes category unless the claim cannot practically be performed in the mind. See Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318 (Fed. Cir. 2016) (‘‘[W]ith the exception of generic computer implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper.’’); Mortg. Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d. 1314, 1324 (Fed. Cir. 2016) (holding that computer-implemented method for ‘‘anonymous loan shopping’’ was an abstract idea because it could be ‘‘performed by humans without a computer’’); Versata Dev. Grp. v. SAP Am., Inc., 793 F.3d 1306, 1335 (Fed. Cir. 2015) (‘‘Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind.’’).” As explained in the rejection, a human being is capable of evaluating/identifying sound and determining that he/she is unfamiliar with a portion of it based on sound characteristics such as frequency, temporal characteristic, loudness, etc., which will make that sound “unconfirmed” to a person.
The Examiner additionally submits that according to MPEP 2106.07(a): “For Step 2A Prong One, the rejection should identify the judicial exception by referring to what is recited (i.e., set forth or described) in the claim and explain why it is considered an exception”. The Examiner followed this guidance by relying on the claim language and not the Specification as argued. The Applicant argues (p.9-12): It is also requested that the rejection be withdrawn as the claims represent an improvement according to MPEP 2106.04(d)(1) …In that light, the background of Applicant's Specification, such as originally-filed paragraphs [0002]-[0011], describes technical problems in monitoring sounds that might not produce sufficient vibration relative to a sensor to be detected and that those problems may be alleviated and improved by using a fiber optic detector … viewing Applicant's Fig. 3 and related descriptions and figures, additional technical problems of handling "sound of unknown cause" are also reflected as improved by claim features which regard extracting and outputting "unconfirmed sound information" like in claim 1 and the other independent claims … And besides the independent claims, the dependent claims 10 and 11 also recite "processing of reducing, from the sound data, an influence on sensitivity" features which clearly reflect additional "Improvements .. The Examiner respectfully disagrees. The 2/1/2023 claims did not recite the features used in the argument above. The steps of “extracting” and outputting “unconfirmed sound information” do not qualify as an improvement to technology, as these additional elements are recited at a high level of generality and represent insignificant extra-solution activity as explained in the rejection. The Examiner also submits that while the Specification discusses (in the background art section) difficulties of detecting abnormal events, the claims did not recite any technological features besides the fiber-optic cable.
The invention is directed to “ease monitoring of a sound an occurrence cause of which is an event having a small appearance frequency” (instant application [0011]; “observation performance can be improved” [0070]) and the claims do not recite any technical features as to how to “handle” the argued/stated problem of abnormality detection. The improvement, therefore, is in the abstract idea only, which is not a qualified improvement according to MPEP 2106.05(a).II: “it is important to keep in mind that an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology … the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology”. Claims 10 and 11 do not recite meaningful additional elements to indicate the alleged improvement. The Applicant requests (p.13): In the event a rejection is made, it is a best practice for the examiner to consult the specification to determine if there are elements that could be added to the claim to make it eligible. If so, the examiner should identify those elements in the Office action and suggest them as a way to overcome the rejection. As a quick note, if the features of Claim 25 were further modified to reflect how these limitations relate/change/act upon analyzing unconfirmed sound and/or help address the “technical problems of handling ‘sound of unknown cause’” as argued above, it may lead to a practical application, subject to Specification disclosure of the future claimed features. 35 U.S.C.
103 The Applicant requests (p.14): viewing those cited portions of the rejection's references and viewing those references overall, the references do not reasonably suggest the claimed "determining that the unconfirmed sound information represents unconfirmed sound data being the sound data of the sound an occurrence cause of which is not estimated at a time and in the location of acquisition of the sound data" and therefore also do not reasonably suggest the claimed "an unconfirmed sound extractor configured to extract ... unconfirmed sound information based on" those features. The Examiner notes that no specific arguments are presented to support this opinion. The Examiner submits that the newly amended limitation is now addressed in this office action. The Examiner additionally submits that Englund discloses “determining that the unconfirmed sound information represents unconfirmed sound data being the sound data of the sound an occurrence cause of which is not estimated at a time and in the location of acquisition of the sound data” by obtaining raw sound data from multiple targets that are inherently unknown/unclassified at the time and location of data acquisition, such as from optical cables installed underground, Englund [0090] (“At step 209, raw or unfiltered acoustic data is fed … It is similarly time and location stamped” [0075]). Additionally/alternatively, the “unconfirmed sound information” may be represented by data that are being processed but not correctly recognized, as discussed in [0107] and Fig. 2B, as data that are “incorrect” (not fitting any known data). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. FARHADIROUSHAN MAHMOUD et al. (CA 2838433) discloses a method and a system in which acoustic signals received by distributed acoustic sensors are processed in order to determine the position of a source or sources of the acoustic signals.
A distributed optical fibre sensor acts like a string of discrete acoustic sensors. Martin G. Hodgins et al. (US 4436365) disclose “optical coupler coupled to the optical fiber and configured to receive the sound data from the optical fiber” and “wherein the optical fiber is a submarine cable that is at least partly underwater, and the each of locations of the optical fiber are underwater.” Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER SATANOVSKY whose telephone number is (571)270-5819. The examiner can normally be reached on M-F: 9 am-5 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine Rastovski, can be reached on (571) 270-0349. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALEXANDER SATANOVSKY/ Primary Examiner, Art Unit 2863

Prosecution Timeline

Feb 01, 2023: Application Filed
May 07, 2025: Non-Final Rejection — §101, §103, §112
Aug 12, 2025: Response Filed
Aug 22, 2025: Final Rejection — §101, §103, §112
Nov 20, 2025: Applicant Interview (Telephonic)
Nov 20, 2025: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596202: DIRECT WELL-TIE METHOD FOR DEPTH-DOMAIN LOGGING AND SEISMIC DATA (granted Apr 07, 2026; 2y 5m to grant)
Patent 12590830: SYSTEMS AND METHODS FOR SCALE CALIBRATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12580079: PATIENT INVARIANT MODEL FOR FREEZING OF GAIT DETECTION BASED ON EMPIRICAL WAVELET DECOMPOSITION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12578477: METHOD FOR PROCESSING TELEMETRY DATA FOR ESTIMATING A WIND SPEED (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566276: METHOD FOR DETERMINING WIND SPEED COMPONENTS BY MEANS OF A LASER REMOTE SENSOR (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 56%
With Interview: 75% (+18.6%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 472 resolved cases by this examiner. Grant probability derived from career allow rate.
