Prosecution Insights
Last updated: April 19, 2026
Application No. 18/055,429

DYNAMIC CONTEXT-BASED UNMANNED AERIAL VEHICLE AUDIO GENERATION ADJUSTMENT

Non-Final OA (§102, §103)
Filed: Nov 15, 2022
Examiner: COOLEY, CHASE LITTLEJOHN
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 67% (above average; 116 granted / 173 resolved; +15.1% vs TC avg)
Interview Lift: +20.4% (strong) across resolved cases with an interview
Avg Prosecution: 3y 1m typical timeline; 46 applications currently pending
Total Applications: 219 across all art units (career history)

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 173 resolved cases.

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 of US Application No. 18/055,429, filed on 11/15/2022, are currently pending and have been examined.

Information Disclosure Statement

The Information Disclosure Statement filed on 11/15/2022 has been considered. An initialed copy of form 1449 is enclosed herewith.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 8, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Beaurepaire et al. (US 2021/0012669 A1, "Beaurepaire").

Regarding claims 1, 8, and 15, Beaurepaire discloses a method and apparatus for routing an aerial vehicle based on a relative noise impact and teaches:

A computer system, the computer system comprising: (FIG. 9 illustrates a computer system 900 upon which an embodiment of the invention may be implemented – See at least ¶ [0099])

one or more processors, (A processor 902 performs a set of operations on information as specified by computer program code related to providing environmental noise source mapping and noise driven routing – See at least ¶ [0101])

one or more computer-readable memories, one or more computer-readable tangible storage media, and program instructions stored on at least one of the one or more tangible storage media for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising: (Computer system 900 also includes a memory 904 coupled to bus 910. The memory 904, such as a random access memory (RAM) or other dynamic storage device, stores information including processor instructions for providing environmental noise source mapping and noise driven routing. Dynamic memory allows information stored therein to be changed by the computer system 900. RAM allows a unit of information stored at a location called a memory address to be stored and retrieved independently of information at neighboring addresses. The memory 904 is also used by the processor 902 to store temporary values during execution of processor instructions. The computer system 900 also includes a read only memory (ROM) 906 or other static storage device coupled to the bus 910 for storing static information, including instructions, that is not changed by the computer system 900 – See at least ¶ [0102])

capturing contextual information of an environment surrounding an unmanned aerial vehicle (UAV); (More specifically, in one embodiment, the system 100 (e.g., via an aerial vehicle routing platform 111) creates digital noise map data (e.g., stored in a geographic database 113) of ground level noise sources 107. In one embodiment, the system 100 deploys on-site microphones at different areas to collect ground level noise data. FIG. 2A is a diagram illustrating example ground noise sources in a noise map 200 that an aerial vehicle flies over to reach a destination 201, according to one embodiment. The system 100 analyzes the noise data collected by the microphones and determines four clusters/groups of ground level noise sources 203a-203d in the area where the destination 201 is located – See at least ¶ [0030])

generating an environmental model using a cluster of machine learning techniques (The above-discussed embodiments combine different technologies (sensors, 3D routing, ground noise mapping, altitude computations, real-time aerial vehicle noise modelling, aerial vehicle positioning capabilities, probability computation, risk computation, machine learning, big data analysis, etc.) to provide least noise impact routing recommendations via a noise map – See at least ¶ [0067])

based on the captured contextual information; (In one embodiment (e.g., in step 401), the mapping module 301 retrieves environmental noise map data for a geographic area. The environmental noise map data indicates existing noise levels measured in the geographic area. It is contemplated that the mapping module 301 can use any means or data source for existing noise levels in the geographic area. Many sources can be used to estimate/report/model environmental noise levels on the ground, such as road network categorization, live traffic data, people density, microphone measurements, events calendar, etc., based on a time of the day, a day of the week, and/or seasonality. In one embodiment, the mapping module 301 retrieves environmental noise data and/or environmental noise maps compiled in existing databases, such as transportation noise map data from databases of aviation authorities and/or the highway authorities, etc. – See at least ¶ [0036])

identifying one or more dominant sounds within a soundscape of the captured contextual information; (FIG. 2B is a diagram illustrating example ground noise sources in a noise map 220, according to one embodiment. FIG. 2B depicts contours/isolines 221a-221d of different noise levels, such as 80, 75, 70, 65 dB at a specific time point, corresponding to the clusters/groups of ground level noise sources 203a-203d in FIG. 2A – See at least ¶ [0031])

calculating an impact of an operation of the UAV on one or more activities within the environment based on the generated environmental model and the one or more identified dominant sounds; and (In step 405, the routing module 305 generates a route for the aerial vehicle over the geographic area based on a relative noise impact of the aerial vehicle while operating over the geographic area. It is contemplated that the relative noise impact is computed based on the vehicle noise characteristic relative to the existing noise levels of the environmental noise map data for portions of the geographic area under the route of the aerial vehicle – See at least ¶ [0046])

in response to determining the calculated impact affects an activity within the one or more activities, modifying the operation to minimize the impact on the soundscape. (In one embodiment, the routing module 305 minimizes the relative noise impact by generating the route to fly over the portions of the geographic area where the existing noise levels are greater than the vehicle noise characteristic by a threshold value. In another embodiment, the routing module 305 minimizes the relative noise impact by generating the route to fly at an altitude at which the existing noise levels are greater than the vehicle noise characteristic as heard at the ground level by a threshold value – See at least ¶ [0047])

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 5, 7, 9, 12, 14, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Beaurepaire, as applied to claims 1, 8, and 15, and in further view of Pratt et al. (US 2018/0284773 A1, "Pratt").

Regarding claims 2, 9, and 16, Beaurepaire does not explicitly teach registering a UAV to a central repository; evaluating soundwaves generated by the UAV using an audible pitch monitoring apparatus; or generating an acoustics model for the UAV based on the evaluated soundwaves. However, Pratt discloses an acoustic monitoring system and teaches:

registering a UAV to a central repository; (In another example, the acoustic repository of acoustic profiles may be stored locally on the storage system 214 of the drone 200 and/or the storage system 314 of the drone docking station 300 to provide a local acoustic repository, and/or stored remotely and managed at the service platform 130/400 to provide a remote acoustic repository… The portion of the local acoustic repository stored in cache may include acoustic profiles that are frequently used and/or have a priority over other acoustic profiles. For example, the drone 200 may store acoustic profiles associated with itself in the cache so as to ignore acoustic energy generated by its propellers, engines, and the like – See at least ¶ [0064]. Here, the system keeps an acoustic profile of the UAV in a repository; this record acts as a registration of the UAV in the repository. The repository may be local or remote, i.e., a central repository.)

evaluating soundwaves generated by the UAV using an audible pitch monitoring apparatus; (An acoustic profile may be a digital summary of an audio signal such as an acoustic fingerprint that can be used to identify an audio sample of the audio signal. The acoustic profile may include feature vectors that define characteristics of an audio signal such as an average zero-crossing rate, average spectrum prominent tones across a set of frequency bands, estimated tempo, spectral flatness, bandwidth, and/or other audio signal features suitable for identifying audio signals. Each acoustic profile may be associated with an apparent source identifier that identifies an apparent source that provides the acoustic profile. The acoustic profile may also be configured such that any audio compression and/or encoding techniques (e.g., AAC, MP3, WMA, Vorbis, and other audio compression and/or encoding techniques) performed on the audio signal allow the acoustic analysis engine 206/306 to identify the audio signal based on the acoustic profile – See at least ¶ [0062]. Pitch is equivalent to frequency; thus the system uses a pitch monitoring device to determine frequency bands of the profile, e.g., the UAV profile.)

generating an acoustics model for the UAV based on the evaluated soundwaves. (For example, the drone 200 may store acoustic profiles associated with itself in the cache so as to ignore acoustic energy generated by its propellers, engines, and the like – See at least ¶ [0064])

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the method and apparatus for routing an aerial vehicle based on a relative noise impact of Beaurepaire to provide for an acoustic monitoring system, as taught in Pratt, to provide a lightweight and energy efficient drone, enhanced autonomous controls, reduction in response time to initiate an action or alert, and controls for use in low visibility situations when compared to drones that have autonomous capabilities based on visual data alone. (At Pratt ¶ [0005])

Regarding claims 5, 12, and 19, Beaurepaire does not explicitly teach, but Pratt further teaches:

wherein identifying the one or more dominant sounds further comprises identifying a location of a source of the one or more dominant sounds, (In the specific example illustrated in FIG. 2, the drone controller 204 is configured to provide an acoustic analysis engine 206 that performs apparent source of the acoustic energy identification and location functionality as well as the functionality discussed below – See at least ¶ [0043])

a type of sound of the one or more dominant sounds, (The acoustic sensor may capture the acoustic energy as a first audio signal and the acoustic monitoring system may computationally process the first audio signal against a repository of acoustic profiles. An acoustic profile may be a digital summary of an audio signal such as an acoustic fingerprint that can be used to identify an audio sample of the audio signal. In various examples, the repository may include exclude-type entries (e.g., a whitelist of acoustic profiles) that are to be ignored when detected and/or include include-type entries (e.g., a blacklist of acoustic profiles) that are to be investigated when detected – See at least ¶ [0033])

and a dominant frequency of the one or more dominant sounds. (The method 600 then proceeds to block 608 where the audio signal is computationally processed against a repository of acoustic profiles. In an embodiment, at block 608 the acoustic analysis engine 206/306 of the drone 105/200 and/or drone docking station 110/300 may computationally process the audio signals received by the acoustic sensors 115a-d. The acoustic analysis engine 206/306 may determine whether the audio signal has substantial correspondence with an acoustic profile stored in an acoustic repository such as a whitelist (e.g., whitelist 216, 316, and/or 412) and/or a blacklist (e.g., the blacklist 218, 318, and/or 414). An acoustic profile may be a digital summary of an audio signal such as an acoustic fingerprint that can be used to identify an audio sample of the audio signal. The acoustic profile may include feature vectors that define characteristics of an audio signal such as an average zero-crossing rate, average spectrum prominent tones across a set of frequency bands, estimated tempo, spectral flatness, bandwidth, and/or other audio signal features suitable for identifying audio signals – See at least ¶ [0062])

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the method and apparatus for routing an aerial vehicle based on a relative noise impact of Beaurepaire to provide for an acoustic monitoring system, as taught in Pratt, to provide a lightweight and energy efficient drone, enhanced autonomous controls, reduction in response time to initiate an action or alert, and controls for use in low visibility situations when compared to drones that have autonomous capabilities based on visual data alone. (At Pratt ¶ [0005])

Regarding claims 7 and 14, Beaurepaire does not explicitly teach, but Pratt further teaches:

wherein the contextual information is any information relevant to the environment as captured by one or more visual sensors, one or more audio sensors, one or more location sensors, and one or more motion sensors. (For example, the payload unit may include one or more sensors, such as one or more cameras and/or other imaging sensors 112, one or more environmental sensors (e.g., such as one or more temperature sensors, pressure sensors, humidity sensors, gas sensors, altitude sensors, location sensors and the like) and/or any other sensor. In the illustrated embodiment, the drone 105 may include an acoustic sensor 115a (e.g., a microphone, a microphone array, a directionally-discriminating acoustic sensor/transducer, and other acoustic sensors for detecting acoustic energy) – See at least ¶ [0036])

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the method and apparatus for routing an aerial vehicle based on a relative noise impact of Beaurepaire to provide for an acoustic monitoring system, as taught in Pratt, to provide a lightweight and energy efficient drone, enhanced autonomous controls, reduction in response time to initiate an action or alert, and controls for use in low visibility situations when compared to drones that have autonomous capabilities based on visual data alone. (At Pratt ¶ [0005])

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Beaurepaire, as applied to claims 1, 8, and 15, and in further view of Pratt and Dame et al. (US 2021/0020168 A1, "Dame").

Regarding claims 3, 10, and 17, Beaurepaire does not explicitly teach wherein the cluster comprises a convolutional neural network, a recurrent neural network, and a support vector machine. However, Pratt further teaches:

wherein the cluster comprises a [deep belief neural network] and [machine learning algorithms]. (The acoustic analysis engine 206, 306, and/or 406 may be configured with one or more machine learning algorithms to perform supervised machine learning, unsupervised machine learning (e.g., deep belief networks, neural networks, statistical pattern recognition, rule-based artificial intelligence, etc.), semi-supervised learning, reinforcement learning, deep learning, and other machine learning algorithms when updating the whitelist, blacklist, and/or any other acoustic repository)

The combination of Beaurepaire and Pratt does not explicitly teach wherein the cluster comprises a convolutional neural network, a recurrent neural network, and a support vector machine. However, Dame discloses voice activity detection and dialogue recognition for air traffic control and teaches:

wherein the cluster comprises a convolutional neural network, a recurrent neural network, (Neural networks can be stacked to create deep networks. After training one neural net, the activities of its hidden nodes can be used as training data for a higher level, thereby allowing stacking of neural networks. Such stacking makes it possible to efficiently train several layers of hidden nodes. Examples of stacked networks include deep belief networks (DBN), deep Boltzmann machines (DBM), convolutional neural networks (CNN), recurrent neural networks (RNN), and spiking neural networks – See at least ¶ [0083])

and a support vector machine. (During supervised learning the values for the output are provided along with the training data (labeled dataset) for the model building process. The algorithm, through trial and error, deciphers the patterns that exist between the input training data and the known output values to create a model that can reproduce the same underlying rules with new data. Examples of supervised learning algorithms include regression analysis, decision trees, k-nearest neighbors, neural networks, and support vector machines – See at least ¶ [0067])

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the method and apparatus for routing an aerial vehicle based on a relative noise impact of Beaurepaire to provide for an acoustic monitoring system, as taught in Pratt, to provide a lightweight and energy efficient drone, enhanced autonomous controls, reduction in response time to initiate an action or alert, and controls for use in low visibility situations when compared to drones that have autonomous capabilities based on visual data alone. (At Pratt ¶ [0005])

In summary, Pratt discloses using machine learning techniques and neural networks, e.g., deep belief neural networks, to determine acoustic profiles and models. Pratt does not explicitly teach that the machine learning technique is a support vector machine or that a deep belief neural network is a CNN and RNN network. However, Dame discloses voice activity detection and dialogue recognition for air traffic control and teaches that the machine learning techniques used to determine audio profiles include support vector machines. Dame further teaches that deep neural networks, like those used in Pratt, are created by stacking multiple neural networks together and that these stacks consist of at least CNNs and RNNs. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the method and apparatus for routing an aerial vehicle based on a relative noise impact of Beaurepaire and Pratt to provide for machine learning and neural networks, as taught in Dame, to learn simpler representations and then compose more complex ones. (At Dame ¶ [0085])

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Beaurepaire, as applied to claims 1, 8, and 15, and in further view of Tran (From Fourier Transforms to Wavelet Analysis: Mathematical Concepts and Examples, "Tran").

Regarding claims 4, 11, and 18, Beaurepaire does not explicitly teach wherein identifying the one or more dominant sounds further comprises performing Fourier transform and wavelet transform on the contextual information. However, Tran discloses Fourier transforms and wavelet analysis to determine dominant sounds and teaches:

wherein identifying the one or more dominant sounds further comprises performing Fourier transform and wavelet transform on the contextual information. (Fourier transforms approximate a function by decomposing it into sums of sinusoidal functions, while wavelet analysis makes use of mother wavelets. Both methods are capable of detecting dominant frequencies in the signals; however, wavelets are more efficient in dealing with time-frequency analysis – See at least Abstract)

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the method and apparatus for routing an aerial vehicle based on a relative noise impact of Beaurepaire to provide for the Fourier transform and wavelet analysis, as taught in Tran, because the basic Fourier transform gives a global picture of a data set's spectrum, whereas wavelet transforms offer a more flexible way to examine a signal, a function, or an image. (At Tran pg. 33)

Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Beaurepaire, as applied to claims 1, 8, and 15, and in further view of Ansy (How A-weighting Reflects What We Hear, "Ansy").

Regarding claims 6, 13, and 20, Beaurepaire does not explicitly teach wherein calculating the impact further comprises utilizing A-weighting, Lp(A)eq, sound mapping, and acoustic modeling. However, Ansy discloses how A-weighting reflects what we hear and teaches:

wherein calculating the impact further comprises utilizing A-weighting, Lp(A)eq, sound mapping, and acoustic modeling. (Converting acoustic simulated data into A-weighted levels helps engineers compare different designs and make more accurate decisions based on values that are closer to the human perception of the sound intensity. Because A-weighting readings reflect the sensitivity of the human ear, they are commonly used to assess potential hearing damage caused by loud noises (such as aircraft or trains) and are used globally to evaluate environmental hearing damage risks – See at least pg. 3 and 7)

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the method and apparatus for routing an aerial vehicle based on a relative noise impact of Beaurepaire to provide for the A-weighting, as taught in Ansy, because it is the most common type of weighting system used to analyze noise measurements. (At Ansy pg. 1)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHASE L COOLEY whose telephone number is (303) 297-4355. The examiner can normally be reached Monday-Thursday, 7-5 MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aniss Chad, can be reached at 571-270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.L.C./
Examiner, Art Unit 3662

/ANISS CHAD/
Supervisory Patent Examiner, Art Unit 3662
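The Tran-based rejection of claims 4, 11, and 18 turns on using a Fourier transform to pick dominant frequencies out of captured audio. As a minimal illustrative sketch of that idea (not taken from the application or the cited art; the synthetic signal, sample rate, and function name are hypothetical), a naive discrete Fourier transform over a two-tone signal recovers the louder tone as the dominant frequency:

```python
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) whose DFT bin has the largest magnitude.

    A naive O(n^2) discrete Fourier transform; fine for a short
    illustrative signal, far too slow for real audio streams.
    """
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC; positive-frequency bins only
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n  # convert bin index to Hz

# Synthetic soundscape: a loud 440 Hz tone plus a quieter 1000 Hz tone.
rate = 8000
signal = [math.sin(2 * math.pi * 440 * t / rate)
          + 0.3 * math.sin(2 * math.pi * 1000 * t / rate)
          for t in range(800)]

print(dominant_frequency(signal, rate))  # -> 440.0
```

A production system would use an FFT (e.g., `numpy.fft.rfft`) and, as Tran notes, a wavelet transform when the time at which each dominant sound occurs also matters.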
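The Ansy-based rejection of claims 6, 13, and 20 rests on A-weighting, which discounts frequencies the human ear hears poorly. A sketch of the curve, assuming the standard IEC 61672 analytic approximation (the office action does not reproduce the formula itself):

```python
import math

def a_weighting_db(f):
    """A-weighting correction in dB at frequency f (Hz), using the
    IEC 61672 analytic approximation, normalised so A(1 kHz) is ~0 dB."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00

# A low-frequency rotor hum is perceived as much quieter than a
# 1 kHz tone of equal sound pressure, which is why A-weighted levels
# are used when judging a UAV's impact on the soundscape.
for f in (100, 1000, 4000):
    print(f, round(a_weighting_db(f), 1))
```

Summing A-weighted levels over a measurement interval gives the equivalent continuous level Lp(A)eq that the claims recite.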

Prosecution Timeline

Nov 15, 2022: Application Filed
Feb 08, 2024: Response after Non-Final Action
Jan 10, 2026: Non-Final Rejection — §102, §103
Mar 23, 2026: Interview Requested
Mar 31, 2026: Applicant Interview (Telephonic)
Apr 01, 2026: Response Filed
Apr 04, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592154: CONTROL DEVICE, MONITORING SYSTEM, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM (2y 5m to grant; granted Mar 31, 2026)
Patent 12570125: TRIP INFORMATION CONTROL SCHEME (2y 5m to grant; granted Mar 10, 2026)
Patent 12545274: PEER-TO-PEER VEHICULAR PROVISION OF ARTIFICIALLY INTELLIGENT TRAFFIC ANALYSIS (2y 5m to grant; granted Feb 10, 2026)
Patent 12545302: SYSTEM, METHOD AND DEVICES FOR AUTOMATING INSPECTION OF BRAKE SYSTEM ON A RAILWAY VEHICLE OR TRAIN (2y 5m to grant; granted Feb 10, 2026)
Patent 12539858: APPARATUS AND METHOD FOR DETERMINING CUT-IN OF VEHICLE (2y 5m to grant; granted Feb 03, 2026)
Study what changed to get past this examiner, based on the five most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 88% (+20.4%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 173 resolved cases by this examiner. Grant probability derived from career allow rate.
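The headline figures above appear to be straightforward arithmetic over the examiner's career record; a sketch of the apparent derivation (the dashboard's exact rounding rules are an assumption, which is likely why the last line lands on 87% rather than the displayed 88%):

```python
# Examiner career record as reported on this page.
granted, resolved = 116, 173
allow_rate = granted / resolved       # career allow rate
interview_lift = 0.204                # lift in percentage points for cases with an interview

print(f"{allow_rate:.0%}")                    # -> 67%  (grant probability)
print(f"{allow_rate + interview_lift:.0%}")   # -> 87%  (dashboard rounds to 88%)
```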
