Prosecution Insights
Last updated: April 17, 2026
Application No. 18/822,186

INTEGRATED MICROPHONE TO MONITOR ENVIRONMENTAL CONDITIONS OF BATTERY-OPERATED ASSET TRACKER

Non-Final OA §103
Filed
Aug 31, 2024
Examiner
ARMSTRONG, JONATHAN D
Art Unit
3645
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
unknown
OA Round
1 (Non-Final)
Grant Probability
52% (Moderate)
OA Rounds
1-2
To Grant
3y 9m
With Interview
54%

Examiner Intelligence

Career Allow Rate
52% (218 granted / 415 resolved; +0.5% vs TC avg)
Interview Lift
+1.5% (minimal), based on resolved cases with interview
Avg Prosecution
3y 9m (typical timeline)
Total Applications
478 across all art units (63 currently pending)

Statute-Specific Performance

§101
3.5% (-36.5% vs TC avg)
§103
55.6% (+15.6% vs TC avg)
§102
20.5% (-19.5% vs TC avg)
§112
18.4% (-21.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 415 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 1 is objected to because of the following informalities: while the Examiner seems to understand the intent is to claim a plurality of battery-operated asset trackers, the recitation of "each asset tracker of a cluster of asset trackers" followed by "a battery-operated asset tracker" could be confusing. Appropriate correction is required.

Claim 1 is further objected to because of the following informalities: line 6 recites "received form" but should state "received from". Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 3, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Wild (US 9,689,958 B1), Mannion (US 2018/0203112 A1), and Vijayalingam (US 2020/0379108 A1).

Regarding claim 1, Wild teaches a computerized method of an integrated microphone to monitor environmental conditions of a battery-operated asset tracker comprising: integrating one or more microphones into each asset tracker of a cluster of asset trackers [[fig. 12a] merchandise #1208 to be protected with positioning hard tag #1202 including microphone #1204; [col. 6:60-67] mobile devices include mobile phones … asset tracking tags, and/or the like; [col. 21:5-15] nodes may be used … for calculating a position of the hard tags]; using the one or more microphones to monitor an environmental condition of a battery-operated asset tracker [[col. 2:1-10] a microphone is used on the receiving end to measure the time of flight from each of these speakers. Using the time of flight and known speaker; [col. 8:35-45] temperature sensor 210 is configured to measure the temperature of the environment and/or the positioning node 110. The temperature may be transmitted to mobile devices via the RF transmitter 209. The temperature may be used by the mobile devices 130 during the position calculation to estimate a speed of acoustic waves based on the temperature; [col. 24:1-10] in various embodiments positioning nodes, proximity nodes, and/or mobile devices are battery powered]; and correlating a set of sound data received from the one or more microphones with another set of data from other sensors of each asset tracker of the cluster of asset trackers [[col. 13:10-40] filter bank 406, each matched filter may be matched to each of the ranging signals 302 transmitted by speakers 207, 208, and/or 226, and/or received by microphones 227, 228, and/or 206. Filters may be selected based on received protocol RF information. The matched filter output signal may have a large amplitude indicating arrival of an acoustic ranging signal … once the acoustic ranging signals are detected using the matched filter bank a correlation processor or phase module may generate … range module may generate a calculated time of flight].

Wild does not explicitly teach, and yet Mannion teaches, using an IMU (Inertial Measurement Unit) to determine an orientation in space of each asset tracker [[0110] the Holocam Orb further includes an inertial measurement unit (IMU) 51. Generally, an inertial measurement unit (IMU) is an electronic device that measures velocity, orientation, and gravitational forces; [0111] a microphone array 53 is provided to record sound from the scene being observed. Preferably, the microphones of microphone array 53 are positioned to pick up surround sound from the scene], and, with the data of the one or more microphones and the orientation in space of each asset tracker, determining a direction of a sound in a proximity of the cluster of asset trackers [[0129] time-of-flight Sensor (TOF) 33 gathers a two dimensional array of distance data from objects in its field-of-view to the Holocam Orb; [0188] the Holocam Orbs to generate a 3D structural model of the scene (e.g. a polygon model, such as a (triangular) mesh construct/model). The Sound Source Separation module 83 analyzes the sound data from the multiple (preferably all) microphone arrays 55 and isolates individual source(s) of sound; [abstract] sound sources are identified by comparing results of a blind audio source localization algorithm, with the spatial 3D model provided by the Holocam Orb; [0189] blind separation of sources … neural networks … neural computation]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the asset tag with microphone as taught by Wild with the inertial sensor and microphone array as taught by Mannion so that sound sources may be localized with triangulation in a 3D model (Mannion) [[abstract][0083]].

Wild does not explicitly teach, and yet Vijayalingam teaches, providing a pre-learned machine learning sound identification model to identify a sound source of the sound [[abstract] particular acoustic cluster of the one or more acoustic clusters is selected based on signal processing of the one or more acoustic clusters. A particular object is associated with the particular acoustic cluster. An acoustic fingerprint of the particular object is generated based on the particular acoustic cluster; [0025] localizing multiple acoustic sources; [0125] acoustic sensors 1304 of the AV 100 receive acoustic waves from the one or more objects 1008 … microphone; [0144] transmits the acoustic fingerprint to the machine learning module 1312, which is trained to receive an acoustic fingerprint and determine a size, or a make or a model of a vehicle 193; [0171] trained using the feature vector to make predictions or decisions without being explicitly programmed; [0172] neural networks, or a convolutional neural network (CNN) is used]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the asset tag with microphone as taught by Wild with the acoustic sensors/microphones and machine learning as taught by Vijayalingam so that sound source objects may be localized by positions, velocities, or headings (Vijayalingam) [[0129]].
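The claim 1 mapping leans on Wild's temperature-compensated acoustic time-of-flight ranging and Mannion's microphone-plus-IMU direction finding. For readers who want the mechanics, a minimal Python sketch of those two calculations follows; the linear speed-of-sound approximation is a standard textbook formula, and every function name and constant here is an illustrative assumption, not code from the cited references.

```python
import math

def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in air (m/s) at a temperature in deg C.

    Linear approximation (c ~ 331.3 + 0.606*T); Wild's temperature sensor
    serves the same purpose of adjusting the acoustic speed used in
    position calculations.
    """
    return 331.3 + 0.606 * temp_c

def tof_distance(time_of_flight_s: float, temp_c: float) -> float:
    """Distance (m) implied by an acoustic time-of-flight measurement."""
    return speed_of_sound(temp_c) * time_of_flight_s

def doa_from_mic_pair(delta_t_s: float, mic_spacing_m: float, temp_c: float) -> float:
    """Direction of arrival (degrees from broadside) for a far-field source,
    estimated from the arrival-time difference at two microphones."""
    c = speed_of_sound(temp_c)
    # Clamp to [-1, 1] to guard against noisy time-difference estimates.
    s = max(-1.0, min(1.0, c * delta_t_s / mic_spacing_m))
    return math.degrees(math.asin(s))

if __name__ == "__main__":
    # 10 ms time of flight at 25 degC -> roughly 3.46 m.
    print(f"range:   {tof_distance(0.010, 25.0):.2f} m")
    # 0.2 ms inter-mic delay across a 10 cm baseline -> ~43.9 deg off broadside.
    print(f"bearing: {doa_from_mic_pair(0.0002, 0.10, 25.0):.1f} deg")
```

With bearings from several trackers whose orientations are known (the IMU's role in the mapping above), a source position can then be triangulated.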
Regarding claim 3, Wild does not explicitly teach, and yet Mannion teaches, the computerized method of claim 1, wherein the cluster of asset trackers cooperate to triangulate a source location of the sound [[0083] geometry thus forms the basis for triangulation; [0084] means that the three dimensional position of 3D point PO can be calculated from the 2D coordinates of the two projection points PL and PR. This process is called triangulation; [0129] time-of-flight Sensor (TOF) 33 gathers a two dimensional array of distance data from objects in its field-of-view to the Holocam Orb]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the asset tag with microphone as taught by Wild with the inertial sensor and microphone array as taught by Mannion so that sound sources may be localized with triangulation in a 3D model (Mannion) [[abstract][0083]].

Regarding claim 13, Wild does not explicitly teach, and yet Vijayalingam teaches, the computerized method of claim 12, wherein the set of sound files is converted in a spectrogram [[0044] the AV uses a spectrogram or a time-frequency graph to segregate the acoustic waves into segments over time for object recognition; [0045] AV generates an acoustic fingerprint of the particular object based on the particular acoustic cluster. For example, the AV uses a zero crossing rate, an average spectrum, a spectral flatness, prominent tones across a set of frequency bands, or bandwidth tracking to generate the acoustic fingerprint]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the acoustic fingerprinting so that the spectrum may be used to segregate acoustic waves into segments over time for object recognition (Vijayalingam) [[0044]].

Regarding claim 14, Wild does not explicitly teach, and yet Vijayalingam teaches, the computerized method of claim 13, wherein the spectrogram is input into a convolution neural network (CNN) plus Linear Classifier model to produce at least one prediction about a class to which a vehicle sound detected by the one or more microphones belongs [[0172] support vector machine method trains the machine learning module 1312 to assign new examples to one category or the other, making it a non-probabilistic binary linear classifier; [0172] in other embodiments, different machine learning techniques such as linear support vector machine (linear SVM), boosting for other algorithms (e.g., AdaBoost), logistic regression, naïve Bayes, memory-based learning, random forests, bagged trees, boosted trees, boosted stumps, neural networks, or a convolutional neural network (CNN) is used]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the acoustic fingerprinting with machine learning as taught by Vijayalingam so that a training dataset made to identify objects may be created (Vijayalingam) [[0172]].
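Claims 13-14, as characterized above, run a spectrogram through a CNN plus linear classifier to predict a vehicle-sound class. A minimal sketch of that shape follows, assuming PyTorch; the layer sizes, input dimensions, and class names are illustrative assumptions, not taken from the application or the cited references.

```python
import torch
import torch.nn as nn

class SoundCNN(nn.Module):
    """Toy CNN with a linear classifier head over a 1 x 128 x 128 spectrogram.

    Mirrors the claim 13-14 structure (spectrogram -> CNN -> linear
    classifier -> class prediction); all sizes are illustrative.
    """
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Linear classifier over the flattened CNN feature map (32 x 32 x 32).
        self.classifier = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Hypothetical vehicle classes of the kind discussed in the rejection.
CLASSES = ["car", "motorcycle", "tram", "truck", "bus"]

model = SoundCNN(num_classes=len(CLASSES))
spectrogram = torch.randn(1, 1, 128, 128)  # stand-in for a real log-mel spectrogram
pred = model(spectrogram).argmax(dim=1).item()
print("predicted class:", CLASSES[pred])
```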
Claims 2-4 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Wild (US 9,689,958 B1), Mannion (US 2018/0203112 A1), and Vijayalingam (US 2020/0379108 A1) as applied to claim 1 above, and further in view of Sharif (2021, Association for Computing Machinery).

Regarding claim 2, Wild does not explicitly teach, and yet Sharif teaches, the computerized method of claim 1 further comprising: providing another pre-learned machine learning sound identification model to identify a sound of different vehicles [[title] role of machine listening to detect vehicle using sound acoustics; [abstract] classify unlabeled test data files on a pre-trained model; [pg. 195, col. 1] aim of the research is to use machine listening technique over the collected data from acoustic device and categories them into … classify different type of vehicle cars, motorcycles, trams, trucks, buses. Although, such a process required various computations like fetching audio data, pre-processing over fetched audio data, extracting features from the files, training the model over the extracted features, and predicting the output categories]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the pre-trained machine learning model using sound acoustics so that different types of vehicles may be identified (Sharif) [[pg. 195, col. 1]].

Regarding claim 3, Wild does not explicitly teach, and yet Sharif teaches, the computerized method of claim 2, wherein a method of transportation of the cluster of asset trackers is determined based on the sound of different vehicles [[pg. 198, col. 1] after classifying the data about vehicles into categories, city experts can extract which type of and how many vehicles pass by, which of these types produce the highest noise levels, etc. Finally using this extracted information, specialists from different areas can take suitable measures to control noise pollution in the region … as a future extension, a clustering algorithm like k-means clustering can be used to separate skewed or noisy data and non-noisy data before starting the training process]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the classifying of vehicles into categories so that the type and amount of vehicles passing by a roadway can be determined (Sharif) [[pg. 198, col. 1]].

Regarding claim 4, Wild does not explicitly teach, and yet Sharif teaches, the computerized method of claim 3, wherein the one or more microphones are continuously listening to monitor for an event [[pg. 194, col. 2] sound acoustic sciences have made significant advances in software for collecting and analyzing both archived and real-time systems]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the classifying of vehicles into categories so that the type and amount of vehicles passing by a roadway can be determined (Sharif) [[pg. 198, col. 1]].

Regarding claim 10, Wild does not explicitly teach, and yet Sharif teaches, the computerized method of claim 4 further comprising: obtaining a set of sound files [[abstract] aim of this early-stage research is to present a methodology that will read the labeled audio files, extract features from them, feed features to a sequential model. Moreover, the model will have the ability to classify these audio files of vehicles based on their input feature(s) and then further categorize]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the classifying of vehicles into categories so that the type and amount of vehicles passing by a roadway can be determined (Sharif) [[pg. 198, col. 1]].

Regarding claim 11, Wild does not explicitly teach, and yet Sharif teaches, the computerized method of claim 10, wherein the set of sound files comprises a set of vehicle sound files [[abstract] audio files of vehicles]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the classifying of vehicles into categories so that the type and amount of vehicles passing by a roadway can be determined (Sharif) [[pg. 198, col. 1]].

Regarding claim 12, Wild does not explicitly teach, and yet Sharif teaches, the computerized method of claim 11, wherein the set of sound files comprises a set of road construction sounds, traffic congestion sounds, and free flowing traffic sounds [[pg. 194, col. 2] kind of pollution largely originates from combustion engines powering today’s traffic. Pollution by sound, however, also known as noise pollution, is currently much less regarded, though vehicles’ sound emissions are proved to cause multiple diseases as well]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the classifying of vehicles into categories so that the type and amount of vehicles passing by a roadway can be determined (Sharif) [[pg. 198, col. 1]].
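The Sharif pipeline the rejection cites for claims 2-12 (fetch audio, pre-process, extract features, train a model, predict categories) can be sketched compactly. The tooling below is an assumption for illustration (librosa for MFCC features, scikit-learn's random forest as the classifier, hypothetical file names and labels); Sharif's own sequential model and dataset are not reproduced here.

```python
# Minimal sketch of a vehicle-sound classification pipeline of the kind
# summarized in the rejection. All paths/labels are hypothetical.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path: str) -> np.ndarray:
    """Mean MFCC vector for one audio file (a simple fixed-length feature)."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled dataset: (audio file, vehicle category).
train_set = [("car_01.wav", "car"), ("bus_01.wav", "bus"), ("tram_01.wav", "tram")]

X = np.stack([extract_features(p) for p, _ in train_set])
y = [label for _, label in train_set]

# Train, then predict the category of an unlabeled recording.
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([extract_features("unknown.wav")]))
```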
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wild (US 9,689,958 B1), Mannion (US 2018/0203112 A1), Vijayalingam (US 2020/0379108 A1), and Sharif (2021, ACM) as applied to claim 4 above, and further in view of Leard (US 2008/0147356 A1).

Regarding claim 5, Wild does not explicitly teach, and yet Leard teaches, the computerized method of claim 4, wherein the event comprises an equipment failure of an equipment associated with the cluster of asset trackers [[0008] present inventor has recognized that many pieces of equipment in facilities such as those described above make distinctive sounds when they are operating, and those sounds change in observable manners in various circumstances, for example, when the pieces of equipment are no longer operating properly or are about to experience a failure. The present inventor has further recognized that, at least in some circumstances, it would be possible by way of one or more acoustic sensors to detect such sound changes and determine that a piece of equipment was experiencing a failure or about to experience a failure. The present inventor has further recognized that, where many pieces of equipment are employed in a large facility, an array of such acoustic sensors positioned at various locations around the facility can serve to determine the occurrence of failures, or imminence of failures, of numerous pieces of equipment situated around the facility]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the acoustic sensor network as taught by Leard so that equipment failures can be detected using sound changes (Leard) [[0008]].

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Wild (US 9,689,958 B1), Mannion (US 2018/0203112 A1), Vijayalingam (US 2020/0379108 A1), and Sharif (2021, ACM) as applied to claim 4 above, and further in view of Huffman (US 2010/0158431 A1).

Regarding claim 6, Wild does not explicitly teach, and yet Huffman teaches, the computerized method of claim 4, wherein the event comprises an automatic airplane detection [[0034] when used at airport locations, the detection capability may be extended to include airplane approach areas where, even though the plane has not landed, the engines provide sufficient acoustic energy to be detected by underground or surface fiber deployed in accordance with the disclosed technology]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the acoustic sensor network for detecting airplane landings as taught by Huffman so that a surveillance system may be calibrated to detect an appropriate source or sources of acoustic energy (Huffman) [[0034]].

Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Wild (US 9,689,958 B1), Mannion (US 2018/0203112 A1), Vijayalingam (US 2020/0379108 A1), and Sharif (2021, ACM) as applied to claim 4 above, and further in view of Gersten (US 2018/0053394 A1).

Regarding claim 7, Wild does not explicitly teach, and yet Gersten teaches, the computerized method of claim 4 further comprising: providing a pre-identified acoustic fingerprint to save processing time and conserve battery of each asset tracker [[0005] audio data of the first danger observation data may be an acoustic fingerprint of the interval of digitized environmental sound; [0121] memory 206 may contain a number of audio samples of different types of dangerous events 262, while in other embodiments the memory 206 may contain audio fingerprints of those events. The use of audio fingerprints may facilitate a quick analysis that requires minimal processing power; [0088] performing the relevance determination on the mitigation devices 234 rather than the server 214 may be advantageous, as it would not require the devices 234 to constantly update the server 214 with their location, saving battery power and easing any privacy concerns a user 274 may have about constantly being tracked, in addition to listened to]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the acoustic fingerprinting to identify acoustic events as taught by Gersten so that less processing power is required for quick analysis of sounds (Gersten) [[0121]].
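Claims 7-9 turn on an acoustic fingerprint: a condensed digital summary deterministically generated from an audio signal, cheap enough to match on-device against pre-identified fingerprints. The toy sketch below shows one such deterministic scheme (band-energy comparison bits hashed into a short digest); the frame and band counts are arbitrary choices for illustration, not Gersten's method, and production systems use more robust schemes such as spectral-peak hashing.

```python
# Toy deterministic acoustic fingerprint: identical signals always yield
# identical digests, so matching against a stored fingerprint library is cheap.
import hashlib
import numpy as np

def fingerprint(samples: np.ndarray, frame: int = 1024, bands: int = 16) -> str:
    """Hash per-frame band-energy comparison bits into a short hex digest."""
    bits = []
    for start in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        energies = [band.sum() for band in np.array_split(spectrum, bands)]
        # One bit per adjacent band pair: is the lower band louder?
        bits.extend(int(energies[i] > energies[i + 1]) for i in range(bands - 1))
    return hashlib.sha1(np.packbits(bits).tobytes()).hexdigest()[:16]

# Determinism check: the same signal always produces the same summary.
t = np.linspace(0, 1, 22050, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
assert fingerprint(tone) == fingerprint(tone.copy())
print(fingerprint(tone))
```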
Regarding claim 8, Wild does not explicitly teach, and yet Gersten teaches, the computerized method of claim 7, wherein the acoustic fingerprint comprises a condensed digital summary [[0005] acoustic fingerprint … status notification may comprise a summary of the dangerous event, the timestamp, the event location, and a current status of the user; [0047] additional functionality, such as but not limited to creation of metadata tags pinpointing the distance and direction of the gunshot; [0142] audio data 500 may further include an acoustic fingerprint 502, as is known in the art. In some embodiments, the audio data 500 may be modified by the danger monitoring device 202 in light of a determined sound modifier (e.g. sound amplified to compensate for being in a pocket, etc.). In other embodiments, the audio data 500 may be transmitted unmodified, as the monitoring device 202 recorded it, and the data 212 may further include a description of the observed sound modifier, either in summary (e.g. “pocket”, “purse”, “table”, etc.), in detail (e.g. providing the raw sensor data, etc.), or both]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the acoustic fingerprinting including metadata as taught by Gersten so that raw transmitted data may optionally be restricted in order to assuage privacy concerns (Gersten) [[0088]].

Regarding claim 9, Wild does not explicitly teach, and yet Gersten teaches, the computerized method of claim 8, wherein the condensed digital summary is deterministically generated from an audio signal that can be used to identify an audio sample [[0005] acoustic fingerprint … status notification may comprise a summary of the dangerous event, the timestamp, the event location, and a current status of the user; [0047][0142]]. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the machine learning as taught by Wild with the acoustic fingerprinting including metadata as taught by Gersten so that raw transmitted data may be restricted in order to assuage privacy concerns (Gersten) [[0088]].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN D ARMSTRONG, whose telephone number is (571) 270-7339. The examiner can normally be reached M-F, 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Isam Alsomiri, can be reached at 571-272-6970. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN D ARMSTRONG/
Examiner, Art Unit 3645

Prosecution Timeline

Aug 31, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566264
ENHANCED RESOLUTION SPLIT APERTURE USING BEAM SEGMENTATION
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12535001
DOWNHOLE ACOUSTIC SYSTEM FOR DETERMINING A RATE OF PENETRATION OF A DRILL STRING AND RELATED METHODS
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12510644
ULTRASONIC MICROSCOPE AND CARRIER FOR CARRYING AN ACOUSTIC PULSE TRANSDUCER
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12504525
OBJECT DETECTION DEVICE
Granted Dec 23, 2025 (2y 5m to grant)
Patent 12495789
ULTRASONIC GENERATOR AND METHOD FOR REPELLING MOSQUITO IN VEHICLE USING THE SAME
Granted Dec 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds
1-2
Grant Probability
52%
With Interview (+1.5%)
54%
Median Time to Grant
3y 9m
PTA Risk
Low
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
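As a check on that derivation: 218 granted / 415 resolved ≈ 52.5%, shown rounded as 52%; adding the stated +1.5% interview lift to the unrounded figure yields approximately 54%.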
