Prosecution Insights
Last updated: April 19, 2026
Application No. 18/622,894

METHOD FOR LOCATING A SOUND SOURCE

Status: Final Rejection (§102)
Filed: Mar 30, 2024
Examiner: BAILEY, JOHN D
Art Unit: 3747
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Daimler Truck AG
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 9m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 78% (above average); 292 granted of 375 resolved; +7.9% vs TC avg
Interview Lift: +17.3% (strong); allow rate on resolved cases with an interview vs without
Typical Timeline: 2y 9m average prosecution; 21 applications currently pending
Career History: 396 total applications across all art units
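The headline figures in this panel are simple ratios of the career counts. A quick sanity check in Python, using only the counts shown above; the implied Tech Center baseline is an inference from the stated +7.9% delta, not a number the report gives directly:

```python
# Sanity-check the examiner stats from the raw career counts.
granted, resolved = 292, 375

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")     # 78%

# The report states this rate is +7.9% above the Tech Center
# average, which implies a TC 3700 baseline of roughly:
tc_baseline = allow_rate - 0.079
print(f"Implied TC baseline: {tc_baseline:.1%}")  # ~70.0%
```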

Statute-Specific Performance

§101: 3.1% (-36.9% vs TC avg)
§103: 44.4% (+4.4% vs TC avg)
§102: 28.0% (-12.0% vs TC avg)
§112: 23.5% (-16.5% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 375 resolved cases
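Each delta above appears to be measured against the single "Tech Center average estimate" line in the chart. Backing that baseline out of the displayed figures shows they all land on one value; the 40.0% baseline is inferred from this arithmetic, not stated in the report:

```python
# Back out the Tech Center baseline implied by the per-statute deltas.
rates  = {"§101": 3.1, "§103": 44.4, "§102": 28.0, "§112": 23.5}    # displayed %
deltas = {"§101": -36.9, "§103": 4.4, "§102": -12.0, "§112": -16.5}  # vs TC avg

for statute in rates:
    baseline = rates[statute] - deltas[statute]
    print(f"{statute}: rate {rates[statute]:.1f}%, implied baseline {baseline:.1f}%")
# Every statute resolves to the same 40.0% baseline, consistent with
# a single "Tech Center average estimate" line in the chart.
```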

Office Action

§102 (Final Rejection)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to the claims have been carefully considered; however, they are considered moot in view of the new ground of rejection explained below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

    A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 5-6 and 8-9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Soltanian et al. (U.S. 2020/0031337).

In re claim 1, Soltanian teaches a method comprising:

receiving and recording, by a plurality of microphones arranged on an autonomous vehicle, a sound generated by a sound source (fig. 3, step 75, detect exterior sound (e.g. siren or sound of a motorcycle, etc.); a method can detect and recognize sounds from other objects outside of the vehicle such as motorcycles or people or weather events such as rain and can cause autonomous or assisted driving functionality to be implemented in response to the detection of such exterior sounds; [0005]);

determining, based on the recorded sound, that the sound source is a two-wheeled vehicle (as explained above);

determining a direction or position of the sound source relative to the autonomous vehicle using a delay in reception of the sound at one of the plurality of microphones (a circular array or other arrays of microphones on the exterior of the vehicle can be used to determine the direction of the sound source (such as an angle relative to the direction of travel of the vehicle) which can then be used to determine the direction of travel of the emergency vehicle; [0029]) relative to another of the plurality of microphones (the processing system is to determine a delay in sounds at different sound sensors arranged around the exterior of the vehicle to determine a direction of another vehicle; [claim 11]);

determining the autonomous vehicle is in a traffic jam or slow-moving traffic and the two-wheeled vehicle is moving at a higher relative speed than other vehicles in the traffic jam or the slow-moving traffic (Motorcycles produce known sounds that can be recognized by known sound recognition systems and/or algorithms, and the recognition of the sound of a motorcycle can be used in the embodiments of assisted driving described herein. For example, many times when cars are stopped along a road because of a traffic jam, motorcycles will attempt to drive between the cars in order to avoid the traffic jam. This can present a hazard to both the cars and the motorcycles when a driver of a car attempts to make a lane change as the motorcycle approaches. The detection of the sounds of the motorcycle or other vehicle can be combined with images from one or more cameras and data from a LIDAR sensor to confirm the presence of the motorcycle or other vehicle. The images can be processed using known algorithms to recognize a rider on a motorcycle which can confirm the presence of the motorcycle detected via its sounds; [0030]); and

adapting, responsive to the determination that the autonomous vehicle is in the traffic jam or in the slow-moving traffic and that the two-wheeled vehicle is moving at a higher relative speed than the other vehicles in the traffic jam or the slow-moving traffic, driving behavior of the autonomous vehicle by moving the autonomous vehicle in a transverse direction to allow the two-wheeled vehicle to pass the autonomous vehicle (fig. 4a, step 111, optionally provide autonomous vehicle control (e.g. pull vehicle over to side of road and stop); [0035]).

In re claim 2, Soltanian teaches the method of claim 1, further comprising: checking or correcting the determined direction or position taking into account a world model created using data recorded by an environment sensor system of the autonomous vehicle or map information from a digital map and which contains information about objects on which the sound is reflected (The detection of the sounds of the motorcycle or other vehicle can be combined with images from one or more cameras and data from a LIDAR sensor to confirm the presence of the motorcycle or other vehicle. Similarly, the detection of exterior sounds from people in the vicinity of the vehicle can cause other components of the vehicle (e.g., cameras, radar, or LIDAR) to perform assisted driving functions or features such as attempting to detect the location of the people and visually detect the people making the sounds in order to prevent a collision between the people and the vehicle; [0005]; the warning can include a street map that shows the location on the street map of the vehicle and the other vehicle, and the street map can be similar to the view shown in FIG. 1 which allows the user of the vehicle to see the approach of the other vehicle such as the emergency vehicle 16 shown in FIG. 1. A warning with such a street map can be provided when the set of sensors (such as an array of microphones, a set of one or more cameras, and a LIDAR sensor and/or radar sensor) provide enough data to the set of one or more processing systems to enable the set of one or more processing systems to locate the other vehicle relative to the street map and place the other vehicle on the street map which also shows the vehicle having the set of sensors. The street map can be updated over time as the locations of the vehicle and the other vehicle change. Typically, radar sensors and a LIDAR can provide enough data in many instances to allow a location determination relative to a street map assuming the navigation system of the vehicle has determined the location and path of travel of the vehicle relative to the street map; [0038] and as suggested in [0034]. Here, the combination of camera, radar, and lidar data along with sound data and a street map constitutes a world model, wherein position/direction data is then updated over time (which includes checking and correcting the position/direction data in the model; otherwise the model would remain static and thus unable to be updated over time)).

In re claim 5, Soltanian teaches an autonomous vehicle comprising: a plurality of microphones configured to receive and record a sound generated by a sound source (as indicated in fig. 6 and explained in claim 1 above; note: four microphones are shown in fig. 6); and a controller (as shown in figs. 5-6; data processing system(s) 161, assisted driving processor(s) 177) coupled to the plurality of microphones and configured to determine, based on the recorded sound, that the sound source is a two-wheeled vehicle (as explained in claim 1 above); determine a direction or position of the sound source relative to the autonomous vehicle using a delay in reception of the sound at one of the plurality of microphones relative to another of the plurality of microphones (as explained in claim 1 above); determine the autonomous vehicle is in a traffic jam or slow-moving traffic and the two-wheeled vehicle is moving at a higher relative speed than other vehicles in the traffic jam or the slow-moving traffic (as explained in claim 1 above); and adapt, responsive to the determination that the autonomous vehicle is in the traffic jam or the slow-moving traffic and that the two-wheeled vehicle is moving at a higher relative speed than the other vehicles in the traffic jam or the slow-moving traffic, driving behavior of the autonomous vehicle by moving the autonomous vehicle in a transverse direction to allow the two-wheeled vehicle to pass the autonomous vehicle (as explained in claim 1 above).

In re claim 6, Soltanian teaches the autonomous vehicle of claim 5, wherein the autonomous vehicle is a commercial vehicle (other motorized vehicle as suggested in [0039]), passenger car (The vehicle can be a car or SUV or other motorized vehicle; [0039]), or a bus (other motorized vehicle as suggested in [0039]).

In re claim 8, Soltanian teaches the autonomous vehicle of claim 5, wherein the plurality of microphones include at least four microphones (as indicated in figs. 6 and 8c; data processing system 175 can include a set of microphones arranged on the exterior surface of the vehicle. In the example of FIG. 6 there are four microphones 179, 181, 182, and 183 arranged on the exterior surface of the vehicle; [0040]; from the top view it can be seen that there are four microphones 310, 312, 314, and 316 arranged around the exterior surface of the vehicle 301 as shown in FIG. 8C; [0043]; note: four microphones shown).

In re claim 9, Soltanian teaches the autonomous vehicle of claim 8, wherein a first two of the at least four microphones are arranged on a left side of a cabin of the autonomous vehicle and a second two of the at least four microphones are arranged on a right side of the cabin of the autonomous vehicle (as shown in fig. 8c and explained in claim 8 above).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN D BAILEY whose telephone number is (571) 272-5692. The examiner can normally be reached M-F 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Logan Kraft, can be reached at 571-270-5625. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN D BAILEY/
Examiner, Art Unit 3747

/LOGAN M KRAFT/
Supervisory Patent Examiner, Art Unit 3747
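The direction-determination limitation that anchors this rejection, a delay in reception of the sound between microphones, is classic time-difference-of-arrival (TDOA) localization. A minimal far-field sketch for a single microphone pair; the spacing, speed of sound, and example angle below are illustrative choices, not values from the application or from Soltanian:

```python
# Far-field TDOA bearing estimate for one microphone pair.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def bearing_from_delay(delay_s: float, mic_spacing_m: float) -> float:
    """Source bearing (degrees off the pair's broadside) from the
    reception delay between two microphones, far-field assumption."""
    # Far-field geometry: path difference = spacing * sin(angle)
    sin_angle = SPEED_OF_SOUND * delay_s / mic_spacing_m
    sin_angle = max(-1.0, min(1.0, sin_angle))  # clamp numeric noise
    return math.degrees(math.asin(sin_angle))

# A source 30 degrees off broadside of mics 0.5 m apart arrives
# ~0.73 ms later at the far microphone; the bearing is recovered:
delay = 0.5 * math.sin(math.radians(30.0)) / SPEED_OF_SOUND
print(f"{bearing_from_delay(delay, 0.5):.1f} degrees")  # 30.0 degrees
```

With more than one pair (the four-microphone arrangement of claim 8), the same relation resolves the left/right ambiguity of a single pair and yields a full direction, which is the role the claimed plurality of microphones plays.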

Prosecution Timeline

Mar 30, 2024: Application Filed
Aug 23, 2025: Non-Final Rejection (§102)
Dec 01, 2025: Response Filed
Mar 07, 2026: Final Rejection (§102, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583432: VEHICLE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12559085: TARGET STEERING CONTROL SYSTEM AND METHOD USING BIASED BRAKING POWER IN CASE OF STEERING SYSTEM FAILURE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12552213: SYSTEMS AND METHODS FOR VEHICLE LOAD MANAGEMENT (granted Feb 17, 2026; 2y 5m to grant)
Patent 12545303: VEHICLE PARKING CONTROL METHOD AND APPARATUS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12545070: SUSPENSION SYSTEM AND CONTROLLER (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 95% (+17.3%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
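The "With Interview" projection is consistent with simply adding the examiner's interview lift to the base grant probability. A one-line check; the additive model is an inference from the displayed numbers, not something the report documents:

```python
# Compose the with-interview projection from the two displayed figures.
base_probability = 0.78   # career allow rate (grant probability)
interview_lift   = 0.173  # allow-rate gain with an interview

with_interview = base_probability + interview_lift
print(f"Projected with interview: {with_interview:.0%}")  # 95%
```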
