Prosecution Insights
Last updated: April 19, 2026
Application No. 18/483,688

Method and Device for Providing Training Data for Training a Data-Based Object Classification Model for an Ultrasonic Sensor System

Final Rejection — §102, §103

Filed: Oct 10, 2023
Examiner: ARMSTRONG, JONATHAN D
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Robert Bosch GmbH
OA Round: 2 (Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
Grant Probability With Interview: 54%

Examiner Intelligence

Career Allow Rate: 52% (218 granted / 415 resolved; +0.5% vs TC avg)
Interview Lift: +1.5% (minimal, roughly +2%, across resolved cases with interview)
Typical Timeline: 3y 9m avg prosecution; 63 currently pending
Career History: 478 total applications across all art units

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 55.6% (+15.6% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 18.4% (-21.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 415 resolved cases.
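The headline figures above are simple ratios over the examiner's resolved-case history. A minimal sketch of that arithmetic, reusing the counts shown on this page (the Tech Center baseline of 52% is inferred from the stated +0.5% delta, not published separately):

```python
# Career allow rate from the counts shown above: 218 granted of 415 resolved.
granted, resolved = 218, 415
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # Career allow rate: 52.5%

# Delta vs. the Tech Center average (0.52 assumed from the "+0.5%" figure).
tc_avg = 0.52
print(f"vs TC avg: {allow_rate - tc_avg:+.1%}")  # vs TC avg: +0.5%
```

The displayed "52%" is simply this 52.5% ratio rounded down to a whole percentage.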

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 3 is objected to because of the following informalities: the phrase "adopted selected" in line 2 appears to be a typo. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 3-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Urtasun (US 2020/0160151 A1).
Regarding claim 1, Urtasun discloses a method for providing training datasets for training an object classification model for object classification in an ultrasonic sensor system, comprising: facilitating a survey scenario in which at least one surrounding object within a collection range of the ultrasonic sensor system is moved along a specified trajectory relative to the ultrasonic sensor system [[0048] as part of the training process … repeated classification of the same plurality of training objects; [0064] localized state of the source object … trajectory; [0104] source representation can be based at least in part on, or can include, one or more sensor outputs from one or more sensors including at least one of: one or more light detection and ranging devices (LiDAR), one or more sonar devices, one or more radar devices, and/or one or more cameras; [0174] prediction system 126 can generate prediction data 132 associated with each of the respective one or more objects proximate to the vehicle 108. The prediction data 132 can be indicative of one or more predicted future locations of each respective object. 
The prediction data 132 can be indicative of a predicted path (e.g., predicted trajectory) of at least one object within the surrounding environment of the vehicle 108]; collecting, during the survey scenario, the ultrasonic signals reflected by the at least one surrounding object at chronologically successive collection situations, each respective collection situation of the chronologically successive collection situations being characterized by different distances or orientations of the at least one surrounding object relative to the ultrasonic sensor system [[0048] part of the training process, differences in correct classification output between a machine-learned model (that outputs the one or more classified object labels) and a set of classified object labels associated with a plurality of training objects that have previously been correctly identified (e.g., ground-truth labels), can be processed using an error loss function that can determine a set of probability distributions based on repeated classification of the same plurality of training objects. 
As such, the accuracy (e.g., the proportion of correctly identifies objects) of the machine-learned model can be improved over time; [0051] plurality of features classified by the one or more machine-learned models can be based at least in part on one or more sensor outputs from one or more sensors that have captured the plurality of training objects (e.g., the actual objects used to train the machine-learned model) from various angles and/or distances in different environments (e.g., urban areas, suburban areas, rural areas, heavy traffic, and/or light traffic) and/or environmental conditions (e.g., bright daylight, rainy days, darkness, snow covered roads, inside parking structures, in tunnels, and/or under streetlights)]: deriving, for each respective collection situation of the chronologically successive collection situations, respective collection features from the ultrasonic signals collected during a respective collection situation [[0005] target data can include a compressed target feature representation of the environment. The compressed target feature representation can be based at least in part on compression of a target feature representation of the environment produced by one or more machine-learned feature extraction models; [0048] processed using an error loss function that can determine a set of probability distributions based on repeated classification of the same plurality of training objects. 
As such, the accuracy (e.g., the proportion of correctly identifies objects) of the machine-learned model can be improved over time; [0157] state of objects external to a vehicle (e.g., the physical dimensions, velocity, acceleration, heading, location, shape, and/or appearance of objects external to the vehicle)]; forming, for each respective collection situation of the chronologically successive collection situations, a respective candidate training dataset by associating a respective classification vector specified by the survey scenario with the respective collection features, the respective classification vector having elements that each indicate an object property of the at least one surrounding object [[0057] source representation and/or the target representation can include information associated with one or more images of the environment. The one or more images can include various raster (e.g., bitmap), vector, and/or voxel image formats. Further, the one or more images can include a two-dimensional representation of an environment (e.g., a two-dimensional overhead aerial map of an environment) or a three-dimensional representation of an environment (e.g., a three-dimensional LiDAR point cloud)]; and determining whether to include each respective candidate training dataset in a set of training datasets to be used to train the object classification model, depending on the relative distance from the at least one surrounding object from the ultrasonic sensor system and the relative distances of the at least one surrounding object from the ultrasonic sensor system during previously measured collection situations of determined candidate training datasets [[0163] vehicle 108 can provide data indicative of the state of the one or more objects (e.g., physical dimensions, velocity, acceleration, heading, location, and/or appearance of the one or more objects) within a predefined distance of the vehicle 108 to the operations computing system 104, which can store an 
indication, record, and/or other data indicative of the state of the one or more objects within a predefined distance of the vehicle 108 in one or more memory devices associated with the operations computing system 104 (e.g., remote from the vehicle); [0343] one or more loss determination units 2012 can be configured to determine the loss based at least in part on an accuracy of the localized state of the source object with respect to the ground-truth state of the source object, wherein the accuracy is inversely correlated with the loss and a distance of the localized state of the source object from the ground-truth state of the source object].

Regarding claim 3, Urtasun teaches the method according to claim 1, wherein a candidate training dataset is adopted selected to be included in the set of training datasets to be used to train the object classification model only if the distance from the respective collection situation lies within a distance range within which no training dataset has yet been identified using the survey scenario [[0100] accuracy can be associated with a distance of the localized state of the source object from the ground-truth state of the source object; [0163] vehicle 108 can provide data indicative of the state of the one or more objects (e.g., physical dimensions, velocity, acceleration, heading, location, and/or appearance of the one or more objects) within a predefined distance of the vehicle 108 to the operations computing system 104, which can store an indication, record, and/or other data indicative of the state of the one or more objects within a predefined distance of the vehicle 108].
Regarding claim 4, Urtasun teaches the method according to claim 3, wherein: candidate training datasets of the survey scenario selected to be included in the set of training datasets to be used to train the object classification model if in each case the distance from the respective collection situation lies within a distance range in which at least one training dataset has already been identified using the survey scenario [[0100]; [0146] additionally, the one or more machine-learned models can readily revised as new training data becomes available or new uses for the one or more machine-learned models are envisioned; [0163]].

Regarding claim 5, Urtasun teaches the method according to claim 1, wherein: each respective candidate training dataset is provided with a weighting as a training dataset, and the weighting is determined depending on a relative velocity of the surrounding object and/or depending on an age of the deriving of the respective collection features in the respective collection situation [[0239] parameters that decrease the loss can be weighted more heavily (e.g., adjusted to increase their contribution to the loss), and the one or more parameters that increase the loss can have their weighting reduced (e.g., adjusted to reduce their contribution to the loss); [0251] method 1000 can include determining a loss based at least in part on based at least in part on an accuracy of the localized state (e.g., the estimated position, location, orientation, velocity, or heading) of the source object with respect to the ground-truth state (e.g., the actual position, location, orientation, velocity, or heading) of the source object. The accuracy can be inversely correlated with the loss (e.g., a greater accuracy is associated with a lower loss)].
Regarding claim 6, Urtasun teaches the method according to claim 1 further comprising: training the object classification model using the set of training datasets [[0146] using a machine-learned model that is trained using training datasets. Further, the one or more machine-learned models in the disclosed technology can be trained using relevant training data (e.g., LiDAR data and maps), which can be done on a massive scale. Additionally, the one or more machine-learned models can readily revised as new training data becomes available or new uses for the one or more machine-learned models are envisioned].

Regarding claim 7, Urtasun teaches the method according to claim 1 further comprising: normalizing the respective collection features of the set of training datasets [[0084] loss function … normalized matching score … a target feature representation or a source feature representation; [0237] loss function can be used to maximize the accuracy of the localized state source object].

Regarding claim 8, Urtasun teaches a device for performing the method according to claim 1 [[0049] computing system … computing device].

Regarding claim 9, Urtasun teaches a computer program product comprising instructions that, when the program is executed by at least one data processing apparatus, prompt the latter to perform the steps of the method according to claim 1 [[0006][0049]].

Regarding claim 10, Urtasun teaches a machine-readable storage medium comprising instructions that, when executed by at least one data processing apparatus, prompt the at least one data processing apparatus to perform the method according to claim 1 [[0006] one or more tangible non-transitory computer-readable media storing computer-readable instructions that when executed by one or more processors cause the one or more processors to perform operations].
Regarding claim 11, Urtasun teaches the method according to claim 1, wherein the method is an at least partially computer-implemented method [[0008] computer-implemented method].

Regarding claim 12, Urtasun teaches the method according to claim 1 further comprising: training the object classification model using the set of training datasets taking into account the weighting [[0146][0239]].

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Urtasun (US 2020/0160151 A1) as applied to claim 1 above, and further in view of Seccamonte (US 2020/0133280 A1).
Regarding claim 2, Urtasun does not explicitly teach and yet Seccamonte teaches the method according to claim 1, wherein the representative candidate training dataset is selected to be included in the set of training datasets to be used to train the object classification model depending on a density of collection situations with respect to a distance between the surrounding object and the ultrasonic sensor system [[0014] one or more sensors include a monocular video camera, a stereo video camera, a visible light camera, an infrared camera, a thermal imager, a LiDAR, a radar, an ultrasonic sensor, a time-of-flight (TOF) depth sensor; [0112] using one or more sensors 121, e.g., as also illustrated in FIG. 1. The objects are classified (e.g., grouped into types such as pedestrian, bicycle, automobile, traffic sign, etc.) and data representing the classified objects 416 is provided to the planning module 404; [0121] a physical object 706 identified in the image 702 is also identified among the data points 704. In this way, the AV 100 perceives the boundaries of the physical object based on the contour and density of the data points 704; [0152] planning module 1328 performs spatiotemporal clustering based on time and distance to identify the plurality of spatiotemporal locations on the trajectory 1356, then extracts the plurality of spatiotemporal locations using identification of density peaks; [0164] machine learning model is constructed from training data that contains the inputs (for example, features extracted from the one or more objects 1340 and the threshold distance to the trajectory) and the desired outputs (for example, a particular safe or desired lateral clearance)]. 
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the invention to combine the object recognition as taught by Urtasun, with the distance clustering as taught by Seccamonte so that groups of objects may be identified based on density of detected points (Seccamonte) [[0121]].

Response to Arguments

Applicant’s arguments, see pgs. 7-9, filed 11/21/2025, with respect to the rejection(s) of claim(s) 1 under 35 U.S.C. 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Urtasun (US 2020/0160151 A1).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN D ARMSTRONG whose telephone number is (571)270-7339. The examiner can normally be reached M - F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Isam Alsomiri, can be reached at 571-272-6970. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN D ARMSTRONG/Examiner, Art Unit 3645
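For readers less familiar with the claimed method, the selection step in claim 1 (together with claim 3's condition) amounts to distance-binned sampling: a candidate training dataset is kept only if its collection distance falls in a range not yet covered by an earlier dataset. A minimal, hypothetical sketch of that idea follows; the class, method names, and bin width are illustrative choices, not taken from the application:

```python
from dataclasses import dataclass, field

BIN_WIDTH_M = 0.5  # hypothetical width of one distance range, in meters


@dataclass
class TrainingSetBuilder:
    """Keep at most one candidate training dataset per distance bin
    (an illustration of the selection step of claims 1 and 3)."""
    covered_bins: set = field(default_factory=set)
    datasets: list = field(default_factory=list)

    def offer(self, features, label_vector, distance_m):
        """Accept the candidate only if its distance bin is still empty."""
        bin_idx = int(distance_m // BIN_WIDTH_M)
        if bin_idx in self.covered_bins:
            return False  # this distance range already has a training dataset
        self.covered_bins.add(bin_idx)
        self.datasets.append((features, label_vector))
        return True


builder = TrainingSetBuilder()
print(builder.offer([0.1, 0.2], [1, 0], distance_m=1.2))  # True  (new bin)
print(builder.offer([0.3, 0.4], [1, 0], distance_m=1.3))  # False (same bin)
print(builder.offer([0.5, 0.6], [0, 1], distance_m=2.0))  # True  (new bin)
```

Claim 4's variant would relax the acceptance test (allowing further datasets in already-covered bins), and claim 5's weighting could be stored alongside each accepted tuple.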

Prosecution Timeline

Oct 10, 2023 — Application Filed
Sep 08, 2025 — Non-Final Rejection — §102, §103
Nov 21, 2025 — Response Filed
Feb 04, 2026 — Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566264
ENHANCED RESOLUTION SPLIT APERTURE USING BEAM SEGMENTATION
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12535001
DOWNHOLE ACOUSTIC SYSTEM FOR DETERMINING A RATE OF PENETRATION OF A DRILL STRING AND RELATED METHODS
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12510644
Ultrasonic Microscope and Carrier for carrying an acoustic Pulse Transducer
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12504525
OBJECT DETECTION DEVICE
Granted Dec 23, 2025 (2y 5m to grant)
Patent 12495789
ULTRASONIC GENERATOR AND METHOD FOR REPELLING MOSQUITO IN VEHICLE USING THE SAME
Granted Dec 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 54% (+1.5%)
Median Time to Grant: 3y 9m
PTA Risk: Moderate
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
