Prosecution Insights
Last updated: April 19, 2026
Application No. 18/086,192

INTELLIGENT SPEED CHECK

Non-Final OA: §101, §103
Filed: Dec 21, 2022
Examiner: MILLER, PRESTON JAY
Art Unit: 3661
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Cariad SE
OA Round: 5 (Non-Final)
Grant Probability: 56% (Moderate)
OA Rounds: 5-6
To Grant: 3y 1m
With Interview: 75%

Examiner Intelligence

Grants 56% of resolved cases.
Career Allow Rate: 56% (28 granted / 50 resolved; +4.0% vs TC avg)
Interview Lift: +18.8% (strong)
Avg Prosecution: 3y 1m (typical timeline)
Currently Pending: 39
Total Applications: 89 (across all art units)
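The figures above are simple ratios over the examiner's resolved cases. As a minimal sketch (variable names are mine; the counts are taken from the panel above), the headline numbers compose like this:

```python
# Back out the examiner-panel arithmetic from the counts shown above.
granted, resolved = 28, 50

career_allow_rate = granted / resolved          # 0.56 -> "56% Career Allow Rate"

# "+4.0% vs TC avg" implies a Tech Center average allow rate of 52%.
tc_avg_allow_rate = career_allow_rate - 0.040

# "Interview Lift" is the allowance-rate gap between resolved cases
# that had an examiner interview and those that did not.
def interview_lift(rate_with_interview: float, rate_without: float) -> float:
    return rate_with_interview - rate_without

print(f"{career_allow_rate:.0%} allow rate, TC avg {tc_avg_allow_rate:.0%}")
```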

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 50 resolved cases
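Each row above pairs an overcome rate with a delta against the Tech Center average. A quick sketch (variable names are mine) confirms that all four deltas back out the same ~40% baseline, i.e. a single "black line":

```python
# (overcome rate, delta vs Tech Center average) per statute, from the panel above.
stats = {
    "101": (0.177, -0.223),
    "103": (0.480, +0.080),
    "102": (0.153, -0.247),
    "112": (0.170, -0.230),
}

# The implied TC-average baseline is rate - delta for each statute.
implied_tc_avg = {s: round(rate - delta, 3) for s, (rate, delta) in stats.items()}

# All four rows imply the same 40.0% Tech Center average estimate.
```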

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

2. This Office action is in response to the Amendments and Remarks filed on 01/23/2026 for application number 18/086,192, filed on 12/21/2022, in which claims 1-21 were previously presented for examination.

3. Claims 2 and 6 have been canceled, and claims 1, 11, and 21 have been amended. Accordingly, claims 1-5, 7-15, and 17-21 are currently pending.

Examiner Notes

4. The Examiner has cited particular paragraphs, or columns and line numbers, in the references applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, the applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. The prompt development of a clear issue requires that the Applicant's replies meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure (see MPEP § 2163.06). Applicant is reminded that the Examiner is entitled to give the language of the claims its Broadest Reasonable Interpretation (BRI). Furthermore, the Examiner is not limited to a definition of Applicant's that is not specifically set forth in the claims. See MPEP 2141.02 [R-07.2015] VI.
PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS: A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP § 2123.

Response to Arguments

5. Applicant's arguments and amendments have been addressed in the new rejection outlined below.

6. Applicant's arguments with respect to the claims, filed 01/23/2026, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 101

7. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

8. Claims 1-5, 7-15, and 17-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

9. The determination of whether a claim recites patent-ineligible subject matter is a two-step inquiry. A claim is ineligible if, under STEP 1, the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter), see MPEP 2106.03, or, under STEP 2, the claim recites a judicial exception, e.g., an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis (see MPEP 2106.04). STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
See MPEP 2106.04(II)(A)(1). STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP 2106.04(II)(A)(2) and 2106.05(a) through (d) for explanations. STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP 2106.05.

101 Analysis – Step 1

10. Claims 1-5 and 7-10 are directed to a method (i.e., a process). Therefore, claims 1-5 and 7-10 fall within at least one of the four statutory categories.

11. Claims 11-15 and 17-20 are directed to a system (i.e., an apparatus). Therefore, claims 11-15 and 17-20 fall within at least one of the four statutory categories.

12. Claim 21 is directed to a non-transitory computer-readable medium (i.e., an article of manufacture). Therefore, claim 21 falls within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

13. Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. See MPEP 2106(A)(II)(1) and MPEP 2106.04(a)-(c).

14. Independent claims 1, 11, and 21 include limitations that recite an abstract idea (emphasized below [with the category of abstract idea in brackets]). The additional limitations beyond the abstract idea are annotated (the underlined portions are the "additional limitations" [with a description of the additional limitations in brackets]), which are analyzed in Step 2A, Prong II below. Claim 11 will be used as a representative claim for the remainder of the 101 rejection.
Claim 11 recites: A system, comprising:

one or more computing processors [applying the abstract idea using a generic computing module, "apply it," 2106.05(f)];

one or more non-transitory computer readable media storing a program of instructions that is executable by the one or more computing processors to perform [applying the abstract idea using a generic computing module, "apply it," 2106.05(f)]:

collecting two or more sets of velocity data originating from physical sensors deployed in two or more vehicles traversing a road segment, each set of velocity data in the two or more sets of velocity data corresponding to a respective vehicle in the two or more vehicles [pre-solution activity (data gathering), 2106.05(g), using generic sensors];

analyzing the two or more sets of velocity data to generate speed check analytical data for the road segment [mental process/step], wherein one or more trained machine-learning (ML) algorithms are applied to the two or more sets of velocity data [generic linking to a technical field, 2106.05(h)] to generate predictions of speed check zones with one or more levels of confidence as a part of the speed check analytical data [mental process/step], wherein the one or more trained ML algorithms are implemented with one or more artificial neural networks trained with training data in a training phase [generic linking to a technical field, 2106.05(h)];

in response to identifying, based at least in part on speed changes in the speed check analytical data, a speed check zone on the road segment [mental process/step], causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone [insignificant post-solution activity (outputting results of the mental process), 2106.05(g)].

15.
The Examiner submits that the foregoing bolded limitations constitute a "mental process" because, under their broadest reasonable interpretation, the claim covers steps that could be carried out in the human mind. For example, the "analyzing the two or more sets of velocity data to generate speed check analytical data for the road segment," "generate predictions of speed check zones with one or more levels of confidence as a part of the speed check analytical data," and "identifying, based at least in part on speed changes in the speed check analytical data, a speed check zone on the road segment" steps encompass a user, such as the driver of a vehicle, making an observation, evaluation, or judgment about the traffic flow around the vehicle, all of which could be carried out in one's mind. The same user, looking at the data collected, could form a simple judgment and conclude whether the changes in velocity and the slowing of vehicles on a road segment indicate a police car or a speed trap. Accordingly, the claim recites at least one abstract idea.

101 Analysis – Step 2A, Prong II

16. Regarding Prong II of the Step 2A analysis, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. See MPEP 2106.04(II)(A)(2) and MPEP 2106.04(d)(2). It must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."

17.
For the following reasons, the Examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application.

18. Regarding the additional limitations of "one or more computing processors" and "one or more non-transitory computer readable media storing a program of instructions that is executable by the one or more computing processors to perform," the Examiner submits that these limitations are recited at a high level of generality (i.e., as a generic processor performing the generic computer function of executing instructions) such that they amount to no more than mere instructions to apply the exception using a generic computer component.

19. Regarding the additional limitations of "wherein one or more trained machine-learning (ML) algorithms are applied to the two or more sets of velocity data" and "wherein the one or more trained ML algorithms are implemented with one or more artificial neural networks trained with training data in a training phase," the Examiner submits that these limitations are an attempt to generally link additional elements to a technological environment.

20. Regarding the additional limitations of "collecting two or more sets of velocity data originating from physical sensors deployed in two or more vehicles traversing a road segment, each set of velocity data in the two or more sets of velocity data corresponding to a respective vehicle in the two or more vehicles" and "causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone," the Examiner submits that these limitations are insignificant extra-solution activities that merely use a computer to perform the process.
In particular, the "collecting two or more sets of velocity data originating from physical sensors deployed in two or more vehicles traversing a road segment, each set of velocity data in the two or more sets of velocity data corresponding to a respective vehicle in the two or more vehicles" step is recited at a high level of generality (i.e., as a general means of gathering data for use in the evaluating step) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. The "causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone" step is also recited at a high level of generality (i.e., as a general means of providing the evaluation result from the evaluating step) and amounts to mere post-solution activity, which is a form of insignificant extra-solution activity. In this regard, relevant paragraph [0112] of Applicant's specification states: "In an embodiment, the system is further configured to cause a vehicle - e.g., through user perceivable warnings on a map or navigation application, etc. - to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone." Accordingly, causing a vehicle other than the two or more vehicles to adjust its speed is no more than providing the evaluation result, and amounts to mere post-solution activity.

21. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually.
For instance, there is no indication that the additional elements, when considered as a whole: reflect an improvement in the functioning of a computer or an improvement to another technology or technical field; apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim; effect a transformation or reduction of a particular article to a different state or thing; or apply or use the judicial exception in some other meaningful way beyond generally linking its use to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See MPEP § 2106.05. Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

22. Regarding Step 2B of the Revised Guidance, representative independent claim 11 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "one or more computing processors" and "one or more non-transitory computer readable media storing a program of instructions that is executable by the one or more computing processors to perform" amount to nothing more than mere instructions to apply the exception using a generic computer component.
Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Regarding the additional limitations of "wherein one or more trained machine-learning (ML) algorithms are applied to the two or more sets of velocity data" and "wherein the one or more trained ML algorithms are implemented with one or more artificial neural networks trained with training data in a training phase," the Examiner submits that these limitations are an attempt to generally link additional elements to a technological environment. As discussed above, in regard to the additional limitations of "collecting two or more sets of velocity data originating from physical sensors deployed in two or more vehicles traversing a road segment, each set of velocity data in the two or more sets of velocity data corresponding to a respective vehicle in the two or more vehicles" and "causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone," the Examiner submits that these limitations are insignificant extra-solution activities.

23. As established above, claim 11 is representative of all independent claims, and therefore claims 1 and 21 are rejected for the same reasons.

24. Dependent claims 2-5, 7-10, 12-15, and 17-20 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-5, 7-10, 12-15, and 17-20 are not patent eligible under the same rationale as provided in the rejection of claim 11.

25. Therefore, claims 1-5, 7-15, and 17-21 are ineligible under 35 USC § 101.

Claim Rejections - 35 USC § 103

26. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

27. Claims 1-2, 5, 7-9, 11-12, 15, 17-19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Matus et al. (US-20190122543-A1) in view of Ferguson et al. (US-20140142799-A1).

In regard to claim 1, Matus discloses a method comprising (See at least Fig. 1 and [0009]: a method 100 for improving vehicular traffic-related communications between devices): collecting two or more sets of velocity data originating from physical sensors deployed in two or more vehicles traversing a road segment, each set of velocity data in the two or more sets of velocity data corresponding to a respective vehicle in the two or more vehicles (See at least Fig. 1 and [0027]: Block S110 includes collecting a movement dataset, corresponding to at least one of a location sensor and a motion sensor [i.e., physical sensors], and/or a supplementary dataset from a first device (e.g., smartphone) associated with a first user. Block S110 functions to collect traffic-related data for use in characterizing one or more traffic-related events.
Movement datasets preferably describe at least one or more of position, velocity [i.e., collecting two or more sets of velocity data, especially when the traffic-related data is collected from two or more vehicles], and/or acceleration (PVA) of one or more vehicles [i.e., two or more vehicles traversing a road segment, especially when the traffic-related data is collected from two or more vehicles], other user devices (e.g., smartphones, laptops, tablets, smart watches, smart glasses, virtual reality devices, augmented reality devices, aerial devices such as drones, medical devices, etc.), and users (e.g., a vehicle driver, vehicle passenger, pedestrian, etc.)); analyzing the two or more sets of velocity data to generate speed check analytical data for the road segment, wherein one or more trained machine-learning (ML) algorithms are applied to the two or more sets of velocity data to generate predictions of speed check zones (See at least Figs. 1-2 and [0019 & 0051-0057]: conventional systems enable drivers or other observers to report traffic events (e.g., accidents, speed traps, slowdowns, construction blockages, road debris, etc.) that have already occurred or are otherwise apparent. Block S130 includes determining a traffic-related event from processing the movement dataset and/or supplementary dataset with a traffic event model [i.e., analyzing the two or more sets of velocity data to generate speed check analytical data]. Block S130 functions to detect, predict, and/or otherwise determine traffic-related events for facilitating initiation of appropriate traffic-related responses in real-time.
Traffic-related events include accident-related events, such as: occurrence of a vehicular accident, prediction of a future vehicular accident, indications of historic vehicular accidents (e.g., associated with a proximal driver, associated with a proximal vehicular path component, etc.), risk of vehicular accidents (e.g., risk score, etc.), parameters associated with accident-related events (e.g., timing, users involved, vehicles involved, type of vehicular accident, severity, etc.), and/or any other suitable characterizations of vehicular accidents. Vehicular accidents related to traffic-related events can include: collisions (e.g., single-vehicle collisions, multi-vehicle collisions, pedestrian collisions, etc.), vehicle failures (e.g., mechanical breakdowns, electrical failures, software failures, issues with battery, wheel change, fuel, puncture, charging, clutch, ignition, cooling, heating, ventilation, brakes, engine, etc.), and/or any other suitable type of vehicular accident. Traffic-related events also include vehicle events that are accident-adjacent or unrelated to a collision or accident, such as: hard braking events (e.g., deceleration/acceleration above or below a threshold value designating hard braking), swerving (e.g., horizontal acceleration above a threshold value, lateral motion above a threshold value, etc.), moving violations (e.g., entering an intersection against the prevailing traffic signal, running a stop sign, speeding, etc.), and/or any other suitable type of traffic-related event. Traffic-related events additionally or alternatively include any other suitable events related to vehicular operations. Block S130 includes determining any number of traffic-related events using any number of traffic event models, based on any suitable number and types of datasets collected from any suitable number and types of data collection components.
Traffic event models are preferably machine-trained classification models [i.e., one or more trained machine-learning (ML) algorithms] that classify a plurality of signal inputs [i.e., the two or more sets of velocity data] as corresponding to a traffic-related event of a particular type or class, with associated properties. Examiner notes, as mentioned above, that speed traps are traffic events. As such, Block S130, which includes determining a traffic-related event, encompasses determining traffic events such as detecting a speed trap. Furthermore, for a machine-trained classification model to operate properly, it must necessarily be trained with training data in a training phase); in response to identifying, based at least in part on speed changes in the speed check analytical data, a speed check zone on the road segment, causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone (See at least Fig. 1 and [0053 & 0064]: Block S130 preferably includes determining traffic-related events based on movement data. Block S140 includes transmitting a traffic-related communication from the remote computing system to a second device associated with a second user [i.e., a vehicle other than the two or more vehicles] in response to determining the traffic-related event. S140 functions to inform users of traffic-related information and/or initiate traffic-related actions. The messages are displayed on the display of the personal navigation device. Examiner notes, in regard to the limitation of "causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone," that relevant paragraph [0112] of Applicant's specification states: "In an embodiment, the system is further configured to cause a vehicle - e.g., through user perceivable warnings on a map or navigation application, etc.
- to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone." Accordingly, the mentioned limitation was interpreted as sending a message or a warning regarding an event, such as the existence of a speed trap along the vehicle's route. Initiating traffic-related actions encompasses adjusting speed to be at or under a maximum speed).

Matus is silent on "with one or more levels of confidence." However, Ferguson teaches that the confidence level could represent a static (unchanging) probability of an occurrence of the given predicted behavior. Alternatively, the confidence level [i.e., with one or more levels of confidence] is adjusted based on sensor data used in a machine learning algorithm. Thus, the confidence level could be calculated dynamically based on real-world operating conditions and operating experiences (See at least [0098]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify the invention of Matus by incorporating the teachings of Ferguson, with a reasonable expectation of success, as both inventions are directed to the same field of endeavor — vehicle systems — such that the machine learning algorithm generates and adjusts a confidence level based on the velocity data from the sensors. The motivation to modify is that, as acknowledged by Ferguson, the computer system controls the vehicle in the autonomous mode based on the predicted behavior, the confidence level, the current state of the vehicle, and the current state of the environment of the vehicle (See at least [0004]), which one of ordinary skill would have recognized allows the vehicle to be controlled with a higher level of certainty.
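For orientation only, the claim-1 pipeline that this mapping addresses can be sketched as toy code. All names are mine, and the threshold rule below is a hypothetical stand-in for the claimed trained ML algorithm, not anything disclosed in the application or the cited references:

```python
# Toy stand-in for the claimed pipeline: collect velocity traces from two
# or more vehicles on a road segment, flag a likely speed check zone with
# a confidence level, and cap a following vehicle's speed inside the zone.

def predict_speed_check(velocity_sets, speed_limit):
    """Stand-in for the trained ML algorithm: report a speed check zone
    (with a confidence level) when most traces dip well below the limit."""
    dips = [min(trace) < 0.8 * speed_limit for trace in velocity_sets]
    confidence = sum(dips) / len(dips)
    return confidence >= 0.5, confidence

def advise_other_vehicle(current_speed, speed_limit, zone_detected):
    # Post-solution step: keep the other vehicle at or under the maximum speed.
    return min(current_speed, speed_limit) if zone_detected else current_speed

# Two vehicles both slow sharply on the same segment (km/h traces).
velocity_sets = [[110, 95, 78], [108, 80, 75]]
detected, confidence = predict_speed_check(velocity_sets, speed_limit=100)
advised = advise_other_vehicle(current_speed=112, speed_limit=100, zone_detected=detected)
```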
In regard to claim 2, Matus, as modified by Ferguson, teaches the method of Claim 1, wherein the two or more sets of velocity data are derived from sensor data with physical sensors deployed in the two or more vehicles within a specific time window (See at least Fig. 1 and [0027 & 0033]: Block S110 includes collecting a movement dataset, corresponding to at least one of a location sensor and a motion sensor [i.e., physical sensors]. In Block S110, data processing conditions include temporal conditions. Datasets collected from a plurality of devices within a time period (e.g., a 30-second window) [i.e., within a specific time window] are processed together).

In regard to claim 5, Matus, as modified by Ferguson, teaches the method of Claim 1, further comprising: collecting two or more sets of image related data generated in the two or more vehicles traversing the road segment, wherein the two or more sets of image related data are analyzed to generate second speed check analytical data for the road segment; wherein the speed check zone is identified based further on the second speed check analytical data (See at least Fig. 1 and [0028-0031 & 0055]: related to Block S110, supplementary datasets include any one or more of: user data, audio data, optical data collected such as imagery and video, including internal vehicle-facing optical data of users and external vehicle-facing optical data of the route, vehicle data including vehicle operation data and vehicle camera data, traffic data, biometric data, environmental data, and/or any other suitable data for facilitating determination of traffic-related events, responding to traffic-related events, and/or for performing other portions of the method 100. Supplementary datasets are used to corroborate the determination of a traffic-related event. Data collection components include microphones and/or cameras at intersections.
Block S130 includes extracting one or more movement features and/or supplementary features from movement data and/or supplementary data. Processing traffic-related data in relation to Block S130 and/or other suitable portions of the method 100 includes any one or more of: performing pattern recognition on data, fusing data from multiple sources, combination of values, compression, conversion, performing statistical estimation on data, wave modulation, normalization, updating, ranking, weighting, validating, filtering, noise reduction, smoothing, filling, aligning, model fitting, binning, windowing, clipping, transformations, mathematical operations, data association, multiplexing, demultiplexing, interpolating, extrapolating, clustering, image processing techniques (e.g., for optical data, image filtering, image transformations, histograms, structural analysis, shape analysis, object tracking, motion analysis, feature detection, object detection, stitching, thresholding, image adjustments, etc.), other signal processing operations, other image processing operations, visualizing, and/or any other suitable processing operations. Examiner notes that, as mentioned above, the vehicles use cameras to capture images or optical data, which amounts to collecting two or more sets of image related data generated in the two or more vehicles traversing the road segment, especially when two or more vehicles capture the data. Furthermore, the images are analyzed using image processing techniques, which generates the second speed check analytical data, especially when the images are used to identify a traffic-related event such as the existence of a speed trap in the vicinity of the vehicle. That is, identifying a speed check zone based on the second speed check analytical data).
In regard to claim 7, Matus, as modified by Ferguson, teaches the method of Claim 1, further comprising: collecting two or more sets of braking behavior data generated in the two or more vehicles traversing the road segment (See at least Figs. 1-2 and [0051]: Block S130 includes determining a traffic-related event from processing the movement dataset and/or supplementary dataset with a traffic event model. Block S130 functions to detect, predict, and/or otherwise determine traffic-related events for facilitating initiation of appropriate traffic-related responses in real-time. Traffic-related events include accident-related events, such as: occurrence of a vehicular accident, prediction of a future vehicular accident, indications of historic vehicular accidents, risk of vehicular accidents, parameters associated with accident-related events, and/or any other suitable characterizations of vehicular accidents. Traffic-related events also include vehicle events that are accident-adjacent or unrelated to a collision or accident, such as: hard braking events (e.g., deceleration/acceleration above or below a threshold value designating hard braking) [i.e., two or more sets of braking behavior data, especially when the data is collected from two or more vehicles], swerving, moving violations, and/or any other suitable type of traffic-related event. Traffic-related events additionally or alternatively include any other suitable events related to vehicular operations. Examiner notes that braking a vehicle changes the speed and acceleration of the vehicle, which are included in the movement dataset. As described for claim 1, the movement dataset is used for determining a traffic-related event, such as a speed trap. That is, collecting two or more sets of braking behavior data).
In regard to claim 8, Matus, as modified by Ferguson, teaches the method of Claim 7, wherein the two or more sets of braking behavior data are analyzed to generate second speed check analytical data for the road segment; wherein the speed check zone is identified based further on the second speed check analytical data (See at least Figs. 1-2 and [0027 & 0051]: Block S110 functions to collect traffic-related data for use in characterizing one or more traffic-related events. Movement datasets preferably describe at least one or more of position, velocity, and/or acceleration (PVA) of one or more vehicles. Block S130 includes determining a traffic-related event from processing the movement dataset and/or supplementary dataset with a traffic event model. Block S130 functions to detect, predict, and/or otherwise determine traffic-related events for facilitating initiation of appropriate traffic-related responses in real-time. Traffic-related events include accident-related events, such as: occurrence of a vehicular accident, prediction of a future vehicular accident, indications of historic vehicular accidents, risk of vehicular accidents, parameters associated with accident-related events, and/or any other suitable characterizations of vehicular accidents. Traffic-related events also include vehicle events that are accident-adjacent or unrelated to a collision or accident, such as: hard braking events (e.g., deceleration/acceleration above or below a threshold value designating hard braking), swerving, moving violations, and/or any other suitable type of traffic-related event. Traffic-related events additionally or alternatively include any other suitable events related to vehicular operations. Examiner notes that braking a vehicle changes the speed and acceleration of the vehicle, which are included in the movement dataset. As described for claim 1, the movement dataset is used for determining a traffic-related event, such as a speed trap.
That is, identifying a speed check zone based on the second speed analytical data, where the second analytical data is the movement dataset that includes the braking of the vehicle). In regard to claim 9 , Matus, as modified by Ferguson, teaches the method of Claim 1, further comprising: providing warning data that identifies the speed check zone to at least one vehicle that is to traverse the road segment (See at least Fig. 1, and [0053 & 0064]: Block S130 preferably includes determining traffic-related events [i.e., speed check zone] based on movement data. Block S140 includes transmitting a traffic-related communication from the remote computing system to a second device associated with a second user in response to determining the traffic-related event. S140 functions to inform users of traffic-related information and/or initiate traffic-related actions. The Messages is displayed on the display of the personal navigation device [i.e., providing warning data]). In regard to claim 11 , Matus discloses a system, comprising (See at least Fig. 2, and [0011]: system 200 [i.e., a system] functions to coordinate communications between devices): one or more computing processors (See at least [0085]: the instructions are executed by computer-executable components [i.e., one or more computing processors] integrated with the system); one or more non-transitory computer readable media storing a program of instructions that is executable by the one or more computing processors to perform (See at least [0085]: the system and method and variations thereof are implemented at least in part as a machine configured to receive a computer-readable medium [i.e., one or more non-transitory computer readable media] storing computer-readable instructions [i.e., storing a program of instructions]. 
The instructions are executed by computer-executable components [i.e., one or more computing processors] integrated with the system): collecting two or more sets of velocity data originating from physical sensors deployed in two or more vehicles traversing a road segment, each set of velocity data in the two or more sets of velocity data corresponding to a respective vehicle in the two or more vehicles (See at least Fig. 1 and [0027]: Block S110 includes collecting a movement dataset, corresponding to at least one of a location sensor and a motion sensor [i.e., physical sensors], and/or a supplementary dataset from a first device (e.g., smartphone) associated with a first user. Block S110 functions to collect traffic-related data for use in characterizing one or more traffic-related events. Movement datasets preferably describe at least one or more of position, velocity [i.e., collecting two or more sets of velocity data, especially when the traffic-related data is collected from two or more vehicles], and/or acceleration (PVA) of one or more vehicles [i.e., two or more vehicles traversing a road segment, especially when the traffic-related data is collected from two or more vehicles], other user devices (e.g., smartphones, laptops, tablets, smart watches, smart glasses, virtual reality devices, augmented reality devices, aerial devices such as drones, medical devices, etc.), users (e.g., a vehicle driver, vehicle passenger, pedestrian, etc.)); analyzing the two or more sets of velocity data to generate speed check analytical data for the road segment, wherein one or more trained machine-learning (ML) algorithms are applied to the two or more sets of velocity data to generate predictions of speed check zones in a training phase (See at least Figs. 1-2, and [0019 & 0051-0057]: conventional systems enable drivers or other observers to report traffic events (e.g., accidents, speed traps, slowdowns, construction blockages, road debris, etc.) that have already occurred or are otherwise apparent. Block S130 includes determining a traffic-related event from processing the movement dataset and/or supplementary dataset with a traffic event model [i.e., analyzing the two or more sets of velocity data to generate speed check analytical data]. Block S130 functions to detect, predict, and/or otherwise determine traffic-related events for facilitating initiation of appropriate traffic-related responses in real-time. Traffic-related events include accident-related events, such as: occurrence of a vehicular accident, prediction of a future vehicular accident, indications of historic vehicular accidents (e.g., associated with a proximal driver, associated with a proximal vehicular path component, etc.), risk of vehicular accidents (e.g., risk score, etc.), parameters associated with accident-related events (e.g., timing, users involved, vehicles involved, type of vehicular accident, severity, etc.), and/or any other suitable characterizations of vehicular accidents. Vehicular accidents related to traffic-related events can include: collisions (e.g., single-vehicle collisions, multi-vehicle collisions, pedestrian collisions, etc.), vehicle failures (e.g., mechanical breakdowns, electrical failures, software failures, issues with battery, wheel change, fuel, puncture, charging, clutch, ignition, cooling, heating, ventilation, brakes, engine, etc.), and/or any other suitable type of vehicular accident. Traffic-related events also include vehicle events that are accident-adjacent or unrelated to a collision or accident, such as: hard braking events (e.g., deceleration/acceleration above or below a threshold value designating hard braking), swerving (e.g., horizontal acceleration above a threshold value, lateral motion above a threshold value, etc.), moving violations (e.g., entering an intersection against the prevailing traffic signal, running a stop sign, speeding, etc.), and/or any other suitable type of traffic-related event. Traffic-related events additionally or alternatively include any other suitable events related to vehicular operations. Block S130 includes determining any number of traffic-related events using any number of traffic event models, based on any suitable number and types of datasets collected from any suitable number and types of data collection components. Traffic event models are preferably machine-trained classification models [i.e., one or more trained machine-learning (ML) algorithms] that classify a plurality of signal inputs [i.e., the two or more sets of velocity data] as corresponding to a traffic-related event of a particular type or class, with associated properties. Examiner notes that, as mentioned above, speed traps are traffic events. As such, Block S130, which includes determining a traffic-related event, encompasses determining traffic events, such as detecting a speed trap. Furthermore, for a machine-trained classification model to operate properly, it must necessarily be trained with training data in a training phase); in response to identifying, based at least in part on speed changes in the speed check analytical data, a speed check zone on the road segment, causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone (See at least Fig. 1, and [0053 & 0064]: Block S130 preferably includes determining traffic-related events based on movement data. Block S140 includes transmitting a traffic-related communication from the remote computing system to a second device associated with a second user [i.e., a vehicle other than the two or more vehicles] in response to determining the traffic-related event. S140 functions to inform users of traffic-related information and/or initiate traffic-related actions. The message is displayed on the display of the personal navigation device. Examiner notes that, in regard to the limitation of “causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone,” relevant paragraph [0112] of Applicant’s specification states: “in an embodiment, the system is further configured to cause a vehicle - e.g., through user perceivable warnings on a map or navigation application, etc. - to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone.” Accordingly, the mentioned limitation was interpreted as sending a message or a warning regarding an event, such as the existence of a speed trap along the vehicle’s route. Initiating traffic-related actions encompasses adjusting speed to be at or under a maximum speed).

Matus is silent on “with one or more levels of confidence.” However, Ferguson teaches that the confidence level could represent a static (unchanging) probability of an occurrence of the given predicted behavior. Alternatively, the confidence level [i.e., with one or more levels of confidence] is adjusted by using sensor data in a machine learning algorithm. Thus, the confidence level could be calculated dynamically based on real-world operating conditions and operating experiences (See at least [0098]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify the invention of Matus, by incorporating the teachings of Ferguson, with a reasonable expectation of success, as both inventions are directed to the same field of endeavor – vehicle systems, such that the machine learning algorithm generates and adjusts a confidence level based on the velocity data from the sensors. The motivation to do so is the same as acknowledged by Ferguson in regard to claim 1.

In regard to claim 12, Matus, as modified by Ferguson, teaches the system of Claim 11. Claim 12 recites a system having substantially the same features of claim 2 above; therefore, claim 12 is rejected for the same reasons as claim 2.

In regard to claim 15, Matus, as modified by Ferguson, teaches the system of Claim 11. Claim 15 recites a system having substantially the same features of claim 5 above; therefore, claim 15 is rejected for the same reasons as claim 5.

In regard to claim 17, Matus, as modified by Ferguson, teaches the system of Claim 11. Claim 17 recites a system having substantially the same features of claim 7 above; therefore, claim 17 is rejected for the same reasons as claim 7.

In regard to claim 18, Matus, as modified by Ferguson, teaches the system of Claim 17. Claim 18 recites a system having substantially the same features of claim 8 above; therefore, claim 18 is rejected for the same reasons as claim 8.

In regard to claim 19, Matus, as modified by Ferguson, teaches the system of Claim 11. Claim 19 recites a system having substantially the same features of claim 9 above; therefore, claim 19 is rejected for the same reasons as claim 9.
In regard to claim 21, Matus discloses one or more non-transitory computer readable media storing a program of instructions that is executable by one or more computing processors to perform (See at least [0085]: the system and method and variations thereof are implemented at least in part as a machine configured to receive a computer-readable medium [i.e., one or more non-transitory computer readable media] storing computer-readable instructions [i.e., storing a program of instructions]. The instructions are executed by computer-executable components [i.e., one or more computing processors] integrated with the system): collecting two or more sets of velocity data originating from physical sensors deployed in two or more vehicles traversing a road segment, each set of velocity data in the two or more sets of velocity data corresponding to a respective vehicle in the two or more vehicles (See at least Fig. 1 and [0027]: Block S110 includes collecting a movement dataset, corresponding to at least one of a location sensor and a motion sensor [i.e., physical sensors], and/or a supplementary dataset from a first device (e.g., smartphone) associated with a first user. Block S110 functions to collect traffic-related data for use in characterizing one or more traffic-related events. Movement datasets preferably describe at least one or more of position, velocity [i.e., collecting two or more sets of velocity data, especially when the traffic-related data is collected from two or more vehicles], and/or acceleration (PVA) of one or more vehicles [i.e., two or more vehicles traversing a road segment, especially when the traffic-related data is collected from two or more vehicles], other user devices (e.g., smartphones, laptops, tablets, smart watches, smart glasses, virtual reality devices, augmented reality devices, aerial devices such as drones, medical devices, etc.), users (e.g., a vehicle driver, vehicle passenger, pedestrian, etc.)); analyzing the two or more sets of velocity data to generate speed check analytical data for the road segment, wherein one or more trained machine-learning (ML) algorithms are applied to the two or more sets of velocity data to generate predictions of speed check zones (See at least Figs. 1-2, and [0019 & 0051-0057]: conventional systems enable drivers or other observers to report traffic events (e.g., accidents, speed traps, slowdowns, construction blockages, road debris, etc.) that have already occurred or are otherwise apparent. Block S130 includes determining a traffic-related event from processing the movement dataset and/or supplementary dataset with a traffic event model [i.e., analyzing the two or more sets of velocity data to generate speed check analytical data]. Block S130 functions to detect, predict, and/or otherwise determine traffic-related events for facilitating initiation of appropriate traffic-related responses in real-time. Traffic-related events include accident-related events, such as: occurrence of a vehicular accident, prediction of a future vehicular accident, indications of historic vehicular accidents (e.g., associated with a proximal driver, associated with a proximal vehicular path component, etc.), risk of vehicular accidents (e.g., risk score, etc.), parameters associated with accident-related events (e.g., timing, users involved, vehicles involved, type of vehicular accident, severity, etc.), and/or any other suitable characterizations of vehicular accidents. Vehicular accidents related to traffic-related events can include: collisions (e.g., single-vehicle collisions, multi-vehicle collisions, pedestrian collisions, etc.), vehicle failures (e.g., mechanical breakdowns, electrical failures, software failures, issues with battery, wheel change, fuel, puncture, charging, clutch, ignition, cooling, heating, ventilation, brakes, engine, etc.), and/or any other suitable type of vehicular accident. Traffic-related events also include vehicle events that are accident-adjacent or unrelated to a collision or accident, such as: hard braking events (e.g., deceleration/acceleration above or below a threshold value designating hard braking), swerving (e.g., horizontal acceleration above a threshold value, lateral motion above a threshold value, etc.), moving violations (e.g., entering an intersection against the prevailing traffic signal, running a stop sign, speeding, etc.), and/or any other suitable type of traffic-related event. Traffic-related events additionally or alternatively include any other suitable events related to vehicular operations. Block S130 includes determining any number of traffic-related events using any number of traffic event models, based on any suitable number and types of datasets collected from any suitable number and types of data collection components. Traffic event models are preferably machine-trained classification models [i.e., one or more trained machine-learning (ML) algorithms] that classify a plurality of signal inputs [i.e., the two or more sets of velocity data] as corresponding to a traffic-related event of a particular type or class, with associated properties. Examiner notes that, as mentioned above, speed traps are traffic events. As such, Block S130, which includes determining a traffic-related event, encompasses determining traffic events, such as detecting a speed trap. Furthermore, for a machine-trained classification model to operate properly, it must necessarily be trained with training data in a training phase); in response to identifying, based at least in part on speed changes in the speed check analytical data, a speed check zone on the road segment, causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone (See at least Fig. 1, and [0053 & 0064]: Block S130 preferably includes determining traffic-related events based on movement data. Block S140 includes transmitting a traffic-related communication from the remote computing system to a second device associated with a second user [i.e., a vehicle other than the two or more vehicles] in response to determining the traffic-related event. S140 functions to inform users of traffic-related information and/or initiate traffic-related actions. The message is displayed on the display of the personal navigation device. Examiner notes that, in regard to the limitation of “causing a vehicle other than the two or more vehicles to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone,” relevant paragraph [0112] of Applicant’s specification states: “in an embodiment, the system is further configured to cause a vehicle - e.g., through user perceivable warnings on a map or navigation application, etc. - to adjust its speed to be at or under a maximum speed for that road segment when entering the speed check zone.” Accordingly, the mentioned limitation was interpreted as sending a message or a warning regarding an event, such as the existence of a speed trap along the vehicle’s route. Initiating traffic-related actions encompasses adjusting speed to be at or under a maximum speed).

Matus is silent on “with one or more levels of confidence.” However, Ferguson teaches that the confidence level could represent a static (unchanging) probability of an occurrence of the given predicted behavior. Alternatively, the confidence level [i.e., with one or more levels of confidence] is adjusted by using sensor data in a machine learning algorithm. Thus, the confidence level could be calculated dynamically based on real-world operating conditions and operating experiences (See at least [0098]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify the invention of Matus, by incorporating the teachings of Ferguson, with a reasonable expectation of success, as both inventions are directed to the same field of endeavor – vehicle systems, such that the machine learning algorithm generates and adjusts a confidence level based on the velocity data from the sensors. The motivation to do so is the same as acknowledged by Ferguson in regard to claim 1.

28. Claim(s) 3-4, and 13-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Matus et al. (US-20190122543-A1) in view of Ferguson et al. (US-20140142799-A1) and further in view of Rock et al. (US-20030052797-A1).

In regard to claim 3, Matus, as modified by Ferguson, teaches the method of Claim 1; accordingly, the rejection of claim 1 is incorporated. While Matus discloses that conventional systems enable drivers or other observers to report traffic events (e.g., accidents, speed traps, slowdowns, construction blockages, road debris, etc.)
that have already occurred or are otherwise apparent (See at least [0019]), Matus, as modified by Ferguson, does not explicitly teach further comprising: collecting two or more sets of manual input data derived from user interaction data generated in the two or more vehicles traversing the road segment. However, Rock teaches that it is also possible to add a speed trap manually to the database contained in the speed trap detection and warning system. The user is required to press a button [i.e., collecting two or more sets of manual input data, especially when more than two drivers/users input the data manually] on the speed trap detection and warning system, which is in the vehicle 9, as the vehicle 9 is driven past a speed trap site which is not already stored in the first database store 49 (See at least Fig. 1, and [0272]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify the invention of Matus, as already modified by Ferguson, by incorporating the teachings of Rock, with a reasonable expectation of success, as all inventions are directed to the same field of endeavor – vehicle warning systems, such that the system indicates the speed trap based on the user’s manual input. The motivation to modify is, as acknowledged by Rock, to provide the warning well in advance of the speed trap, enabling the driver to reduce speed progressively and thereby increasing safety for the driver and other road users (See at least [0055]), which one of ordinary skill would have recognized allows one to prevent possible accidents due to the high speed of the vehicles.

In regard to claim 4, Matus, as modified by Ferguson and Rock, teaches the method of Claim 3, wherein the two or more sets of manual input data are analyzed to generate second speed check analytical data for the road segment; wherein the speed check zone is identified based further on the second speed check analytical data. Further, Rock teaches that it is also possible to add a speed trap manually to the database contained in the speed trap detection and warning system. The user is required to press a button [i.e., two or more sets of manual input data, especially when more than two drivers/users input the data manually] on the speed trap detection and warning system, which is in the vehicle 9, as the vehicle 9 is driven past a speed trap site which is not already stored in the first database store 49 (See at least Fig. 1, and [0272]). Examiner notes that storing speed trap data in a database is generating second speed check analytical data based on the manual input, and identifying a speed check zone based on the second speed check analytical data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify the invention of Matus, as modified by Ferguson and Rock, by further incorporating the teachings of Rock, with a reasonable expectation of success, as all inventions are directed to the same field of endeavor – vehicle warning systems, such that a speed trap is identified based on manual input. The motivation to do so is the same as acknowledged by Rock in regard to claim 3.

In regard to claim 13, Matus, as modified by Ferguson, teaches the system of Claim 11. Claim 13 recites a system having substantially the same features of claim 3 above; therefore, claim 13 is rejected for the same reasons as claim 3.

In regard to claim 14, Matus, as modified by Ferguson and Rock, teaches the system of Claim 13. Claim 14 recites a system having substantially the same features of claim 4 above; therefore, claim 14 is rejected for the same reasons as claim 4.

29. Claim(s) 10 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Matus et al. (US-20190122543-A1) in view of Ferguson et al. (US-20140142799-A1) and further in view of Rupp et al. (US-20110301802-A1).
In regard to claim 10, Matus, as modified by Ferguson, teaches the method of Claim 1; accordingly, the rejection of claim 1 is incorporated. Matus, as modified by Ferguson, is silent on wherein the speed check analytical data relates to one or more of: one or more speed distributions at one or more locations of the road segment, one or more speed histories of the one or more vehicles traversing the road segment, or a vehicle diversion and stopping outside driving lanes of the road segment. However, Rupp teaches that at least one historical speed profile is generated for each road segment A-F, the speed profile based upon data collected from probe vehicles that have previously driven over the segment [i.e., one or more speed histories of the one or more vehicles traversing the road segment] (See at least [0018]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to modify the invention of Matus, as modified by Ferguson, by incorporating the teachings of Rupp, with a reasonable expectation of success, as all inventions are directed to the same field of endeavor – vehicle systems, such that the speed history of vehicles is collected from vehicles that have previously driven over a segment. The motivation to modify is, as acknowledged by Rupp, to establish a desired speed based on historical speed data for a particular segment of road (See at least [0002]), which one of ordinary skill would have recognized allows one to determine realistic average roadway speeds for different times of day and different days of the week.

In regard to claim 20, Matus, as modified by Ferguson, teaches the system of Claim 11. Claim 20 recites a system having substantially the same features of claim 10 above; therefore, claim 20 is rejected for the same reasons as claim 10.

Conclusion

30. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Levine et al. (US-20140249735-A1) teaches traffic mapping and reporting, wherein members of the service transmit their location and other data to a central server. Fowe et al. (US-20170032667-A1) teaches a method, apparatus, and computer program for providing state classification for a travel segment with a multi-modal speed profile.

31. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Preston J Miller, whose telephone number is (703) 756-1582. The examiner can normally be reached Monday through Friday, 7:30 AM - 4:30 PM EST.

32. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

33. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramya P Burgess, can be reached at (571) 272-6011. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

34. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/P.J.M./ Examiner, Art Unit 3661
/MATTHIAS S WEISFELD/ Examiner, Art Unit 3661

Prosecution Timeline

Dec 21, 2022
Application Filed
Nov 03, 2024
Non-Final Rejection — §101, §103
Jan 10, 2025
Response Filed
Feb 27, 2025
Final Rejection — §101, §103
Apr 29, 2025
Request for Continued Examination
Apr 30, 2025
Response after Non-Final Action
Jun 29, 2025
Non-Final Rejection — §101, §103
Sep 30, 2025
Response Filed
Oct 15, 2025
Final Rejection — §101, §103
Jan 23, 2026
Request for Continued Examination
Feb 19, 2026
Response after Non-Final Action
Mar 17, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12559091
CONTROL DEVICE FOR CONTROLLING SAFETY DEVICE IN VEHICLE
2y 5m to grant Granted Feb 24, 2026
Patent 12490678
VEHICLE LOCATION WITH DYNAMIC MODEL AND UNLOADING CONTROL SYSTEM
2y 5m to grant Granted Dec 09, 2025
Patent 12466388
Method for Operating a Motor Vehicle Drive Train and Electronic Control Unit for Carrying Out Said Method
2y 5m to grant Granted Nov 11, 2025
Patent 12454806
WORK MACHINE
2y 5m to grant Granted Oct 28, 2025
Patent 12447827
Electric Vehicle Control Device, Electric Vehicle Control Method, And Electric Vehicle Control System
2y 5m to grant Granted Oct 21, 2025
Based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
56%
Grant Probability
75%
With Interview (+18.8%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 50 resolved cases by this examiner. Grant probability derived from career allow rate.
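
The headline figures above are arithmetically consistent with the career data (28 granted of 50 resolved cases, plus the 18.8-point interview lift). A minimal sketch of that derivation, assuming simple additive lift in percentage points; the function names are illustrative, not the tool's actual model:

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

def with_interview(base: float, lift_pp: float) -> float:
    """Add the interview lift (expressed as a fraction), capped at 100%."""
    return min(base + lift_pp, 1.0)

base = grant_probability(28, 50)        # 0.56 -> "56% Grant Probability"
adjusted = with_interview(base, 0.188)  # 0.748 -> rounds to "75% With Interview"
print(f"{base:.0%}", f"{adjusted:.0%}")
```

Under this reading, 56% + 18.8 points = 74.8%, which the card rounds to 75%.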
