Prosecution Insights
Last updated: April 19, 2026
Application No. 18/292,391

METHOD FOR PROVIDING MEDIA CONTENT WHICH IS ADAPTED TO THE MOVEMENT OF A VEHICLE, AND VEHICLE

Final Rejection (§101, §103)

Filed: Jan 26, 2024
Examiner: REIDY, SEAN PATRICK
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mercedes-Benz Group AG
OA Round: 4 (Final)

Grant Probability: 36% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 8m
Grant Probability with Interview: 72%

Examiner Intelligence

- Career Allow Rate: 36% (35 granted / 98 resolved; -16.3% vs TC avg)
- Interview Lift: +36.3% in resolved cases with an interview (a strong lift)
- Typical Timeline: 3y 8m average prosecution
- Career History: 138 total applications across all art units; 40 currently pending

Statute-Specific Performance

- §101: 9.9% (-30.1% vs TC avg)
- §103: 55.6% (+15.6% vs TC avg)
- §102: 6.6% (-33.4% vs TC avg)
- §112: 27.8% (-12.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 98 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Status of Claims

This Office Action is in response to the Applicant's Response dated 9/26/2025. Claims 13-24 are presently pending and are presented for examination.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. All pending claims therefore have an effective filing date of 7/27/2021.

Response to Arguments

Applicant's arguments, see pages 6-7 of 9, filed 9/26/2025, have been fully considered but they are not persuasive. The Applicant has argued that the claims as recited are not directed to an abstract idea; however, the Examiner respectfully disagrees. Reducing the effects of kinetosis represents an intended result of the recited series of steps, but that intended result does not represent an integration into a practical application, as those series of steps merely recite extra-solution activities and generic functions of machines/computers (apply it). No vehicle controls are being implemented, such as speed adjustments in response to the transmission of data, but rather the mere display of information (post-solution activity). The Examiner notes that the claimed "…transmitting, by the central computing unit, the video recorded by the first vehicle to the second vehicle…" appears to be in response to the identification limitations recited directly above. Clarifying that the transmission is triggered directly in response to the identification(s) may lend towards subject matter eligibility.
Applicant's arguments, see pages 7-8 of 9, filed 9/26/2025, with respect to claim 13 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Claim 13 is now rejected under 35 U.S.C. 103 as being unpatentable over Clark et al. (US-2020/0228950; hereinafter Clark; already of record) in view of Grigsby et al. (US-2009/0231431; hereinafter Grigsby; already of record) and Malla et al. (US-2021/0129871; hereinafter Malla; already of record), and further in view of Kuffner et al. (US-10,147,324; hereinafter Kuffner; already of record). A detailed rejection follows below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 13-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis: Step 1

Independent claim 13 is directed towards a method. Therefore, independent claim 13 and the corresponding dependent claims 14-24 are directed to a statutory category of invention under Step 1.

101 Analysis: Step 2A, Prong 1

Regarding Prong 1 of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claim 13 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection.
Claim 13 recites:

A method for providing media content adapted to movement of a vehicle, the method comprising:
detecting, by a first vehicle, current driving dynamic information of the first vehicle, wherein the current driving dynamic information of the first vehicle includes a current speed, a current acceleration, and a current steering angle of the first vehicle;
estimating, by the first vehicle, future driving dynamic information of the first vehicle, wherein the future driving dynamic information of the first vehicle includes a future speed, future acceleration, and a future steering angle of the first vehicle;
recording, by a vehicle camera of the first vehicle, video of current surroundings of the first vehicle, wherein the vehicle camera of the first vehicle is aligned parallel to a transverse axis of the first vehicle;
transmitting, by the first vehicle to a central computing unit, the current and future driving dynamic information of the first vehicle and the video recorded by the vehicle camera of the first vehicle;
detecting, by a second vehicle, current driving dynamic information of the second vehicle, wherein the current driving dynamic information of the second vehicle includes a current speed, a current acceleration, and a current steering angle of the second vehicle;
estimating, by the second vehicle, future driving dynamic information of the second vehicle, wherein the future driving dynamic information of the second vehicle includes a future speed, a future acceleration, and a future steering angle of the second vehicle;
transmitting, by the second vehicle to the central computing unit, the current and future driving dynamic information of the second vehicle; and
identifying, by the central computing unit, that the current driving dynamic information of the second vehicle corresponds, within a tolerance limit, to the current driving dynamic information of the first vehicle, or the future dynamic driving information of the second vehicle corresponds, within the tolerance limit, to the future driving dynamic information of the first vehicle; and
transmitting, by the central computing unit, the video recorded by the first vehicle to the second vehicle and outputting the video recorded by the first vehicle on a display device of the second vehicle, wherein the display device of the second vehicle is arranged on a back of a driver seat, on a back of a front passenger seat, or is viewable by a front-seat passenger.

These limitations, as drafted, are a method that, under broadest reasonable interpretation, covers performance of the limitations as a mental process. That is, nothing in the claim elements precludes the steps from practically being performed as a mental process. For example, "detecting…current driving dynamic information of the first vehicle…" may be interpreted as mentally detecting a first vehicle's motion parameters; "estimating…future driving dynamic information of the first vehicle…" may be interpreted as mentally estimating the first vehicle's motion parameters at a future instance in time; "detecting…current driving dynamic information of the second vehicle…" may be interpreted as mentally detecting a second vehicle's motion parameters; "estimating…future driving dynamic information of the second vehicle…" may be interpreted as mentally estimating the second vehicle's motion parameters at a future instance in time; "identifying…that the current driving dynamic information…within a tolerance limit…" may be interpreted as mentally comparing current vehicle motion parameters of two different vehicles and determining if they share any similarities; and "identifying…that…the future dynamic information…within the tolerance limit…" may be interpreted as mentally comparing future vehicle motion parameters of two different vehicles and determining if they share any similarities. Therefore, the claims recite an abstract idea.
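As an illustration of the comparison step the claim recites, the "corresponds, within a tolerance limit" logic can be sketched as follows. This is a minimal, hypothetical sketch: the field names, units, and tolerance values are assumptions for illustration only; the claim does not specify data formats, tolerance values, or any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class DrivingDynamics:
    # Hypothetical representation of the claimed "driving dynamic information".
    speed: float           # assumed m/s
    acceleration: float    # assumed m/s^2
    steering_angle: float  # assumed degrees

def corresponds(a: DrivingDynamics, b: DrivingDynamics,
                tol: DrivingDynamics) -> bool:
    """True when every parameter of b lies within the tolerance limit of a."""
    return (abs(a.speed - b.speed) <= tol.speed
            and abs(a.acceleration - b.acceleration) <= tol.acceleration
            and abs(a.steering_angle - b.steering_angle) <= tol.steering_angle)

def should_transmit_video(first_current: DrivingDynamics,
                          first_future: DrivingDynamics,
                          second_current: DrivingDynamics,
                          second_future: DrivingDynamics,
                          tol: DrivingDynamics) -> bool:
    # The claim requires a match of either the current OR the future dynamics
    # before the central computing unit transmits the first vehicle's video.
    return (corresponds(first_current, second_current, tol)
            or corresponds(first_future, second_future, tol))
```

Note that the disjunction mirrors the claim's "or": a match on either the current or the future driving dynamic information suffices to trigger the transmission step.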
101 Analysis: Step 2A, Prong 2

Regarding Prong 2 of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."

In the present case, the additional elements beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional elements" while the bolded portions continue to represent the "abstract idea"):

A method for providing media content adapted to movement of a vehicle, the method comprising:
detecting, by a first vehicle, current driving dynamic information of the first vehicle, wherein the current driving dynamic information of the first vehicle includes a current speed, a current acceleration, and a current steering angle of the first vehicle;
estimating, by the first vehicle, future driving dynamic information of the first vehicle, wherein the future driving dynamic information of the first vehicle includes a future speed, future acceleration, and a future steering angle of the first vehicle;
recording, by a vehicle camera of the first vehicle, video of current surroundings of the first vehicle, wherein the vehicle camera of the first vehicle is aligned parallel to a transverse axis of the first vehicle;
transmitting, by the first vehicle to a central computing unit, the current and future driving dynamic information of the first vehicle and the video recorded by the vehicle camera of the first vehicle;
detecting, by a second vehicle, current driving dynamic information of the second vehicle, wherein the current driving dynamic information of the second vehicle includes a current speed, a current acceleration, and a current steering angle of the second vehicle;
estimating, by the second vehicle, future driving dynamic information of the second vehicle, wherein the future driving dynamic information of the second vehicle includes a future speed, a future acceleration, and a future steering angle of the second vehicle;
transmitting, by the second vehicle to the central computing unit, the current and future driving dynamic information of the second vehicle; and
identifying, by the central computing unit, that the current driving dynamic information of the second vehicle corresponds, within a tolerance limit, to the current driving dynamic information of the first vehicle, or the future dynamic driving information of the second vehicle corresponds, within the tolerance limit, to the future driving dynamic information of the first vehicle; and
transmitting, by the central computing unit, the video recorded by the first vehicle to the second vehicle and outputting the video recorded by the first vehicle on a display device of the second vehicle, wherein the display device of the second vehicle is arranged on a back of a driver seat, on a back of a front passenger seat, or is viewable by a front-seat passenger.

For the following reason(s), the examiner submits that the above-identified additional elements do not integrate the above-noted abstract idea into a practical application. The additional elements of "a first vehicle," "a vehicle camera of the first vehicle," "a central computing unit," "a second vehicle," and "a display device [and associated location]" are merely generic components which allow the abstract idea to be applied (MPEP § 2106.05(f)(2)).
The Examiner submits that these elements are mere computers or other machinery used as a tool to perform the existing process. The limitations of "recording…video of current surroundings…", "transmitting…the current and future driving dynamic information…and the video recorded by the vehicle camera…", "transmitting…the current and future driving dynamic information…", and "transmitting…the video…and outputting the video…" are directed towards insignificant extra-solution activity, namely data transmission and data output, which does not add any meaningful limits on the claim. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.

101 Analysis: Step 2B

Regarding Step 2B in the 2019 PEG, independent claim 13 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed, the additional elements of "a first vehicle," "a vehicle camera of the first vehicle," "a central computing unit," "a second vehicle," and "a display device [and associated location]" amount to mere instructions to apply the exception (using additional elements such as "a central computing unit"). Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea does not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit).
As discussed above, the additional elements of "recording…video of current surroundings…", "transmitting…the current and future driving dynamic information…and the video recorded by the vehicle camera…", "transmitting…the current and future driving dynamic information…", and "transmitting…the video…and outputting the video…" amount to extra-solution activity (see below). Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine if it is more than well-understood, routine, conventional activity in the field. The additional limitations of "recording…video of current surroundings…", "transmitting…the current and future driving dynamic information…and the video recorded by the vehicle camera…", "transmitting…the current and future driving dynamic information…", and "transmitting…the video…" are well-understood, routine, and conventional activities because the background recites that the vehicular computing unit(s) may be a generic computer or control device, and that the central computing unit may be a computational device such as a cloud server. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. The additional limitation of "…outputting the video…" is a well-understood, routine, and conventional activity because the Federal Circuit in Trading Techs. Int'l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir.
2017), for example, indicated that the mere displaying of data is a well-understood, routine, and conventional function. Hence, the claim is not patent eligible.

Dependent claims 14-24 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Claims 14, 16, 18, 20-22, and 24 recite additional mental processes, data gathering, data manipulation, and data transfer, to be processed by the generic component which is the "central computing unit". Claim 15 recites details pertaining to the additional information, which is abstract (a data point used in making a determination, which can be performed by a human). Claims 17 and 23 recite details pertaining to factors that are used to determine road layout or traffic situations, which is abstract (analysis of maps can be performed by a human). Claim 19 recites details pertaining to machine learning techniques, wherein the previously identified abstract concept is "applied by" a machine learning device, which is also indicative of a field of use. Therefore, dependent claims 14-24 are not patent eligible under the same rationale as provided for the rejection of independent claim 13. Therefore, claims 13-24 are ineligible under 35 USC § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Clark et al. (US-2020/0228950; hereinafter Clark; already of record) in view of Grigsby et al.
(US-2009/0231431; hereinafter Grigsby; already of record) and Malla et al. (US-2021/0129871; hereinafter Malla; already of record), and further in view of Kuffner et al. (US-10,147,324; hereinafter Kuffner; already of record). Regarding claim 13, Clark discloses … detecting, by a first vehicle, current driving dynamic information of the first vehicle (see Clark at least [0030] “...The current location data from the GNSS 124 from both vehicles may be transmitted to the server 186 as vehicle status...”) … estimating, by the first vehicle, future driving dynamic information of the first vehicle (see Clark at least [0030] “...A predicted travelling direction/route from both vehicles may be transmitted to the server 186...”) … … transmitting, by the first vehicle to a central computing unit, the current and future driving dynamic information of the first vehicle (see Clark at least [0030] “...The current location data from the GNSS 124 from both vehicles may be transmitted to the server 186 as vehicle status... A predicted travelling direction/route from both vehicles may be transmitted to the server 186. The predicted travelling route may be received from the navigation controller 126 of each vehicle...”) … detecting, by a second vehicle, current driving dynamic information of the second vehicle (see Clark at least [0030] “...The current location data from the GNSS 124 from both vehicles may be transmitted to the server 186 as vehicle status...”) … estimating, by the second vehicle, future driving dynamic information of the second vehicle (see Clark at least [0030] “...A predicted travelling direction/route from both vehicles may be transmitted to the server 186...”) … transmitting, by the second vehicle to the central computing unit, the current and future driving dynamic information of the second vehicle (see Clark at least [0030] “...The current location data from the GNSS 124 from both vehicles may be transmitted to the server 186 as vehicle status... 
A predicted travelling direction/route from both vehicles may be transmitted to the server 186. The predicted travelling route may be received from the navigation controller 126 of each vehicle...”); and identifying, by the central computing unit, that the current driving dynamic information of the second vehicle corresponds, within a tolerance limit, to the current driving dynamic information of the first vehicle (see Clark at least [0030] “...After analyzing the vehicle information and predicted route, the server 186 may predict that the first vehicle 102a and the second vehicle 102b will both travel on eastbound of the road 306 within a geo-fence 308 defined by the transmission range 310 of each vehicle for a period of time...”), or the future dynamic driving information of the second vehicle corresponds, within the tolerance limit, to the future driving dynamic information of the first vehicle (see Clark at least [0030] “...After analyzing the vehicle information and predicted route, the server 186 may predict that the first vehicle 102a and the second vehicle 102b will both travel on eastbound of the road 306 within a geo-fence 308 defined by the transmission range 310 of each vehicle for a period of time...”); and transmitting, by the central computing unit, [data] recorded by the first vehicle to the second vehicle (see Clark at least [0026] "...The first vehicle 102a and the second vehicle 102b are in communication with the fleet server 186 configured to coordinate and facilitate the V2V data sharing between the vehicles..." [0027] "...At operation 212, with pre-defined rules, the server 186 detects the V2V connection opportunity between the vehicles 102a and 102b to share the software data and in response, generate a connection message for each vehicle to instruct to connect and share data. The server 186 sends the respective connection messages to the first vehicle 102a and the second vehicle 102b at operations 214 and 216..." 
[0028] "...Responsive to detecting the designated vehicle within a connection range supported by the wireless transceivers, the first vehicle 102a and the second vehicle 102b establishes the wireless connection 196 via DSRC at operation 222. At operation 224, the vehicles 102a and 102b verify information received from the server 186 is accurate. For instance, the vehicles 102a and 102b may verify the software versions of each other matches the information contained in the connection message received at operations 214 and 216 so that the V2V data transaction is appropriate. Responsive to successfully verifying the information, at operation 226 the data transaction between the first vehicle 102a as a source vehicle and the second vehicle 102b as a target vehicle starts...") … However, while Clark discusses data exchange between two vehicles by way of server coordination, Clark does not explicitly disclose the following: …a method for providing media content adapted to movement of a vehicle… …wherein the current driving dynamic information of the first vehicle includes a current speed, a current acceleration, and a current steering angle of the first vehicle… …wherein the future driving dynamic information of the first vehicle includes a future speed, future acceleration, and a future steering angle of the first vehicle… …recording, by a vehicle camera of the first vehicle, video of current surroundings of the first vehicle, wherein the vehicle camera of the first vehicle is aligned parallel to a transverse axis of the first vehicle… …transmitting…the video recorded by the vehicle camera of the first vehicle… …wherein the current driving dynamic information of the second vehicle includes a current speed, a current acceleration, and a current steering angle of the second vehicle… …wherein the future driving dynamic information of the second vehicle includes a future speed, a future acceleration, and a future steering angle of the second vehicle… …transmitting…the video 
recorded by the first vehicle to the second vehicle and outputting the video recorded by the first vehicle on a display device of the second vehicle, wherein the display device of the second vehicle is arranged on a back of a driver seat, on a back of a front passenger seat, or is viewable by a front-seat passenger. Grigsby, in the same field of endeavor, teaches the following: …a method for providing media content adapted to movement of a vehicle (see Grigsby at least [0006] "The ability of a participating driver to receive and display views generated by video cameras installed in other participating vehicles is a major factor in enabling vehicle-to-vehicle networks to serve such purposes." [0007] "The present invention may be implemented as a method of generating a simulated view for presentation on a video display… ")… … … …recording, by a vehicle camera of the first vehicle, video of current surroundings of the first vehicle, wherein the vehicle camera of the first vehicle is aligned parallel to a transverse axis of the first vehicle (see Grigsby at least [0030] "Referring to FIG. 3, it is assumed that any vehicle that participates in a typical V2V network will have a least one video camera, such as video camera 42 that is mounted on or near the interior rearview mirror of vehicle 40 to provide a video field of view 44 that approximates what the driver of vehicle 40 actually sees when seated behind the steering wheel. Video data captured by video camera 42 would probably be more useful to other participating drivers than to the driver of vehicle 40. 
The vehicle 40 could, of course, be equipped with additional video cameras, such as a trunk-mounted video camera 46 having a field of view 48 directly behind vehicle 40, a side-mounted video camera 54 having a field of view 56 approximating what the driver would see in the driver-side exterior rearview mirror, and a second side-mounted video camera 50 having a field of view 52 approximating what the driver would see in a passenger-side exterior rearview mirror.")… …transmitting…the video recorded by the vehicle camera of the first vehicle (see Grigsby at least [0046] "Returning to FIG. 8, the V2V system in vehicle 66 would identify each of the secondary video cameras that could provide pel data that might be used to replace pels in the primary data set and would, if not already receiving video data from those cameras, begin receiving it in step 132…")… … … …transmitting…the video recorded by the first vehicle to the second vehicle (see Grigsby at least [0046] "Returning to FIG. 8, the V2V system in vehicle 66 would identify each of the secondary video cameras that could provide pel data that might be used to replace pels in the primary data set and would, if not already receiving video data from those cameras, begin receiving it in step 132…") and outputting the video recorded by the first vehicle on a display device of the second vehicle (see Grigsby at least [0034] "Using video data received from other vehicles, such as vehicles 60 and 62 and even tractor-trailer 64, it is possible to generate a simulated view on an in-vehicle display in vehicle 66 that electronically "removes" the tractor-trailer 64 from the view, allowing the driver of vehicle 66 to "see" what is in front of the tractor-trailer 64...") … It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the current and predicted vehicle data communicated by way of server as disclosed by Clark with transmitted camera data such as 
taught by Grigsby with a reasonable expectation of success so as to make drivers more aware of their surroundings, which would also influence future actions (see Grigsby at least [0005]). However, neither Clark nor Grigsby explicitly discloses or teaches the following:
…wherein the current driving dynamic information of the first vehicle includes a current speed, a current acceleration, and a current steering angle of the first vehicle…
…wherein the future driving dynamic information of the first vehicle includes a future speed, future acceleration, and a future steering angle of the first vehicle…
…wherein the current driving dynamic information of the second vehicle includes a current speed, a current acceleration, and a current steering angle of the second vehicle…
…wherein the future driving dynamic information of the second vehicle includes a future speed, a future acceleration, and a future steering angle of the second vehicle…
…wherein the display device of the second vehicle is arranged on a back of a driver seat, on a back of a front passenger seat, or is viewable by a front-seat passenger.
Malla, in the same field of endeavor, teaches the following: …wherein the current driving dynamic information of the first vehicle includes a current speed, a current acceleration, and a current steering angle of the first vehicle (see Malla at least [0042] "…The vehicle dynamic sensors 110 may include, but may not be limited to, position sensors, heading sensors, speed sensors, steering speed sensors, steering angle sensors, throttle angle sensors, accelerometers, magnetometers, gyroscopes, yaw rate sensors, brake force sensors, wheel speed sensors, wheel turning angle sensors, transmission gear sensors, temperature sensors, RPM sensors, GPS/DGPS sensors, and the like (individual sensors not shown).")… …wherein the future driving dynamic information of the first vehicle includes a future speed, future acceleration, and a future steering angle of the first vehicle (see Malla at least [0084] "...Such dynamic parameters may include, but may not be limited to, data that pertains to a future position of the ego vehicle 102, a future heading of the ego vehicle 102, a future velocity of the ego vehicle 102, a future steering angle of a steering of the ego vehicle 102, a future steering speed associated with the steering of the ego vehicle 102, a future throttle angle of a throttle of the ego vehicle 102, a future acceleration of the ego vehicle 102, a future yaw rate of the ego vehicle 102, a future brake force associated with the brakes of the ego vehicle 102, a future transmission gear of the ego vehicle 102, a future geo-location of the ego vehicle 102, and the like at one or more future time steps (e.g., t+1, t+2, t+n).")… …wherein the current driving dynamic information of the second vehicle includes a current speed, a current acceleration, and a current steering angle of the second vehicle (see Malla at least [0042] "…The vehicle dynamic sensors 110 may include, but may not be limited to, position sensors, heading sensors, speed sensors, steering speed sensors, 
steering angle sensors, throttle angle sensors, accelerometers, magnetometers, gyroscopes, yaw rate sensors, brake force sensors, wheel speed sensors, wheel turning angle sensors, transmission gear sensors, temperature sensors, RPM sensors, GPS/DGPS sensors, and the like (individual sensors not shown).")… …wherein the future driving dynamic information of the second vehicle includes a future speed, a future acceleration, and a future steering angle of the second vehicle (see Malla at least [0084] "...Such dynamic parameters may include, but may not be limited to, data that pertains to a future position of the ego vehicle 102, a future heading of the ego vehicle 102, a future velocity of the ego vehicle 102, a future steering angle of a steering of the ego vehicle 102, a future steering speed associated with the steering of the ego vehicle 102, a future throttle angle of a throttle of the ego vehicle 102, a future acceleration of the ego vehicle 102, a future yaw rate of the ego vehicle 102, a future brake force associated with the brakes of the ego vehicle 102, a future transmission gear of the ego vehicle 102, a future geo-location of the ego vehicle 102, and the like at one or more future time steps (e.g., t+1, t+2, t+n)." – where Clark discloses the estimation of future information for each vehicle, performed by each respective vehicle)… … It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the current and predicted vehicle data as disclosed by Clark with specifics such as the types of data taught by Malla, with a reasonable expectation of success, so as to provide specific data to be utilized by vehicle control systems (see Malla at least [0042]). 
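The claim limitations mapped to Malla above amount to a per-vehicle record holding current driving dynamics plus predicted dynamics at future time steps (t+1, t+2, …, t+n). As a minimal sketch of that data shape — field names, units, and values are hypothetical illustrations, not drawn from Malla or the claims:

```python
from dataclasses import dataclass, field

@dataclass
class DrivingDynamics:
    """Driving dynamic information for one vehicle at one time step."""
    speed_mps: float           # current or predicted speed
    accel_mps2: float          # current or predicted acceleration
    steering_angle_rad: float  # current or predicted steering angle

@dataclass
class VehicleDynamicsRecord:
    """Current state plus predicted states at future steps t+1..t+n."""
    current: DrivingDynamics
    future: dict[int, DrivingDynamics] = field(default_factory=dict)  # key: steps ahead

record = VehicleDynamicsRecord(
    current=DrivingDynamics(speed_mps=13.9, accel_mps2=0.5, steering_angle_rad=0.02),
)
record.future[1] = DrivingDynamics(speed_mps=14.4, accel_mps2=0.5, steering_angle_rad=0.0)
```

The same record shape covers both the first and the second vehicle, which is why a single reference (Malla [0042] and [0084]) is cited for all four limitations.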
However, while Grigsby teaches a heads-up display which is viewable by the driver of a vehicle (and often by other passengers within the vehicle as well), none of Clark, Grigsby, or Malla explicitly discloses or teaches the following: …wherein the display device of the second vehicle is arranged on a back of a driver seat, on a back of a front passenger seat, or is viewable by a front-seat passenger. Kuffner, in the same field of endeavor, teaches the following: …wherein the display device of the second vehicle is arranged on a back of a driver seat, on a back of a front passenger seat, or is viewable by a front-seat passenger (see Kuffner at least col 5 lines 7-12). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the second vehicle as disclosed by Clark with a display device viewable by a front-seat passenger such as taught by Kuffner with a reasonable expectation of success so as to provide a passenger with information about the vehicle and its surroundings (see Kuffner at least col 1 lines 25-32). Claims 14-24 are rejected under 35 U.S.C. 103 as being unpatentable over Clark in view of Grigsby and Malla and Kuffner, and further in view of Ratnasingam (US-9,672,734; already of record). Regarding claim 20, Clark in view of Grigsby and Malla and Kuffner teach the method of claim 13. While Clark discusses locational comparisons amongst vehicles, and Grigsby discusses an identification of common features within multiple data sets, neither reference (nor Malla nor Kuffner) appears to explicitly disclose or teach the following: the central computing unit compares roads travelled along by the first and second vehicles and assigns the first and second vehicles to one another as part of identifying when the roads have a similar road layout above a minimum road length within a defined tolerance threshold. 
Ratnasingam, in the same field of endeavor, teaches the central computing unit compares roads travelled along by the first and second vehicles and assigns the first and second vehicles to one another as part of the identifying when the roads have a similar road layout above a minimum road length within a defined tolerance threshold (see Ratnasingam at least col 43 lines 8-23 “The system may check whether another vehicle is in the same road segment as that of the first vehicle by means of comparing the location coordinates of the other vehicle to the coordinates stored for the current road segment of the first vehicle. For example, for each road segment the system may store a set of coordinate points obtained at an appropriate distance interval at an appropriate computer readable storage medium. To check if another vehicle is in the same road segment as that of the first vehicle, the system may compare the stored coordinate points of the road segment to the coordinate points obtained from the current location of the other vehicle. If the minimum distance is smaller than a predetermined threshold the system may declare that the other vehicle is in the same road segment as that of the first vehicle...”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as taught by Clark in view of Grigsby and Malla and Kuffner with road comparisons such as taught by Ratnasingam with a reasonable expectation of success to help determine navigation data estimates of another vehicle (see Ratnasingam at least col 43 lines 36-43). 
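The Ratnasingam passage quoted above describes a concrete matching procedure: store coordinate points sampled along a road segment, compute the minimum distance from another vehicle's location to those points, and declare a match when that distance falls below a predetermined threshold. A minimal sketch of that check — the threshold value and a flat-plane distance are illustrative assumptions; Ratnasingam specifies neither:

```python
import math

def min_distance_to_segment(segment_points, vehicle_coord):
    """Minimum Euclidean distance from a vehicle's location to the stored
    coordinate points of a road segment (flat-plane approximation)."""
    x, y = vehicle_coord
    return min(math.hypot(px - x, py - y) for px, py in segment_points)

def in_same_road_segment(segment_points, other_vehicle_coord, threshold_m=15.0):
    """Declare the other vehicle to be in the same road segment as the first
    vehicle when the minimum distance is below the predetermined threshold."""
    return min_distance_to_segment(segment_points, other_vehicle_coord) <= threshold_m

# Coordinate points obtained at a distance interval along the first
# vehicle's current road segment (hypothetical values).
segment = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0), (30.0, 0.0)]
print(in_same_road_segment(segment, (12.0, 4.0)))   # → True (within 15 m)
print(in_same_road_segment(segment, (12.0, 40.0)))  # → False (too far away)
```

Note that this is narrower than the claim language, which requires comparing road *layouts* over a minimum road length — the gap the examiner bridges with the obviousness rationale.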
Regarding claim 14, Clark in view of Grigsby and Malla and Kuffner and Ratnasingam teach the method of claim 20, wherein, in addition to the video and the current or future driving dynamic information of the first vehicle (Grigsby, see claim 13), additional information is transmitted to at least one third-party device or the second vehicle for outputting purposes (see Clark at least [0033] "...The vehicles 102 may exchange information to verify the connection message received from the server 186 is accurate. For instance, the second vehicle 102b as a target vehicle may send the current version of the software to be updated to the first vehicle 102a via the V2V connection 196. In response, the computing platform 104 of the first vehicle 102a compares the software version received from the second vehicle 102 with its own version number, and verifies a newer version of the software is stored in the storage 106 and ready to be transferred by blocks by sending a return message to the second vehicle 102b. The first vehicle 102a may start to transfer data blocks to the second vehicle 102b and continue to transfer until all designated blocks are transferred or distance between the vehicles extends beyond the geo-fence 308."). Regarding claim 15, Clark in view of Grigsby and Malla and Kuffner and Ratnasingam teach the method of claim 14, wherein at least one of the following variables is used as the additional information: status information describing a status of the first vehicle (see Clark at least [0026] "Referring to FIG. 2, an example data flow diagram for a process 200 of one embodiment of the present disclosure is illustrated. In the present example, the vehicle 102a (hereinafter the first vehicle 102a ) and a second vehicle 102b are among a fleet involving multiple vehicles provided with V2V data sharing features through various wireless connections e.g. 
via the DSRC transceiver 180 of the first vehicle 102a The first vehicle 102a and the second vehicle 102b are in communication with the fleet server 186 configured to coordinate and facilitate the V2V data sharing between the vehicles. At operations 202 and 204, the server 186 sends a request for vehicle status to each of the first vehicle 102a and the second vehicle 102b respectively. The vehicle status is a set of information that can be used by the server 186 to identify which vehicles can share data to which other vehicles, and when/where the predicted data sharing may occur. Responsive to receiving the request from the server 186, the computing platform 104 collects information from various pre-defined components of the first vehicle 102a and generate a vehicle status to send to the server 186 at operation 206. For instance, the collected vehicle information may include data from various pre-defined ECUs 172 indicating software versions, software versions of various vehicle applications 108, map version used by the navigation controller 126 stored in the storage 106 as a part of the vehicle data 110, location data from the GNSS controller 124, configuration and availability of wireless transceivers (e.g. the wireless transceiver 132, and/or the DSRC transceiver 180), battery charge level or the like. Similarly, the second vehicle 102b sends the vehicle status to the server 186 at operation 208."); surroundings information describing a status of surroundings of the first vehicle; or an audio track in a form of external microphone recordings. Regarding claim 16, Clark in view of Grigsby and Malla and Kuffner and Ratnasingam teach the method of claim 20, wherein a current or future road layout of a road being travelled along by the first vehicle (see Clark at least [0030] "...The first vehicle 102a may approach an intersection 302 from the south while the second vehicle 102b may approach the same intersection 302 from the west. 
The current location data from the GNSS 124 from both vehicles may be transmitted to the server 186 as vehicle status... A predicted travelling direction/route from both vehicles may be transmitted to the server 186. The predicted travelling route may be received from the navigation controller 126 of each vehicle. Alternatively, the server 186 may use one or more historic routes of each vehicle traversed in the past to determine the predicted routes. In the present example, the server 186 may predict the second vehicle 102b to travel straight passing the intersection 302 and continue to travel on eastbound of the road 306. The server 186 may further predict the first vehicle 102a to make a right turn at the intersection 302 and travel on eastbound of the road 306 after the second vehicle passed because of a red-light signal 304. After analyzing the vehicle information and predicted route, the server 186 may predict that the first vehicle 102a and the second vehicle 102b will both travel on eastbound of the road 306 within a geo-fence 308 defined by the transmission range 310 of each vehicle for a period of time..."), a current or future traffic situation, or a travel trajectory plan (see Clark at least [0030] "...The first vehicle 102a may approach an intersection 302 from the south while the second vehicle 102b may approach the same intersection 302 from the west. The current location data from the GNSS 124 from both vehicles may be transmitted to the server 186 as vehicle status... A predicted travelling direction/route from both vehicles may be transmitted to the server 186. The predicted travelling route may be received from the navigation controller 126 of each vehicle. Alternatively, the server 186 may use one or more historic routes of each vehicle traversed in the past to determine the predicted routes. 
In the present example, the server 186 may predict the second vehicle 102b to travel straight passing the intersection 302 and continue to travel on eastbound of the road 306. The server 186 may further predict the first vehicle 102a to make a right turn at the intersection 302 and travel on eastbound of the road 306 after the second vehicle passed because of a red-light signal 304. After analyzing the vehicle information and predicted route, the server 186 may predict that the first vehicle 102a and the second vehicle 102b will both travel on eastbound of the road 306 within a geo-fence 308 defined by the transmission range 310 of each vehicle for a period of time...") is taken into consideration to determine the future driving dynamic information of the first vehicle. Regarding claim 17, Clark in view of Grigsby and Malla and Kuffner and Ratnasingam teach the method of claim 16, wherein the current or future road layout of the road being travelled along by the first vehicle is taken into consideration to determine the future driving dynamic information of the first vehicle (see Clark at least [0030] "...The first vehicle 102a may approach an intersection 302 from the south while the second vehicle 102b may approach the same intersection 302 from the west. The current location data from the GNSS 124 from both vehicles may be transmitted to the server 186 as vehicle status... A predicted travelling direction/route from both vehicles may be transmitted to the server 186. The predicted travelling route may be received from the navigation controller 126 of each vehicle. Alternatively, the server 186 may use one or more historic routes of each vehicle traversed in the past to determine the predicted routes. In the present example, the server 186 may predict the second vehicle 102b to travel straight passing the intersection 302 and continue to travel on eastbound of the road 306. 
The server 186 may further predict the first vehicle 102a to make a right turn at the intersection 302 and travel on eastbound of the road 306 after the second vehicle passed because of a red-light signal 304. After analyzing the vehicle information and predicted route, the server 186 may predict that the first vehicle 102a and the second vehicle 102b will both travel on eastbound of the road 306 within a geo-fence 308 defined by the transmission range 310 of each vehicle for a period of time..."), and wherein the current or future road layout or the current or future traffic situation is determined by: image analysis of at least one camera image of the video recorded by the vehicle camera of the first vehicle; extraction of a variable derived from an assistance system (see Clark at least [0030] "...The current location data from the GNSS 124 from both vehicles may be transmitted to the server 186 as vehicle status... The predicted travelling route may be received from the navigation controller 126 of each vehicle. Alternatively, the server 186 may use one or more historic routes of each vehicle traversed in the past to determine the predicted routes..."); or analysis of digital maps during active navigation. Regarding claim 18, Clark in view of Grigsby and Malla and Kuffner and Ratnasingam teach the method of claim 20, wherein a person driving the first vehicle is identified and a clear profile is assigned to the person driving the first vehicle (see Ratnasingam at least col 36 lines 8-17). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method for providing media content as taught by Clark in view of Grigsby and Malla and Kuffner and Ratnasingam with a user profile such as further taught by Ratnasingam with a reasonable expectation of success so as to compile reputable sources of information to be exchanged (see Ratnasingam at least col 1 lines 31-53). 
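Clark's paragraph [0030], quoted repeatedly above, describes the server predicting that two vehicles will travel the same road within a geo-fence defined by their transmission range for a period of time. The overlap check at the core of that prediction can be sketched as follows — the function name, time-step sampling, and coordinates are hypothetical illustrations, not Clark's implementation:

```python
import math

def predict_shared_window(route_a, route_b, transmission_range_m=300.0):
    """Given predicted (time_step, x, y) samples for two vehicles, return the
    time steps at which both are predicted to be within transmission range of
    each other, i.e. inside the shared geo-fence."""
    pos_b = {t: (x, y) for t, x, y in route_b}
    shared = []
    for t, ax, ay in route_a:
        if t in pos_b:
            bx, by = pos_b[t]
            if math.hypot(ax - bx, ay - by) <= transmission_range_m:
                shared.append(t)
    return shared

# Vehicle A approaches from the south and turns right at the intersection;
# vehicle B passes straight through and continues eastbound (toy coordinates).
route_a = [(0, 0.0, -200.0), (1, 0.0, 0.0), (2, 150.0, 0.0), (3, 350.0, 0.0)]
route_b = [(0, -300.0, 0.0), (1, -50.0, 0.0), (2, 200.0, 0.0), (3, 450.0, 0.0)]
print(predict_shared_window(route_a, route_b))  # → [1, 2, 3]
```

In Clark, a non-empty shared window is what lets the server schedule the V2V data transfer, which continues until all blocks are transferred or the distance exceeds the geo-fence.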
Regarding claim 19, Clark in view of Grigsby and Malla and Kuffner and Ratnasingam teach the method of claim 20, wherein machine learning methods are used for estimating the future driving dynamic information of the first vehicle (see Ratnasingam at least col 22 line 65-col 23 line 12). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the future driving dynamic information estimations as disclosed by Clark with machine learning methods such as taught by Ratnasingam with a reasonable expectation of success so as to accurately predict trends of vehicle travel for preemptive controls (see Ratnasingam at least col 22 line 65-col 23 line 12). Regarding claim 21, Clark in view of Grigsby and Malla and Kuffner and Ratnasingam teach the method of claim 20, wherein user preferences are taken into consideration for selecting or displaying media content (see Grigsby at least [0048] "As noted earlier, the user may elect to have an on-screen visual reminder that an object, while visually removed, is physically still present. The user's preferences as to visual reminders are detected in step 140. If the user has not registered a preference for a visual reminder, the simulated view generated by merging appropriate pel data from the primary and second video cameras, is sent to the in-vehicle display for presentation there in a step 146. If the driver has indicated a visible reminder should be provided, the reminder must be generated and added to the video data to be presented on the in-vehicle display."). 
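For the claim 19 mapping above, Ratnasingam is cited only for the general use of machine learning to estimate future driving dynamics; no particular model is specified. As a toy illustration of the idea (a one-feature least-squares fit predicting next-step speed from current speed — entirely an assumption, not Ratnasingam's method):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b: the simplest learned model one
    could use to extrapolate a driving-dynamics signal one step ahead."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Training pairs: (speed at t, speed at t+1) observed from past driving (toy data).
history = [(10.0, 10.5), (10.5, 11.0), (11.0, 11.5), (11.5, 12.0)]
a, b = fit_linear([s for s, _ in history], [s1 for _, s1 in history])
future_speed = a * 12.0 + b  # predicted speed one step ahead of 12.0 m/s
print(round(future_speed, 2))  # → 12.5
```

The same extrapolation pattern would apply per signal (speed, acceleration, steering angle) to produce the "future driving dynamic information" recited in the claims.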
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the displayed information as taught by Clark in view of Grigsby and Malla and Kuffner and Ratnasingam with a user preference such as further taught by Grigsby with a reasonable expectation of success to provide safety protocols for image referencing within a vehicle (see Grigsby at least [0005] and [0048]). Regarding claim 22, Clark in view of Grigsby and Malla and Kuffner and Ratnasingam teach the method of claim 20, wherein at least information on manual control of the first vehicle by a person driving the vehicle or at least a control command for at least partially automated control of the first vehicle is derived from driving dynamic information transmitted from the second vehicle to the first vehicle (see Kuffner at least col 31 lines 3-9 “According to a process 900 shown in FIG. 9, the vehicle 10 and its autonomous operation system 20 provide user assistance by prompting corrective manual or autonomous operation of the vehicle 10 under which its driving behavior matches the predominating driving behavior of a like population of reference vehicles, as described in a traffic behavior model.”) is derived from driving dynamic information transmitted from the second vehicle to the first vehicle (see Kuffner at least col 6 lines 52-62 "The V2V communication system 76 is operable to establish wireless communication with like V2V communication systems in other vehicles in the environment surrounding the vehicle 10. The V2V communication system 76 wirelessly transmits information about the vehicle 10, including i

Prosecution Timeline

Jan 26, 2024
Application Filed
Jun 20, 2024
Non-Final Rejection — §101, §103
Sep 16, 2024
Response Filed
Sep 25, 2024
Final Rejection — §101, §103
Nov 25, 2024
Response after Non-Final Action
Dec 05, 2024
Applicant Interview (Telephonic)
Dec 05, 2024
Response after Non-Final Action
Jan 27, 2025
Request for Continued Examination
Jan 28, 2025
Response after Non-Final Action
Jun 23, 2025
Non-Final Rejection — §101, §103
Sep 26, 2025
Response Filed
Oct 09, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530975
UNCREWED AERIAL VEHICLE CONTROL METHOD, APPARATUS, AND SYSTEM
2y 5m to grant Granted Jan 20, 2026
Patent 12491896
WHEEL STEERING CALIBRATION
2y 5m to grant Granted Dec 09, 2025
Patent 12466414
SYSTEMS AND METHODS OF ADJUSTING VEHICLE COMPONENTS FROM OUTSIDE OF A VEHICLE
2y 5m to grant Granted Nov 11, 2025
Patent 12460379
COLLISION AVOIDANCE SYSTEM AND METHOD FOR AVOIDING COLLISION OF WORK MACHINE WITH OBSTACLES
2y 5m to grant Granted Nov 04, 2025
Patent 12454448
CONTROL METHOD, CONTROL DEVICE, AND CONTROL SYSTEM FOR DETECTING ABNORMALITY IN AUTOMATIC FORKLIFT OPERATION
2y 5m to grant Granted Oct 28, 2025
Based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
36%
Grant Probability
72%
With Interview (+36.3%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 98 resolved cases by this examiner. Grant probability derived from career allow rate.
