Prosecution Insights
Last updated: April 19, 2026
Application No. 18/166,567

VEHICULAR VISION SYSTEM WITH FORWARD VIEWING CAMERA WITH SYNCHRONIZED RECORDING FEATURE

Status: Non-Final OA (§103)
Filed: Feb 09, 2023
Examiner: ALLEN, LUCIUS CAMERON GREE
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Magna Electronics Inc.
OA Round: 3 (Non-Final)
Grant Probability: 71% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (above average; 27 granted / 38 resolved; +9.1% vs TC avg)
Interview Lift: +39.3% (strong; resolved cases with interview vs. without)
Avg Prosecution: 3y 0m typical timeline; 20 currently pending
Total Applications: 58 (career history, across all art units)

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 38 resolved cases.
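The headline figures above reduce to simple ratios over the examiner's resolved cases. As an illustrative sketch only (not the report's actual methodology), here is how a career allow rate and an interview lift can be computed from per-case records; the `ResolvedCase` schema and both helper functions are hypothetical stand-ins:

```python
# Illustrative sketch: headline prosecution metrics from per-case records.
# The ResolvedCase schema is a hypothetical stand-in, not the report's data model.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # application ended in allowance
    had_interview: bool  # at least one examiner interview on record

def allowance_rate(cases):
    """Fraction of resolved cases that were granted (multiply by 100 for %)."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allowance-rate difference: cases with an interview minus cases without."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allowance_rate(with_iv) - allowance_rate(without_iv)
```

For example, 27 grants out of 38 resolved cases yields the 71% career allow rate shown above; the interview lift compares the same ratio across the two subsets of resolved cases.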

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. All claims are examined on their merits.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/18/2025 has been entered.

Response to Arguments

Applicant's arguments (see remarks filed 11/18/2025) with respect to claims 1-24 have been fully considered but are moot because the arguments do not apply to the combination of references relied upon in the current rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-12 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (US 20200039448 A1), hereafter referenced as Lu, in view of Marasigan et al. (US 20200159230 A1), hereafter referenced as Marasigan, and Musk et al. (US 20200265247 A1), hereafter referenced as Musk.

Regarding claim 1, Lu teaches a vehicular vision system (Fig. 3, Paragraph [0018] - Lu discloses, as shown in FIG. 3, a vehicular vision system 110 that includes a camera 114 with dual video output working with multiple other cameras 115 (one shown) with single video output), the vehicular vision system comprising:

a camera disposed at a vehicle equipped with the vehicular vision system (Fig. 1, Paragraph [0014] - Lu discloses a forward viewing camera may be disposed at the windshield of the vehicle and view through the windshield and forward of the vehicle, such as for a machine vision system (such as for traffic sign recognition));

wherein the camera comprises a CMOS imaging array (Fig. 1, Paragraph [0004] - Lu discloses a driver assistance system or vision system or imaging system for a vehicle that utilizes one or more cameras (preferably one or more CMOS cameras) to capture image data representative of images exterior of the vehicle), and wherein the CMOS imaging array comprises at least one million photosensors arranged in rows and columns (Fig. 1, Paragraph [0027] - Lu discloses a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. Preferably, the imaging array has at least 300,000 photosensor elements or pixels, more preferably at least 500,000 photosensor elements or pixels, and more preferably at least 1 million photosensor elements or pixels.);

wherein the camera is operable to capture video image data (Fig. 2, Paragraph [0014] - Lu discloses the vision system 12 includes a control or electronic control unit (ECU) or processor 18 that is operable to process image data captured by the camera or cameras and may detect objects or the like and/or provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle).);

wherein the camera comprises a first output interface (Fig. 4, Paragraph [0021] - Lu discloses the other LVDS serializer 222a of the camera 214 takes processed pixel data (for example in YUV422 or RGB888 format), as processed at an ISP chip or processor 214d of the camera, and the camera outputs (via the output connector 224a) the output data stream 214a to the display 216 of the head unit, where video images derived from the captured image data are displayed at a video display screen for viewing by a driver of the vehicle (wherein the serializer 222a is the first output interface)) and a second output interface (Fig. 4, Paragraph [0021] - Lu discloses the LVDS serializer 222b takes pixel data directly from imager 214c of the camera 214 and the camera outputs the data stream 214b (via the output connector 224b) to the machine vision ECU 219 (wherein the serializer 222b is the second output interface)), and wherein, during operation of the camera, a first data stream is output by the camera via the first output interface (Fig. 4, Paragraph [0021] - wherein the output data stream 214a is the first data stream) and a second data stream is output by the camera via the second output interface (Fig. 4, Paragraph [0021] - wherein the data stream 214b is the second data stream), and wherein the first data stream and the second data stream are different (Fig. 4, Paragraph [0015] - Lu discloses that, due to the requirements of the image processing, which are different between the head unit display and the machine vision ECU, a camera or camera system outputs two different video streams.);

wherein the first data stream is derived at least in part from video image data captured by the camera, and wherein the second data stream is derived at least in part from video image data captured by the camera (Fig. 4, Paragraph [0021] - Lu discloses a vehicular vision system 210 includes a camera 214 with dual video outputs or data streams 214a, 214b that, without a surround view ECU, output directly to a display 216 of a head unit and a machine vision ECU 219.);

an electronic control unit (ECU) comprising electronic circuitry and associated software, wherein the electronic circuitry of the ECU comprises a data processor for processing data of the first data stream for a driving assistance system of the vehicle (Fig. 4, Paragraph [0012] - Lu discloses the vision system 12 includes a control or electronic control unit (ECU) or processor 18 that is operable to process image data captured by the camera or cameras and may detect objects or the like and/or provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle).);

Lu fails to explicitly teach wherein the second data stream is provided to a storage device that is remote from the camera, and wherein the storage device stores data of the second data stream. However, Marasigan explicitly teaches wherein the second data stream is provided to a storage device that is remote from the camera (Fig. 2, Paragraph [0032] - Marasigan discloses that, upon execution by the processor 410, the first program 430 is configured to determine whether one or more data streams or data points from the sensors 460 are to be stored on board, or to be transmitted over the network 200 to a cloud system.), and wherein the storage device stores data of the second data stream (Fig. 4, Paragraph [0035] - Marasigan discloses the processor 410 executes the first program 430 to determine whether or not one or more data streams are to be stored on board (i.e., locally), or not.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu, of a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Marasigan, wherein the second data stream is provided to a storage device that is remote from the camera and the storage device stores data of the second data stream. The motivation behind the modification would have been to allow for efficient storing of data, since Lu and Marasigan both describe systems that utilize cameras to collect information surrounding a vehicle: Lu's system provides a reduced profile housing, while Marasigan's system provides an efficient way to store data. Please see Lu et al. (US 20200039448 A1), Paragraph [0023], and Marasigan et al. (US 20200159230 A1), Paragraph [0048].
Lu in view of Marasigan fails to explicitly teach wherein the first data stream comprises first synchronization data and the second data stream comprises second synchronization data, wherein the first synchronization data and the second synchronization data allow the first data stream and the second data stream to be temporally synchronized to enable images from the second data stream stored on the storage device to be annotated with vehicle bus data derived from the first data stream, and wherein the vehicle bus data comprises information extracted from the first data stream, the information comprising at least one of a detected object list, lane marker information, or traffic sign information.

However, Musk explicitly teaches wherein the first data stream comprises first synchronization data and the second data stream comprises second synchronization data (Fig. 2, Paragraph [0028] - Musk discloses the vision data may be captured over a period of time to create a time series of elements; in various embodiments, the elements include timestamps to maintain an ordering of the elements. Further, in Fig. 2, Paragraph [0034], Musk discloses vision data and related data are organized by timestamps, and corresponding timestamps are used to synchronize the two data sets. In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.), wherein the first synchronization data and the second synchronization data allow the first data stream and the second data stream to be temporally synchronized to enable images from the second data stream stored on the storage device to be annotated with vehicle bus data derived from the first data stream (Fig. 2, Paragraph [0027] - Musk discloses image data is annotated with sensor data from additional auxiliary sensors to automatically create training data. In some embodiments, a time series of elements made up of sensor and related auxiliary data is collected from a vehicle and used to automatically create training data.), and wherein the vehicle bus data comprises information extracted from the first data stream, the information comprising at least one of a detected object list (Fig. 5, Paragraph [0060] - Musk discloses sensors 503 and 553 capture distance and direction measurements; distance vector 513 depicts the distance and direction of neighboring vehicle 511, distance vector 523 depicts the distance and direction of neighboring vehicle 521, and distance vector 563 depicts the distance and direction of neighboring vehicle 561 (wherein the distance vectors show information on detected objects)), lane marker information (Fig. 2, Paragraph [0037] - Musk discloses a detected vehicle can be labeled based on a predicted distance and direction as being in the left lane or right lane. In some embodiments, the detected vehicle can be labeled as being in a blind spot, as a vehicle that should be yielded to, or with another appropriate semantic label. In some embodiments, vehicles are assigned to roads or lanes in a map based on the determined ground truth.), or traffic sign information (Fig. 2, Paragraph [0037] - Musk discloses, as additional examples, the determined ground truth can be used to label traffic lights, lanes, drivable space, or other features that assist autonomous driving.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Marasigan, of a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Musk, wherein the first data stream comprises first synchronization data and the second data stream comprises second synchronization data that allow the two data streams to be temporally synchronized, enabling images from the second data stream stored on the storage device to be annotated with vehicle bus data derived from the first data stream, the vehicle bus data comprising at least one of a detected object list, lane marker information, or traffic sign information. The motivation behind the modification would have been to allow for a more accurate and efficient system, since Lu and Musk both describe systems that utilize cameras to collect information surrounding a vehicle: Lu's system provides a reduced profile housing, while Musk's system provides a more accurate and efficient system. Please see Lu et al. (US 20200039448 A1), Paragraph [0023], and Musk et al. (US 20200265247 A1), Paragraph [0011].

Regarding claim 2, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu in view of Marasigan fails to explicitly teach wherein the first synchronization data and the second synchronization data comprise timestamps. However, Musk explicitly teaches wherein the first synchronization data and the second synchronization data comprise timestamps (Fig. 2, Paragraph [0028] - Musk discloses the vision data may be captured over a period of time to create a time series of elements. Further, in Fig. 2, Paragraph [0034], Musk discloses vision data and related data are organized by timestamps, and corresponding timestamps are used to synchronize the two data sets. In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lu in view of Marasigan with the teachings of Musk wherein the first synchronization data and the second synchronization data comprise timestamps. The motivation behind the modification would have been to allow for a more accurate and efficient system, since Lu and Musk both describe systems that utilize cameras to collect information surrounding a vehicle. Please see Lu et al. (US 20200039448 A1), Paragraph [0023], and Musk et al. (US 20200265247 A1), Paragraph [0011].

Regarding claim 3, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu further teaches wherein the camera views exterior of the vehicle (Fig. 1, Paragraph [0014] - Lu discloses a vehicle 10 includes an imaging system or vision system 12 that includes at least one exterior viewing imaging sensor or camera).

Regarding claim 4, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 3. Lu further teaches wherein the camera is disposed at an in-cabin side of a windshield of the vehicle and views forward of the vehicle through the windshield (Fig. 1, Paragraph [0014] - Lu discloses a forward viewing camera may be disposed at the windshield of the vehicle and view through the windshield and forward of the vehicle).
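The timestamp pairing the Office Action cites from Musk (corresponding timestamps synchronizing a sequence of images with a sequence of related data) can be sketched as a nearest-timestamp match. This is an editor's illustrative sketch only, not an implementation from any cited reference; the data shapes, the `annotate_frames` helper, and the skew tolerance are all assumptions:

```python
# Illustrative sketch of timestamp-based synchronization: each (timestamp, image)
# frame is annotated with the nearest-in-time (timestamp, data) bus record,
# provided the two timestamps differ by at most max_skew seconds.
# All names and data shapes here are assumptions for illustration.
import bisect

def annotate_frames(frames, bus_records, max_skew=0.05):
    """frames and bus_records are time-sorted lists of (timestamp, payload).
    Returns a list of (timestamp, image, bus_data) triples for matched frames."""
    bus_times = [t for t, _ in bus_records]
    annotated = []
    for t, image in frames:
        i = bisect.bisect_left(bus_times, t)
        # Candidates: the bus records immediately before and after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(bus_records)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(bus_times[k] - t))
        if abs(bus_times[j] - t) <= max_skew:
            annotated.append((t, image, bus_records[j][1]))
    return annotated
```

Matching to the nearest record within a tolerance, rather than requiring identical timestamps, reflects that two independently clocked streams rarely tick at exactly the same instants.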
Regarding claim 5, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu further teaches wherein the second data stream comprises video image data captured by the camera (Fig. 2, Paragraph [0014] - Lu discloses the output data stream 214a to the display 216 of the head unit, where video images derived from the captured image data are displayed at a video display screen for viewing by a driver of the vehicle.). Lu in view of Musk fails to explicitly teach wherein the storage device stores the video image data of the second data stream. However, Marasigan explicitly teaches wherein the storage device stores the video image data of the second data stream (Fig. 4, Paragraph [0035] - Marasigan discloses the processor 410 executes the first program 430 to determine whether or not one or more data streams are to be stored on board (i.e., locally), or not. The processor 410 receives data streams from the sensors 460 (Step 610). The sensors 460 continuously capture and generate data streams while the vehicle 500 is operating. For example, the data streams include video data captured by a camera installed in the vehicle 500, acceleration information captured by an accelerometer, braking information captured by a brake sensor, speed information captured by a speed sensor, engine information captured by various sensors arranged with a vehicle engine, etc.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lu in view of Marasigan and Musk with the teachings of Marasigan wherein the storage device stores the video image data of the second data stream. The motivation behind the modification would have been to allow for efficient storing of data, since Lu and Marasigan both describe systems that utilize cameras to collect information surrounding a vehicle. Please see Lu et al. (US 20200039448 A1), Paragraph [0023], and Marasigan et al. (US 20200159230 A1), Paragraph [0048].

Regarding claim 6, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu in view of Musk fails to explicitly teach wherein the first output interface comprises a controller area network (CAN) bus interface. However, Marasigan explicitly teaches wherein the first output interface comprises a controller area network (CAN) bus interface (Fig. 1, Paragraph [0034] - Marasigan discloses the CAN bus 560 operates as a communication interface among various components of the vehicle 100 such as the processor 410, the memory 420, the control mechanism 550, and the sensors 460.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lu in view of Marasigan and Musk with the teachings of Marasigan wherein the first output interface comprises a controller area network (CAN) bus interface. The motivation behind the modification would have been to allow for efficient storing of data, since Lu and Marasigan both describe systems that utilize cameras to collect information surrounding a vehicle. Please see Lu et al. (US 20200039448 A1), Paragraph [0023], and Marasigan et al. (US 20200159230 A1), Paragraph [0048].

Regarding claim 7, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu further teaches wherein the second output interface comprises an Ethernet interface (Fig. 1, Paragraph [0024] - Lu discloses it can be applied to other video signal transmission types, such as, for example, NTSC, Ethernet, and/or digital video transmitted over analog cables (for example, the technologies developed by Analog Devices C2B or Techpoint's HD-TVI or the like).).

Regarding claim 8, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu further teaches wherein the storage device receives the second data stream via low voltage differential signaling (LVDS) (Fig. 1, Paragraph [0016] - Lu discloses one data stream goes to a surround view system-on-chip (SOC) for surround view processing, such as image signal processing (ISP), image stitching, image warping, etc., and is eventually output through an LVDS (low voltage differential signaling) serializer of the ECU to a display in a head unit. The other data stream that is output by the deserializer is communicated to another LVDS serializer as a pass-through and is output to a machine vision ECU for machine vision processing of the captured image data (such as for a driving assist system or function).).

Regarding claim 9, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu in view of Musk fails to explicitly teach wherein the storage device is disposed at a head unit of the vehicle. However, Marasigan explicitly teaches wherein the storage device is disposed at a head unit of the vehicle (Fig. 1, Paragraph [0025] - Marasigan discloses the storage 140 is coupled to the head unit 120 and stores a set of data points under the control of the head unit 120.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lu in view of Marasigan and Musk with the teachings of Marasigan wherein the storage device is disposed at a head unit of the vehicle. The motivation behind the modification would have been to allow for efficient storing of data, since Lu and Marasigan both describe systems that utilize cameras to collect information surrounding a vehicle. Please see Lu et al. (US 20200039448 A1), Paragraph [0023], and Marasigan et al. (US 20200159230 A1), Paragraph [0048].

Regarding claim 10, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu in view of Musk fails to explicitly teach wherein the storage device is remote from the vehicle, and the second data stream is wirelessly communicated to the storage device. However, Marasigan explicitly teaches wherein the storage device is remote from the vehicle (Fig. 1, Paragraph [0040] - Marasigan discloses the processor 410 determines whether the received data streams are to be stored on board, or transmitted over the network to a cloud as indicated in the flow chart of FIG. 5.), and the second data stream is wirelessly communicated to the storage device (Fig. 1, Paragraph [0026] - Marasigan discloses alternatively, or additionally, a cloud server may receive data from the sensors 170. The network 200 may include cellular network, WiFi network, near field network, or any other available communication network.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lu in view of Marasigan and Musk with the teachings of Marasigan wherein the storage device is remote from the vehicle and the second data stream is wirelessly communicated to the storage device. The motivation behind the modification would have been to allow for efficient storing of data, since Lu and Marasigan both describe systems that utilize cameras to collect information surrounding a vehicle. Please see Lu et al. (US 20200039448 A1), Paragraph [0023], and Marasigan et al. (US 20200159230 A1), Paragraph [0048].

Regarding claim 11, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 10. Lu in view of Musk fails to explicitly teach wherein the storage device comprises a cloud storage device. However, Marasigan teaches wherein the storage device comprises a cloud storage device (Fig. 1, Paragraph [0037] - Marasigan discloses once the processor 410 determines that the data streams correspond to the predetermined criteria (e.g., YES determination at any of Steps 631, 632, 634, and 636), then the data streams are stored in the memory 420 (Step 640). If the processor 410 does not determine that the data streams correspond to the predetermined criteria (e.g., NO determination at any of Steps 631, 632, 634, and 636), then the processor 410 may transmit the data streams to a cloud server (Step 650).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Lu in view of Marasigan and Musk having a vehicular vision system the vehicular vision system comprising: a camera disposed at a vehicle equipped with the vehicular vision system with the teachings of Marasigan wherein the storage device comprises a cloud storage device. Wherein having Lu’s system for vehicle vision wherein the storage device comprises a cloud storage device. The motivation behind the modification would have been to allow for efficient storing of data, since both Lu and Marasigan are both systems that utilize cameras to collect information surrounding a vehicle. Wherein Lu’s system wherein reduced profile housing, while Marasigan’s system provides an efficient way to store data. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Marasigan et al. (US 20200159230 A1) Paragraph [0048]. Regarding claim 12, Lu in view Marasigan and Musk teaches the vehicular vision system of claim 1, Lu further teaches wherein the camera outputs the first data stream via the first output interface using a first electrical connector (Fig. 4, Paragraph [0018]- Lu discloses the other LVDS serializer 222a of the camera 214 takes processed pixel data (for example in YUV422 or RGB888 format), as processed at an ISP chip or processor 214d of the camera, and the camera outputs (via the output connector 224a) the output data stream 214a to the display 216 of head unit.), and wherein the camera outputs the second data stream via the second output interface using a second electrical connector (Fig. 
4, Paragraph [0018]- Lu discloses as the machine vision ECU 219 expects raw pixel data with Bayer format, the LVDS serializer 222b takes pixel data directly from imager 214c of the camera 214 and the camera outputs the data stream 214b (via the output connector 224b) to the machine vision ECU 219.), and wherein the first electrical connector and the second electrical connector are different (Fig. 4, Paragraph [0018]- Lu discloses the other LVDS serializer 222a of the camera 214 takes processed pixel data (for example in YUV422 or RGB888 format), as processed at an ISP chip or processor 214d of the camera, and the camera outputs (via the output connector 224a) the output data stream 214a to the display 216 of head unit. As the machine vision ECU 219 expects raw pixel data with Bayer format, the LVDS serializer 222b takes pixel data directly from imager 214c of the camera 214 and the camera outputs the data stream 214b (via the output connector 224b) to the machine vision ECU 219.). Regarding claim 15, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu in view of Marasigan fails to explicitly teach wherein the first data stream comprises point cloud information. However, Musk explicitly teaches wherein the first data stream comprises point cloud information (Fig. 3, Paragraph [0040]- Musk discloses additional sensors such as radar, lidar, ultrasonic, etc. may be used to provide relevant auxiliary sensor data. In various embodiments, the image data is paired with corresponding auxiliary data to help identify the properties of objects detected in the sensor data. (Wherein lidar data is point cloud data).).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Marasigan, of having a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Musk wherein the first data stream comprises point cloud information, resulting in Lu's vehicle vision system wherein the first data stream comprises point cloud information. The motivation behind the modification would have been to allow for a more accurate and efficient system, since both Lu and Musk are systems that utilize cameras to collect information surrounding a vehicle. Lu's system provides a reduced profile housing, while Musk's system provides a more accurate and efficient system. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Musk et al. (US 20200265247 A1), Paragraph [0011]. Regarding claim 16, Lu teaches a vehicular vision system (Fig. 3, Paragraph [0018]- Lu discloses as shown in FIG. 3, a vehicular vision system 110 includes a camera 114 with dual video output working with multiple other cameras 115 (one shown) with single video output), the vehicular vision system comprising: a camera disposed at a vehicle equipped with the vehicular vision system (Fig. 1, Paragraph [0014]- Lu discloses a forward viewing camera may be disposed at the windshield of the vehicle and view through the windshield and forward of the vehicle, such as for a machine vision system (such as for traffic sign recognition)), wherein the camera is disposed at an in-cabin side of a windshield of the vehicle and views forward of the vehicle through the windshield (Fig. 1, Paragraph [0014]- Lu discloses a forward viewing camera may be disposed at the windshield of the vehicle and view through the windshield and forward of the vehicle); wherein the camera comprises a CMOS imaging array (Fig.
1, Paragraph [0004]- Lu discloses a driver assistance system or vision system or imaging system for a vehicle that utilizes one or more cameras (preferably one or more CMOS cameras) to capture image data representative of images exterior of the vehicle), and wherein the CMOS imaging array comprises at least one million photosensors arranged in rows and columns (Fig. 1, Paragraph [0027]- Lu discloses a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. Preferably, the imaging array has at least 300,000 photosensor elements or pixels, more preferably at least 500,000 photosensor elements or pixels and more preferably at least 1 million photosensor elements or pixels.); wherein the camera is operable to capture video image data (Fig. 2, Paragraph [0014]- Lu discloses the vision system 12 includes a control or electronic control unit (ECU) or processor 18 that is operable to process image data captured by the camera or cameras and may detect objects or the like and/or provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle).); wherein the camera comprises a first output interface (Fig. 
4, Paragraph [0021]- Lu discloses the other LVDS serializer 222a of the camera 214 takes processed pixel data (for example in YUV422 or RGB888 format), as processed at an ISP chip or processor 214d of the camera, and the camera outputs (via the output connector 224a) the output data stream 214a to the display 216 of head unit, where video images derived from the captured image data are displayed at a video display screen for viewing by a driver of the vehicle. (Wherein the serializer 222a is the first output interface)) and a second output interface (Fig. 4, Paragraph [0021]- Lu discloses the LVDS serializer 222b takes pixel data directly from imager 214c of the camera 214 and the camera outputs the data stream 214b (via the output connector 224b) to the machine vision ECU 219. (Wherein the serializer 222b is the second output interface)), and wherein, during operation of the camera, a first data stream is output by the camera via the first output interface (Fig. 4, Paragraph [0021]- Lu discloses the other LVDS serializer 222a of the camera 214 takes processed pixel data (for example in YUV422 or RGB888 format), as processed at an ISP chip or processor 214d of the camera, and the camera outputs (via the output connector 224a) the output data stream 214a to the display 216 of head unit, where video images derived from the captured image data are displayed at a video display screen for viewing by a driver of the vehicle. (Wherein the output data stream 214a is the first data stream)) and a second data stream is output by the camera via the second output interface (Fig. 4, Paragraph [0021]- Lu discloses the LVDS serializer 222b takes pixel data directly from imager 214c of the camera 214 and the camera outputs the data stream 214b (via the output connector 224b) to the machine vision ECU 219. (Wherein the data stream 214b is the second data stream)), and wherein the first data stream and the second data stream are different (Fig.
4, Paragraph [0015]- Lu discloses due to the requirements of the image processing, which are different between the head unit display and the machine vision ECU, a camera or camera system outputs two different video streams.); wherein the first data stream is derived at least in part from video image data captured by the camera (Fig. 4, Paragraph [0021]- Lu discloses a vehicular vision system 210 includes a camera 214 with dual video outputs or data streams 214a, 214b that, without a surround view ECU, output directly to a display 216 of a head unit and a machine vision ECU 219.), and wherein the second data stream is derived at least in part from video image data captured by the camera (Fig. 4, Paragraph [0021]- Lu discloses a vehicular vision system 210 includes a camera 214 with dual video outputs or data streams 214a, 214b that, without a surround view ECU, output directly to a display 216 of a head unit and a machine vision ECU 219.); an electronic control unit (ECU) comprising electronic circuitry and associated software (Fig. 4, Paragraph [0012]- Lu discloses the vision system 12 includes a control or electronic control unit (ECU) or processor 18 that is operable to process image data captured by the camera or cameras and may detect objects or the like and/or provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle).); wherein the electronic circuitry of the ECU comprises a data processor for processing data of the first data stream for a driving assistance system of the vehicle (Fig. 
4, Paragraph [0012]- Lu discloses the vision system 12 includes a control or electronic control unit (ECU) or processor 18 that is operable to process image data captured by the camera or cameras and may detect objects or the like and/or provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle).); Lu fails to explicitly teach wherein the second data stream is provided to a storage device that is remote from the camera, and wherein the storage device stores data of the second data stream. However, Marasigan explicitly teaches wherein the second data stream is provided to a storage device that is remote from the camera (Fig. 2, Paragraph [0032]- Marasigan discloses upon execution by the processor 410, the first program 430 is configured to determine whether one or more data streams or data points from the sensors 460 are to be stored on board, or to be transmitted over the network 200 to a cloud system.), and wherein the storage device stores data of the second data stream (Fig. 4, Paragraph [0035]- Marasigan discloses the processor 410 executes the first program 430 to determine whether or not one or more data streams are to be stored on board (i.e., locally), or not.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu, of having a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Marasigan wherein the second data stream is provided to a storage device that is remote from the camera, and wherein the storage device stores data of the second data stream.
This results in Lu's vehicle vision system wherein the second data stream is provided to a storage device that is remote from the camera, and wherein the storage device stores data of the second data stream. The motivation behind the modification would have been to allow for efficient storing of data, since both Lu and Marasigan are systems that utilize cameras to collect information surrounding a vehicle. Lu's system provides a reduced profile housing, while Marasigan's system provides an efficient way to store data. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Marasigan et al. (US 20200159230 A1), Paragraph [0048]. Lu in view of Marasigan fails to explicitly teach wherein the first data stream comprises first synchronization data and the second data stream comprises second synchronization data, and wherein the first synchronization data and the second synchronization data allow the first data stream and the second data stream to be temporally synchronized to enable images from the second data stream stored on the storage device to be annotated with vehicle bus data derived from the first data stream, and wherein the vehicle bus data comprises information extracted from the first data stream, the information comprising at least one of a detected object list, lane marker information, or traffic sign information, and wherein the first synchronization data and the second synchronization data comprise timestamps. However, Musk explicitly teaches wherein the first data stream comprises first synchronization data (Fig. 2, Paragraph [0028]- Musk discloses the vision data may be captured over a period of time to create a time series of elements. In various embodiments, the elements include timestamps to maintain an ordering of the elements. Further in Fig. 2, Paragraph [0034]- Musk discloses vision data and related data are organized by timestamps and corresponding timestamps are used to synchronize the two data sets.
In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.), and the second data stream comprises second synchronization data (Fig. 2, Paragraph [0028]- Musk discloses the vision data may be captured over a period of time to create a time series of elements. Further in Fig. 2, Paragraph [0034]- Musk discloses vision data and related data are organized by timestamps and corresponding timestamps are used to synchronize the two data sets. In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.), wherein the first synchronization data and the second synchronization data allow the first data stream and the second data stream to be temporally synchronized to enable images from the second data stream stored on the storage device to be annotated with vehicle bus data derived from the first data stream (Fig. 2, Paragraph [0027]- Musk discloses image data is annotated with sensor data from additional auxiliary sensors to automatically create training data. In some embodiments, a time series of elements made up of sensor and related auxiliary data is collected from a vehicle and used to automatically create training data. Further in Fig. 2 Paragraph [0034]- Musk discloses vision data and related data are organized by timestamps and corresponding timestamps are used to synchronize the two data sets. In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.), and wherein the vehicle bus data comprises information extracted from the first data stream, the information comprising at least one of a detected object list (Fig. 5, Paragraph [0060]- Musk discloses sensors 503 and 553 capture distance and direction measurements. 
Distance vector 513 depicts the distance and direction of neighboring vehicle 511, distance vector 523 depicts the distance and direction of neighboring vehicle 521, and distance vector 563 depicts the distance and direction of neighboring vehicle 561. (wherein the distance vectors show information on detected objects)), lane marker information (Fig. 2, Paragraph [0037]- Musk discloses a detected vehicle can be labeled based on a predicted distance and direction as being in the left lane or right lane. In some embodiments, the detected vehicle can be labeled as being in a blind spot, as a vehicle that should be yielded to, or with another appropriate semantic label. In some embodiments, vehicles are assigned to roads or lanes in a map based on the determined ground truth. As additional examples, the determined ground truth can be used to label traffic lights, lanes, drivable space, or other features that assist autonomous driving.), or traffic sign information (Fig. 2, Paragraph [0037]- Musk discloses as additional examples, the determined ground truth can be used to label traffic lights, lanes, drivable space, or other features that assist autonomous driving.), and wherein the first synchronization data and the second synchronization data comprise timestamps (Fig. 2, Paragraph [0028]- Musk discloses the vision data may be captured over a period of time to create a time series of elements. Further in Fig. 2, Paragraph [0034]- Musk discloses vision data and related data are organized by timestamps and corresponding timestamps are used to synchronize the two data sets. In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.).
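The timestamp-based synchronization Musk describes in Paragraphs [0028] and [0034], where two time series are ordered by timestamps and corresponding timestamps align a sequence of images with a sequence of related data for annotation, can be sketched as a nearest-timestamp join. This is an illustrative sketch only, not Musk's implementation; the function name, field names, and tolerance value are hypothetical.

```python
from bisect import bisect_left

def annotate_frames(frames, bus_records, tolerance=0.05):
    """Pair each (timestamp, frame) with the bus record nearest in time."""
    bus_records = sorted(bus_records, key=lambda r: r[0])
    times = [t for t, _ in bus_records]
    annotated = []
    for t_frame, frame in frames:
        i = bisect_left(times, t_frame)
        # Candidates: the bus records immediately before and after the frame time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - t_frame), default=None)
        if best is not None and abs(times[best] - t_frame) <= tolerance:
            annotated.append((t_frame, frame, bus_records[best][1]))
        else:
            annotated.append((t_frame, frame, None))  # no bus record close enough
    return annotated

# Three camera frames and two vehicle-bus records, each tagged with a timestamp.
frames = [(0.00, "frame0"), (0.10, "frame1"), (0.20, "frame2")]
bus = [(0.01, {"objects": ["car"]}), (0.11, {"objects": ["sign"]})]
result = annotate_frames(frames, bus)
```

The tolerance bound prevents a frame from being annotated with stale bus data when no record falls near its timestamp.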
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Marasigan, of having a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Musk wherein the first data stream comprises first synchronization data and the second data stream comprises second synchronization data, and wherein the first synchronization data and the second synchronization data allow the first data stream and the second data stream to be temporally synchronized to enable images from the second data stream stored on the storage device to be annotated with vehicle bus data derived from the first data stream, and wherein the vehicle bus data comprises information extracted from the first data stream, the information comprising at least one of a detected object list, lane marker information, or traffic sign information, and wherein the first synchronization data and the second synchronization data comprise timestamps. This results in Lu's vehicle vision system having these claimed synchronization features.
The motivation behind the modification would have been to allow for a more accurate and efficient system, since both Lu and Musk are systems that utilize cameras to collect information surrounding a vehicle. Lu's system provides a reduced profile housing, while Musk's system provides a more accurate and efficient system. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Musk et al. (US 20200265247 A1), Paragraph [0011]. Regarding claim 17, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 16. Lu further teaches wherein the second data stream comprises video image data captured by the camera (Fig. 2, Paragraph [0014]- Lu discloses the output data stream 214a to the display 216 of head unit, where video images derived from the captured image data are displayed at a video display screen for viewing by a driver of the vehicle.). Lu in view of Musk fails to explicitly teach wherein the storage device stores the video image data of the second data stream. However, Marasigan explicitly teaches wherein the storage device stores the video image data of the second data stream (Fig. 4, Paragraph [0035]- Marasigan discloses the processor 410 executes the first program 430 to determine whether or not one or more data streams are to be stored on board (i.e., locally), or not. The processor 410 receives data streams from the sensors 460. (Step 610). The sensors 460 continuously capture and generate data streams while the vehicle 500 is operating. For example, the data streams include video data captured by a camera installed in the vehicle 500, acceleration information captured by an accelerometer, braking information captured by a brake sensor, speed information captured by a speed sensor, engine information captured by various sensors arranged with a vehicle engine, etc.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Marasigan and Musk, of having a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Marasigan wherein the storage device stores the video image data of the second data stream, resulting in Lu's vehicle vision system wherein the storage device stores the video image data of the second data stream. The motivation behind the modification would have been to allow for efficient storing of data, since both Lu and Marasigan are systems that utilize cameras to collect information surrounding a vehicle. Lu's system provides a reduced profile housing, while Marasigan's system provides an efficient way to store data. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Marasigan et al. (US 20200159230 A1), Paragraph [0048]. Regarding claim 18, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 16. Lu in view of Musk fails to explicitly teach wherein the first output interface comprises a controller area network (CAN) bus interface. However, Marasigan explicitly teaches wherein the first output interface comprises a controller area network (CAN) bus interface (Fig. 1, Paragraph [0034]- Marasigan discloses the CAN bus 560 operates as a communication interface among various components of the vehicle 100 such as the processor 410, the memory 420, the control mechanism 550, and the sensors 460.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Marasigan and Musk, of having a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Marasigan wherein the first output interface comprises a controller area network (CAN) bus interface, resulting in Lu's vehicle vision system wherein the first output interface comprises a controller area network (CAN) bus interface. The motivation behind the modification would have been to allow for efficient storing of data, since both Lu and Marasigan are systems that utilize cameras to collect information surrounding a vehicle. Lu's system provides a reduced profile housing, while Marasigan's system provides an efficient way to store data. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Marasigan et al. (US 20200159230 A1), Paragraph [0048]. Regarding claim 19, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 16. Lu further teaches wherein the second output interface comprises an Ethernet interface (Fig. 1, Paragraph [0024]- Lu discloses it can be applied to other video signal transmission types, such as, for example, NTSC, Ethernet, and/or digital video transmitted over analog cables (for example, the technologies developed by Analog Devices C2B or Techpoint's HD-TVI or the like).). Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (US 20200039448 A1), hereafter referenced as Lu, in view of Marasigan et al. (US 20200159230 A1), hereafter referenced as Marasigan, and Musk et al. (US 20200265247 A1), hereafter referenced as Musk, and Wilson et al. (US 20180152495 A1), hereafter referenced as Wilson.
Regarding claim 13, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu in view of Marasigan and Musk fails to explicitly teach wherein the camera outputs the first data stream via the first output interface and the second data stream via the second output interface simultaneously using a single electrical connector. However, Wilson explicitly teaches wherein the camera outputs the first data stream via the first output interface and the second data stream via the second output interface simultaneously using a single electrical connector (Fig. 1, Paragraph [0065]- Wilson discloses stream aggregating component 112, which can determine to obtain correlated data for the first stream source and second stream source for aggregating to a single stream output, can provide the indication of the callback function to stream correlating component 116 along with the indication of the stream sources, such that stream correlating component 116 can call the callback function with a single stream output including correlated data from the stream sources.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Marasigan and Musk, of having a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Wilson wherein the camera outputs the first data stream via the first output interface and the second data stream via the second output interface simultaneously using a single electrical connector. This results in Lu's vehicle vision system wherein the camera outputs the first data stream via the first output interface and the second data stream via the second output interface simultaneously using a single electrical connector.
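Wilson's stream aggregation (Paragraph [0065]), in which correlated data from two stream sources is delivered through a callback function as a single stream output, can be sketched as follows. This is an illustrative sketch only, not Wilson's implementation; all names are hypothetical, and the correlation step is reduced to pairing elements in order.

```python
def aggregate_streams(source_a, source_b, callback):
    """Deliver correlated pairs from two stream sources through one
    callback, as a single aggregated stream output."""
    for a, b in zip(source_a, source_b):  # in-order pairing stands in for correlation
        callback({"stream_a": a, "stream_b": b})

# The consumer registers a callback and receives one aggregated stream.
out = []
aggregate_streams(["frame0", "frame1"], ["raw0", "raw1"], out.append)
```

A real aggregator would correlate samples by timestamp or sequence number rather than simple positional pairing.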
The motivation behind the modification would have been to simplify streaming multiple data streams, since both Lu and Wilson are systems that utilize multiple data streams. Lu's system provides a reduced profile housing, while Wilson's system provides a way to simplify streaming the data. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Wilson et al. (US 20180152495 A1), Paragraph [0023]. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (US 20200039448 A1), hereafter referenced as Lu, in view of Marasigan et al. (US 20200159230 A1), hereafter referenced as Marasigan, Musk et al. (US 20200265247 A1), hereafter referenced as Musk, and Nanami et al. (US 20110050482 A1), hereafter referenced as Nanami. Regarding claim 14, Lu in view of Marasigan and Musk teaches the vehicular vision system of claim 1. Lu in view of Marasigan and Musk fails to explicitly teach wherein the camera comprises the ECU. However, Nanami explicitly teaches wherein the camera comprises the ECU (Fig. 1, Paragraph [0031]- Nanami discloses the camera ECU 20 comprises an image portion search means 22, a detection point extracting means 24, and an image portion correcting means 26, and the DSS ECU 30 comprises an object information calculating means 32 and a collision determination means 34.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Marasigan and Musk, of having a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Nanami wherein the camera comprises the ECU, resulting in Lu's vehicle vision system wherein the camera comprises the ECU. The motivation behind the modification would have been to increase accuracy of the system, since both Lu and Nanami are systems that utilize cameras mounted on vehicles.
Lu's system provides a reduced profile housing, while Nanami's system provides an improvement to precision. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Nanami et al. (US 20110050482 A1), Paragraph [0044]. Claims 20-22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (US 20200039448 A1), hereafter referenced as Lu, in view of Nanami et al. (US 20110050482 A1), hereafter referenced as Nanami, Marasigan et al. (US 20200159230 A1), hereafter referenced as Marasigan, and Musk et al. (US 20200265247 A1), hereafter referenced as Musk. Regarding claim 20, Lu teaches a vehicular vision system (Fig. 3, Paragraph [0018]- Lu discloses as shown in FIG. 3, a vehicular vision system 110 includes a camera 114 with dual video output working with multiple other cameras 115 (one shown) with single video output), the vehicular vision system comprising: a camera disposed at a vehicle equipped with the vehicular vision system (Fig. 1, Paragraph [0014]- Lu discloses a forward viewing camera may be disposed at the windshield of the vehicle and view through the windshield and forward of the vehicle, such as for a machine vision system (such as for traffic sign recognition)); wherein the camera comprises a CMOS imaging array (Fig. 1, Paragraph [0004]- Lu discloses a driver assistance system or vision system or imaging system for a vehicle that utilizes one or more cameras (preferably one or more CMOS cameras) to capture image data representative of images exterior of the vehicle), and wherein the CMOS imaging array comprises at least one million photosensors arranged in rows and columns (Fig. 1, Paragraph [0027]- Lu discloses a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array.
The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. Preferably, the imaging array has at least 300,000 photosensor elements or pixels, more preferably at least 500,000 photosensor elements or pixels and more preferably at least 1 million photosensor elements or pixels.); wherein the camera is operable to capture video image data (Fig. 2, Paragraph [0014]- Lu discloses the vision system 12 includes a control or electronic control unit (ECU) or processor 18 that is operable to process image data captured by the camera or cameras and may detect objects or the like and/or provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle).); wherein the camera comprises a first output interface (Fig. 4, Paragraph [0021]- Lu discloses the other LVDS serializer 222a of the camera 214 takes processed pixel data (for example in YUV422 or RGB888 format), as processed at an ISP chip or processor 214d of the camera, and the camera outputs (via the output connector 224a) the output data stream 214a to the display 216 of head unit, where video images derived from the captured image data are displayed at a video display screen for viewing by a driver of the vehicle. (Wherein the serializer 222a is the first output interface)) and a second output interface (Fig. 4, Paragraph [0021]- Lu discloses the LVDS serializer 222b takes pixel data directly from imager 214c of the camera 214 and the camera outputs the data stream 214b (via the output connector 224b) to the machine vision ECU 219. (Wherein the serializer 222b is the second output interface)), and wherein, during operation of the camera, a first data stream is output by the camera via the first output interface (Fig. 
4, Paragraph [0021]- Lu discloses the other LVDS serializer 222a of the camera 214 takes processed pixel data (for example in YUV422 or RGB888 format), as processed at an ISP chip or processor 214d of the camera, and the camera outputs (via the output connector 224a) the output data stream 214a to the display 216 of head unit, where video images derived from the captured image data are displayed at a video display screen for viewing by a driver of the vehicle. (Wherein the output data stream 214a is the first data stream)) and a second data stream is output by the camera via the second output interface (Fig. 4, Paragraph [0021]- Lu discloses the LVDS serializer 222b takes pixel data directly from imager 214c of the camera 214 and the camera outputs the data stream 214b (via the output connector 224b) to the machine vision ECU 219. (Wherein the data stream 214b is the second data stream)), and wherein the first data stream and the second data stream are different (Fig. 4, Paragraph [0015]- Lu discloses due to the requirements of the image processing, which are different between the head unit display and the machine vision ECU, a camera or camera system outputs two different video streams.); wherein the first data stream is derived at least in part from video image data captured by the camera (Fig. 4, Paragraph [0021]- Lu discloses a vehicular vision system 210 includes a camera 214 with dual video outputs or data streams 214a, 214b that, without a surround view ECU, output directly to a display 216 of a head unit and a machine vision ECU 219.), and wherein the second data stream is derived at least in part from video image data captured by the camera (Fig.
4, Paragraph [0021]- Lu discloses a vehicular vision system 210 includes a camera 214 with dual video outputs or data streams 214a, 214b that, without a surround view ECU, output directly to a display 216 of a head unit and a machine vision ECU 219.); and wherein the ECU comprises electronic circuitry and associated software (Fig. 4, Paragraph [0012]- Lu discloses the vision system 12 includes a control or electronic control unit (ECU) or processor 18 that is operable to process image data captured by the camera or cameras and may detect objects or the like and/or provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle).); wherein the electronic circuitry of the ECU comprises a data processor for processing data of the first data stream for a driving assistance system of the vehicle (Fig. 4, Paragraph [0012]- Lu discloses the vision system 12 includes a control or electronic control unit (ECU) or processor 18 that is operable to process image data captured by the camera or cameras and may detect objects or the like and/or provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle).); Lu fails to explicitly teach wherein the camera comprises an electronic control unit (ECU). However, Nanami explicitly teaches wherein the camera comprises an electronic control unit (ECU) (Fig. 
1, Paragraph [0031]- Nanami discloses the camera ECU 20 comprises an image portion search means 22, a detection point extracting means 24, and an image portion correcting means 26, and the DSS ECU 30 comprises an object information calculating means 32 and a collision determination means 34.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu, directed to a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Nanami, wherein the camera comprises an electronic control unit (ECU), such that Lu's vehicle vision system is modified so that the camera comprises an ECU. The motivation behind the modification would have been to increase the accuracy of the system, since Lu and Nanami both disclose systems that utilize cameras mounted on vehicles. Whereas Lu's system provides a reduced-profile housing, Nanami's system provides improved precision. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Nanami et al. (US 20110050482 A1), Paragraph [0044]. Lu in view of Nanami fails to explicitly teach wherein the second data stream is provided to a storage device within a head unit of the vehicle and wherein the storage device stores data of the second data stream. However, Marasigan explicitly teaches wherein the second data stream is provided to a storage device within a head unit of the vehicle (Fig. 1, Paragraph [0025]- Marasigan discloses the storage 140 is coupled to the head unit 120 and stores a set of data points under the control of the head unit 120.) that is remote from the camera (Fig. 1, Paragraph [0026]- Marasigan discloses the vehicle 100 may receive data points from the sensors 170 via the network 200.
Alternatively, or additionally, a cloud server may receive data from the sensors 170.), and wherein the storage device stores data of the second data stream (Fig. 4, Paragraph [0035]- Marasigan discloses the processor 410 executes the first program 430 to determine whether or not one or more data streams are to be stored on board (i.e., locally), or not.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Nanami, directed to a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Marasigan wherein the second data stream is provided to a storage device within a head unit of the vehicle and the storage device stores data of the second data stream. The motivation behind the modification would have been to allow for efficient storing of data, since Lu and Marasigan both disclose systems that utilize cameras to collect information about a vehicle's surroundings. Whereas Lu's system provides a reduced-profile housing, Marasigan's system provides an efficient way to store data. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Marasigan et al. (US 20200159230 A1), Paragraph [0048].
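For illustration only, the arrangement described above, in which one camera stream drives the head-unit display while a second stream is stored on a device within the head unit, could be sketched as follows. All names (Frame, HeadUnit, fmt) are hypothetical and are not drawn from any cited reference.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One captured frame; 'fmt' distinguishes the two streams."""
    timestamp_ms: int
    fmt: str      # e.g. "YUV422" for the processed stream, "BAYER" for the raw stream
    data: bytes

class HeadUnit:
    """Hypothetical head unit: renders the first (processed) stream and
    stores frames of the second (raw) stream on a local storage device."""

    def __init__(self):
        self.storage = []  # stand-in for a storage device within the head unit

    def show(self, frame: Frame) -> str:
        # First data stream: derive a displayed image for the driver.
        return f"displaying {frame.fmt} frame @ {frame.timestamp_ms} ms"

    def store(self, frame: Frame, store_locally: bool = True) -> int:
        # Second data stream: optionally persist the frame locally,
        # mirroring a store-on-board-or-not decision.
        if store_locally:
            self.storage.append(frame)
        return len(self.storage)
```

The sketch deliberately keeps the two paths separate: the display path never touches storage, and the storage path never renders, matching the two-stream division described above.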
Lu in view of Marasigan fails to explicitly teach wherein the first data stream comprises first synchronization data and the second data stream comprises second synchronization data, and wherein the first synchronization data and the second synchronization data allow the first data stream and the second data stream to be temporally synchronized to enable images from the second data stream stored on the storage device to be annotated with vehicle bus data derived from the first data stream, and wherein the vehicle bus data comprises information extracted from the first data stream, the information comprising at least one of a detected object list, lane marker information, or traffic sign information. However, Musk explicitly teaches wherein the first data stream comprises first synchronization data (Fig. 2, Paragraph [0028]- Musk discloses the vision data may be captured over a period of time to create a time series of elements. In various embodiments, the elements include timestamps to maintain an ordering of the elements. Further in Fig. 2, Paragraph [0034]- Musk discloses vision data and related data are organized by timestamps and corresponding timestamps are used to synchronize the two data sets. In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.), and the second data stream comprises second synchronization data (Fig. 2, Paragraph [0028]- Musk discloses the vision data may be captured over a period of time to create a time series of elements. Further in Fig. 2, Paragraph [0034]- Musk discloses vision data and related data are organized by timestamps and corresponding timestamps are used to synchronize the two data sets. 
In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.), wherein the first synchronization data and the second synchronization data allow the first data stream and the second data stream to be temporally synchronized to enable images from the second data stream stored on the storage device to be annotated with vehicle bus data derived from the first data stream (Fig. 2, Paragraph [0027]- Musk discloses image data is annotated with sensor data from additional auxiliary sensors to automatically create training data. In some embodiments, a time series of elements made up of sensor and related auxiliary data is collected from a vehicle and used to automatically create training data. Further in Fig. 2 Paragraph [0034]- Musk discloses vision data and related data are organized by timestamps and corresponding timestamps are used to synchronize the two data sets. In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.), and wherein the vehicle bus data comprises information extracted from the first data stream, the information comprising at least one of a detected object list (Fig. 5, Paragraph [0060]- Musk discloses sensors 503 and 553 capture distance and direction measurements. Distance vector 513 depicts the distance and direction of neighboring vehicle 511, distance vector 523 depicts the distance and direction of neighboring vehicle 521, and distance vector 563 depicts the distance and direction of neighboring vehicle 561. (wherein the distance vectors show information on detected objects)), lane marker information (Fig. 2, Paragraph [0037]- Musk discloses a detected vehicle can be labeled based on a predicted distance and direction as being in the left lane or right lane. 
In some embodiments, the detected vehicle can be labeled as being in a blind spot, as a vehicle that should be yielded to, or with another appropriate semantic label. In some embodiments, vehicles are assigned to roads or lanes in a map based on the determined ground truth. As additional examples, the determined ground truth can be used to label traffic lights, lanes, drivable space, or other features that assist autonomous driving.), or traffic sign information (Fig. 2, Paragraph [0037]- Musk discloses as additional examples, the determined ground truth can be used to label traffic lights, lanes, drivable space, or other features that assist autonomous driving.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Nanami and Marasigan, directed to a vehicular vision system comprising a camera disposed at a vehicle equipped with the vehicular vision system, with the teachings of Musk wherein the first data stream comprises first synchronization data and the second data stream comprises second synchronization data, and wherein the first synchronization data and the second synchronization data allow the first data stream and the second data stream to be temporally synchronized to enable images from the second data stream stored on the storage device to be annotated with vehicle bus data derived from the first data stream, and wherein the vehicle bus data comprises information extracted from the first data stream, the information comprising at least one of a detected object list, lane marker information, or traffic sign information.
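The timestamp-based synchronization and annotation that Musk is cited for can be sketched as follows. This is an illustrative sketch only; the function names and record fields are hypothetical and do not come from any cited reference.

```python
def synchronize(bus_stream, image_stream, tolerance_ms=50):
    """Pair each timestamped vehicle-bus record with the stored image
    whose timestamp is closest, if the gap is within the tolerance."""
    pairs = []
    for rec in bus_stream:
        best = min(image_stream, key=lambda img: abs(img["ts"] - rec["ts"]), default=None)
        if best is not None and abs(best["ts"] - rec["ts"]) <= tolerance_ms:
            pairs.append((rec, best))
    return pairs

def annotate(pairs):
    """Annotate stored images with bus-derived info, e.g. a detected object list."""
    return [{"image": img["frame"], "objects": rec["objects"]} for rec, img in pairs]
```

For example, a bus record at t=100 ms carrying a detected object list would be attached to the stored image captured at t=103 ms, since the two timestamps fall within the tolerance.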
Lu's vehicle vision system would thereby be modified to include the recited synchronization data and annotation capability. The motivation behind the modification would have been to allow for a more accurate and efficient system, since Lu and Musk both disclose systems that utilize cameras to collect information about a vehicle's surroundings. Whereas Lu's system provides a reduced-profile housing, Musk's system provides a more accurate and efficient system. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Musk et al. (US 20200265247 A1), Paragraph [0011]. Regarding claim 21, Lu in view of Nanami, Marasigan and Musk teaches the vehicular vision system of claim 20. Lu in view of Nanami and Marasigan fails to explicitly teach wherein the first synchronization data and the second synchronization data comprise timestamps. However, Musk explicitly teaches wherein the first synchronization data and the second synchronization data comprise timestamps (Fig. 2, Paragraph [0028]- Musk discloses the vision data may be captured over a period of time to create a time series of elements. Further in Fig. 2, Paragraph [0034]- Musk discloses vision data and related data are organized by timestamps and corresponding timestamps are used to synchronize the two data sets.
In some embodiments, timestamps are used to synchronize a time series of data, such as a sequence of images and a corresponding sequence of related data.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Nanami and Marasigan with the teachings of Musk wherein the first synchronization data and the second synchronization data comprise timestamps. The motivation behind the modification would have been to allow for a more accurate and efficient system, since Lu and Musk both disclose systems that utilize cameras to collect information about a vehicle's surroundings. Whereas Lu's system provides a reduced-profile housing, Musk's system provides a more accurate and efficient system. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Musk et al. (US 20200265247 A1), Paragraph [0011]. Regarding claim 22, Lu in view of Nanami, Marasigan and Musk teaches the vehicular vision system of claim 20. Lu further teaches wherein the camera outputs the first data stream via the first output interface using a first electrical connector (Fig. 4, Paragraph [0018]- Lu discloses the other LVDS serializer 222a of the camera 214 takes processed pixel data (for example in YUV422 or RGB888 format), as processed at an ISP chip or processor 214d of the camera, and the camera outputs (via the output connector 224a) the output data stream 214a to the display 216 of head unit.), and wherein the camera outputs the second data stream via the second output interface using a second electrical connector (Fig.
4, Paragraph [0018]- Lu discloses as the machine vision ECU 219 expects a raw pixel data with Bayer format, the LVDS serializer 222b takes pixel data directly from imager 214c of the camera 214 and the camera outputs the data stream 214b (via the output connector 224b) to the machine vision ECU 219.), and wherein the first electrical connector and the second electrical connector are different (Fig. 4, Paragraph [0018]- Lu discloses the other LVDS serializer 222a of the camera 214 takes processed pixel data (for example in YUV422 or RGB888 format), as processed at an ISP chip or processor 214d of the camera, and the camera outputs (via the output connector 224a) the output data stream 214a to the display 216 of head unit. As the machine vision ECU 219 expects a raw pixel data with Bayer format, the LVDS serializer 222b takes pixel data directly from imager 214c of the camera 214 and the camera outputs the data stream 214b (via the output connector 224b) to the machine vision ECU 219.). Regarding claim 24, Lu in view of Nanami, Marasigan, and Musk teaches the vehicular vision system of claim 20. Lu in view of Nanami and Marasigan fails to explicitly teach wherein the first data stream comprises point cloud information. However, Musk explicitly teaches wherein the first data stream comprises point cloud information (Fig. 3, Paragraph [0040]- Musk discloses additional sensors such as radar, lidar, ultrasonic, etc. may be used to provide relevant auxiliary sensor data. In various embodiments, the image data is paired with corresponding auxiliary data to help identify the properties of objects detected in the sensor data. (Wherein lidar data is point cloud data).).
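For illustration of how point cloud data from an auxiliary range sensor can supply distance for a camera-detected object, as in the lidar pairing just quoted: the following is a hedged sketch with hypothetical names, not an implementation from any cited reference.

```python
import math

def object_range(point_cloud, bearing_deg, window_deg=10.0):
    """Return the distance to the nearest (x, y) point whose bearing
    lies within a small angular window around a camera detection.

    point_cloud : iterable of (x, y) points from a range sensor
    bearing_deg : bearing of the camera-detected object, in degrees
    """
    half = window_deg / 2.0
    dists = [
        math.hypot(x, y)
        for (x, y) in point_cloud
        if abs(math.degrees(math.atan2(y, x)) - bearing_deg) <= half
    ]
    # No point in the window means the sensor saw nothing at that bearing.
    return min(dists) if dists else None
```

This mirrors the idea of pairing image detections with corresponding auxiliary sensor data: the camera supplies the direction, the point cloud supplies the range.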
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Nanami and Marasigan with the teachings of Musk wherein the first data stream comprises point cloud information. The motivation behind the modification would have been to allow for a more accurate and efficient system, since Lu and Musk both disclose systems that utilize cameras to collect information about a vehicle's surroundings. Whereas Lu's system provides a reduced-profile housing, Musk's system provides a more accurate and efficient system. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Musk et al. (US 20200265247 A1), Paragraph [0011]. Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (US 20200039448 A1), hereafter referenced as Lu, in view of Nanami et al. (US 20110050482 A1), hereafter referenced as Nanami, Marasigan et al. (US 20200159230 A1), hereafter referenced as Marasigan, Musk et al. (US 20200265247 A1), hereafter referenced as Musk, and Wilson et al. (US 20180152495 A1), hereafter referenced as Wilson. Regarding claim 23, Lu in view of Nanami, Marasigan, and Musk teaches the vehicular vision system of claim 20 but fails to explicitly teach the recited single-connector output. However, Wilson explicitly teaches wherein the camera outputs the first data stream via the first output interface and the second data stream via the second output interface simultaneously using a single electrical connector (Fig.
1, Paragraph [0065]- Wilson discloses stream aggregating component 112, which can determine to obtain correlated data for the first stream source and second stream source for aggregating to a single stream output, can provide the indication of the callback function to stream correlating component 116 along with the indication of the stream sources, such that stream correlating component 116 can call the callback function with a single stream output including correlated data from the stream sources.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lu in view of Nanami, Marasigan, and Musk with the teachings of Wilson wherein the camera outputs the first data stream via the first output interface and the second data stream via the second output interface simultaneously using a single electrical connector. The motivation behind the modification would have been to simplify streaming multiple data streams, since Lu and Wilson both disclose systems that utilize multiple data streams. Whereas Lu's system provides a reduced-profile housing, Wilson's system provides a way to simplify streaming the data. Please see Lu et al. (US 20200039448 A1), Paragraph [0023] and Wilson et al. (US 20180152495 A1), Paragraph [0023]. Conclusion The following prior art references, made of record but not relied upon, are considered pertinent to applicant's disclosure. Urano et al. (US 20230100249 A1)- An information processing device includes one or more memories and one or more processors.
The one or more processors and the one or more memories are configured to receive control information and data that contains three-dimensional position information generated by a three-dimensional range sensor and convert, based on the received control information, the three-dimensional position information contained in the data received from the three-dimensional range sensor into two-dimensional image data containing information on a distance from a predetermined viewpoint. Please see Fig. 1 and Abstract. Diedrich et al. (US 20190389385 A1)- Method and apparatus are disclosed for overlay interfaces for rearview mirror displays. An example vehicle includes a front-view camera to capture a front-view image, a rearview camera to capture a rearview image, and a controller. The controller is configured to determine lane line projections and vehicle-width projections based on the front-view image and generate an overlay interface by overlaying the lane line projections and the vehicle-width projections onto the rearview image. The example vehicle also includes a rearview mirror display to present the overlay interface. Please see Fig. 1 and Abstract. Gummadi et al. (US 20210012120 A1)- A system and method for estimating free space including applying a machine learning model to camera images of a navigation area, where the navigation area is broken into cells, synchronizing point cloud data from the navigation area with the processed camera images, and associating probabilities that the cell is occupied and object classifications of objects that could occupy the cells with cells in the navigation area based on sensor data, sensor noise, and the machine learning model. Please see Fig. 1 and Abstract. Binion et al.
(US 20150112543 A1)- A method includes receiving and storing sensor data including a first plurality of data points indicative of a plurality of respective states of the environment external to a vehicle at a plurality of respective times, operational data including a second plurality of data points indicative of a plurality of respective states of an operational parameter of the vehicle at a plurality of respective times, and synchronization data. The method also includes generating an animated re-creation of an event involving the vehicle using the stored data, and causing the animated re-creation of the event to be displayed. Please see Fig. 1 and Abstract. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUCIUS C.G. ALLEN whose telephone number is (703)756-5987. The examiner can normally be reached Mon - Fri 8-5pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571)272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LUCIUS CAMERON GREEN ALLEN/Examiner, Art Unit 2673 /CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Feb 09, 2023 — Application Filed
Apr 30, 2025 — Non-Final Rejection (§103)
Jul 22, 2025 — Response Filed
Aug 15, 2025 — Final Rejection (§103)
Nov 18, 2025 — Request for Continued Examination
Dec 01, 2025 — Response after Non-Final Action
Dec 23, 2025 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597105 — SEMANTIC-AWARE AUTO WHITE BALANCE — granted Apr 07, 2026 (2y 5m to grant)
Patent 12579755 — OVERLAYING AUGMENTED REALITY (AR) CONTENT WITHIN AN AR HEADSET COUPLED TO A MAGNIFYING LOUPE — granted Mar 17, 2026 (2y 5m to grant)
Patent 12541972 — Computing Device and Method for Handling an Object in Recorded Images — granted Feb 03, 2026 (2y 5m to grant)
Patent 12536247 — Roughness Compensation Method and System, Image Processing Device, and Readable Storage Medium — granted Jan 27, 2026 (2y 5m to grant)
Patent 12529684 — INSPECTION DEVICE, INSPECTION METHOD, AND INSPECTION PROGRAM — granted Jan 20, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 99% (+39.3%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
