DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Preliminary Remarks
This is a reply to the application filed on 12/27/2024, in which claims 1-20 remain pending in the present application, with claims 1, 8, and 12 being independent claims.
When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner, and their equivalents, as they may most broadly and appropriately apply to any anticipated claim amendments.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 02/13/2025, 07/15/2025, and 12/30/2025 are in compliance with the provisions of 37 CFR 1.97 and have been considered by the Examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 6, 8-9, and 12-17 are rejected under 35 U.S.C. 103 as being unpatentable over Bassi et al. (US 20170232896 A1, hereinafter referred to as “Bassi”) in view of Jaegal (US 20220118996 A1, hereinafter referred to as “Jaegal”).
Regarding claim 1, Bassi discloses a first system comprising:
a plurality of cameras configured to collect a first image at a left rearview angle and a second image at a right rearview angle outside a vehicle (see Bassi, FIG. 2 and paragraph [0039]: “FIG. 2 illustrates an exemplary arrangement of a plurality of cameras in a vehicle visual system. At least one UWA lens camera is mounted on each of rear side 210, left side 220 and right side 230 of a vehicle 200”);
a first controller and a second controller (see Bassi, paragraph [0048]: “Each camera 520, 540, 560 and 580 may be equipped with its own specialized Geometry and Color Processing (GCP) unit in addition to the ISP, collectively referred to as the edge processor (as opposed to a central processor). The GCP and ISP may be physically implemented in the same processor or in two separate processors”) configured to:
obtain first information about the first image and second information about the second image from the plurality of cameras (see Bassi, paragraph [0048]: “individual feeds may be fully pre-processed by the corresponding cameras, either for independent display or in preparation for a combination. The pre-processed portions of the image are provided to the central logic 550 that may simply combine the individually pre-processed feeds pixel-by-pixel. Pre-processing may include one or more of geometry transformation, UWA lens image mapping, perspective correction and color/brightness adjustments”);
perform image processing on the first information to obtain a first processed image (see Bassi, paragraph [0049]: “each camera in this configuration becomes a smart camera not only capable of image processing and self calibrating, but also capable of serving as a master to control and manage the other components on performing their processing duties”);
perform image processing on the second information to obtain a second processed image (see Bassi, paragraph [0049]: “each camera in this configuration becomes a smart camera not only capable of image processing and self calibrating, but also capable of serving as a master to control and manage the other components on performing their processing duties”); and
a plurality of displays configured to display the first processed image and the second processed image (see Bassi, paragraph [0056]: “FIG. 8B illustrates an embodiment of the invention, showing a full system comprising a plurality of cameras 830-1 . . . 830-M and displays 810-1 . . . 810-N with distributed (edge) processing capabilities to provide information to multiple users 860-1 . . . 860-K. The processing capability includes, but is not limited to, geometry, color, and brightness processing; and could reside in both the cameras 830-1 . . . 830-M and displays 810-1 . . . 810-N, or all within the displays only. The processors in the cameras and displays will allow inter-device communication by forming an ad-hoc communication network 820 in order to coordinate the processing and display effort between the cameras and displays”).
Regarding claim 1, Bassi discloses all the claimed limitations with the exception of the limitation to control at least one in-vehicle system, wherein the at least one in-vehicle system is different from the first system.
Jaegal, from the same or similar field of endeavor, discloses control of at least one in-vehicle system, wherein the at least one in-vehicle system is different from the first system (see Jaegal, paragraph [0304]: “A display area of a display included in the first display device 410 may be divided into a first area 411 a and a second area 411 b. The first area 411 a can be defined as a content display area. For example, the first area 411 may display at least one of graphic objects corresponding to can display entertainment content (e.g., movies, sports, shopping, food, etc.), video conferences, food menu and augmented reality screens … The second area 411 b can be defined as a user interface area. For example, the second area 411 b may display an AI agent screen. The second area 411 b may be located in an area defined by a seat frame according to an embodiment. In this case, a user can view content displayed in the second area 411 b between seats. The first display device 410 may provide hologram content according to an embodiment. For example, the first display device 410 may provide hologram content for each of a plurality of users such that only a user who requests the content can view the content”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jaegal with the teachings of Bassi. The motivation for doing so would be to use the method and apparatus for controlling a vehicle in an autonomous driving system, as disclosed in Jaegal, to implement ADAS functions for autonomous driving and to control the display of entertainment content, video conferences, and other multimedia resources, thereby controlling at least one in-vehicle system that is different from the ADAS system, in order to integrate the functions of an electronic rearview mirror into another in-vehicle system so that the costs of deploying the electronic rearview mirror can be reduced.
Regarding claim 2, the combination teachings of Bassi and Jaegal as discussed above also disclose the first system of claim 1, wherein the at least one in-vehicle system comprises an advanced driver assistance system (ADAS), or wherein the at least one in-vehicle system comprises an ADAS and an in-vehicle infotainment (IVI) system, the first controller is further configured to control the ADAS (see Jaegal, paragraph [0183]: “The autonomous device 260 can implement at least one ADAS (Advanced Driver Assistance System) function”), and the second controller is further configured to control the IVI system (see Jaegal, paragraph [0249]: “A display area of a display included in the first display device 410 may be divided into a first area 411 a and a second area 411 b. The first area 411 a can be defined as a content display area. For example, the first area 411 may display at least one of graphic objects corresponding to can display entertainment content”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 6, the combination teachings of Bassi and Jaegal as discussed above also disclose the first system of claim 1, wherein the first controller and the second controller are packaged in different boxes or a same box (see Jaegal, paragraph [0304]: “A display area of a display included in the first display device 410 may be divided into a first area 411 a and a second area 411 b. The first area 411 a can be defined as a content display area. For example, the first area 411 may display at least one of graphic objects corresponding to can display entertainment content ... The second area 411 b can be defined as a user interface area. For example, the second area 411 b may display an AI agent screen”).
The motivation for combining the references has been discussed in claim 1 above.
Claim 8 is rejected for the same reasons as discussed in claim 1 above.
Claim 9 is rejected for the same reasons as discussed in claim 2 above.
Claim 12 is rejected for the same reasons as discussed in claim 1 above.
Regarding claim 13, the combination teachings of Bassi and Jaegal as discussed above also disclose the method of claim 12, further comprising:
obtaining a steering direction indication signal (see Jaegal, paragraph [0295]: “The display system 350 may be implemented as a touch screen integrally formed with the touch input unit and disposed on one region of the steering wheel of the vehicle”); and
determining, based on the steering direction indication signal, the first rearview angle as a left rearview angle or a right rearview angle (see Jaegal, paragraph [0295]: “at least one graphic object for manipulation of the user may be displayed on one area of the output screen. The steering wheel may include a plurality of touch screens”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 14, the combination teachings of Bassi and Jaegal as discussed above also disclose the method of claim 12, wherein the second system is an advanced driver assistance system (ADAS) (see Jaegal, paragraph [0183]: “The autonomous device 260 can implement at least one ADAS (Advanced Driver Assistance System) function”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 15, the combination teachings of Bassi and Jaegal as discussed above also disclose the method of claim 12, wherein the second system is an in-vehicle infotainment (IVI) system (see Jaegal, paragraph [0238]: “The information related to the currently operating application may include information related to an operation of at least one of the music player, the navigation, the radio, and the telephone. However, the information related to the application is not limited thereto, and may include information related to the operation of all other types of applications installed in the vehicle”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 16, the combination teachings of Bassi and Jaegal as discussed above also disclose the method of claim 12, further comprising performing a second function of a third system, wherein the third system is a second in-vehicle system and is different from the first system and the second system (see Jaegal, paragraph [0304]: “A display area of a display included in the first display device 410 may be divided into a first area 411 a and a second area 411 b. The first area 411 a can be defined as a content display area. For example, the first area 411 may display at least one of graphic objects corresponding to can display entertainment content (e.g., movies, sports, shopping, food, etc.), video conferences, food menu and augmented reality screens … The second area 411 b can be defined as a user interface area. For example, the second area 411 b may display an AI agent screen. The second area 411 b may be located in an area defined by a seat frame according to an embodiment. In this case, a user can view content displayed in the second area 411 b between seats. The first display device 410 may provide hologram content according to an embodiment. For example, the first display device 410 may provide hologram content for each of a plurality of users such that only a user who requests the content can view the content”).
The motivation for combining the references has been discussed in claim 1 above.
Regarding claim 17, the combination teachings of Bassi and Jaegal as discussed above also disclose the method of claim 12, further comprising performing fault detection to detect if a component of the first system is faulty (see Bassi, FIG. 8B and paragraph [0056]: “When there are redundant cameras or displays components, this architecture may also facilitate fault detection and failover in case there is problem with some of the cameras or displays, through communication via the ad-hoc network 820 formed by the distributed processors”).
The motivation for combining the references has been discussed in claim 1 above.
Claims 3-5, 7, 10-11, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bassi and Jaegal as applied to claim 1, and further in view of IIDA et al. (US 20220171275 A1, hereinafter referred to as “IIDA”).
Regarding claim 3, the combination teachings of Bassi and Jaegal as discussed above also disclose the first system of claim 1, wherein the first controller is connected to the second controller, and wherein the first system comprises:
a human-machine interaction (HMI) display connected to the second controller (see Bassi, paragraph [0051]: “the vision system further comprises a user interactive medium, so that the driver could select a view as desired”) and configured to:
display, when the left visual field display is faulty, the first processed image (see Bassi, FIG. 8B and paragraph [0056]: “When there are redundant cameras or displays components, this architecture may also facilitate fault detection and failover in case there is problem with some of the cameras or displays, through communication via the ad-hoc network 820 formed by the distributed processors”); or
display, when the right visual field display is faulty, the second processed image (see Bassi, FIG. 8B and paragraph [0056]: “When there are redundant cameras or displays components, this architecture may also facilitate fault detection and failover in case there is problem with some of the cameras or displays, through communication via the ad-hoc network 820 formed by the distributed processors”).
The motivation for combining Bassi and Jaegal has been discussed in claim 1 above.
Regarding claim 3, the combination teachings of Bassi and Jaegal as discussed above disclose all the claimed limitations with the exception of: wherein the plurality of cameras comprises a first left camera configured to collect the first image and a first right camera configured to collect the second image; and wherein the plurality of displays comprises a left visual field display connected to the first controller and configured to display the first processed image, and a right visual field display connected to the first controller and configured to display the second processed image.
IIDA from the same or similar fields of endeavor discloses a first left camera configured to collect the first image (see IIDA, paragraph [0121]: “a driving support apparatus including a left PVM camera sensor and a right PVM camera sensor each serving as an image pickup device”); and
a first right camera configured to collect the second image (see IIDA, paragraph [0121]: “a driving support apparatus including a left PVM camera sensor and a right PVM camera sensor each serving as an image pickup device”), and
wherein the plurality of displays comprises:
a left visual field display connected to the first controller and configured to display the first processed image (see IIDA, paragraph [0080]: “The PVM switch 43 is provided in the vicinity of a steering wheel (not shown), and may be pressed by the driver when a PVM image (described later) is to be displayed on the display 60 a of the display device”);
a right visual field display connected to the first controller and configured to display the second processed image (see IIDA, paragraph [0080]: “The PVM switch 43 is provided in the vicinity of a steering wheel (not shown), and may be pressed by the driver when a PVM image (described later) is to be displayed on the display 60 a of the display device”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of IIDA with the teachings of Bassi and Jaegal. The motivation for doing so would be to use the image pickup system for a vehicle disclosed in IIDA, which includes a left PVM camera sensor and a right PVM camera sensor each serving as an image pickup device to capture left wide-area image data and right wide-area image data, and which provides a PVM switch in the vicinity of a steering wheel for the driver to control the display of the left or right PVM image, thereby collecting the first image using the left camera and the second image using the right camera, displaying the first processed image on a left visual field display connected to the first controller, and displaying the second processed image on a right visual field display connected to the same controller, in order to display an image at a rearview angle corresponding to the steering instruction so that the driver can easily control the display of the desired image.
Regarding claim 4, the combination teachings of Bassi, Jaegal, and IIDA as discussed above also disclose the first system of claim 3, wherein both the first left camera and the first right camera are connected to the first controller and the second controller (see IIDA, paragraph [0057]: “The driving support apparatus further includes a PVM camera sensor 20, a front camera sensor 30, a vehicle speed sensor 40, a yaw rate sensor 41, a shift position sensor 42, a PVM switch 43, a collision avoidance device 50, and a display device 60, which are each connected to the ECU 10”).
The motivation for combining the references has been discussed in claim 3 above.
Regarding claim 5, the combination teachings of Bassi, Jaegal, and IIDA as discussed above also disclose the first system of claim 3, wherein the first left camera and the first right camera are connected to the first controller, and wherein the first system further comprises:
a second left camera connected to the second controller and configured to collect, when the first left camera or the left visual field display is faulty, the first image (see Bassi, FIG. 5B and paragraph [0048]: “an example of which is illustrated in FIG. 5B. Each camera 520, 540, 560 and 580 may be equipped with its own specialized Geometry and Color Processing (GCP) unit in addition to the ISP, collectively referred to as the edge processor (as opposed to a central processor). The GCP and ISP may be physically implemented in the same processor or in two separate processors. In this arrangement, referred to as the edge processing or distributive system, the role of a central processing unit may become minimal or at most equal to the participating cameras. Hence, it is referred to as the central logic 550. Accordingly, individual feeds may be fully pre-processed by the corresponding cameras, either for independent display or in preparation for a combination. The pre-processed portions of the image are provided to the central logic 550 that may simply combine the individually pre-processed feeds pixel-by-pixel”); and
a second right camera connected to the second controller and configured to collect, when the first right camera or the right visual field display is faulty, the second image (see Bassi, paragraph [0049]: “Essentially, each camera in this configuration becomes a smart camera not only capable of image processing and self calibrating, but also capable of serving as a master to control and manage the other components on performing their processing duties, realizing the concept of edge processing. In the distributive architecture, if one component mal-functions and goes offline, another component may take its place. Therefore, the image processing burden of the central logic 550 may be minimal”).
The motivation for combining the references has been discussed in claim 3 above.
Regarding claim 7, the combination teachings of Bassi, Jaegal, and IIDA as discussed above also disclose the first system of claim 3, wherein the second controller is further configured to perform fault detection to determine that one or more of the plurality of cameras is faulty, the first controller is faulty, the left visual field display is faulty, or the right visual field display is faulty (see Bassi, FIG. 8B and paragraph [0056]: “When there are redundant cameras or displays components, this architecture may also facilitate fault detection and failover in case there is problem with some of the cameras or displays, through communication via the ad-hoc network 820 formed by the distributed processors”).
The motivation for combining the references has been discussed in claim 3 above.
Claim 10 is rejected for the same reasons as discussed in claim 3 above.
Claim 11 is rejected for the same reasons as discussed in claim 7 above.
Regarding claim 18, the combination teachings of Bassi, Jaegal, and IIDA as discussed above also disclose the method of claim 12, further comprising:
obtaining a second image at a second rearview angle (see IIDA, paragraph [0074]: “The camera sensors 20R and 20Re each have a horizontal angle of view of 180 degrees and a vertical angle of view of 135 degrees”);
obtaining second information about the second image (see IIDA, paragraph [0074]: “the image data obtained through pickup by the camera sensor 20R and the image data obtained through pickup by the camera sensor 20Re are referred to as “right wide-area image data” and “rear wide-area image data,” respectively”);
performing image processing on the second information to obtain a second processed image (see IIDA, paragraph [0080]: “The PVM switch 43 is provided in the vicinity of a steering wheel (not shown), and may be pressed by the driver when a PVM image (described later) is to be displayed on the display 60 a of the display device”); and
displaying the second processed image (see IIDA, paragraph [0082]: “The display device 60 is configured to display the PVM image on the display 60 a when receiving a display command to display the PVM image (described later) from the ECU”).
The motivation for combining the references has been discussed in claim 3 above.
Regarding claim 19, the combination teachings of Bassi, Jaegal, and IIDA as discussed above also disclose the method of claim 18, wherein displaying the first processed image and the second processed image comprises displaying the first processed image on a first field display and displaying the second processed image on a second field display (see IIDA, paragraph [0080]: “The PVM switch 43 is provided in the vicinity of a steering wheel (not shown), and may be pressed by the driver when a PVM image (described later) is to be displayed on the display 60 a of the display device”).
The motivation for combining the references has been discussed in claim 3 above.
Regarding claim 20, the combination teachings of Bassi, Jaegal, and IIDA as discussed above also disclose the method of claim 19, wherein displaying the first processed image and the second processed image further comprises:
displaying the first processed image, when the first field display is faulty, on a third field display (see Bassi, FIG. 8B and paragraph [0056]: “When there are redundant cameras or displays components, this architecture may also facilitate fault detection and failover in case there is problem with some of the cameras or displays, through communication via the ad-hoc network 820 formed by the distributed processors”); or
displaying the second processed image, when the second field display is faulty, on the third field display or on a fourth field display (see Bassi, FIG. 8B and paragraph [0056]: “When there are redundant cameras or displays components, this architecture may also facilitate fault detection and failover in case there is problem with some of the cameras or displays, through communication via the ad-hoc network 820 formed by the distributed processors”).
The motivation for combining the references has been discussed in claim 3 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIENRU YANG whose telephone number is (571)272-4212. The examiner can normally be reached Monday-Friday 10AM-6PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NIENRU YANG
Examiner
Art Unit 2484
/NIENRU YANG/Examiner, Art Unit 2484
/THAI Q TRAN/Supervisory Patent Examiner, Art Unit 2484