Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments and Arguments
Amendments and arguments filed on 03/04/2026 have been fully considered and are not found to place the application in a condition for allowance. The applicant asserts that “Tsuda does not teach or suggest that the operation accepting unit 423 may transmit [a] signal back to [t]he control unit 422 to respond [to] the detection result.” Additionally, the applicant asserts that “Tsuda does not teach or suggest that the display control unit 424 transmits [a] signal back to the control unit 42 to respond [to] the detection result.” Based on these assertions, the applicant concludes that Tsuda does not teach that “the head end generates a first image signal according to the integrated command, and the first image signal is provided to the central processor circuit”. The Office respectfully disagrees.
The Office maintains that Tsuda teaches that the head end generates a first image signal according to the integrated command (figs. 1-2, ¶ 32-34: “the display control unit 424 displays an image based on the operation accepted by the operation accepting unit 423”), and that the first image signal is provided to the central processor circuit (elements within element 42 provide signals to element 42; for example, the distractedness signal is a signal that is provided to element 42 for processing). Note that per fig. 3 of Tsuda, elements 421-424 are within the control unit 42. Fig. 5 further illustrates a signal diagram of tasks performed by elements 421-424. Each of these signals (such as the ‘Yes’ or ‘No’ signals output at S104) is a signal that is provided to the central processor circuit. In other words, units 421-424 are in constant communication with, and integral to, the processor; therefore, Tsuda is found to teach the limitations as claimed.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 5-6, 9, 11, 13, 15-16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuda et al., US 2023/0078015 A1, hereinafter “Tsuda”, in view of Guo et al., US 2018/0129202 A1, hereinafter “Guo”, and further in view of Landgraf, US 10,810,966 B1, hereinafter “Landgraf”.
Regarding claim 1, Tsuda teaches a display system (fig. 3, the vehicle includes a display system, ¶ 23) used in a transportation apparatus (¶ 23, see Vehicle), wherein the transportation apparatus comprises a head end (fig. 3, element 423, ¶ 32-35) and the system chip is electrically connected to the head end (see fig. 3, wherein the head end is a portion of the system chip), the display system comprising: a display (fig. 3, element 1, ¶ 23); a plurality of first sensor devices (fig. 3, element 2, ¶ 24; note that an image capturing unit includes a plurality of sensors in order to generate an image) configured to provide a plurality of detection signals (fig. 5, element 102, ¶ 45); and a system chip (fig. 3, element 4, ¶ 23: ECU) electrically connected to the display and the plurality of first sensor devices (see fig. 3), wherein the system chip comprises: an image processor circuit (fig. 3, element 421, ¶ 27) electrically connected to the plurality of first sensor devices and configured to analyze the plurality of detection signals (¶ 27); a central processor circuit (fig. 3, element 42, ¶ 26) electrically connected to the image processor circuit (¶ 26) and configured to generate an integrated command according to the analyzed plurality of detection signals (fig. 3, signal output from element 422; fig. 5, S104, ¶ 45-47); and a timing controller circuit (fig. 3, element 424, ¶ 34-35) electrically connected to the central processor circuit and configured to drive the display to display an image according to the integrated command (¶ 34-36).
Tsuda does not teach a second sensor device electrically connected to the system chip, wherein the plurality of first sensor devices and the second sensor device are different types of sensor devices; and a third sensor device electrically connected to the system chip and configured to provide an ambient light signal, wherein the central processor circuit adjusts brightness of the display or parameters of the plurality of first sensor devices and the second sensor device based on the ambient light signal.
Guo, however, teaches a second sensor device (fig. 5, at least one of elements 580 and/or 582; ¶ 69) electrically connected to the system chip (fig. 5, element 510, ¶ 69), wherein the plurality of first sensor devices and the second sensor device are different types of sensor devices (¶ 69); and a third sensor device (at least the other of elements 580 and/or 582; ¶ 69) electrically connected to the system chip (fig. 5, element 510, ¶ 69) and configured to provide an ambient light signal (¶ 30: “poor lighting state” and/or “bright state”), wherein the central processor circuit adjusts brightness of the display or parameters of the plurality of first sensor devices and the second sensor device based on the ambient light signal (¶ 30, a parameter of the first sensor is changed (activated) according to the ambient light signal; also see fig. 4, where, based on the ambient light data at step 404, adjustment of the parameters of a camera system (activation and determining the state of an occupant) at step 406 and/or activation of a depth sensor camera at step 408 is performed), wherein the integrated command is provided to the head end (¶ 33), the head end generates a first image signal according to the integrated command (figs. 1-2, ¶ 32-34: “the display control unit 424 displays an image based on the operation accepted by the operation accepting unit 423”), and the first image signal generated by the head end is provided to the central processor circuit (elements within element 42 provide signals to element 42; for example, the distractedness signal is a signal that is provided to element 42 for processing).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tsuda in view of Guo. The references teach similar systems for detecting whether a driver is distracted. Guo further teaches the use of other sensors for confirming the state of a user. For example, Guo teaches using a laser depth sensor for detecting facial features of a driver, similar to the camera system of Tsuda. Guo further teaches, however, that such a depth sensor may be activated after a secondary, different set of sensors determines the requirements for such activation, to further confirm the state of a user and thereby determine the distractedness of the user with a higher level of confidence. As such, one would have been motivated to make such a combination in order to increase the reliability of the system and accurately determine the state of alertness of a user.
Tsuda and Guo do not specifically teach that the system chip is connected to the head end through a connecting line.
Landgraf teaches a similar driver monitoring system in fig. 1, and further teaches that the system chip (fig. 1, element 106) is connected to the head end (fig. 1, element 120) through a connecting line (col. 4, lines 38-54; interface such as CAN bus).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tsuda, Guo, and Landgraf. The references teach driver monitoring systems, and Landgraf teaches that while the components of the system chip may interface internally, the interface may be provided as a bus to communicate information and/or convert information to/from various protocols. As such, one would have been motivated to make such a combination in order to provide a connecting line between the system chip and the head end and thereby ensure proper data exchange within the system.
Tsuda and Guo do not specifically teach that the first image signal is used to drive a local dimming circuit.
Landgraf, however, teaches that an image signal is used to drive a local dimming circuit (fig. 11, col. 29, lines 28-62 wherein such a signal is provided for dimming).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tsuda, Guo, and Landgraf. The references teach vehicle display systems for reducing distractions for a driver. Landgraf further teaches determining the brightness of the environment and adjusting the brightness of different display devices within a vehicle. One would have been motivated to make such a combination in order to reduce the distraction caused by display devices by adjusting the brightness of certain display devices as taught by Landgraf, thereby improving the alertness of a driver within a vehicle.
Regarding claim 11, Tsuda teaches a system chip (fig. 3, element 4, ¶ 23: ECU) suitable for a display system comprising a display (fig. 3, element 1, ¶ 23) and a plurality of first sensor devices (fig. 3, element 2, ¶ 24; note that an image capturing unit includes a plurality of sensors in order to generate an image) configured to provide a plurality of detection signals (fig. 5, element 102, ¶ 45), wherein the display system and the system chip are configured similarly to those of claim 1 (see rejection of claim 1 above).
Regarding claims 3 and 13, Tsuda teaches that the integrated command is provided to both the timing controller circuit and the head end (fig. 3, see output from element 422 to both elements 423 and 424, see ¶ 33 and 36), and the head end controls the transportation apparatus to perform a corresponding action according to the integrated command (see ¶ 33).
Regarding claims 5 and 15, Tsuda and Guo do not specifically teach that the system chip further comprises the local dimming circuit electrically connected to the central processor circuit, the display comprises a display panel and a light source module, the timing controller circuit provides a second image signal to the display panel, and the local dimming circuit provides a third image signal to the light source module.
Landgraf teaches a similar driver monitoring system in fig. 1 and further teaches that the system chip further comprises a local dimming circuit (col. 29, lines 23-24 and lines 40-43) electrically connected to the central processor circuit, the display comprises a display panel and a light source module (fig. 1, elements 118; note that each display inherently includes such a light source module, such as the pixels of an LED display or the backlight of an LCD device), the timing controller circuit provides a second image signal to the display panel, and the local dimming circuit provides a third image signal to the light source module (col. 29, lines 55-62, wherein adjusted video content (second image signal) with brightness adjustment (third image signal) is provided).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tsuda, Guo, and Landgraf. The references teach vehicle display systems for reducing distractions for a driver. Landgraf further teaches determining the brightness of the environment and adjusting the brightness of different display devices within a vehicle. One would have been motivated to make such a combination in order to reduce the distraction caused by display devices by adjusting the brightness of certain display devices as taught by Landgraf, thereby improving the alertness of a driver within a vehicle.
Regarding claims 6 and 16, Tsuda and Guo do not specifically teach that the connecting line is a bus line.
Landgraf, however, clearly teaches that the connecting line is a bus line (col. 4, lines 38-54; interface such as CAN bus).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tsuda, Guo, and Landgraf. The references teach driver monitoring systems, and Landgraf teaches that while the components of the system chip may interface internally, the interface may be provided as a bus to communicate information and/or convert information to/from various protocols. As such, one would have been motivated to make such a combination in order to provide a connecting line between the system chip and the head end and thereby ensure proper data exchange within the system.
Regarding claims 9 and 19, Tsuda does not teach that the image processor circuit selects data provided by the second sensor device for analysis first and then selects data provided by the plurality of first sensor devices for analysis.
Guo, however, clearly teaches that the image processor circuit selects data provided by the second sensor device for analysis first and then selects data provided by the plurality of first sensor devices for analysis (fig. 4, see “receive first sensor data” which is data from the second sensor device in step 402, ¶ 58; also see “receive depth data” which is data from the first sensor device, ¶ 59).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tsuda in view of Guo. The references teach similar systems for detecting whether a driver is distracted. Guo further teaches the use of other sensors for confirming the state of a user. For example, Guo teaches using a laser depth sensor for detecting facial features of a driver, similar to the camera system of Tsuda. Guo further teaches, however, that such a depth sensor may be activated after a secondary, different set of sensors determines the requirements for such activation, to further confirm the state of a user and thereby determine the distractedness of the user with a higher level of confidence. As such, one would have been motivated to make such a combination in order to increase the reliability of the system and accurately determine the state of alertness of a user.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuda and Guo, as applied above, further in view of Du, US 2018/0233092 A1, hereinafter “Du”.
Regarding claims 7 and 17, Tsuda and Guo do not teach that the display comprises a liquid crystal display, a light emitting diode display, or an organic light emitting diode display.
Du clearly teaches that the display comprises a liquid crystal display, a light emitting diode display, or an organic light emitting diode display (¶ 23).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tsuda, Guo, and Du. The references teach providing display devices within a vehicle, and Du further teaches different types of such display panels. As such, one would have been motivated to make such a combination in order to incorporate a display device of a certain type as taught by Du, thereby providing the user with a display device appropriate for use within a vehicle.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEPEHR AZARI whose telephone number is (571) 270-7903. The examiner can normally be reached on weekdays from 11 AM to 7 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad can be reached at (571) 272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEPEHR AZARI/ Primary Examiner, Art Unit 2621