DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment / Arguments
Applicant’s amendment overcomes the 112(b) rejections.
Regarding the 103 rejections, these are maintained. The amendments do not overcome the prior art: some of the added language is circular or repetitive, the added features are taught by the applied art (modification of displayed information based on driving mode, with the prior art teaching all of this and even providing motivation; see Lee, para. 96), and/or the added features amount to an obvious design choice as to which elements to display on a screen while driving. See the remainder of this Office action for details.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: mode identification unit, display control unit, state identification unit, grip identification unit, variously recited in claims 1-23.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-10, 15, 16, 23, 24, 25, 31 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (U.S. Patent App. Pub. No. 2016/0311323 A1) in view of Koyama (JP2019116182A).
Regarding claim 1:
Lee teaches: a vehicle display control device for a vehicle (e.g. Fig. 2: 50), the vehicle configured to switch from a without-monitoring-duty automated driving without a duty of monitoring by a driver (e.g. para. 99, an autonomous driving function without operation of the driver) to a with-monitoring-duty automated driving with the duty of monitoring by the driver (e.g. paras. 90, 452-56, Fig. 39 and related description, an automated driving with driving assistance function), the vehicle display control device comprising: a processor (Fig. 2: 190) and a memory unit (Fig. 2: 130) configured to implement:
a display control unit (one aspect of the processor Fig. 2: 190) configured to cause a display device (Fig. 2: 170), which is to be used in an interior of the vehicle (e.g. Fig. 4), to display a surrounding state image that is an image to show a surrounding state of the vehicle (see e.g. any one of Figs. 9B, 10A, 10B and related descriptions), the surrounding state image including an image of a lane (e.g. Fig. 9B, or para. 180: “On the driving image, various objects such as other vehicles located near the vehicle 1 and ground state such as lane may be displayed”, or para. 204: “FIG. 12B shows an example of a virtual lane 432 displayed on the transparent display 171 in the state shown in FIG. 12A. The controller 190 may display the virtual lane 432 on the transparent display 171 as information indicating the current driving route. The controller 190 may display the virtual lane 432 at a position corresponding to the actual lane 431 in the entire area of the transparent display 171. As a result, the driver may check the virtual lane 432 overlapping the actual lane 431.”).
Regarding the remaining features of claim 1, it would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(-s), in view of same, to have obtained:
a mode identification unit (Lee, at least one aspect of the apparatus of claim 1 to select driving mode) configured to identify whether an automated driving in a hands-on mode, which requires gripping of a steering wheel of the vehicle, or an automated driving in a hands-off mode, which does not require gripping of the steering wheel, is performed when the vehicle is in the with-monitoring-duty automated driving,
wherein the display control unit is configured to, when the vehicle switches from the without-monitoring-duty automated driving to the with-monitoring-duty automated driving, differentiate a display of the surrounding state image, depending on whether the mode identification unit identifies the automated driving in the hands-on mode or the automated driving in the hands-off mode,
the display control unit is configured to cause an amount of information in the surrounding state image, when the mode identification unit identifies the automated driving in the hands-on mode, to be larger than an amount of information in the surrounding state image, when the mode identification unit identifies the automated driving in the hands-off mode,
the display control unit is configured to, when the vehicle is in the without-monitoring-duty automated driving, cause the display device to display the surrounding state image corresponding the without-monitoring duty automated driving (this is circular claim language) or not display the surrounding state image,
the display control unit is configured to display a subject vehicle lane, which is a driving lane of the vehicle, and a surrounding lane, which is other than the subject vehicle lane, (Figs. 9B, 10A, 10B) when the mode identification unit identifies the automated driving in the hands-on mode (this mode mapped above) in a case in which the vehicle switches from the without-monitoring duty automated driving to the with-monitoring duty automated driving, the subject vehicle lane and the surrounding lane being different from the surrounding state image corresponding to the without-monitoring duty automated driving,
the display control unit is configured to display only the subject vehicle lane among the subject vehicle lane and the surrounding lane, (see above lane mapping, only one lane can be shown) when the mode identification unit identifies the automated driving in the hands-off mode (this mode mapped above) in a case in which the vehicle switches from the without-monitoring duty automated driving to the with-monitoring duty automated driving, the subject vehicle lane being different from the surrounding state image corresponding to the without-monitoring duty automated driving, and
the surrounding lane includes at least a lane adjacent to the subject vehicle lane (e.g. Figs. 9B, 10A, 10B), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Further re: functions of the mode identification unit, Koyama teaches that it is known to have both: (1) an automated hands-on mode, and (2) an automated hands-off mode, both related to whether a driver’s hands are on the steering wheel or not. See Koyama, para. 1, first and second driving support modes. Both of these are automated driving modes. This teaches the functions of the mode identification unit.
Re: functions of the display control unit to differentiate a display of the surrounding state image when the vehicle switches from without-monitoring-duty (i.e., autonomous driving, per Lee) to with-monitoring-duty automated driving, depending on whether the hands-on or hands-off mode is engaged (per Koyama), see Lee, e.g., Summary and claim 1, which teaches that the display of driving information is controlled based on driving mode (i.e., any one of the modes mapped above). See also para. 96. This teaches and suggests the differentiate function of the display control unit, based on mode.
Re: a “larger” “amount of information” to be displayed in the hands-on versus the hands-off mode, see the above discussion of Lee. As stated already, Lee teaches displaying driving information based on driving mode (see e.g. Abstract, claim 2, Summary). Having more information, or a larger amount, when a driver is in a hands-on mode (i.e., actively participating or involved, with hands gripping the steering wheel) versus hands-off (less active, hands not on the steering wheel) is taught/suggested by the nature of these modes and by Lee. Moreover, Lee also teaches that the amount of information displayed can be based on what Lee refers to as a ‘manipulation variable’: the greater this variable, the more information that can be displayed (e.g. Summary). Lee has many embodiments and examples of this manipulation variable; one relates to variables such as pressure and touch (i.e., hands touching/grabbing) of a driving manipulation device (i.e., the steering wheel) (see paras. 285-91). This also, alternatively, teaches the above feature of a larger amount of information in the hands-on versus the hands-off driving mode.
Re: functions related to displaying lanes (the last two claim phrase paragraphs above), this is mapped directly in the claim language. And further re: the above display differentiation with respect to what driving mode is currently engaged, Lee teaches that display differentiation can be done with respect to the current driving mode, as mapped in claim 1. Lee is not particularly limited with respect to both what information is displayed and how to differentiate: “For example, the display 170 may change the type, form, amount, color, position, size, etc. of the information displayed on the display 170 or change the brightness, transmittance, color, etc. of the display 170 according to different control signals provided by the controller 190”. Quoting para. 96 of Lee.
As mentioned during interviews, much of what is displayed during the driving modes (the prior art teaching all of the modes, as mapped above) is also, and alternatively, an obvious design choice to one of ordinary skill, and is expressly motivated by Lee. See Lee, para. 96, above.
Modifying the applied references such that the display differentiation, based on driving mode, is done with respect to the modes as mapped above is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill.
The prior art included each element recited in claim 1, although not necessarily in a single embodiment, with the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Moreover, an alternate rationale would be an obvious design choice to one of ordinary skill. Applicant’s specification does not indicate any criticality of any specific display differentiation design form over other possibilities, beyond enabling a driver of the subject vehicle to more easily recognize in which mode he/she should be driving (e.g. filed specification, para. 71). Applicant gives different examples of display differentiation (see filed specification, paras. 77-83, with the description stating, of the example display differentiation of para. 78, that “the present disclosure is not necessarily limited to this”). Nor is it limited to the display configuration of para. 118, etc.
Regarding claim 3:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the surrounding state image includes an image showing an obstacle, and the display control unit is configured to,
when the mode identification unit identifies the automated driving in the hands-off mode, display only the subject vehicle lane among the subject vehicle lane and the surrounding lane and
display both the image showing the obstacle corresponding to the subject vehicle lane and the image showing the obstacle corresponding to the surrounding lane, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Lee teaches display of driving information, which includes obstacle information (e.g. paras. 305-07, 342, 343, 381, Figs. 29A, 29B). The prior art teaches an automated driving in the hands-off mode, as mapped in claim 1. In terms of the display of both the obstacle corresponding to the subject vehicle lane and the obstacle corresponding to the surrounding lane, as mapped in claims 1 and 2, Lee is not limited with respect to modifications of displayed information based on driving mode. The embodiment of Applicant’s claim 3 is one display combination obvious over Lee and, alternatively, a design choice for one of ordinary skill. See also Lee, Figs. 9A, 9B, 12A, 12B, 18B, 29A, 29B. Applicant is encouraged to review all of Lee, as available to one of ordinary skill, which gives several examples of display configurations, without limitation or restriction.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 4:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the surrounding state image is an image of a surrounding of the vehicle viewed from a virtual viewpoint (see mapping to claim 1 and, e.g. Lee, Fig. 10B), and the display control unit is configured to
when the mode identification unit identifies the automated driving in the hands-on mode, display the surrounding state image viewed from the virtual viewpoint, which is farther from a display target than the virtual viewpoint when the mode identification unit identifies the automated driving in the hands-off mode, and
when the mode identification unit identifies the automated driving in the hands-off mode, display the surrounding state image viewed from the virtual viewpoint, which is closer to the display target than the virtual viewpoint when the mode identification unit identifies the automated driving in the hands-on mode, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
See the above mapping to claim 1, and the teachings of Lee to vary the display, based on driving mode, in manner of type, form, amount, color, position, size, etc. (see e.g. para. 96 of Lee). Modifying the position and size corresponds to displaying a surrounding state image viewed from a virtual viewpoint farther from a display target, and closer, based on mode (here, the hands-on and hands-off modes mapped in claim 1). Such a modification would have been obvious and predictable to one of ordinary skill and, alternatively, a design choice (the rationale described in claim 2 also applies here).
Regarding claim interpretation, see also Applicant’s Fig. 5, which illustrates this claim feature of virtual viewpoint being closer or farther from a display target. Accordingly, the examiner’s interpretation of this claim feature is a broad, reasonable interpretation consistent with Applicant’s specification as filed.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 5:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the surrounding state image is an image of a surrounding of the vehicle viewed from a virtual viewpoint (see mapping to claim 1 and, e.g. Lee, Fig. 10B), and the display control unit is configured to
when the mode identification unit identifies the automated driving in the hands-on mode, display the surrounding state image viewed from the virtual viewpoint that looks down from an upper position than the virtual viewpoint when the mode identification unit identifies the automated driving in the hands-off mode and
when the mode identification unit identifies the automated driving in the hands-off mode, display the surrounding state image viewed from the virtual viewpoint that looks down from a lower position than the virtual viewpoint when the mode identification unit identifies the automated driving in the hands-on mode, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
See mapping to claim 4 above, which applies here. Modification of position and size teaches the above claimed upper position and lower position, and is a broad, reasonable interpretation consistent with Applicant’s specification as filed (see Applicant’s Fig. 6).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 6:
Lee teaches: the vehicle display control device according to claim 1, wherein the display control unit is configured to when the mode identification unit identifies the automated driving in the hands-on mode, cause a region around the vehicle, which is displayed as the surrounding state image, to be wider than the region when the mode identification unit identifies the automated driving in the hands-off mode, and
when the mode identification unit identifies the automated driving in the hands-off mode, cause the region around the vehicle, which is displayed as the surrounding state image, to be narrower than the region when the mode identification unit identifies the automated driving in the hands-on mode (see mapping to claim 1 and, e.g. Lee, para. 96).
Modifying the applied references, in view of same, so as to have included the above as taught by Lee, mapped above, and as mapped in claim 1, would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention (see MPEP §2143(A)) and, alternatively, would have been an obvious design choice (see the explanatory rationale in claim 2, which applies here). Basically, to have included the above display configuration, which is one of several embodiments taught by Lee, in relation to the above mapped drive modes, is taught/suggested by and obvious over the prior art.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 7:
Lee teaches: the vehicle display control device according to claim 1, wherein the display control unit is configured to differentiate a color tone of at least a part of the surrounding state image depending on whether the mode identification unit identifies the automated driving in the hands-on mode or the automated driving in the hands-off mode (see the above mapping to claim 1; Lee teaches varying color, which teaches color tone (see also Lee, para. 188); see also Applicant’s published application, para. 100, which equates color tone with visible color (i.e., red or blue are color tones)).
Modifying the applied references, in view of same, such that color tone is differentiated depending on the modes, as mapped in claim 1, would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention (see MPEP §2143(A)) and, alternatively, would have been an obvious design choice (see the explanatory rationale in claim 2, which applies here). Basically, to have included the above display configuration, which is one of several embodiments taught by Lee, in relation to the above mapped drive modes, is taught/suggested by and obvious over the prior art.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 8:
Lee teaches: the vehicle display control device according to claim 1, wherein the surrounding state image includes a plurality of image elements (e.g. Figs. 29A and 29B and related description), and the display control unit is configured to differentiate at least one of an arrangement of the image elements or a size ratio of the image elements depending on whether the mode identification unit identifies the automated driving in the hands-on mode or the automated driving in the hands-off mode (see mapping to claim 1 and, e.g. Lee, para. 96).
Modifying the applied references, in view of same, so as to have included the above as taught by Lee, mapped above, and as mapped in claim 1, would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention (see MPEP §2143(A)) and, alternatively, would have been an obvious design choice (see the explanatory rationale in claim 2, which applies here). Basically, to have included the above display configuration, which is one of several embodiments taught by Lee, in relation to the above mapped drive modes, is taught/suggested by and obvious over the prior art.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 9:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 8, wherein the surrounding state image includes, as one of the image elements, a hands-on-off image that is an image indicating whether the hands-on mode or the hands-off mode, and the display control unit is configured to, when the mode identification unit identifies the automated driving in the hands-on mode, increase the size ratio of the hands-on-off image more than the size ratio when the mode identification unit identifies the automated driving in the hands-off mode, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Lee teaches display of an image indicating what the current drive mode is (see e.g. Fig. 35: 1001). This corresponds to Applicant’s claimed “hands-on-off image” as an image indicating which hands mode is currently engaged. In terms of increasing the size ratio as claimed, this is one obvious embodiment, both taught by Lee and also a design choice for one of ordinary skill (see e.g. the mapping to claim 2).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 10:
Lee teaches: the vehicle display control device according to claim 1, wherein the surrounding state image includes a background image (e.g. Fig. 35: 1202, weather image as a background image), and the display control unit is configured to differentiate the background image depending on whether the mode identification unit identifies the automated driving in the hands-on mode or the automated driving in the hands-off mode (see the above mapping to claim 2 and throughout this Office action, regarding Lee and differentiating any displayed images, in a variety of manners, based on the mode of driving; differentiation can be with respect to color, transparency, line thickness, line shape, amount, size, position, etc.; see Lee, paras. 393, 339, 303 and the above-referenced mappings).
Modifying the applied references, in view of same, so as to have included the above as taught by Lee, mapped above, and as mapped in claim 1, would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention (see MPEP §2143(A)) and, alternatively, would have been an obvious design choice (see the explanatory rationale in claim 2, which applies here). Basically, to have included the above display configuration, which is one of several embodiments taught by Lee, in relation to the above mapped drive modes, is taught/suggested by and obvious over the prior art.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 15:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the display control unit is configured to, in a state where the display control unit displays the surrounding state image in the without-monitoring-duty automated driving and when switching of a stage of automated driving to a lower stage in automation is made,
change a display of the surrounding state image corresponding to the stage of automated driving before the switching to a display of the surrounding state image corresponding to the stage of automated driving after the switching, after a predetermined time has elapsed from the switching,
regardless of whether the automated driving in the hands-on mode or the automated driving in the hands-off mode, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
As best understood, claim 15 describes an embodiment whereby a surrounding state image is displayed in without-monitoring-duty automated driving (obvious over Lee; see the mapping to claims 1 and 2, one display embodiment taught by Lee and, alternatively, a design choice), a switch to a lower stage of automation is made (interpreted as another automated driving mode), and the display is changed after a predetermined time has elapsed from the driving mode switching, regardless of whether the switched-to automated driving mode is hands-on or hands-off. Lee teaches the concept of using predetermined amounts of time to determine action (e.g. paras. 353, 354, 375). Applying this concept of a predetermined time to change the display when a driving mode has been changed is taught and suggested by the prior art, and further motivation can be found to use this time period to ensure the driver intended the switch (Id.).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 16:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the display control unit is configured to, in a state where the display control unit does not display the surrounding state image in the without-monitoring-duty automated driving and when switching of a stage of automated driving to a lower stage in automation is made,
change a display of the surrounding state image according to the stage of automated driving after the switching, at the same time as the switching or before the switching,
regardless of whether the automated driving in the hands-on mode or the automated driving in the hands-off mode, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
As best understood, claim 16 describes an embodiment whereby no surrounding state image is displayed in without-monitoring-duty automated driving (obvious over Lee; see the mapping to claims 1 and 2, one display embodiment taught by Lee and, alternatively, a design choice), a switch to a lower stage of automation is made (interpreted as another automated driving mode), and the display is changed after, at the same time as, or before the switch, regardless of whether the switched-to automated driving mode is hands-on or hands-off. The timing of the display change in relation to the mode switch is taught by Lee (e.g. para. 438, after the switch; para. 274, at the same time).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 23:
Lee teaches: a vehicle display control system for a vehicle (e.g. Fig. 2: 110, 120, 130, 140, 20), the vehicle configured to switch from the without-monitoring-duty automated driving without the duty of monitoring by a driver to the with-monitoring-duty automated driving with the duty of monitoring by the driver (see mapping to claim 1), the vehicle display control system comprising:
a display device to be provided to the vehicle so that a display surface of the display device is oriented to the interior of the vehicle (paras. 94-95); and the vehicle display control device according to claim 1.
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied references, in view of Lee, to have obtained the above. The motivation would have been to enhance vehicle driving.
Regarding claim 24: see also claim 1.
The method of claim 24 corresponds to the functions performed by the device of claim 1. The same rationale for rejection applies.
Regarding claim 25: see also claim 1.
Lee teaches: a vehicle display control device (Fig. 2: 50) comprising: a processor (Fig. 2: 190). Regarding the remaining features of claim 25, the functions of the device of claim 25 correspond to those performed by the device of claim 1, written in a stylistically different manner. The same rationale for rejection applies.
Regarding claim 31:
Lee and/or Koyama teach: the vehicle display control device according to claim 1, wherein the processor and the memory are further configured to implement: an action determination unit (Lee, at least the aspect of the apparatus of claim 1 that selects the driving mode) configured to, when determining that takeover of driving from the automated driving in the hands-off mode to the automated driving in the hands-on mode is necessary (Koyama, paras. 34-35, which describe a scenario of evasive action of a leading vehicle as conditions for determining takeover of driving to the hands-on mode),
output a request for takeover of driving to the mode identification unit (Koyama, para. 35, driver is notified of a steering holding request),
wherein in response to the request for takeover of driving, the mode identification unit is configured to
maintain the automated driving in the hands-off mode, when a cause of the request for takeover of driving has been predicted, and
switch to the automated driving in the hands-on mode, when the cause of the request for takeover of driving has not been predicted (Koyama, paras. 34-35, 38-39).
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained the above, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 32:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the display control unit is configured to, when the vehicle is in the without-monitoring-duty automated driving, cause the display device not to display the surrounding state image, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
See the above mapping to claim 1; this is an obvious design choice based on driving control signals and changing display parameters. See Lee, para. 96, and the mapping to claim 1.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Claim(s) 11, 12, 13, 14, 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Koyama and further in view of Mimura (U.S. Patent App. Pub. No. 2017/0315556).
Regarding claim 11:
It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the display control unit is configured to, when the vehicle has switched to the automated driving in the hands-off mode and in at least one of cases where the vehicle changes a lane by the automated driving or where a vehicle around the vehicle is estimated to cut into a driving lane of the vehicle,
switch a display of the surrounding state image to a display that is of when the mode identification unit identifies the automated driving in the hands-on mode, even when the automated driving in the hands-off mode continues, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Mimura teaches a driving control system with a plurality of automated driving modes, each having a different degree of automated driving (Summary and claim 1). See “Vehicle Control System” beginning at para. 68 with Fig. 2, and the description of some exemplary modes beginning with “First Mode” at para. 76. These modes can be selected based on driver behavior and environmental factors. The system of Mimura includes an external environment recognition unit 142 that can recognize, for example, a case of a vehicle around the subject vehicle cutting into the driving lane of the subject vehicle. See Figs. 4-6 and the related descriptions. As another environmental example triggering a mode, the first mode can be implemented in the case of a traffic jam (para. 78). The system of Mimura can also detect lane changes by surrounding vehicles, as claimed (e.g. paras. 88-90). As a result of detected environmental factors, Mimura describes a notification condition changing unit (Fig. 11: 174) that can, among other things, change the driving mode. See para. 106 and Figs. 11 and 15 with the related descriptions.
Modifying the applied references to incorporate the teachings of Mimura, which take in external information and driver information (among other information) to select modes of automated driving, into the device mapped in claim 1, such as to control the display of information per the mapping to claim 1 (whereby Lee also teaches the display of information being tied to the mode of the vehicle), is taught and suggested by the prior art and would have been obvious and predictable to one of ordinary skill. This would result in a switching of the display that corresponds to a hands-on mode, even when hands-off mode automated driving continues (i.e. the system determining, per the teachings of Mimura, what information should be displayed, based on the factors described in relation to Mimura, Fig. 2). This would be an embodiment whereby, based on the data from the environment recognition unit (i.e. the vehicle changes lanes, or a vehicle cuts into the driving lane) and other data per Mimura, the display is changed accordingly.
The prior art included each element recited in claim 11, although not necessarily in a single embodiment, with the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 12:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the display control unit is configured to, when switching of the vehicle to the automated driving in the hands-off mode is made and when an elapsed time from the switching reaches a predetermined time,
switch a display of the surrounding state image to a display of the surrounding state image that is of when the mode identification unit identifies the automated driving in the hands-on mode, even when the automated driving in the hands-off mode continues, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Re: switching to the hands-off mode, this is mapped in claim 1 (driver behavior). Re: a predetermined time elapsing from the switching, see the mapping to claim 15. And re: switching a display to a hands-on mode even while the driver is in the hands-off driving mode, see the above mapping to claim 11 and Mimura. Environmental factors, among other factors, can lead the system to override the expected display behavior, as mapped and described above in the rationale for claim 11, which also applies here.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 13:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the processor and memory are further configured to implement: a grip identification unit configured to identify gripping of a steering wheel by the driver (Mimura, paras. 143-45, the notification condition changing unit can determine whether or not there is gripping of the steering wheel) (alternatively, Koyama, para. 29, steering wheel touch sensor), wherein the display control unit is configured to, when switching of the vehicle to the automated driving in the hands-off mode is made and when the grip identification unit identifies gripping of the steering wheel (see the above mapping re: identification of gripping of the wheel),
switch a display of the surrounding state image to a display of the surrounding state image that is of when the mode identification unit identifies the automated driving in the hands-on mode, even when the automated driving in the hands-off mode continues, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Re: switching a display to a hands-on mode even when automated driving in the hands-off mode continues, see the above mapping to claim 11 and Mimura. This corresponds to a mode change, due to factors ascertained by the system of Mimura, that might not match the expected display (i.e. hands-on, as the system senses hands on the steering wheel, but the mode of driving is hands-off). This is one embodiment taught by the prior art and mapped in claim 11, which applies here.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 14:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the processor and memory are further configured to implement: a grip identification unit configured to identify gripping of a steering wheel by the driver, wherein the display control unit is configured to when the vehicle is switched to the automated driving in the hands-off mode and when the grip identification unit identifies gripping of the steering wheel by the driver,
continue a display of the surrounding state image that is of when the mode identification unit identifies the automated driving in the hands-off mode for a predetermined time after the grip identification unit identifies gripping of the steering wheel by the driver, and
subsequently switch the display of the surrounding state image to a display of the surrounding state image that is of when the mode identification unit identifies the automated driving in the hands-on mode, even when the automated driving in the hands-off mode continues, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
See the mapping to claim 13, which applies here. The multiple switches in claim 14 are taught by at least one embodiment of changing environmental conditions and/or driver conditions, as taught by Mimura and mapped in claims 11-13. The grip identification unit and related features are also mapped in claim 13. See also Fig. 13 of Mimura.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 29: see claim 11.
These claims are similar; the same rationale for rejection applies. Compare:
Claim 11 recites: the vehicle display control device according to claim 1, wherein the display control unit is configured to, when the vehicle has switched to the automated driving in the hands-off mode and in at least one of cases where the vehicle changes a lane by the automated driving or where a vehicle around the vehicle is estimated to cut into a driving lane of the vehicle, switch a display of the surrounding state image to a display that is of when the mode identification unit identifies the automated driving in the hands-on mode, even when the automated driving in the hands-off mode continues.
Claim 29 recites: the vehicle display control device according to claim 1, wherein the display control unit is configured to, in a case where the vehicle has switched to the automated driving in the hands-off mode and in at least one of a case where the vehicle changes the lane by the automated driving or a case where it is estimated that a nearby vehicle is to cut into the subject vehicle lane, switch the display of the surrounding state image to the display that is of when the mode identification unit identifies the automated driving in the hands-on mode, even when the automated driving in the hands-off mode continues.
Regarding claim 30: see claim 12.
These claims are similar; the same rationale for rejection applies. Compare:
Claim 12 recites: the vehicle display control device according to claim 1, wherein the display control unit is configured to, when switching of the vehicle to the automated driving in the hands-off mode is made and when an elapsed time from the switching reaches a predetermined time, switch a display of the surrounding state image to a display of the surrounding state image that is of when the mode identification unit identifies the automated driving in the hands-on mode, even when the automated driving in the hands-off mode continues.
Claim 30 recites: the vehicle display control device according to claim 1, wherein the display control unit is configured to, in a state where the vehicle is switched to the automated driving in the hands-off mode and when an elapsed time from this switching reaches a predetermined time, which is arbitrarily set, switch to the display of the surrounding state image that is of when the mode identification unit identifies the automated driving in the hands-on mode, even when the automated driving in the hands-off mode continues.
Claim(s) 17 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Koyama, and further in view of Wörle J, Metz B, Othersen I, Baumann M., “Sleep in highly automated driving: Takeover performance after waking up,” Accident Analysis & Prevention, 2020 Jun 13;144:105617 (“Worle”).
Regarding claim 17:
It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the vehicle is configured to switch, as a stage of automated driving, at least between the without-monitoring-duty automated driving and the with-monitoring-duty automated driving,
the vehicle is configured to perform, as the without-monitoring-duty automated driving, at least a sleep-permitted automated driving, in which the driver is permitted to sleep, and a sleep-prohibited automated driving, in which the driver is not permitted to sleep,
the display control unit is configured to cause the display device to display driving related information, which is related to driving of the vehicle, and
the display control unit is configured to, when the sleep-permitted automated driving is switched to the sleep-prohibited automated driving, cause an amount of the driving related information, which is displayed on the display device in the sleep-prohibited automated driving, to be larger than an amount of the driving related information, which is displayed on the display device in the sleep-permitted automated driving, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Re: first wherein clause, please see claim 1 mapping re: with and without monitoring.
Re: sleep-permitted automated driving and sleep-prohibited automated driving, see Worle, Introduction, a transition from L3 to L4 driving in BMW cars (see the citation in Worle, Introduction (BMW, 2018), in the second column of page 1), L3 being sleep-prohibited and L4 being sleep-permitted automated driving. This teaches this claim feature.
Re: the last two functions of the display control unit, displaying a larger amount of information in the sleep-prohibited automated driving, see the mapping to claim 1. Lee teaches display content tied to mode, and displaying more information when a driver is not sleeping than when sleeping is obvious and taught/suggested by Lee, as described in the mapping of the last feature of claim 1, which applies here.
The prior art included each element recited in claim 17, although not necessarily in a single embodiment, with the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 22:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the vehicle is configured to switch at least between the without-monitoring-duty automated driving and the with-monitoring-duty automated driving (claim 17),
the vehicle is configured to perform, as the without-monitoring-duty automated driving, at least a sleep-permitted automated driving, in which the driver is permitted to sleep, and a sleep-prohibited automated driving, in which the driver is not permitted to sleep (claim 17),
the display control unit is configured to cause the display device to display driving related information, which is related to driving of the vehicle (claim 17), and
the display control unit is configured to, when switching from the sleep-permitted automated driving to a driving at a stage of the with-monitoring-duty automated driving or lower in automation is made, increase, after the switching, an amount of the driving related information displayed on the display device to be larger than an amount of the driving related information displayed in the sleep-permitted automated driving, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
See the mapping to claim 17, which applies in part here (see above). Re: the last function of the display control unit, see the above mapping to claim 1, with regard to Lee modifying the display based on driving mode. Displaying a larger amount of information when switching from a mode in which the driver is allowed to sleep to the with-monitoring-duty automated driving or a lower stage in automation is obvious and taught over the prior art (see the mapping to claim 1). A driver who is asleep will not be looking at the display.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Claim(s) 18, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Koyama and Worle and further in view of Katz (U.S. Patent App. Pub. No. 2021/0269045 A1).
Regarding claim 18:
It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 17, wherein the processor and memory are further configured to implement: a state identification unit configured to identify a state of the driver, wherein the display control unit is configured to when the state identification unit identifies that the driver is not in a sleep state in the sleep-permitted automated driving, change a display of information, after switching of the sleep-permitted automated driving to the sleep-prohibited automated driving is made, according to the stage of automated driving after the switching, and
when the state identification unit identifies that the driver transitions from the sleep state to an awaken state in the sleep-permitted automated driving, change the display of information, before switching from the sleep-permitted automated driving to the sleep-prohibited automated driving is made, according to the stage of automated driving after the switching, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Katz teaches the state identification unit (e.g. paras. 90, 125-37, a sensor to obtain physiological or physical state information to, i.e., determine a driver’s state associated with awareness or attentiveness of the driver (paras. 50, 61)). Modifying the applied references such that the attentiveness/awareness includes sleep, such as for drivers in the vehicles described by Worle, is taught and suggested by the prior art and would have been obvious and predictable to one of ordinary skill. Re: modifying the display based on the user’s sensed attentiveness (awake/asleep) before the mode is changed, Katz further teaches/suggests initiating actions based on the determined state/attentiveness/awareness of the driver, which can include generating/displaying any one of messaging or commands, and/or other visual stimuli. See paras. 50-51 and claims 79-80. Modifying this teaching of Katz, such that the initiated action is a change of display information, as per Lee and mapped in claim 1, both references dealing with drivers in vehicles, is taught and suggested by the prior art and would have been obvious and predictable to one of ordinary skill.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 19:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 17, wherein the processor and memory are further configured to implement: a state identification unit configured to identify a state of the driver (see mapping to claim 18), wherein the display control unit is configured to when switching from the sleep-permitted automated driving to the sleep-prohibited automated driving is made and when the state identification unit has identified that the driver is in an awaken state (mapping to claim 18) before a predetermined time period in advance of a scheduled timing of the switching (Worle, Section 1.1, this is taught by the “takeover time” from the driver sleeping and taking back control of the vehicle), change a display of information, after switching from the sleep-permitted automated driving to the sleep-prohibited automated driving, according to the stage of automated driving after the switching (see mapping to claim 1, changing display based on mode), and
when the state identification unit has identified that the driver has transitioned from a sleep state to the awaken state within a predetermined time period before the scheduled timing of the switching (mapped above), change the display of information, before switching from the sleep-permitted automated driving to the sleep-prohibited automated driving, according to the stage of automated driving after the switching (see mapping to claim 1, changing display based on mode), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 20:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the vehicle is configured to switch, as a stage of automated driving, at least between the without-monitoring-duty automated driving and the with-monitoring-duty automated driving (see claim 17 mapping), the vehicle is configured to perform, as the without-monitoring-duty automated driving, at least a sleep-permitted automated driving, in which the driver is permitted to sleep, and a sleep-prohibited automated driving, in which the driver is not permitted to sleep (mapping to claim 17),
the display control unit is configured to cause the display device to display driving related information, which is related to driving of the vehicle (mapping to claim 17), the processor and memory are further configured to implement:
a state identification unit configured to identify a state of the driver (mapping to claim 18),
wherein the display control unit is configured to cause an amount of the driving related information, which is displayed on the display device when the state identification unit identifies that the driver is in a sleep state, to be larger than an amount of the driving related information, which is displayed on the display device when the state identification unit identifies that the driver is in an awaken state in the sleep-permitted automated driving, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
The last wherein clause recites a scenario where more information is presented when the driver is sleeping than when the driver is awake in the sleep-permitted automated driving. Presumably the driver is sleeping in a mode other than the sleep-permitted automated driving, as best as the examiner can understand from the claim language. This can be a moment where the system, per Katz, has just sensed that the driver is asleep and will change the mode and/or the display; alternatively, it can be an embodiment where the display is changed based only on driving mode, independent of the state of the driver. Either one is an embodiment taught by the prior art and within the purview of one of ordinary skill.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Claim(s) 21 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Koyama and Worle and further in view of Katz and Kothari (U.S. Patent App. Pub. No. 2017/0313248).
Regarding claim 21:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the vehicle display control device according to claim 20, wherein the display control unit is configured to cause the display device to display information, and control, as the display device, a display of a driver side display device, which has a display surface positioned in front of a driver's seat of the vehicle (Kothari, Fig. 1), and
a display of a passenger side display device, which is other than the driver side display device and which has a display surface positioned at a location visible to a passenger of the vehicle (Kothari, Fig. 1), and
the display control unit is configured to, when the state identification unit identifies the driver in the sleep state in the sleep-permitted automated driving (see mapping to claims 18, 19, or 20), increase an amount of the driving related information displayed on the passenger side display to be larger than an amount of the driving related information displayed on the driver side display, compared with a state where the state identification unit identifies the driver in the awaken state (displaying more information on a passenger side display when the driver is sleeping is obvious over the prior art and/or an obvious design choice. As mapped throughout, Lee teaches modifying the display based on mode. This scenario is one of many taught by Lee: in a mode where the driver is sleeping, or is identified as sleeping, the display on the passenger side is increased, since the driver will not be engaging with the passenger and the passenger could otherwise be bored. See also the discussion regarding Worle and Katz in the aforementioned claims) (alternatively, this is an obvious design choice. As shown in the claims and specification, Applicant does not describe any criticality to the amount of information displayed in any of the many modes and, in fact, discloses several design variations of the display. This feature of the claim would have been an obvious design choice in designing display elements based on driver state), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Claim(s) 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Koyama and further in view of Ali (U.S. Patent App. Pub. No. 2019/0378040).
Regarding claim 26:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the vehicle display control device according to claim 1, wherein the display control unit is configured to display, in addition to the surrounding state image, an image of a hand, which indicates that the vehicle performs the automated driving in the hands-on mode, when the mode identification unit identifies the automated driving in the hands-on mode, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Ali is relevant to user interface displays for vehicles (Fig. 1: 121; see also Fig. 6). Ali even teaches that a hand graphic or icon can be used to indicate a steering instruction, i.e., something related to hands (see para. 42). Modifying the applied references, in view of Ali, to have included an image of a hand to identify the "hands-on" mode is taught/suggested by Ali and, alternatively, is a design choice for one of ordinary skill. Applicant's specification gives no indication that an image of a hand, as related to a hands-on mode, is critical to Applicant's invention. Instead, it is a design choice for one of ordinary skill.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 27: see claim 26.
These claims are similar; the same rationale for rejection applies.
Regarding claim 28: see claim 26.
These claims are similar; the same rationale for rejection applies.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
* * * * *
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn whose telephone number is (571)270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Sarah Lhymn/Primary Examiner, Art Unit 2613