Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings were received on November 13, 2023. These drawings are accepted.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d).
The certified copy was filed on December 22, 2023.
Status of the Claims
This action is in response to the applicant’s filing on September 19, 2025.
Claims 1-5 are pending and examined below.
Response to Arguments
Applicant’s amendments with respect to the rejection of claims under 35 USC § 112 have been fully considered and are persuasive. Therefore, the rejection of claims under 35 USC § 112 has been withdrawn.
Applicant’s amendments with respect to the rejection of claims under 35 USC § 101 have been fully considered and are persuasive. Therefore, the rejection of claims under 35 USC § 101 has been withdrawn.
Applicant’s arguments with respect to the rejection of claims under 35 USC § 103 have been fully considered but are moot in view of the new ground of rejection. The Examiner notes that the applicant argues the claim limitations reciting “… a left lane marking image, and a right lane marking image displayed on the display unit when the processor has determined that the lane keeping control is being executed, wherein the vehicle image represents the vehicle, the left lane marking image represents the left marking line of the target lane, and the right lane marking image representing the right marking line of the target lane…receive a selection of a display mode from among the selectable display modes by a user in the vehicle as a result of the user's operation of a graphical display element from among the user operable graphical display elements that corresponds to the display mode…”. Accordingly, the previous rejection has been withdrawn; however, upon further consideration, a new ground of rejection is made for claim 1 over Shimizu (US Pub. No. 2024/0199010 A1) in view of Li (US Pub. No. 2017/0240185 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Shimizu (US Pub. No. 2024/0199010 A1) in view of Li (US Pub. No. 2017/0240185 A1).
Regarding claim 1, Shimizu teaches a vehicle display control device that is installed in a vehicle, the vehicle display control device comprising: (See Shimizu paragraph 0032; “The HMI 30 outputs various types of information to occupants (including the driver) of the vehicle M and receives input operations from the occupants. The HMI 30 includes, for example, various types of display devices…”); a processor; and a memory storing executable instructions that cause the processor to: (See Shimizu paragraph 0040; “… HMI controller 170, and a storage 180. Each of the first controller 120, the second controller 160, and the HMI controller 170 is implemented, for example, by a hardware processor such as a central processing unit (CPU) executing a program (software). Also, some or all of the above components may be implemented by hardware (including a circuit; circuitry) such as a large-scale integration (LSI) circuit, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a graphics processing unit (GPU) or may be implemented by software and hardware in cooperation. The program may be pre-stored in a storage device (a storage device including a non-transitory storage medium) such as an HDD or a flash memory of the automated driving control device 100 or may be stored in a removable storage medium such as a DVD, a CD-ROM, or a memory card and installed in the storage device of the automated driving control device 100 when the storage medium (the non-transitory storage medium) is mounted in a drive device, a card slot, or the like. The HMI controller 170 is an example of an “output controller.””); detect differences in luminance in image data captured by a camera to identify lane markings of a target lane in which the vehicle is traveling; (See Shimizu paragraph 0063; “The first recognizer 132 recognizes markings around the vehicle M on the basis of an output of the detection device DD. For example, the first recognizer 132 recognizes left and right markings LL1 and RL1 for defining the travel lane of the vehicle M. The markings LL1 and RL1 are examples of “first markings.” For example, the first recognizer 132 analyzes an image (hereinafter referred to as a camera image) captured by the camera 10, extracts edge points having a large luminance difference from the adjacent pixels in the image, and recognizes the first markings LL1 and RL1 in an image plane by connecting the edge points…”);
and the execution image indicates whether a lane keeping control is being executed to keep the vehicle traveling in the target lane; (See Shimizu paragraphs 0063-0064; “The first recognizer 132 recognizes markings around the vehicle M on the basis of an output of the detection device DD. For example, the first recognizer 132 recognizes left and right markings LL1 and RL1 for defining the travel lane of the vehicle M. The markings LL1 and RL1 are examples of “first markings.” For example, the first recognizer 132 analyzes an image (hereinafter referred to as a camera image) captured by the camera 10, extracts edge points having a large luminance difference from the adjacent pixels in the image, and recognizes the first markings LL1 and RL1 in an image plane by connecting the edge points. The first recognizer 132 converts the positions of the first markings LL1 and RL1 based on a position of a representative point (for example, the center of gravity or center) of the vehicle M into positions of the vehicle coordinate system. In this conversion, the positions of the first markings LL1 and RL1 projected onto the road surface on which the vehicle M is traveling (for example, on the XY plane (Z=0) or on the horizontal plane) are expressed in the coordinate system. The map expressed by this coordinate system becomes a distance measurement map indicating a distance from the position of the vehicle M obtained by projecting each point recognized by the camera image onto the XY plane in the visual line direction from the camera 10. The first recognizer 132 may recognize curvatures (or curvature radii) of the first markings LL1 and RL1 on the basis of the analysis result of the camera image and may recognize the type and content of road signs included in the camera image.
The second recognizer 134, for example, recognizes the left and right markings LL2 and RL2 for defining the travel lane of the vehicle M from the map information on the basis of the position of the vehicle M detected by the position detector. The markings LL2 and RL2 are examples of “second markings.” For example, the second recognizer 134 acquires the position information of the vehicle M detected by the position detector, refers to the second map information 62 on the basis of the acquired position information, and recognizes the second markings LL2 and RL2 for defining the lane located at the position of the vehicle M from the second map information 62. The second recognizer 134 may recognize curvatures (or curvature radii) and road gradient information of the markings LL2 and RL2 from the second map information 62.”);
determine whether the lane keeping control is being executed; (See Shimizu paragraphs 0074 and 0076; “…FIG. 6 are points on the first marking LL1 on the left side of the vehicle M recognized by the first recognizer 132 using a camera image and points R1R to R3R are points on the first marking RL1 on the right side of the vehicle M recognized by the first recognizer 132… In the case of a curve path, the curvature of the left and right markings of the lane L2 may be different or the gradient may be in the lateral direction (the road width direction) of a bank or the like. Therefore, the corrector 136 may correct the positions of the second markings LL2 and RL2 in correspondence with the curvature of each marking and the gradient degree in the lateral direction (the road width direction).”);
a left lane marking image, and a right lane marking image displayed on the display unit when the processor has determined that the lane keeping control is being executed, wherein the vehicle image represents the vehicle, the left lane marking image represents the left marking line of the target lane, and the right lane marking image representing the right marking line of the target lane; (See Shimizu paragraph 0063, reproduced above).
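For illustration only, the luminance-difference edge extraction described in the Shimizu passages quoted above may be sketched as follows. This sketch is a hypothetical reconstruction, not Shimizu's disclosed implementation; the function names, the threshold value, and the left/right split heuristic are all assumptions.

    # Hypothetical sketch of Shimizu paragraph 0063 (not the disclosed
    # implementation): extract edge points whose luminance differs sharply
    # from adjacent pixels, then split them into left/right marking candidates.
    import numpy as np

    def extract_marking_edge_points(gray_image: np.ndarray, threshold: float = 40.0):
        """Return (row, col) pixel coordinates where the horizontal
        luminance difference from the adjacent pixel exceeds the threshold."""
        diff = np.abs(np.diff(gray_image.astype(np.float32), axis=1))
        rows, cols = np.nonzero(diff > threshold)
        return list(zip(rows.tolist(), cols.tolist()))

    def split_left_right(edge_points, image_width: int):
        """Assign edge points to left/right marking candidates by image half;
        a crude stand-in for the edge-point connection step Shimizu describes."""
        left = [p for p in edge_points if p[1] < image_width // 2]
        right = [p for p in edge_points if p[1] >= image_width // 2]
        return left, right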
Shimizu does not explicitly teach, but Li teaches, display a screen on a display unit provided in the vehicle, wherein the screen includes user operable graphical display elements indicating selectable display modes in which an execution image is to be displayed; (See Li paragraph 0217; “…The user may directly execute the display guide mode by selecting the menu for executing the display guide mode.”);
receive a selection of a display mode from among the selectable display modes by a user in the vehicle as a result of the user's operation of a graphical display element from among the user operable graphical display elements that corresponds to the display mode; (See Li paragraphs 0217 and 0091-0092; “…The user may directly execute the display guide mode by selecting the menu for executing the display guide mode…the display guide mode, a display method of an existing output graphic image or a newly-output graphic image may be also changed…in the display guide mode, …type, luminance and saturation of the existing output graphic image … or the graphic image is displayed …”);
and a control unit that causes the display unit to display the execution image in the selected display mode with respect to a vehicle image; (See Li paragraphs 0054, 0217 and 0327; “…FIG. 2, such a driver assistance apparatus 100 may include an input unit 110, a communication unit 120, an interface 130, a memory 140, a sensor unit 155, a processor 170, a display unit 180… The user may directly execute the display guide mode by selecting the menu for executing the display guide mode… The display unit 741 may configure an inter-layer structure with a touch sensor, or may be integrally formed with the touch sensor to implement a touchscreen. The touchscreen may function as the user input unit 724 which provides an input interface between the vehicle and the user and also function to provide an output interface between the vehicle and the user. In this case, the display unit 741 may include a touch sensor which senses a touch to the display unit 741 so as to receive a control command in a touch manner…”).
Both Shimizu and Li are in the same field of display control. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimizu's vehicle display control device to include Li's selection of a display mode from among selectable display modes. No new functionality would arise from the combination, and the combination would improve the usability of Shimizu by letting the user obtain the desired display mode and control of the vehicle. Further, one of ordinary skill in the art would have recognized that the results of the combination were predictable.
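For illustration only, the interaction for which Li is cited, presenting user-operable elements for selectable display modes, receiving a touch selection, and then displaying the execution image in the selected mode relative to the vehicle image, may be sketched as follows. The class, mode names, and element identifiers are hypothetical assumptions, not Li's disclosed API.

    # Hypothetical sketch of the claimed selection flow mapped to Li; all
    # names are illustrative assumptions.
    from dataclasses import dataclass, field

    SELECTABLE_DISPLAY_MODES = ("standard", "display_guide", "minimal")

    @dataclass
    class DisplayController:
        selected_mode: str = "standard"
        # One on-screen graphical element per selectable display mode.
        elements: dict = field(default_factory=lambda: {
            mode: f"button_{mode}" for mode in SELECTABLE_DISPLAY_MODES
        })

        def on_element_touched(self, element_id: str) -> None:
            """Receive a selection by mapping the touched element to its mode."""
            for mode, element in self.elements.items():
                if element == element_id:
                    self.selected_mode = mode
                    return
            raise ValueError(f"unknown element: {element_id}")

        def render(self, lane_keeping_active: bool) -> str:
            """Display the execution image in the selected mode with respect
            to the vehicle image when lane keeping control is executing."""
            if not lane_keeping_active:
                return "vehicle image only"
            return f"vehicle image + execution image ({self.selected_mode} mode)"

    # Usage: touching the display-guide element selects that mode.
    controller = DisplayController()
    controller.on_element_touched("button_display_guide")
    assert controller.render(lane_keeping_active=True).endswith("(display_guide mode)")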
Regarding claim 5, Shimizu in view of Li teaches the vehicle display control device according to claim 1. Shimizu does not teach, but Li teaches, wherein the executable instructions further cause the processor to receive the display mode from the occupant before the vehicle travels; (See Li paragraphs 0054, 0217 and 0327; “…FIG. 2, such a driver assistance apparatus 100 may include an input unit 110, a communication unit 120, an interface 130, a memory 140, a sensor unit 155, a processor 170, a display unit 180… The user may directly execute the display guide mode by selecting the menu for executing the display guide mode… The display unit 741 may configure an inter-layer structure with a touch sensor, or may be integrally formed with the touch sensor to implement a touchscreen. The touchscreen may function as the user input unit 724 which provides an input interface between the vehicle and the user and also function to provide an output interface between the vehicle and the user. In this case, the display unit 741 may include a touch sensor which senses a touch to the display unit 741 so as to receive a control command in a touch manner…”).
Both Shimizu and Li are in the same field of display control. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimizu's vehicle display control device to include Li's selection of a display mode from among selectable display modes. No new functionality would arise from the combination, and the combination would improve the usability of Shimizu by letting the user obtain the desired display mode and control of the vehicle. Further, one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Shimizu (US Pub. No. 2024/0199010 A1) in view of Li (US Pub. No. 2017/0240185 A1) and Yagyu (JP Pub. No. 2021-066419 A).
Regarding claim 2, Shimizu in view of Li teaches the vehicle display control device according to claim 1. Shimizu does not teach, but Yagyu teaches, wherein the executable instructions further cause the processor to display the execution image without overlapping a lane marking image representing the lane marking of the target lane; (See Yagyu paragraph 0121; “Even in the offset release display, the display of the target emphasized content CTet and the expected locus content CTp is continued. The virtual lane marking image unit LPv2 displayed as the target emphasized content CTet is superimposed and displayed on the own vehicle side of the lane marking Ll so as not to overlap with the actual lane marking Ll…”).
Regarding claim 3, Shimizu in view of Li teaches the vehicle display control device according to claim 2. Shimizu does not teach, but Yagyu teaches, wherein the executable instructions further cause the processor to display the execution image between the left lane marking image and the right lane marking image; (See Yagyu paragraph 0028; “The lane keeping control unit 51 is a functional unit that realizes the function of the LTA (Lane Tracing Assist) that controls the traveling of the vehicle A in the lane. LTA is also called LTC (Lane Trace Control). The lane keeping control unit 51 controls the steering angle of the steering wheel of the vehicle A based on the position and shape information of the lane marking Ll or the roadside Er extracted from the image data of the front camera 31 by recognizing the traveling environment. The lane keeping control unit 51 generates a planned traveling line PRL having a shape along the own lane Lns so that the vehicle can continue traveling in the own lane Lns, which is the running lane. The lane keeping control unit 51 defines the planned traveling line PRL at the center position of the own lane Lns, which is approximately equidistant from the left and right controlled target Tc. Therefore, when one of the left and right control target Tc changes (switches) from the lane marking Ll to the road end Er, the planned traveling line PRL of the own vehicle is gently offset so as to approach the road end Er which is the control target Tc (see FIG. 4)…”).
Both Shimizu and Yagyu are in the same field of display control. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shimizu's vehicle display control device with Yagyu's placement of the execution image. No new functionality would arise from the combination, and the combination would improve the usability of Shimizu by giving the occupant a better view of the lane keeping status and control of the vehicle. Further, one of ordinary skill in the art would have recognized that the results of the combination were predictable.
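For illustration only, the non-overlapping placement for which Yagyu is cited in claims 2 and 3, drawing the execution image between the left and right lane marking images without overlapping either, may be sketched as follows. The screen-space coordinate convention and the margin value are hypothetical assumptions, not Yagyu's disclosure.

    # Hypothetical sketch of the layout constraint cited from Yagyu: keep the
    # execution image strictly between the two marking images. Values illustrative.
    def place_execution_image(left_marking_x: float, right_marking_x: float,
                              image_width: float, margin: float = 4.0) -> float:
        """Return the left-edge x of the execution image, centered between
        the marking images and kept clear of both by the margin."""
        inner_left = left_marking_x + margin
        inner_right = right_marking_x - margin
        if inner_right - inner_left < image_width:
            raise ValueError("region between marking images too narrow")
        center = (inner_left + inner_right) / 2.0
        return center - image_width / 2.0

    # Example: marking images at x=100 and x=300; a 60-px execution image is
    # placed with its left edge at x=170, i.e., between the two markings.
    assert place_execution_image(100.0, 300.0, 60.0) == 170.0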
Allowable Subject Matter
Claim 4 is objected to as being dependent upon rejected base claim 1, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIDIA KWIATKOWSKA whose telephone number is (571)272-5161. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott A. Browne, can be reached at (571) 270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L.K./Examiner, Art Unit 3666
/SCOTT A BROWNE/Supervisory Patent Examiner, Art Unit 3666