Prosecution Insights
Last updated: April 19, 2026
Application No. 18/165,297

VEHICLE DISPLAY APPARATUS

Final Rejection (§102, §103)
Filed
Feb 06, 2023
Examiner
SMITH, JORDAN T
Art Unit
3666
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
DENSO CORPORATION
OA Round
4 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 5-6
To Grant: 3y 1m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 67% (60 granted / 90 resolved), +14.7% vs TC avg (above average)
Interview Lift: +7.8% (moderate), based on resolved cases with interview
Avg Prosecution: 3y 1m (typical timeline), with 24 applications currently pending
Total Applications: 114 across all art units (career history)

Statute-Specific Performance

§101: 24.9% (-15.1% vs TC avg)
§103: 51.6% (+11.6% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Tech Center averages are estimates; based on career data from 90 resolved cases.
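The headline examiner metrics above are simple ratio arithmetic over the raw counts shown on this page. A minimal sketch (hypothetical code, not the dashboard's actual implementation) of how those figures can be reproduced:

```python
# Hypothetical recomputation of the examiner metrics shown above.
# Inputs are the raw counts displayed on this page.
granted = 60    # applications allowed over the examiner's career
resolved = 90   # total resolved (allowed + abandoned) applications

# Career allow rate: 60 / 90 ~= 66.7%, displayed as 67%.
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.0f}%")

# The "+14.7% vs TC avg" delta implies an estimated Tech Center
# average allow rate of roughly 52%.
delta_vs_tc = 14.7
implied_tc_avg = allow_rate - delta_vs_tc
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```

The same subtraction applies to the per-statute figures below, each compared against its own estimated Tech Center average.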

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to 35 U.S.C. 102/103 have been fully considered but they are not persuasive. Applicant argues:

"The Seitz reference fails to teach or suggest these recites features. For example, Seitz fails to teach or suggest a rear image with any images or symbols representing a following vehicle. …[T]he Seitz reference merely displays a rear portion of the ego vehicle. Seitz, however, does not display any following vehicle. As such, Seitz fails to teach or suggest at least one of a circuit and a processor configured to display a front image and a rear image of a following vehicle in an additional manner when the level of autonomous driving is autonomous driving level 3 or higher, in which the driver's obligation to monitor the surround is necessary, as recited by claim 10. In addition, Seitz displays 'a trailer object 90' as shown in FIG. 9 and described in paragraphs [0172] to [0175]. The 'trailer object 90' of Seitz, however, is different and distinguishable from 'a following vehicle,' as recited by claim 10 as the trailer object is still part of the subject vehicle 31, as illustrated in FIG. 9 of Seitz[.]"

Examiner respectfully disagrees. The broadest reasonable interpretation of a "following vehicle" includes a trailer, as the term "vehicle" is extremely broad. While an ego vehicle and a trailer are connected, they are not necessarily considered a single vehicle; they can just as easily be described as two separate vehicles. Moreover, applicant's specification does not appear to define "vehicle" or "following vehicle" in a particular way that would preclude a trailer being interpreted as a following vehicle, nor does the specification recite that a following vehicle is not a trailer. Thus, Seitz teaches displaying a following vehicle in the behind view.
Applicant further argues:

"In addition, claims 20 and 21 each recite additional features that are not shown by the cited references. For example, claims 20 and 21 recite determining whether there is a following vehicle from the surrounding information or not, displaying the front area image and the rear area image without the following vehicle in a continuous and additional manner when the autonomous driving function is demonstrated and the determining step determines that there is no following vehicle, displaying the front area image and the rear area image, including the following vehicle, in a continuous and additional manner when the autonomous driving function is demonstrated and the determining step determines that there is the following vehicle. The Seitz reference is silent with respect to these recited features."

Examiner respectfully disagrees. As described below, Seitz describes determining if there is a following vehicle (trailer), and accordingly displays, or does not display, the following vehicle.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 10 and 21 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US20220135061 by Seitz et al. (hereinafter "Seitz").
Regarding claim 10, Seitz teaches A vehicle display apparatus, comprising: a display device that displays traveling information of a vehicle; and a controller including at least one of (i) a circuit and (ii) a processor with a memory storing computer program code executable by the processor, see for example paragraphs [0102]-[0104] for system architecture, including the display system. the at least one of the circuit and the processor configured to cause the controller to: acquire position information, traveling state, and surrounding information of the vehicle; and control the display device to display a surrounding image of the vehicle on the display unit as one of the traveling information, see for example paragraphs [0051]-[0052] and [0186], where the vehicle determines its position on the road and in relation to other vehicles based on sensors. and switch a display form relating to a relationship among the vehicle and a surrounding vehicle in the surrounding image according to: a level of autonomous driving of the vehicle, which is set based on the position information, the traveling state, and the surrounding information; the traveling state; and a state of surrounding vehicles as the surrounding information, see paragraphs [0179]-[0188], where the system displays differently based on a level of automation. See also for example paragraphs [0147]-[0149], where the system displays the ego vehicle, surrounding road detail, and other vehicles, reading on the traveling state and a state of surrounding vehicles as the surrounding information.
wherein the at least one of the circuit and the processor are further configured to display, on the display device, a front area image including the vehicle and no rear area image when the level of autonomous driving is autonomous driving level 1 or autonomous driving level 2, in which a driver's obligation to monitor a surrounding is necessary, see paragraphs [0179]-[0183], where the system, when in a low automation state, shows a reduced depiction of the environment on the display; see Figure 10A. and to display the front area image and the rear area image of a following vehicle in an additional manner when the level of autonomous driving is an autonomous driving level 3 or higher, in which the driver's obligation to monitor the surround is unnecessary. See paragraphs [0184]-[0188], where the system, when in a higher automation state, shows an expanded view of the vehicle from above and to the rear; see Figure 10B.

Regarding claim 21, Seitz teaches A vehicle display apparatus, comprising: a display device that displays traveling information of a vehicle; and a controller including at least one of (i) a circuit and (ii) a processor with a memory storing computer program code executable by the processor, see for example paragraphs [0102]-[0104] for system architecture, including the display system. the at least one of the circuit and the processor configured to cause the controller to: acquire position information, traveling state, and surrounding information of the vehicle; and control the display device to display a surrounding image of the vehicle on the display device as one of the traveling information, see for example paragraphs [0051]-[0052] and [0186], where the vehicle determines its position on the road and in relation to other vehicles based on sensors.
and switch a display form relating to a relationship among the vehicle and a surrounding vehicle in the surrounding image according to: a level of autonomous driving of the vehicle, which is set based on the position information, the traveling state, and the surrounding information; the traveling state; and a state of surrounding vehicles as the surrounding information; see paragraphs [0179]-[0188], where the system displays differently based on a level of automation. See also for example paragraphs [0147]-[0149], where the system displays the ego vehicle, surrounding road detail, and other vehicles, reading on the traveling state and a state of surrounding vehicles as the surrounding information. wherein the at least one of the circuit and the processor are further configured to: display, on the display device, a front area image including the vehicle and no rear area image when the level of autonomous driving is autonomous driving level 1 or autonomous driving level 2, in which a driver's obligation to monitor a surrounding is necessary; see paragraphs [0179]-[0183], where the system, when in a low automation state, shows a reduced depiction of the environment on the display; see Figure 10A. determine whether there is a following vehicle from the surrounding information or not; see for example paragraph [0094], where the system uses a "detection unit" to determine the presence of a trailer ("an operating state of a trailer device") (with a trailer reading on following vehicle). Similarly, in paragraph [0173] "[i]f it is recorded that a device is mounted on the trailer device, then the ego object 31 is generated in combination with a graphic trailer object 90" (emphasis added).
display, on the display device, the front area image and the rear area image without the following vehicle in a continuous and additional manner when the level of autonomous driving is autonomous driving level 3 or higher, in which the driver's obligation to monitor the surrounding is unnecessary and the determining step determines that there is no following vehicle; see paragraphs [0184]-[0188], where the system, when in a higher automation state, shows an expanded view of the vehicle from above and to the rear; see Figure 10B. display, on the display device, the front area image and the rear area image including the following vehicle in a continuous and additional manner when the level of autonomous driving is autonomous driving level 3 or higher, in which the driver's obligation to monitor the surround is unnecessary and the determining step determines that there is the following vehicle. See paragraphs [0172]-[0175], where the system, when there is a trailer present, shows an image of the trailer (following vehicle) in a front area image with a rear area image. See also paragraphs [0184]-[0188], where the system, when in a higher automation state, shows an expanded view of the vehicle from above and to the rear; see Figure 10B.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Seitz, further in view of US20230296395 by Kumon (hereinafter "Kumon").

Regarding claim 1, Seitz teaches A vehicle display apparatus, comprising: a display device that displays traveling information of a vehicle; and a controller including at least one of (i) a circuit and (ii) a processor with a memory storing computer program code executable by the processor, see for example paragraphs [0102]-[0104] for system architecture, including the display system. the at least one of the circuit and the processor configured to cause the controller to: acquire position information of the vehicle and surrounding information of the vehicle; see for example paragraphs [0051]-[0052] and [0186], where the vehicle determines its position on the road and in relation to other vehicles based on sensors. and control, based on the position information and the surrounding information, the display device to: display a front area image including the vehicle and no rear area image on the display device when an autonomous driving function of the vehicle is not demonstrated; see paragraphs [0179]-[0183], where the system, when in a low automation state, shows a reduced depiction of the environment on the display; see Figure 10A. and display the front area image and the rear area image, including a following vehicle, in a continuous and additional manner when the autonomous driving function is demonstrated. See paragraphs [0184]-[0188], where the system, when in a higher automation state, shows an expanded view of the vehicle from above and to the rear; see Figure 10B. Seitz does not explicitly teach that the display apparatus should display a front area image including the vehicle and no rear area image on the display device when an autonomous driving function of the vehicle is not demonstrated.
Although Seitz analogously teaches displaying only a front area image when in a reduced automation state, Seitz does not explicitly teach displaying the image when in a manual, or non-autonomous, driving state. However, Kumon teaches a system that will control the display device to: display a front area image including the vehicle and no rear area image on the display device when an autonomous driving function of the vehicle is not demonstrated; and display the front area image and the rear area image, including a following vehicle, in a continuous and additional manner when the autonomous driving function is demonstrated. See the flowchart shown in Figure 3 displaying the process outline. In paragraphs [0052]-[0053], the system determines whether the vehicle is in autonomous mode at step 100. If the vehicle is not in autonomous mode, the system shows the normal HUD shown in Figures 4A and 4B (see paragraph [0053]); but if the vehicle is in autonomous mode, the system can, depending on factors, display a third-person multi-lane image (see paragraphs [0054]-[0056]). Thus, the system displays a front area image when autonomous driving is not demonstrated (in context of a system that also displays a front and rear area image when autonomous driving is demonstrated). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz with the automation-discriminating perspective of Kumon with a reasonable expectation of success. Doing so allows the driver to have a better idea of what is occurring in the vicinity of the vehicle, while not suffering from excess information overwhelming their ability to understand it. See Kumon ¶ [0050].
Regarding claim 20, Seitz teaches A vehicle display apparatus, comprising: a display device that displays traveling information of a vehicle; and a controller including at least one of (i) a circuit and (ii) a processor with a memory storing computer program code executable by the processor, see for example paragraphs [0102]-[0104] for system architecture, including the display system. the at least one of the circuit and the processor configured to cause the controller to: acquire position information of the vehicle and surrounding information of the vehicle; see for example paragraphs [0051]-[0052] and [0186], where the vehicle determines its position on the road and in relation to other vehicles based on sensors. and control, based on the position information and the surrounding information, the display device to: display a front area image including the vehicle and no rear area image on the display device when an autonomous driving function of the vehicle is not demonstrated; see paragraphs [0179]-[0183], where the system, when in a low automation state, shows a reduced depiction of the environment on the display; see Figure 10A. determine whether there is a following vehicle from the surrounding information or not; see for example paragraph [0094], where the system uses a "detection unit" to determine the presence of a trailer ("an operating state of a trailer device") (with a trailer reading on following vehicle). Similarly, in paragraph [0173] "[i]f it is recorded that a device is mounted on the trailer device, then the ego object 31 is generated in combination with a graphic trailer object 90" (emphasis added).
display the front area image and the rear area image without the following vehicle in a continuous and additional manner when the autonomous driving function is demonstrated and the determining step determines that there is no following vehicle; see paragraphs [0184]-[0188], where the system, when in a higher automation state, shows an expanded view of the vehicle from above and to the rear; see Figure 10B. and display the front area image and the rear area image, including the following vehicle, in a continuous and additional manner when the autonomous driving function is demonstrated and the determining step determines that there is the following vehicle. See paragraphs [0172]-[0175], where the system, when there is a trailer present, shows an image of the trailer (following vehicle) in a front area image with a rear area image. See also paragraphs [0184]-[0188], where the system, when in a higher automation state, shows an expanded view of the vehicle from above and to the rear; see Figure 10B. Seitz does not explicitly teach that the display apparatus should display a front area image including the vehicle and no rear area image on the display device when an autonomous driving function of the vehicle is not demonstrated. Although Seitz analogously teaches displaying only a front area image when in a reduced automation state, Seitz does not explicitly teach displaying the image when in a manual, or non-autonomous, driving state. However, Kumon teaches a system that will control the display device to: display a front area image including the vehicle and no rear area image on the display device when an autonomous driving function of the vehicle is not demonstrated. See the flowchart shown in Figure 3 displaying the process outline. In paragraphs [0052]-[0053], the system determines whether the vehicle is in autonomous mode at step 100.
If the vehicle is not in autonomous mode, the system shows the normal HUD shown in Figures 4A and 4B (see paragraph [0053]); but if the vehicle is in autonomous mode, the system can, depending on factors, display a third-person multi-lane image (see paragraphs [0054]-[0056]). Thus, the system displays a front area image when autonomous driving is not demonstrated (in context of a system that also displays a front and rear area image when autonomous driving is demonstrated). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz with the automation-discriminating perspective of Kumon with a reasonable expectation of success. Doing so allows the driver to have a better idea of what is occurring in the vicinity of the vehicle, while not suffering from excess information overwhelming their ability to understand it. See Kumon ¶ [0050].

Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Seitz in view of Kumon, and further in view of US20170330463 by Li et al. (hereinafter "Li").

Regarding claim 2, Seitz does not explicitly teach wherein the at least one of the circuit and the processor are further configured to cause the controller to widen the rear area image as a distance between the vehicle and the following vehicle increases. However, Li teaches a system wherein the at least one of the circuit and the processor are further configured to cause the controller to widen the rear area image as a distance between the vehicle and the following vehicle increases. See for example Figures 25A – 25C or paragraphs [0204] – [0205] where the perspective is adjusted to include more of the trailing vehicle when it is within a set range of the host vehicle.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz, modified by the automation-discriminating perspective of Kumon, with the perspective-shifting methods of Li with a reasonable expectation of success. Doing so allows the driver to have a better idea of what is occurring in the vicinity of the vehicle, including seeing blind spots and approaching vehicles, promoting safety of the vehicle.

Regarding claim 3, Seitz does not explicitly teach wherein if the distance is equal to or greater than a predetermined distance, the at least one of the circuit and the processor are further configured to cause the controller to maximize the rear area image and display the following vehicle as a simple display for indicating an existence on the display device. However, Li teaches a system wherein if the distance is equal to or greater than a predetermined distance, the at least one of the circuit and the processor are further configured to cause the controller to maximize the rear area image and display the following vehicle as a simple display for indicating an existence on the display device. See for example Figures 24A – 24C, where the system reduces the zoom and maximizes the area to show the other vehicle in Figure 24A, as compared with the more zoomed-in Figures 24B and 24C. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz, modified by the automation-discriminating perspective of Kumon, with the perspective-shifting methods of Li with a reasonable expectation of success. Doing so allows the driver to have a better idea of what is occurring in the vicinity of the vehicle, including seeing blind spots and approaching vehicles, promoting safety of the vehicle.
Regarding claim 4, Seitz does not explicitly teach, but Li does teach wherein if a distance between the vehicle and the following vehicle varies, the at least one second processor and memory are further configured to fix a rear area to an area capable of absorbing a variation in the distance. See for example Figures 24A – 24C, where the system reduces the zoom and maximizes the area to show the other vehicle in Figure 24A, as compared with the more zoomed-in Figures 24B and 24C. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz, modified by the automation-discriminating perspective of Kumon, with the perspective-shifting methods of Li with a reasonable expectation of success. Doing so allows the driver to have a better idea of what is occurring in the vicinity of the vehicle, including seeing blind spots and approaching vehicles, promoting safety of the vehicle.

Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Seitz in view of Kumon, further in view of WO2016185691 by Usui et al. (hereinafter "Usui").

Regarding claim 5, Seitz does not explicitly teach wherein if there is a priority following vehicle having a predetermined high priority in the following vehicle, the at least one of the circuit and the processor are further configured to cause the controller to display up to the priority following vehicle in the rear area on the display device. However, Usui teaches a system wherein if there is a priority following vehicle having a predetermined high priority in the following vehicle, the at least one of the circuit and the processor are further configured to cause the controller to display up to the priority following vehicle in the rear area on the display device.
See for example page 7 at line 320 through page 8 line 344, or Figure 10, where the system highlights predetermined types of emergency vehicles (ambulance, police car, fire engine, etc.) with a special notification/display superimposed on the camera image showing the emergency vehicle. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz, modified by the automation-discriminating perspective of Kumon, with the emergency vehicle highlighting methods of Usui with a reasonable expectation of success. Doing so allows the driver to quickly understand that a priority vehicle is approaching in order to take appropriate action.

Regarding claim 6, Seitz does not explicitly teach wherein the at least one of the circuit and the processor are further configured to cause the controller to perform an emphasized display which emphasizes the priority following vehicle on the display device. However, Usui teaches wherein the at least one of the circuit and the processor are further configured to cause the controller to perform an emphasized display which emphasizes the priority following vehicle on the display device. See for example page 7 at line 320 through page 8 line 344, or Figure 10, where the system highlights predetermined types of emergency vehicles (ambulance, police car, fire engine, etc.) with a special notification/display superimposed on the camera image showing the emergency vehicle. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz, modified by the automation-discriminating perspective of Kumon, with the emergency vehicle highlighting methods of Usui with a reasonable expectation of success. Doing so allows the driver to quickly understand that a priority vehicle is approaching in order to take appropriate action.
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Seitz in view of Kumon, further in view of US20180257489 by Watanabe et al. (hereinafter "Watanabe").

Regarding claim 7, Seitz does not explicitly teach wherein the at least one of the circuit and the processor are further configured to cause the controller to perform a unity image display on the display device showing a sense of unity of the vehicle and the following vehicle in a case that the following vehicle performs an automatic following driving to the vehicle. However, Watanabe teaches a system wherein the at least one of the circuit and the processor are further configured to cause the controller to perform a unity image display on the display device showing a sense of unity of the vehicle and the following vehicle in a case that the following vehicle performs an automatic following driving to the vehicle. See for example paragraphs [0017] - [0019], where the HUD displays a following mark superimposed on a following vehicle. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz, modified by the automation-discriminating perspective of Kumon, with the followed vehicle identification system of Watanabe with a reasonable expectation of success. Doing so allows the driver to quickly recognize the vehicle it is set to follow from among the other vehicles on the display.

Regarding claim 8, Seitz does not explicitly teach wherein the at least one of the circuit and the processor are further configured to cause the controller to display a message on the display device indicating a relationship between the vehicle and the following vehicle.
However, Watanabe teaches a system wherein the at least one of the circuit and the processor are further configured to cause the controller to display a message on the display device indicating a relationship between the vehicle and the following vehicle. See for example paragraphs [0017] - [0019], where the HUD displays a following mark superimposed on a following vehicle. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz, modified by the automation-discriminating perspective of Kumon, with the followed vehicle identification system of Watanabe with a reasonable expectation of success. Doing so allows the driver to quickly recognize the vehicle it is set to follow from among the other vehicles on the display.

Regarding claim 9, Seitz does not explicitly teach wherein the at least one of the circuit and the processor are further configured to cause the controller to display the message so as not to overlap the vehicle and the following vehicle among the image. However, Watanabe teaches a system wherein the at least one of the circuit and the processor are further configured to cause the controller to display the message so as not to overlap the vehicle and the following vehicle among the image. See for example Figures 3 or 4 where the following vehicle mark is on the HUD but does not overlap with the following vehicle itself. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz, modified by the automation-discriminating perspective of Kumon, with the followed vehicle identification system of Watanabe with a reasonable expectation of success. Doing so allows the driver to quickly recognize the vehicle it is set to follow from among the other vehicles on the display.

Claims 11 and 13 are rejected under 35 U.S.C.
103 as being unpatentable over Seitz, further in view of Li.

Regarding claim 11, Seitz does not explicitly teach wherein when the level of autonomous driving is autonomous driving level 3 or higher in a traffic congestion, the at least one of the circuit and the processor are further configured to display on the display device, if there is a following vehicle, the rear area image up to a rear end of the following vehicle, and to display on the display device, if there is no following vehicle, a wider area than an area assuming the following vehicle. However, Li teaches a system wherein when the level of autonomous driving is autonomous driving level 3 or higher in a traffic congestion, the at least one of the circuit and the processor are further configured to display on the display device, if there is a following vehicle, the rear area image up to a rear end of the following vehicle, and to display on the display device, if there is no following vehicle, a wider area than an area assuming the following vehicle. See for example Figures 25A – 25C or paragraphs [0204] – [0205] where the perspective is adjusted to include more of the trailing vehicle when it is within a set range of the host vehicle, where adjusting the camera further away from the host vehicle to include the following vehicle reads on if there is a following vehicle, the rear area image up to a rear end of the following vehicle (e.g. Fig. 25C), and alternatively shows a more centered view (e.g. Fig. 25B) when there is no following vehicle, reading on if there is no following vehicle, a wider area than an area assuming the following vehicle. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz with the perspective-shifting methods of Li with a reasonable expectation of success.
Doing so allows the driver to have a better idea of what is occurring in the vicinity of the vehicle, including seeing blind spots and approaching vehicles, promoting safety of the vehicle.

Regarding claim 13, Seitz does not explicitly teach wherein if a dangerous vehicle, which may be dangerous to the vehicle, approaches, the at least one of the circuit and the processor are further configured to display on the display device the dangerous vehicle so as to be included in the rear area image. However, Li teaches a system wherein if a dangerous vehicle, which may be dangerous to the vehicle, approaches, the at least one of the circuit and the processor are further configured to display on the display device the dangerous vehicle so as to be included in the rear area image. See again for example Figures 25A or 25C, where when a vehicle approaches the host vehicle, the viewpoint is adjusted to show the nearby vehicle. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz with the perspective shifting methods of Li with a reasonable expectation of success. Doing so allows the driver to have a better idea of what is occurring in the vicinity of the vehicle, including seeing blind spots and approaching vehicles, promoting safety of the vehicle.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Seitz, further in view of JP2017166913 by Kamiya (hereinafter "Kamiya").

Regarding claim 12, Seitz teaches wherein if the autonomous driving of the vehicle is autonomous driving level 3 or higher while the vehicle is traveling in a predetermined area permitting autonomous driving, in which the autonomous driving is permitted in a predetermined specific area, the at least one of the circuit and the processor are further configured to display the surrounding image by a two-dimensional view. 
See again for example paragraphs [0184] – [0188], where the system displays the vehicle in the center of the screen (at least in a right-to-left direction) in a bird's eye perspective while the driver has permitted the vehicle to operate in autonomous driving mode (reading on permitted in a predetermined specific area). Seitz does not explicitly teach a two-dimensional view. However, Kamiya teaches a system where the display can show a two-dimensional view. See for example Fig. 4, where the viewpoint can be given in a two-dimensional view, as compared to Fig. 12 showing a bird's eye view. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the display system of Seitz with the two-dimensional view of Kamiya with a reasonable expectation of success. Doing so allows the driver to clearly see the vehicles in the vehicle's vicinity in all directions, improving driving safety.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN THOMAS SMITH whose telephone number is (571)272-0522. 
The examiner can normally be reached Monday - Friday, 9am - 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/JORDAN T SMITH/
Examiner, Art Unit 3666

/ANNE MARIE ANTONUCCI/
Supervisory Patent Examiner, Art Unit 3666

¹ Note that Examiner is interpreting the claim to be organized as switching "according to: 1) a level of autonomous driving of the vehicle, which is set based on a) the position information, b) the traveling state, and c) the surrounding information; 2) the traveling state; and 3) a state of surrounding vehicles as the surrounding information." That is, the switching is according to three conditions: the level of autonomous driving, the traveling state, and a state of surrounding vehicles; and that first condition, the level of autonomous driving, is set by a) the position information, b) the traveling state, and c) the surrounding information. A similar understanding applies to claims 14, 15, 16, 18, 19, and 21.

Prosecution Timeline

Feb 06, 2023
Application Filed
Feb 12, 2025
Non-Final Rejection — §102, §103
May 18, 2025
Response Filed
May 30, 2025
Examiner Interview Summary
May 30, 2025
Applicant Interview (Telephonic)
Jun 10, 2025
Final Rejection — §102, §103
Aug 11, 2025
Examiner Interview Summary
Aug 11, 2025
Response after Non-Final Action
Aug 11, 2025
Applicant Interview (Telephonic)
Sep 10, 2025
Request for Continued Examination
Oct 02, 2025
Response after Non-Final Action
Oct 09, 2025
Non-Final Rejection — §102, §103
Jan 14, 2026
Response Filed
Mar 11, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600378
AUTONOMOUS DRIVING SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12579853
VEHICLE ABNORMALITY DETECTION DEVICE AND VEHICLE ABNORMALITY DETECTION METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12568873
AGRICULTURAL ASSISTANCE SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12556310
VEHICLE AND METHOD FOR PREVENTING COMMUNICATION COLLISIONS BETWEEN COMMUNICATION TERMINALS PROVIDED IN A VEHICLE
2y 5m to grant Granted Feb 17, 2026
Patent 12553727
ROUTE SELECTION USING MACHINE-LEARNED SAFETY MODEL
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
67%
Grant Probability
74%
With Interview (+7.8%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 90 resolved cases by this examiner. Grant probability derived from career allow rate.
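The panel's figures follow from simple ratios over the examiner's resolved cases: the career allow rate is granted over resolved (60/90 ≈ 67%), and the interview lift is the difference in grant rate between cases with and without an interview. As a rough illustration only (this is not the vendor's actual methodology; the `ResolvedCase` fields and the synthetic case counts below are assumptions), the computation could look like this:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def grant_rate(cases):
    """Career allow rate: granted / resolved."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Grant-rate difference between interviewed and non-interviewed cases."""
    with_iv = [c for c in cases if c.had_interview]
    without = [c for c in cases if not c.had_interview]
    return grant_rate(with_iv) - grant_rate(without)

# Synthetic data mirroring only the panel's "60 granted / 90 resolved";
# the interview split is invented for illustration.
cases = (
    [ResolvedCase(True, True)] * 30    # granted, with interview
    + [ResolvedCase(False, True)] * 10   # not granted, with interview
    + [ResolvedCase(True, False)] * 30   # granted, no interview
    + [ResolvedCase(False, False)] * 20  # not granted, no interview
)
print(round(grant_rate(cases), 3))      # → 0.667
print(round(interview_lift(cases), 3))  # 0.75 - 0.60 → 0.15
```

Note that a raw with/without comparison like this is subject to selection bias (applicants may request interviews in already-promising cases), so the displayed "+7.8% lift" should be read as correlation, not causation.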
