Prosecution Insights
Last updated: April 19, 2026
Application No. 18/528,532

DISPLAY CONTROL DEVICE AND DISPLAY CONTROL METHOD

Status: Non-Final OA (§103)
Filed: Dec 04, 2023
Examiner: MATTHEWS, ANDRE L
Art Unit: 2621
Tech Center: 2600 (Communications)
Assignee: Faurecia Clarion Electronics Co. Ltd.
OA Round: 5 (Non-Final)

Grant Probability: 61% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 5m
Grant Probability With Interview: 78%

Examiner Intelligence

Career Allow Rate: 61% (307 granted / 503 resolved; -1.0% vs TC avg)
Interview Lift: strong, +17.0% for resolved cases with interview
Typical Timeline: 3y 5m average prosecution (36 applications currently pending)
Career History: 539 total applications across all art units
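
To make these headline numbers concrete, here is a minimal sketch of how they fit together, assuming (as this page implies) that grant probability is simply the career allow rate and that the "with interview" figure under Prosecution Projections is that rate plus the observed interview lift. The counts come from this page; the variable names are ours, not from any real API.

```python
# Minimal sketch, assuming grant probability == career allow rate and
# "with interview" == allow rate + interview lift, as this page implies.

granted, resolved = 307, 503                 # examiner's career counts (from this page)
allow_rate = granted / resolved              # 0.6104... -> displayed as 61%

interview_lift = 0.17                        # +17.0% lift for cases with an interview
with_interview = allow_rate + interview_lift # ~0.78 -> the 78% projection

tc_average = allow_rate + 0.01               # page reports -1.0% vs TC avg, so ~62%

print(f"allow rate:      {allow_rate:.1%}")
print(f"with interview:  {with_interview:.1%}")
print(f"TC avg estimate: {tc_average:.1%}")
```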

Statute-Specific Performance

§101: 2.0% (-38.0% vs TC avg)
§103: 68.6% (+28.6% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)

Tech Center averages are estimates; based on career data from 503 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/7/2026 has been entered.

Response to Arguments

Applicant's arguments with respect to the claims, filed 1/7/2026, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5, 7, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hato (US 2021/372810) in view of Higashiyama (US 202/0103649) and Bark (US 2016/0257199), and further in view of Hoover (US 2019/0130622), Mimura (US 2024/0042857), and Sun (US 2021/0223859).

Regarding claims 1 and 5, Hato teaches a display control device comprising: an occupant information acquisition unit for acquiring information related to the angle of the occupant (Figs. 4 and 15, viewpoint position specifying unit 71 detects the eyepoint of the driver); an image acquisition unit for acquiring an image in which a periphery of a vehicle is captured (external information acquisition unit 74); an image conversion unit for converting the image in which the periphery of the vehicle is captured by the image acquisition unit to a virtual viewpoint image viewed from a virtual viewpoint (virtual layout unit 75); a virtual viewpoint setting unit for setting the position of the virtual viewpoint based on the amount of change of the occupant (virtual layout unit 75); and a display processing unit for performing control to display the virtual viewpoint image on a display unit (Figs. 4 and 15, display generation unit 77). Although Hato teaches the limitations as discussed above and teaches tracking the eyepoint of the occupant, he does not explicitly teach acquiring information relating to the angle of the face of the occupant and setting a viewpoint based on the amount of change in the face angle of the occupant.
However, in the same field of projecting an image to an occupant of a vehicle, Higashiyama teaches a display method that acquires information relating to the angle of the face of the occupant and sets a viewpoint based on the amount of change in the face angle of the occupant ([0054] teaches that the system will detect a change in the user's eyes based on the orientation of the face, using the position of the eyes with respect to the head (inner points and moving points/irises of the eyes); Fig. 13, steps 110-116, teaches changing the image based on the gaze point of the occupant). Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Hato with the method as taught by Higashiyama. This combination would provide an improved experience for an occupant without creating more obstructions.

Although the combination teaches the limitations as discussed above, it fails to explicitly teach a storage unit for storing a reference angle in which the face angle of the occupant is a prescribed angle, wherein the virtual viewpoint setting unit is configured to set the position of the virtual viewpoint based on an amount of change, which is calculated by subtracting the angle of the face of the occupant acquired by the occupant information acquisition unit from the reference angle stored in the storage unit, and wherein the image conversion unit is configured to generate a display image to be displayed in a display region of the display unit based on the captured image and the position of the virtual viewpoint calculated by the virtual viewpoint setting unit.

However, in the field of recognizing a user's head pose to determine viewing location, Bark teaches a storage unit for storing a reference angle of the occupant at a prescribed angle ([0026-0027] teach that eye box 116 is sized based on different possible head positions of the driver regardless of the position and posture of the driver seat; it is therefore clear that the system can detect a start position of the head (reference angle)); a virtual viewpoint setting unit configured to set the position of the virtual viewpoint based on the amount of change ([0026-0027]: the system can detect a start position of the head and the amount of change in head position to adjust the size of the eye box for the user); and an image conversion unit configured to generate a display image to be displayed in a display region of the display unit based on the captured image ([0034]: the system uses cameras to detect a vehicle in front of the trailing vehicle, as shown and described with respect to Fig. 3) and the position of the virtual viewpoint calculated by the virtual viewpoint setting unit ([0026-0027], as above). Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Hato with the method as taught by Higashiyama and the method of detecting user head pose as taught by Bark. This combination would provide an improved experience for an occupant without creating more obstructions.
Although the combination teaches the limitations as discussed above, it fails to explicitly teach setting the position of the virtual viewpoint based on an amount of change, which is calculated by subtracting the angle of the face of the occupant acquired by the occupant information acquisition unit from the reference angle stored in the storage unit. However, in the field of presenting a virtual image to a user, Hoover teaches a storage unit for storing a reference angle in which the face angle of the occupant is a prescribed angle (Figs. 13-14, reference angle); an occupant monitoring unit for outputting a yaw angle and a pitch angle indicating a direction in which the face of the occupant of the vehicle is facing in a horizontal direction (turn left or right) and a vertical direction (tilt head forward or backward) as information related to the angle of the face, wherein the yaw angle is the angle of the face of the occupant in the horizontal direction and the pitch angle is the angle of the face of the occupant in the vertical direction ([0113] teaches determining the yaw (turn left or right), pitch (tilt head forward or backward), and roll of the user to calculate head pose); and wherein the occupant information acquisition unit is configured to: acquire the angle of the face after the direction of the face has changed from the occupant monitoring unit, and acquire the reference angle from the storage unit; calculate an amount of change in the yaw angle by subtracting the yaw angle as the reference angle from the yaw angle as the angle of the face after movement; and calculate an amount of change in the pitch angle by subtracting the pitch angle as the reference angle from the pitch angle as the angle of the face after movement ([0131-0138] describe, with respect to Fig. 14, that the system can determine a head pose angular difference by using a reference head pose and a new head pose). Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Hato with the method as taught by Higashiyama, the method of detecting user head pose as taught by Bark, and the method of detecting user head pose as taught by Hoover. This combination would provide an improved experience for an occupant without creating more obstructions.

Although the combination teaches the limitations as discussed above, it fails to explicitly teach a virtual viewpoint corresponding to a center of projection when a display region of the display unit is set to a projection surface, wherein the display image is an image as if looking outside the vehicle through the display unit from the virtual viewpoint, and the display image is the image of the captured image viewed from the virtual viewpoint. However, in the field of presenting a virtual image to a user, Mimura teaches a virtual viewpoint corresponding to a center of projection when a display region of the display unit is set to a projection surface ([0076-0079], [0088], and [0092] teach that the system will detect an angle of the occupant and a line of sight of the occupant to calculate whether an image can be displayed based on the occupant's head position; [0088] teaches that the image in the box is updated based on a change in the position of the occupant's head), wherein the display image is an image as if looking outside the vehicle through the display unit from the virtual viewpoint and the display image is the image of the captured image viewed from the virtual viewpoint (Figs. 24-28 and [0110-0119] teach that the image displayed is an image captured by peripheral cameras and presented to the user based on line of sight; [0113] teaches that the content of the images displayed changes as the car moves). Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Hato with the method as taught by Higashiyama, the method of detecting user head pose as taught by Bark, the method of detecting user head pose as taught by Hoover, and the method of presenting an image to the user as taught by Mimura. This combination would provide an improved experience for an occupant without creating more obstructions.

Although the combination teaches the limitations as discussed above, it fails to explicitly teach setting the position of the virtual viewpoint of the occupant in the horizontal direction by adding a value corresponding to the amount of change in the yaw angle to a position in the horizontal direction corresponding to the reference angle, and setting the position of the virtual viewpoint of the occupant in the vertical direction by adding a value corresponding to the amount of change in the pitch angle to a position in the vertical direction corresponding to the reference angle. However, in the field of presenting images to users, Sun teaches a method of setting the position of the virtual viewpoint of the occupant in the horizontal direction by adding a value corresponding to the amount of change in the yaw angle to a position in the horizontal direction corresponding to the reference angle, and setting the position of the virtual viewpoint of the occupant in the vertical direction by adding a value corresponding to the amount of change in the pitch angle to a position in the vertical direction corresponding to the reference angle ([0063] teaches that a gaze placement value for the x axis is added to a head compensation value for the x (horizontal) value, and a gaze placement value for the y axis is added to a head compensation value for the y (vertical) value). Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Hato with the method as taught by Higashiyama, the method of detecting user head pose as taught by Bark, the method of detecting user head pose as taught by Hoover, the method of presenting an image to the user as taught by Mimura, and the method of head movement compensation as taught by Sun. This combination would provide an improved experience for an occupant without creating more obstructions.
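
To make the claimed arithmetic concrete, below is a minimal sketch of the viewpoint update the rejection walks through: the amount of change is the yaw/pitch after movement minus the stored reference angles (the Hoover mapping), and the viewpoint moves from its reference position by a value corresponding to each change (the Sun mapping). The gain constant and all names are hypothetical, not taken from the claims or any cited reference.

```python
from dataclasses import dataclass

@dataclass
class FaceAngle:
    yaw_deg: float    # horizontal: turn left/right
    pitch_deg: float  # vertical: tilt forward/backward

# Hypothetical gain converting degrees of face rotation into millimeters
# of virtual-viewpoint travel; not specified by the claims or references.
GAIN_MM_PER_DEG = 4.0

def set_virtual_viewpoint(reference: FaceAngle, current: FaceAngle,
                          ref_pos_mm: tuple[float, float]) -> tuple[float, float]:
    # Amount of change: subtract the stored reference angle from the
    # angle of the face after movement (yaw and pitch separately).
    d_yaw = current.yaw_deg - reference.yaw_deg
    d_pitch = current.pitch_deg - reference.pitch_deg
    # Viewpoint: add a value corresponding to each change to the position
    # corresponding to the reference angle (yaw -> horizontal, pitch -> vertical).
    x0, y0 = ref_pos_mm
    return (x0 + GAIN_MM_PER_DEG * d_yaw, y0 + GAIN_MM_PER_DEG * d_pitch)

# Example: face turned 5 degrees right and tilted 2 degrees down from reference.
print(set_virtual_viewpoint(FaceAngle(0.0, 0.0), FaceAngle(5.0, -2.0), (0.0, 0.0)))
# -> (20.0, -8.0)
```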
Regarding claims 3 and 7, Higashiyama teaches a viewpoint position acquisition unit for acquiring information related to the position of an eye of the occupant, wherein the virtual viewpoint setting unit sets the virtual viewpoint to a position corresponding to the position of the eye of the occupant when the face angle of the occupant is the reference angle, and, when the face angle of the occupant is not the reference angle, moves the position of the virtual viewpoint to a position based on the amount of change in the position of the eye of the occupant ([0054], [0079-0081]); and Hoover teaches moving a virtual viewpoint corresponding to the amount of change, calculated by subtracting the angle of the face (head) of the occupant acquired by the occupant information acquisition unit from the stored reference angle ([0113] teaches determining the yaw, pitch, and roll of the user to calculate head pose, and [0131-0138] describe, with respect to Fig. 14, that the system can determine a head pose angular difference by using a reference head pose and a new head pose; see also Fig. 13 and the respective description).

Regarding claims 13 and 14, Mimura teaches wherein the image conversion unit is configured to execute a first conversion process for converting coordinates of the captured image from an image coordinate system of the captured image to coordinates of a vehicle coordinate system, which is a coordinate system of the vehicle, and a second conversion process for converting the coordinates of the vehicle coordinate system to a display coordinate system, which is a coordinate system of the display unit ([0110-0119] teach that the image displayed is an image captured by peripheral cameras and presented to the user based on line of sight; [0113] teaches that the content of the images displayed changes as the car moves; it is therefore obvious that the system maps the outside image captured by the camera to the display unit for presentation based on the position of the occupant, as seen in Figs. 24-28).

Allowable Subject Matter

Claim 17 is allowed. Claims 15-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRE L MATTHEWS, whose telephone number is (571) 270-5806. The examiner can normally be reached Mon-Fri, 9:00-6:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDRE L MATTHEWS/
Primary Examiner, Art Unit 2621
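
To make the two-stage mapping recited in claims 13-14 concrete, here is a minimal sketch using homogeneous transforms. The identity matrices are placeholders for real calibration data (camera extrinsics and the display's mounting pose in the vehicle), and all names are hypothetical, not from any cited reference.

```python
import numpy as np

# Hypothetical 4x4 homogeneous transforms; identity matrices are
# placeholders so the sketch runs. Real values would come from camera
# calibration and the display's mounting geometry.
T_IMAGE_TO_VEHICLE = np.eye(4)    # first conversion: image/camera frame -> vehicle frame
T_VEHICLE_TO_DISPLAY = np.eye(4)  # second conversion: vehicle frame -> display frame

def to_display_coords(p_image: np.ndarray) -> np.ndarray:
    """Claims 13-14 as recited: a first conversion process from the image
    coordinate system to the vehicle coordinate system, then a second
    conversion process from the vehicle system to the display system.
    Assumes p_image is already a 3D point in the image/camera frame
    (i.e., back-projection from pixels has been done upstream)."""
    p = np.append(p_image, 1.0)                   # homogeneous coordinates
    p_vehicle = T_IMAGE_TO_VEHICLE @ p            # first conversion process
    p_display = T_VEHICLE_TO_DISPLAY @ p_vehicle  # second conversion process
    return p_display[:3] / p_display[3]

print(to_display_coords(np.array([1.0, 0.5, 2.0])))  # -> [1.  0.5 2. ]
```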

Prosecution Timeline

Dec 04, 2023
Application Filed
Aug 24, 2024
Non-Final Rejection — §103
Nov 18, 2024
Interview Requested
Nov 26, 2024
Response Filed
Dec 10, 2024
Applicant Interview (Telephonic)
Dec 10, 2024
Examiner Interview Summary
Mar 03, 2025
Final Rejection — §103
May 05, 2025
Request for Continued Examination
May 08, 2025
Response after Non-Final Action
May 27, 2025
Non-Final Rejection — §103
Aug 28, 2025
Response Filed
Nov 06, 2025
Final Rejection — §103
Jan 08, 2026
Request for Continued Examination
Jan 12, 2026
Response after Non-Final Action
Jan 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592187: Zonal Attenuation Compensation
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586494: COLOR CALIBRATION SYSTEM AND COLOR CALIBRATION METHOD
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12575301: DISPLAY DEVICE
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567349: DISPLAY PANEL AND DISPLAY APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12546652: LIGHT DETECTION MODULE, LIGHT DETECTION METHOD AND DISPLAY DEVICE
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 61%
With Interview: 78% (+17.0%)
Median Time to Grant: 3y 5m
PTA Risk: High

Based on 503 resolved cases by this examiner; grant probability is derived from the career allow rate.
