Prosecution Insights
Last updated: April 19, 2026
Application No. 18/724,629

DISPLAY CONTROL DEVICE, HEAD-MOUNTED DISPLAY, AND DISPLAY CONTROL METHOD

Non-Final OA (§101, §103)
Filed: Jun 27, 2024
Examiner: STATZ, BENJAMIN TOM
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Sony Interactive Entertainment Inc.
OA Round: 1 (Non-Final)
Grant Probability: 0% (At Risk)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 9m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 2 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution
Career History: 35 total applications across all art units; 33 currently pending

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 65.2% (+25.2% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 13.3% (-26.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 2 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Application claims priority to foreign application number JP2022-006403 dated 01/19/2022. Copies of certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The information disclosure statements dated 06/27/2024, 06/18/2025, and 10/30/2025 have been considered and placed in the application file.

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification. The following terms in the claims have been given the following interpretations in light of the specification: “Content image” (claim 6): [0021] “In the following explanation, a “content image” refers to an image that is not a real-time image of a real space displayed in the see-through mode, and is used for comparison.” This definition is used for purposes of searching for prior art, but cannot be incorporated into the claims. Should applicant wish different definitions, applicant should point to the portions of the specification that clearly show a different definition.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claim does not fall within at least one of the four categories of patent-eligible subject matter because the claim appears to be directed to a software embodiment and not to a hardware embodiment, where a machine claim is directed towards a system, apparatus, or arrangement. Paragraph [0054] of the published specification describes the elements of the system as being implemented as software alone actualizing the embodiments of the invention. The claimed limitations are capable of being performed as software alone, as described in the above paragraph, since no hardware component is being claimed. Software alone is not a physical component and thus is not statutory, since software does not define any structural and functional interrelationships between the computer programs and other claimed elements of a computer which permit the computer's program functionality to be realized. Hence, the stated functions comprise software and are thus not directed to a hardware embodiment.

Data structures not claimed as embodied in computer readable media are descriptive material per se and are not statutory because they are not capable of causing functional change in the computer. See, e.g., Warmerdam, 33 F.3d at 1361, 31 USPQ2d at 1760 (claim to a data structure per se held non-statutory). Such claimed data structures do not define any structural and functional interrelationships between data and other claimed aspects of the invention which permit the data structure's functionality to be realized. In contrast, a claimed computer readable medium encoded with a data structure defines structural and functional interrelationships between the data structure and the computer software and hardware components which permit the data structure's functionality to be realized, and is thus statutory.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-3 and 11-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okuda (WO 2011024423 A1) in view of Wang et al. (US 20220155595 A1, hereinafter "Wang"). Regarding claim 1, Okuda teaches: A display control device that causes a left-eye display image and a right-eye display image constituting a frame of a video to be displayed in [left and right] regions of a display panel, respectively (fig. 4 shows left-eye and right-eye images displayed in offset locations on a display panel), the display control device comprising: an image data generation section that alternately generates either one of the left-eye display image and the right-eye display image for each frame (pg. 13-14 “When capturing a stereoscopic image with the compound eye camera 50, the left and right cameras 51 and 52 alternately capture still images. In this way, the compound eye camera 50 sequentially generates a left eye image taken by the left eye camera 51 and a right eye image taken by the right eye camera 52, which are necessary to display a stereoscopic image. For example, a left eye image 61 and a right eye image 62 as shown in FIG. 
5 are generated.”); and an output control section that performs control in such a way that, on the display panel, the either one of the display images is displayed in a corresponding one of the [left and right] regions while no image is displayed in the other region (pg. 15 “Specifically, as shown in FIG. 6(b), the stereoscopic image display control device 1 alternately outputs screen data representing an image for the left eye (left eye image) and an image for the right eye (right eye image) to the stereoscopic image display device 2. The stereoscopic image display device 2 sequentially displays the screen data received from the stereoscopic image display control device 1 on the screen of the display 24.”). Okuda does not explicitly teach the use of distinct left and right regions, instead teaching overlapping regions in conjunction with the use of active shutter glasses to block vision in one eye at a time. Wang teaches: A display control device that causes a left-eye display image and a right-eye display image constituting a frame of a video to be displayed in left and right regions of a display panel, respectively ([0054] “Currently, in an HMD image display technical solution shown in FIG. 1, the HMD may include a first display screen and a second display screen, or referred to as a left display screen and a right display screen. The left display screen and the right display screen respectively correspond to two eyeballs of the user, and respectively display an image (that is, a left-eye image) used for viewing by a left eye of the user and an image (that is, a right-eye image) used for viewing by a right eye of the user. 
When the user uses the HMD, the HMD can obtain an image in real time according to a preset display frame rate f, and display the left-eye image and the right-eye image frame by frame.”), the display control device comprising: an output control section that performs control in such a way that, on the display panel, the either one of the display images is displayed in a corresponding one of the left and right regions while no image is displayed in the other region (fig. 12, [0145] “In another embodiment, the HMD may alternately display the left-eye image and the right-eye image in different display cycles T according to a preset display frame rate f. For example, referring to FIG. 12, the display frame rate f is 120 fps, and the HMD may sequentially perform displaying according to the following order: displaying the first frame of left-eye image on the left display screen in the first display cycle T; displaying the second frame of right-eye image on the right display screen in the second display cycle T; displaying a third frame of left-eye image on the left display screen in the third display cycle T; . . . ; displaying a 120th frame of right-eye image on the right display screen in a 120th display cycle T.”). Okuda and Wang are analogous to the claimed invention because they are in the same field of stereoscopic 3D head-mounted displays. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Okuda with the teachings of Wang to replace the active shutter-based system of Okuda with a system with a separate display for each eye. The motivation would have been to increase reliability by reducing dependence on additional electro-chemical components.
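For context, the alternating one-side display scheme quoted from Wang [0145] (each display cycle drives only one eye's region while the other stays blank) can be sketched as a simple frame schedule. This is an illustrative model only; the function name and the assumption that odd-numbered cycles drive the left eye are ours, not Wang's implementation.

```python
# Sketch of the alternating one-eye-per-cycle display schedule described
# in Wang [0145]: odd display cycles show a left-eye frame in the left
# region, even cycles show a right-eye frame in the right region, and the
# other region displays no image. Illustrative assumption, not Wang's code.

def display_schedule(frame_rate, seconds=1):
    """Yield (cycle, eye) pairs, one per display cycle T."""
    schedule = []
    for cycle in range(1, frame_rate * seconds + 1):
        eye = "left" if cycle % 2 == 1 else "right"
        schedule.append((cycle, eye))
    return schedule

# At 120 fps, cycle 1 drives the left region, cycle 2 the right, and so on.
print(display_schedule(120)[:4])
# → [(1, 'left'), (2, 'right'), (3, 'left'), (4, 'right')]
```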
Regarding claim 2, the combination of Okuda in view of Wang teaches: The display control device according to claim 1, further comprising: a photographed-image acquisition section that acquires data regarding a stereo video photographed by a stereo camera from left and right viewpoints, wherein, by alternately using a left-viewpoint photographed image and a right-viewpoint photographed image constituting a frame of the stereo video, the image data generation section alternately generates the left-eye display image and the right-eye display image (Okuda pg. 13-14 “When capturing a stereoscopic image with the compound eye camera 50, the left and right cameras 51 and 52 alternately capture still images. In this way, the compound eye camera 50 sequentially generates a left eye image taken by the left eye camera 51 and a right eye image taken by the right eye camera 52, which are necessary to display a stereoscopic image. For example, a left eye image 61 and a right eye image 62 as shown in FIG. 5 are generated.”). Regarding claim 3, the combination of Okuda in view of Wang teaches: The display control device according to claim 2, wherein the photographed-image acquisition section alternately acquires the left-viewpoint photographed image and the right-viewpoint photographed image constituting a frame of the stereo video (Okuda pg. 13-14 “When capturing a stereoscopic image with the compound eye camera 50, the left and right cameras 51 and 52 alternately capture still images. In this way, the compound eye camera 50 sequentially generates a left eye image taken by the left eye camera 51 and a right eye image taken by the right eye camera 52, which are necessary to display a stereoscopic image. For example, a left eye image 61 and a right eye image 62 as shown in FIG. 5 are generated.”), and the image data generation section generates the display images by using the acquired photographed images (Okuda pg. 
14 “The compound eye camera 50 records the left eye image and right eye image thus generated as stereoscopic image data on a recording medium (for example, an optical disk, a memory card, or a hard disk) using a stereoscopic image recording circuit 53. In this way, the compound eye camera 50 can generate the stereoscopic image data required to display a stereoscopic image. In particular, in this embodiment, the compound eye camera 50 (stereoscopic video recording circuit 53) adds management information, including information on the distance between the left and right cameras 51, 52 when capturing the stereoscopic video data and information on the focal length at the time of capturing the data, to the stereoscopic video data and records the data as a stereoscopic video stream on a recording medium.”). Regarding claim 11, Okuda teaches: A head-mounted display (pg. 5 “The stereoscopic image display device 2 is a device capable of displaying an image signal, such as a display monitor, a TV receiver, or a head-mounted display.”) comprising: a display control device that causes a left-eye display image and a right-eye display image constituting a frame of a video to be displayed in [left and right] regions of a display panel, respectively (fig. 4 shows left-eye and right-eye images displayed in offset locations on a display panel), the display control device including an image data generation section that alternately generates either one of the left-eye display image and the right-eye display image for each frame (pg. 13-14 “When capturing a stereoscopic image with the compound eye camera 50, the left and right cameras 51 and 52 alternately capture still images. In this way, the compound eye camera 50 sequentially generates a left eye image taken by the left eye camera 51 and a right eye image taken by the right eye camera 52, which are necessary to display a stereoscopic image. For example, a left eye image 61 and a right eye image 62 as shown in FIG.
5 are generated.”), and an output control section that performs control in such a way that, on the display panel, the either one of the display images is displayed in a corresponding one of the [left and right] regions while no image is displayed in the other region (pg. 15 “Specifically, as shown in FIG. 6(b), the stereoscopic image display control device 1 alternately outputs screen data representing an image for the left eye (left eye image) and an image for the right eye (right eye image) to the stereoscopic image display device 2. The stereoscopic image display device 2 sequentially displays the screen data received from the stereoscopic image display control device 1 on the screen of the display 24.”); a stereo camera that photographs a stereo video to be displayed as the display images (pg. 14 “In this way, the compound eye camera 50 can generate the stereoscopic image data required to display a stereoscopic image. In particular, in this embodiment, the compound eye camera 50 (stereoscopic video recording circuit 53) adds management information, including information on the distance between the left and right cameras 51, 52 when capturing the stereoscopic video data and information on the focal length at the time of capturing the data, to the stereoscopic video data and records the data as a stereoscopic video stream on a recording medium.”); and the display panel (pg. 15 “The stereoscopic image display device 2 sequentially displays the screen data received from the stereoscopic image display control device 1 on the screen of the display 24.”). Okuda does not explicitly teach the use of distinct left and right regions, instead teaching overlapping regions in conjunction with the use of active shutter glasses to block vision in one eye at a time.
Wang teaches: a display control device that causes a left-eye display image and a right-eye display image constituting a frame of a video to be displayed in left and right regions of a display panel, respectively, the display control device including an output control section that performs control in such a way that, on the display panel, the either one of the display images is displayed in a corresponding one of the left and right regions while no image is displayed in the other region (fig. 12, [0145] “In another embodiment, the HMD may alternately display the left-eye image and the right-eye image in different display cycles T according to a preset display frame rate f. For example, referring to FIG. 12, the display frame rate f is 120 fps, and the HMD may sequentially perform displaying according to the following order: displaying the first frame of left-eye image on the left display screen in the first display cycle T; displaying the second frame of right-eye image on the right display screen in the second display cycle T; displaying a third frame of left-eye image on the left display screen in the third display cycle T; . . . ; displaying a 120th frame of right-eye image on the right display screen in a 120th display cycle T.”). The motivation to combine the invention of Okuda with the teachings of Wang would have been identical to that of claim 1. Regarding claim 12, it is rejected with the same rationale as claim 2 because its limitations substantially correspond to the limitations of claim 2. Regarding claims 13 and 14, they are rejected with the same rationale as claim 1 because their limitations substantially correspond to the limitations of claim 1, as well as the additional limitation in claim 14 of a computer program (Wang [0022] “According to still another aspect, an embodiment of this application provides a computer program product.
When the computer program product runs on a computer, the computer is enabled to perform the image display method in any possible design of the foregoing aspects.”). Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okuda (WO 2011024423 A1) in view of Wang (US 20220155595 A1) as applied to claim 2 above, and further in view of Sadi et al. (US 20160088287 A1, hereinafter "Sadi"). Regarding claim 4, the combination of Okuda in view of Wang teaches: The display control device according to claim 2, but does not explicitly teach: further comprising: a correction regulation storage section that stores, for each pixel, an integrated correction amount of a plurality of kinds of corrections necessary to generate the display images from the photographed images, wherein the image data generation section generates the display images not via intermediate images by correcting the photographed images by the integrated correction amount. Sadi teaches: a correction regulation storage section that stores, for each pixel, an integrated correction amount of a plurality of kinds of corrections necessary to generate the display images from the photographed images ([0096] “In particular embodiments, cameras 112 may have some lens distortion as well as some deviation relative to a target position or orientation 114. In particular embodiments, corrections for these effects may be static, and they may be pre-calibrated and corrected using lookup tables in the front end.”), wherein the image data generation section generates the display images not via intermediate images by correcting the photographed images by the integrated correction amount ([0096] “As an example and not by way of limitation, panorama leveling, vignette correction, lens distortion correcting, white balance correction, exposure correction and matching, or viewpoint adjustment may be applied directly to an image. 
In this manner, an image may be operated on before any compression-induced color or feature shifts take place, which may reduce the occurrence of visible correction artifacts.”). Sadi and the combination of Okuda in view of Wang are analogous to the claimed invention because they are in the same field of stereoscopic 3D head-mounted displays. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Okuda in view of Wang with the teachings of Sadi to add an integrated system to compensate for lens distortion and other aberrations. The motivation would have been to ensure the output image looks as realistic as possible. Claim(s) 5 and 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okuda (WO 2011024423 A1) in view of Wang (US 20220155595 A1) as applied to claim 2 above, and further in view of Rockel et al. (US 20220101593 A1, hereinafter "Rockel"). Regarding claim 5, the combination of Okuda in view of Wang teaches: The display control device according to claim 2, wherein, when switching is performed between a simultaneous display state in which the left-eye display image and the right-eye display image are simultaneously displayed and a one-side display state in which the either one of the left-eye display image and the right-eye display image is displayed (Wang fig. 2 teaches a simultaneous display state, fig. 12 teaches a one-sided display state, [0147]-[0148] “In some other embodiments, the HMD may switch between different display solutions. For example, by default, the HMD may perform displaying by using the solution shown in FIG. 2. When the peak current is greater than the rated current, the HMD may prompt, by using a voice, vibration, display prompt information shown in FIG. 14, or the like, the user whether to switch the display solution. 
After detecting an operation that the user chooses to switch the display solution, the HMD may display an image by using the display solution for reducing the peak current provided in this embodiment of this application.”). The combination of Okuda in view of Wang does not explicitly teach: the image data generation section sets a cross-fading process of gradually decreasing brightness of a pre-switching display image and concurrently increasing brightness of a post-switching display image, and makes a brightness change alternately in either one of the left-eye display image and the right-eye display image displayed in the simultaneous display state, for each frame, before making a brightness change in the other image, to replace the image having undergone the brightness change with an image to be displayed in the one-side display state. Rockel teaches: the image data generation section sets a cross-fading process of gradually decreasing brightness of a pre-switching display image and concurrently increasing brightness of a post-switching display image ([0335] “…the computer system gradually blurs out and/or darkens the portions of the representation of the physical environment that are still visible, and replaces them with virtual content (e.g., expansion of the existing virtual content, adding new virtual content, etc.). In some embodiments, the computer system displays the virtual content, such as virtual wallpaper, virtual room decor, virtual scenery, virtual movie screen, virtual desktop, etc., which gradually replaces the blurred and/or darkened portions of the representation of the physical environment (e.g., fading in from behind the portions of the representation of the physical environment, or creeping in from surrounding regions of the portions of the representation of the physical environment, etc.).
When the transition is completed, the user's field of view of the first computer-generated experience has been expanded and less of the physical environment is visible via the display generation component.”). One of ordinary skill in the art would understand that in order to gradually increase or decrease the brightness of an image which is alternating between two displays, the brightness of each successive image would need to be alternately adjusted in sequence. Therefore, the combination of Okuda in view of Wang and further in view of Rockel teaches the remaining limitation: and makes a brightness change alternately in either one of the left-eye display image and the right-eye display image displayed in the simultaneous display state, for each frame, before making a brightness change in the other image, to replace the image having undergone the brightness change with an image to be displayed in the one-side display state. Rockel and the combination of Okuda in view of Wang are analogous to the claimed invention because they are in the same field of stereoscopic 3D head-mounted displays. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Okuda in view of Wang with the teachings of Rockel to add the ability to smoothly fade between display modes and to apply this type of transition to the display modes of Okuda in view of Wang. The motivation would have been to improve the user experience by adding aesthetically pleasing features.
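As background for the claim 5 limitation, a cross-fade that ramps one image's brightness down while ramping the other's up, applying the change to alternate eyes on successive frames, can be sketched as follows. This is a hypothetical illustration under a simple linear ramp; none of the names or values come from the application or the cited references.

```python
# Hypothetical sketch of the claimed cross-fade: over n_frames, the
# pre-switching image's brightness ramps down while the post-switching
# image's ramps up, and the change is applied to the left-eye and
# right-eye images on alternating frames.

def crossfade_steps(n_frames):
    """Return per-frame (eye, pre_brightness, post_brightness) tuples."""
    steps = []
    for i in range(1, n_frames + 1):
        alpha = i / n_frames                # transition progress, 0 → 1
        eye = "left" if i % 2 == 1 else "right"
        steps.append((eye, round(1.0 - alpha, 3), round(alpha, 3)))
    return steps

for step in crossfade_steps(4):
    print(step)
```

By the final frame the pre-switching image has faded out entirely, which matches the claim's replacement of the faded image with the one-side display state.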
Regarding claim 6, the combination of Okuda in view of Wang and further in view of Rockel teaches: The display control device according to claim 5, wherein, in display switching from a content image to each photographed image (Rockel [0335] “In some embodiments, the computer system displays the virtual content, such as virtual wallpaper, virtual room decor, virtual scenery, virtual movie screen, virtual desktop, etc., which gradually replaces the blurred and/or darkened portions of the representation of the physical environment (e.g., fading in from behind the portions of the representation of the physical environment, or creeping in from surrounding regions of the portions of the representation of the physical environment, etc.). When the transition is completed, the user's field of view of the first computer-generated experience has been expanded and less of the physical environment is visible via the display generation component.”), the image data generation section switches the simultaneous display state to the one-side display state (Wang fig. 2 teaches a simultaneous display state, fig. 12 teaches a one-sided display state, [0147]-[0148] “In some other embodiments, the HMD may switch between different display solutions. For example, by default, the HMD may perform displaying by using the solution shown in FIG. 2. When the peak current is greater than the rated current, the HMD may prompt, by using a voice, vibration, display prompt information shown in FIG. 14, or the like, the user whether to switch the display solution. After detecting an operation that the user chooses to switch the display solution, the HMD may display an image by using the display solution for reducing the peak current provided in this embodiment of this application.”). Rockel and the combination of Okuda in view of Wang are analogous to the claimed invention because they are in the same field of stereoscopic 3D head-mounted displays. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Okuda in view of Wang with the teachings of Rockel to switch to the one-sided alternating display in the specified situation; the motivation would have been to reduce heat and power consumption, as taught by Wang. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okuda (WO 2011024423 A1) in view of Wang (US 20220155595 A1) and further in view of Rockel (US 20220101593 A1) as applied to claim 5 above, and further in view of Loferer et al. (US 20200158498 A1, hereinafter "Loferer"). Regarding claim 7, the combination of Okuda in view of Wang and further in view of Rockel teaches: The display control device according to claim 5, as well as switching between the simultaneous display state and the one-side display state (Wang, see claim 5). The combination of Okuda in view of Wang and further in view of Rockel does not explicitly teach: wherein, in switching between the simultaneous display state and the one-side display state, the image data generation section changes a pixel value on data of the display images with time in accordance with a regulation of linearly changing display brightness with time. Loferer teaches: the image data generation section changes a pixel value on data of the display images with time in accordance with a regulation of linearly changing display brightness with time (fig. 14, [0014] describes the use of gamma correction to adjust the brightness scale of a display device to appear linear to the human eye; [0090] “FIG. 14 depicts the gamma of a commercially available monitor 1 (full line). With a suitably chosen correcting function (long broken line) for the brightness display on monitor 1 it is possible to adjust a linear brightness impression on the resulting image (short broken line).”). 
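As background for the Loferer gamma discussion, pre-compensating a display's power-law response makes a brightness fade appear linear to the viewer: each linear luminance step is encoded as step^(1/gamma) before scaling to a pixel value. The sketch below assumes gamma = 2.2, a common display value that is not taken from Loferer or the other cited references.

```python
# Sketch of gamma-compensated fading per the Loferer discussion: a display
# applies a power-law (gamma) response, so a fade intended to look linear
# encodes each luminance step as step ** (1 / gamma). gamma = 2.2 is an
# assumed typical value, not one from the cited art.

GAMMA = 2.2

def fade_pixel_values(n_steps, max_value=255):
    """Pixel values whose displayed luminance ramps roughly linearly."""
    values = []
    for i in range(n_steps + 1):
        linear = i / n_steps                  # target luminance, 0..1
        encoded = linear ** (1.0 / GAMMA)     # pre-compensate display gamma
        values.append(round(encoded * max_value))
    return values

print(fade_pixel_values(4))
```

Note how early steps are lifted well above the naive linear values, which is exactly the correction a linear pixel ramp would lack.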
Loferer is analogous to the claimed invention because it pertains to the same issue of correcting a digital image output for a digital display, particularly when adjusting brightness, as discussed by Okuda in view of Wang and further in view of Rockel. Furthermore, the use of gamma curves to adjust display brightness is a well-known concept in the art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Okuda in view of Wang and further in view of Rockel with the teachings of Loferer to implement a gamma curve to ensure that fade-in/fade-out transitions appear smooth and linear to a user. The motivation would have been to improve the user experience by adding aesthetically pleasing features. Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okuda (WO 2011024423 A1) in view of Wang (US 20220155595 A1) as applied to claim 1 above, and further in view of Inada et al. (US 20160227203 A1, hereinafter "Inada"). Regarding claim 8, the combination of Okuda in view of Wang discloses: The display control device according to claim 1, but does not explicitly teach: wherein the image data generation section renders an additional image indicating additional information directly in a region within a predetermined range from a center of each display image to which an inverse distortion has been given in view of a distortion of an eyepiece lens during a viewing time, after giving a distortion according to a position to the additional image. Inada teaches: wherein the image data generation section renders an additional image indicating additional information ([0089] “FIG. 
9B shows exemplary images where auxiliary data is displayed.”; [0091] “(1) After main data images are rendered from the left and right views, rectangular auxiliary data images are additionally rendered at the very front of the two images and subjected to distortion correction.”) directly in a region within a predetermined range from a center of each display image to which an inverse distortion has been given in view of a distortion of an eyepiece lens during a viewing time, after giving a distortion according to a position to the additional image ([0092] “If the image for the flat display 16 (shown in the right subfigures) is extracted from the images for the HMD 18 generated by this technique, inverse distortion correction becomes progressively ineffective toward the image edges. The initial rectangular captured images thus become increasingly difficult to reconstitute at their edges. In view of these characteristics, in particular, if technique (1) is employed, the auxiliary data images are preferably displayed around the center of the main data image. This reduces the possibility of the extracted auxiliary data image partially dropping.”). Inada and the combination of Okuda in view of Wang are analogous to the claimed invention because they are in the same field of stereoscopic 3D head-mounted displays. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Okuda in view of Wang with the teachings of Inada to add the ability to overlay an additional image over the displayed 3D image. The motivation would be to display important information or warnings about a user’s 3D capture area, as taught by Inada. Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okuda (WO 2011024423 A1) in view of Wang (US 20220155595 A1) and further in view of Inada (US 20160227203 A1) as applied to claim 8 above, and further in view of Zhou et al. 
(CN 1996389 A, hereinafter "Zhou").

Regarding claim 9, the combination of Okuda in view of Wang and further in view of Inada teaches: The display control device according to claim 8, but does not explicitly teach: further comprising: an additional image data storage section that stores, as model data for the additional image, data regarding a plane including a plurality of polygons each having at least one vertex disposed in an inner region of the plane, wherein the image data generation section renders the additional image having a distortion for each of the polygons on a basis of position coordinates of the vertex defined on a plane of each display image.

Zhou teaches: an additional image data storage section that stores, as model data for the additional image, data regarding a plane including a plurality of polygons each having at least one vertex disposed in an inner region of the plane ([0009] "1.1. Set target 1. The target is a two-dimensional plane, and black rectangles are arranged in a matrix on the target plane, with a quantity of 4 to 100. The length and width of the rectangles are in the range of 3 to 50 mm. The vertices of the black rectangles on the target surface are selected as feature points, and the number of feature points ranges from 16 to 400. There must be at least one straight line on the target plane, which includes three or more collinear feature points whose coordinates are unknown."; [0010] "1.2 Within the effective field of view of the camera, place the target freely at least once. Take an image at each position, which is called a distortion calibration image. All feature points on the target should be included in the captured image. All captured target images are superimposed together, and the superimposed target image should fill the entire image as much as possible."; [0011] "1.3 Extract the actual image coordinates of feature points in all distortion calibration images, and mark feature points with at least 3 collinear features as the same group of feature points;"; [0012] "1.4. Utilizing the property that feature points in the same group should be collinear in the distortion calibration image, an optimization function is established with the goal of minimizing the sum of the distances from the ideal image coordinates to the fitted straight line. The camera distortion coefficient k is estimated through a nonlinear optimization method."; [0013] "1.5 Save the distortion coefficient k to the system parameter file for later use in distortion correction."), wherein the image data generation section renders the additional image having a distortion for each of the polygons on a basis of position coordinates of the vertex defined on a plane of each display image ([0014] "After obtaining the distortion coefficients of the camera, distortion correction can be performed on the image.").

Zhou and the combination of Okuda in view of Wang and further in view of Inada are analogous to the claimed invention because they pertain to the same issue of lens distortion correction for a digital display. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Okuda in view of Wang and further in view of Inada with the teachings of Zhou to implement the claimed method of matching and calibrating the distortion correction of an overlaid image. The motivation would be to improve the user experience by ensuring that the information or warning images taught by Inada are clear and appropriately configured for viewing.

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okuda (WO 2011024423 A1) in view of Wang (US 20220155595 A1) and further in view of Inada (US 20160227203 A1) as applied to claim 8 above, and further in view of Sylvan et al.
(US 20170372457 A1, hereinafter "Sylvan").

Regarding claim 10, the combination of Okuda in view of Wang and further in view of Inada teaches: The display control device according to claim 8, but does not explicitly teach: wherein the image data generation section excludes, from a target to be rendered, a region in each display image that is to be hidden by the additional image.

Sylvan teaches: wherein the image data generation section excludes, from a target to be rendered, a region in each display image that is to be hidden by the additional image ([0063] "Generating the composite image overlaying each layer, including the visual data layer, the vector graphic layer, and the graphical user interface overlay on top of one another. In one example, the signed distance field includes depth values for each pixel represented by the signed distance field. Thus, in this example, any reprojected data of the signed distance field having a depth value that is behind a corresponding pixel of the reprojected rendered visual scene data may be determined to be occluded and consequently not rendered in the composite image.").

Sylvan and the combination of Okuda in view of Wang and further in view of Inada are analogous to the claimed invention because they are in the same field of graphical rendering for a head-mounted display. Furthermore, the concept of occlusion culling is well known in the art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Okuda in view of Wang and further in view of Inada with the teachings of Sylvan to not render scene geometry which is hidden by a superimposed image. The motivation would have been to improve efficiency by avoiding unnecessary computation.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN STATZ whose telephone number is (571)272-6654. The examiner can normally be reached Mon-Fri 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENJAMIN TOM STATZ/
Examiner, Art Unit 2611

/TAMMY PAIGE GODDARD/
Supervisory Patent Examiner, Art Unit 2611
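For context on the gamma-curve rationale in the Loferer combination: driving a fade with an inverse-gamma ramp is the standard way to make a fade-in/fade-out appear linear on a gamma-encoded display. A minimal sketch, assuming a simple power-law display model; the gamma value and function name are illustrative, not taken from the application or the cited references:

```python
# Illustrative sketch only: GAMMA and the function name are assumptions,
# not details from the application or the Loferer reference.
GAMMA = 2.2  # typical display gamma

def fade_alpha(t: float, gamma: float = GAMMA) -> float:
    """Map linear fade progress t in [0, 1] to a drive level whose
    displayed brightness ramps roughly linearly, assuming displayed
    brightness is proportional to drive ** gamma."""
    t = min(max(t, 0.0), 1.0)   # clamp progress to [0, 1]
    return t ** (1.0 / gamma)   # inverse-gamma compensation
```

With this compensation, halfway through the fade (t = 0.5) the drive level is about 0.73, which a gamma-2.2 display renders at roughly half brightness, so the transition looks linear to the viewer.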
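Inada's center-placement heuristic cited against claim 8 (auxiliary images should stay near the display center, where inverse distortion correction remains effective) can be sketched as a simple radius check. The threshold `max_frac` is an assumed parameter standing in for the claim's "predetermined range from a center":

```python
import math

# Illustrative sketch: max_frac is an assumed threshold standing in for
# the claim's "predetermined range from a center of each display image".
def within_center_range(x: float, y: float,
                        width: int, height: int,
                        max_frac: float = 0.25) -> bool:
    """True if (x, y) lies within max_frac of the half-diagonal from the
    display center, the region where inverse distortion correction stays
    effective per Inada [0092]."""
    cx, cy = width / 2.0, height / 2.0
    return math.hypot(x - cx, y - cy) <= max_frac * math.hypot(cx, cy)
```

A renderer could use such a check to validate candidate overlay positions before compositing the auxiliary image.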
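Claim 9's per-polygon rendering (distorting the additional image "for each of the polygons on a basis of position coordinates of the vertex") can be sketched with a simple one-coefficient radial model. Both the model and the value of `k` are illustrative stand-ins, not the distortion model of the application or of Zhou:

```python
# Illustrative sketch: the one-coefficient radial model r' = r*(1 + k*r^2)
# and the k value are stand-ins, not the application's distortion model.
def distort_vertex(x: float, y: float, k: float = 0.1) -> tuple:
    """Radially distort one vertex in normalized coordinates centered on
    the lens axis."""
    scale = 1.0 + k * (x * x + y * y)
    return (x * scale, y * scale)

def distort_polygon(vertices, k: float = 0.1):
    """Distort each polygon vertex independently, mirroring the claim's
    per-vertex rendering of the additional image's polygons."""
    return [distort_vertex(x, y, k) for x, y in vertices]
```

Because each vertex is transformed independently, a plane subdivided into many small polygons (as the claimed model data describes) approximates the smooth lens distortion across its interior.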
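The Sylvan occlusion teaching cited against claim 10 amounts to a per-pixel depth test: scene pixels hidden behind a closer overlay are excluded from rendering. A toy sketch using flat lists, assuming smaller depth means closer to the viewer; the names and depth convention are assumptions, not Sylvan's implementation:

```python
# Toy per-pixel sketch (flat lists, smaller depth = closer); names and
# the depth convention are assumptions, not Sylvan's implementation.
def scene_pixels_to_render(scene_depth, overlay_depth, overlay_mask):
    """Return one flag per pixel: render the scene pixel only where the
    overlay does not cover it from in front, so occluded regions are
    excluded from the render target."""
    return [
        not (covered and od < sd)   # skip if overlay covers pixel and is closer
        for sd, od, covered in zip(scene_depth, overlay_depth, overlay_mask)
    ]
```

Skipping these pixels before shading is what yields the efficiency gain the rejection cites as motivation.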

Prosecution Timeline

Jun 27, 2024 — Application Filed
Jan 22, 2026 — Non-Final Rejection (§101, §103)
Apr 14, 2026 — Interview Requested


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 0% (0% with interview, +0.0% lift)
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
