Prosecution Insights
Last updated: April 19, 2026
Application No. 18/249,190

RENDERING FORMAT SELECTION BASED ON VIRTUAL DISTANCE

Final Rejection (§103)
Filed: Apr 14, 2023
Examiner: BEARD, CHARLES LLOYD
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Hewlett-Packard Development Company, L.P.
OA Round: 4 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 5-6
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% — above average (235 granted / 350 resolved; +5.1% vs TC avg)
Interview Lift: +36.1% — strong (resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline)
Currently Pending: 37
Total Applications: 387 (career history, across all art units)

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 70.2% (+30.2% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 350 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment Received 10/29/2025

Claims 1-20 are pending. Claims 1, 7, 13, and 20 have been amended. The 35 U.S.C. § 103 rejection of claims 1-20 has been fully considered in view of the amendments received on 10/29/2025 and is fully addressed in the prior art rejection below.

Response to Arguments Received 10/29/2025

Regarding independent claims 1, 7, and 13: Applicant's arguments (Remarks; Page 11: ¶ 2-3 to Page 12: ¶ 5), filed 10/29/2025, with respect to the rejection of claim 1 under 35 U.S.C. § 103 have been fully considered and are persuasive. The amendments changed the scope of the invention to include a multi-view display aspect. Thus, Clemens et al. (US Patent No. 10616567 B1) and Ogata et al. (US PGPUB No. 20120242655 A1) fail to disclose that the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered. Therefore, the rejection has been withdrawn, necessitated by Applicant's proposed amendments. However, upon further consideration, a new ground of rejection is made in view of Beith et al. (US PGPUB No. 20210183343 A1), in view of Clemens et al., in view of Ogata et al., and Song et al. (US PGPUB No. 20150341622 A1), hereinafter Song.

Applicant's arguments (Remarks; Page 13: ¶ 2-3), filed 10/29/2025, with respect to the rejection of claim 7 under 35 U.S.C. § 103 have been fully considered and are persuasive due to claim 7's similarity to claim 1. Therefore, the rejection has been withdrawn, necessitated by Applicant's proposed amendments.
However, upon further consideration, a new ground of rejection is made in view of Clemens, in view of Vesely et al. (US Patent No. 8717360 B2), and further in view of Lanman et al. (US PGPUB No. 20170160798 A1), in view of Beith, and further in view of Song et al.

Applicant's arguments (Remarks; Page 13: ¶ 4-5), filed 10/29/2025, with respect to the rejection of claim 13 under 35 U.S.C. § 103 have been fully considered and are persuasive due to claim 13's similarity to claim 1. Therefore, the rejection has been withdrawn, necessitated by Applicant's proposed amendments. However, upon further consideration, a new ground of rejection is made in view of the prior art as mentioned above.

Regarding dependent claims 2-6, 8-12, and 14-20: Applicant's arguments (Remarks; Page 12: ¶ 6 and Page 13: ¶ 6 to Page 14: ¶ 1), filed 10/29/2025, with respect to the rejection of claims 2-6, 8-12, and 14-20 under 35 U.S.C. § 103 have been fully considered and are persuasive due to their dependency upon claims 1, 7, and 13, respectively. Therefore, the rejection has been withdrawn, necessitated by Applicant's proposed amendments. However, upon further consideration, a new ground of rejection is made in view of the prior art as mentioned above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-6, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Beith et al., US PGPUB No. 20210183343 A1, hereinafter Beith, Clemens et al., US Patent No. 11310487 B1, hereinafter Clemens, in view of Ogata et al., US PGPUB No. 20120242655 A1, hereinafter Ogata, and further in view of Song et al., US PGPUB No. 20150341622 A1, hereinafter Song.

Regarding claim 1, Beith discloses a display system (Beith; a display system [¶ 0041], as illustrated within Fig. 1A), comprising: a display device to display a digital scene (Beith; the display system [as addressed above] comprises a display device (i.e. optical lenses and rendering device) to display a digital scene [¶ 0041-0043]), the display device to be worn on a head of a user (Beith; the display device to be worn on a head of a user (i.e. HMD) [¶ 0001, ¶ 0020, and ¶ 0041-0042]; moreover, HMD [¶ 0033]); and a processor to determine, for the user, a distance at which a difference between a left eye image and a right eye image are distinguishable (Beith; the display system [as addressed above] comprises a processor [¶ 0002-0004 and ¶ 0020] to determine a distance at which a difference between a left eye image and a right eye image are subjectively distinguishable for the user [¶ 0041 and ¶ 0044-0047]; moreover, determine distances and angles between each of the user's eyes, each of the optical lenses, and/or each of the world-view image sensors/cameras [¶ 0048]); and a rendering engine to select a rendering format for different portions of the digital scene based on a comparison of virtual distances of the portions compared to the distance (Beith; the display system [as addressed above] comprises a rendering engine [¶ 0042] to select a rendering format for different portions of the digital scene based on a comparison of virtual distances of the portions compared to the distance [¶ 0052-0055], as illustrated within Figs.
2A-C) and provide the digital scene to the display device for display (Beith; provide the digital scene to the display device for display [¶ 0024-0025, ¶ 0042-0043, and ¶ 0052]), wherein the display device, the processor, and the rendering engine are included within a housing of the display system (Beith; the display device, the processor, and the rendering engine [as addressed above] are included within a housing of the display system [¶ 0041-0043], as illustrated within Fig. 1A).

Beith fails to disclose a maximum distance at which a difference between a left eye image and a right eye image are distinguishable; and a comparison of virtual distances of the portions compared to the maximum distance; and wherein the processor is to cause a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered.

However, Clemens teaches a processor to determine, for the user, a maximum distance at which a difference between a left eye image and a right eye image are distinguishable (Clemens; a processor [Col. 6, lines 59-67 and Col. 8, lines 21-41] to determine a maximum distance (i.e. comfort boundary and/or clipping plane(s)) at which a difference between a left eye image and a right eye image are distinguishable for the user [Col. 8, line 55 to Col. 9, line 2, Col. 14, lines 6-47, and Col. 15, line 31 to Col. 16, line 26], as illustrated within Fig. 9; moreover, frustum characteristics associated with one or more viewing zones/areas further associated with negative, positive, and/or zero parallax [Col. 14, line 48 to Col. 15, line 30]; and moreover, disparity [Col. 7, lines 10-23 and Col.
13, lines 30-61]); and a rendering engine to select a rendering format for different portions of the digital scene based on a comparison of virtual distances of the portions compared to the maximum distance (Clemens; a rendering engine [Col. 8, lines 21-41 and Col. 11, lines 40-55] to select/determine a rendering format (corresponding to a manner in which to present data) for different portions (in relation with a viewing frustum) of the digital scene based on an implicit comparison (given a determined position of graphic objects within different frustum areas/zones) of virtual distances of the portions implicitly compared to the maximum distance (i.e. comfort boundary and/or clipping plane(s)) (given the determining of objects within a viewing range) [Col. 14, lines 6-26]; moreover, determining positions of graphic objects within different frustum areas/zones [Col. 14, line 27 to Col. 15, line 45] is in part in relation with the comfort boundary and/or clipping plane(s) [Col. 12, lines 38-53 and Col. 16, lines 4-37]).

Beith and Clemens are considered to be analogous art because both pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith, to incorporate a processor to determine, for the user, a maximum distance at which a difference between a left eye image and a right eye image are distinguishable; and a rendering engine to select a rendering format for different portions of the digital scene based on a comparison of virtual distances of the portions compared to the maximum distance (as taught by Clemens), in order to provide improved techniques of stereoscopic displaying/imaging that reduce discomfort and/or fatigue (Clemens; [Col. 1, lines 21-40 and Col. 1, line 56 to Col.
2, line 19]).

Beith as modified by Clemens fails to disclose a comparison of virtual distances of the portions compared to the maximum distance; and wherein the processor is to cause a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered.

However, Ogata teaches a rendering format for different portions of the digital scene based on a comparison of virtual distances of the portions compared to the maximum distance (Ogata; a rendering format for different portions of the digital scene based on a comparison of virtual distances of the portions compared to the maximum distance [¶ 0056-0059]; moreover, determining an allowable nearest distance and an allowable farthest distance [¶ 0069, ¶ 0072-0073, and ¶ 0077-0078], as illustrated within Fig. 6, by calculating parallax value(s) [¶ 0070-0071 and ¶ 0075-0076]).

Beith in view of Clemens, and Ogata, are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, to incorporate a rendering format for different portions of the digital scene based on a comparison of virtual distances of the portions compared to the maximum distance (as taught by Ogata), in order to provide improved stereoscopic displaying/imaging techniques that reduce user fatigue (Ogata; [¶ 0003-0006]).
Beith as modified by Clemens and Ogata fails to disclose wherein the processor is to cause a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered.

However, Song teaches wherein the processor is to cause a view of the digital screen to be displayed on the display device (Song; the processor is to cause a view of the digital screen to be displayed on the display device [¶ 0032-0033]; wherein, a main image and a sub-image are simultaneously displayed [¶ 0004-0006]; even further, distance representative object(s) [¶ 0007-0008]), wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered (Song; the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered [¶ 0046-0048 and ¶ 0054-0056]; wherein, the 1st and 2nd image are combined [¶ 0049-0050] in relation with dynamically changing depth [¶ 0051-0053], as illustrated within Figs. 8-9).

Beith in view of Clemens and Ogata, and Song, are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens and Ogata, to incorporate wherein the processor is to cause a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered (as taught by Song), in order to provide improved stereoscopic displaying/imaging techniques that reduce user fatigue (Song; [¶ 0003-0006]).

Regarding claim 3, Beith in view of Clemens, Ogata, and Song further discloses the display system of claim 1, further comprising a gaze tracking system to determine a location that the user is observing (Beith; a gaze tracking system to determine a location that the user is observing [¶ 0027 and ¶ 0033]; moreover, inward facing or gaze-view detection [¶ 0044-0045]). Clemens further teaches a gaze tracking system to determine a location that the user is observing (Clemens; a gaze tracking system to determine a location that the user is observing [Col. 9, line 63 to Col. 10, line 33 and Col. 11, lines 9-32]); and wherein the rendering engine selects a rendering format based on a virtual distance of the location that the user is observing (Clemens; the rendering engine selects a rendering format (corresponding to a manner in which to present data) [as addressed within the parent claim(s)] based on a virtual distance (as indicated by one or more frustums) of the location that the user is observing [Col. 14, line 48 to Col. 15, line 30], as illustrated within Fig. 9; additionally, eyepoint distance(s) to display plane region(s) [Col. 20, lines 22-47], as illustrated within Fig. 11).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, Ogata, and Song, to incorporate a gaze tracking system to determine a location that the user is observing; and wherein the rendering engine selects a rendering format based on a virtual distance of the location that the user is observing (as taught by Clemens), in order to provide improved techniques of stereoscopic displaying/imaging that reduce discomfort and/or fatigue (Clemens; [Col. 1, lines 21-40 and Col. 1, line 56 to Col. 2, line 19]).

Regarding claim 4, Beith in view of Clemens, Ogata, and Song further discloses the display system of claim 3, wherein: when the location that the user is observing has a virtual distance that is less than the maximum distance (Clemens; when the location that the user is observing [Col. 10, lines 16-37 and Col. 20, lines 22-47] has a virtual distance (as indicated by one or more frustums) that is less than the maximum distance (i.e. comfort boundary and/or clipping plane(s)) [Col. 14, line 6 to Col. 15, line 30]), the rendering engine (Clemens; rendering engine [as addressed within the parent claim(s)]) is to: render the portions that have a virtual distance greater than the maximum distance as a single eye image (Clemens; rendering engine [as addressed above] is to render the portions that have a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e. comfort boundary and/or (far) clipping plane) as a perspective [Col. 14, lines 6-26 and Col. 16, lines 4-54] in relation with a constant image or non-rendering [Col. 14, lines 6-26 and Col. 16, lines 17-54]; additionally, single point of view or eye image [Col. 5, lines 32-55 and Col. 6, lines 1-26] in relation with rendering a scene [Col.
10, lines 16-33]); and render the portions that have a virtual distance less than the maximum distance stereoscopically (Clemens; rendering engine [as addressed above] is to render the portions that have a virtual distance (as indicated by one or more frustums) less than the maximum distance (i.e. comfort boundary and/or (far) clipping plane) stereoscopically [Col. 16, lines 4-54]; wherein, determination that the user is looking further corresponds to eye tracking [Col. 9, line 63 to Col. 10, line 39 and Col. 11, lines 9-32]; moreover, between rendering planes [Col. 14, line 48 to Col. 15, line 30]); and when the location that the user is observing has a virtual distance that is greater than the maximum distance, the rendering engine is to render the entire digital scene stereoscopically (Clemens; the rendering engine [as addressed above] is to render the entire digital scene stereoscopically when the location that the user is observing [Col. 10, lines 16-37 and Col. 20, lines 22-47] has a virtual distance (as indicated by one or more frustums) that is greater than the maximum distance (i.e. (near) clipping plane) [Col. 14, line 48 to Col. 15, line 30]). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith in view of Clemens, Ogata, and Song, to incorporate when the location that the user is observing has a virtual distance that is less than the maximum distance, the rendering engine is to: render the portions that have a virtual distance greater than the maximum distance as a single eye image; and render the portions that have a virtual distance less than the maximum distance stereoscopically; and when the location that the user is observing has a virtual distance that is greater than the maximum distance, the rendering engine is to render the entire digital scene stereoscopically (as taught by Clemens), in order to provide improved techniques of stereoscopic displaying/imaging that reduce discomfort and/or fatigue (Clemens; [Col. 1, lines 21-40 and Col. 1, line 56 to Col. 2, line 19]).

Regarding claim 5, Beith in view of Clemens, Ogata, and Song further discloses the display system of claim 1, wherein the display device is an extended reality (Beith; the display device [as addressed within the parent claim(s)] is an AR (corresponding to an XR) [¶ 0001-0002 and ¶ 0033]). Clemens further teaches the display device is a stereoscopic extended reality headset (Clemens; the display device is a stereoscopic XR headset [Col. 9, line 38 to Col. 10, line 15]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, Ogata, and Song, to incorporate the display device is a stereoscopic extended reality headset (as taught by Clemens), in order to provide improved techniques of stereoscopic displaying/imaging that reduce discomfort and/or fatigue (Clemens; [Col. 1, lines 21-40 and Col. 1, line 56 to Col. 2, line 19]).
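The combination mapped against claims 1 and 4 reduces to two nested distance tests: a per-portion comparison against the maximum distance at which the left and right eye images are distinguishable, gated by where the user's gaze rests. The sketch below illustrates that claimed logic only; the names (`max_distance`, `gaze_distance`, `ScenePortion`) are hypothetical and are not drawn from the application or the cited references.

```python
from dataclasses import dataclass
from enum import Enum

class RenderFormat(Enum):
    STEREOSCOPIC = "stereoscopic"  # distinct left-eye and right-eye images
    SINGLE_EYE = "single_eye"      # one eye's image presented to both eyes

@dataclass
class ScenePortion:
    name: str
    virtual_distance: float  # virtual distance in scene units

def select_render_format(portion: ScenePortion,
                         gaze_distance: float,
                         max_distance: float) -> RenderFormat:
    # Claim-4-style rule: a far portion drops to a single-eye image only when
    # the gazed-at location is nearer than the maximum distance; if the gaze
    # itself is beyond that distance, the entire scene stays stereoscopic.
    if gaze_distance < max_distance and portion.virtual_distance > max_distance:
        return RenderFormat.SINGLE_EYE
    return RenderFormat.STEREOSCOPIC

scene = [ScenePortion("menu", 0.8), ScenePortion("skybox", 900.0)]
near_gaze = {p.name: select_render_format(p, gaze_distance=2.0,
                                          max_distance=200.0) for p in scene}
far_gaze = {p.name: select_render_format(p, gaze_distance=500.0,
                                         max_distance=200.0) for p in scene}
```

With the gaze on a near object, the distant skybox is demoted to a single-eye image; with the gaze beyond the maximum distance, everything remains stereoscopic, matching the claim 4 limitation as the examiner maps it onto Clemens.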
Regarding claim 6, Beith in view of Clemens, Ogata, and Song further discloses the display system of claim 1, wherein: the processor determines an inter-pupillary distance for the user (Beith; the processor determines an inter-pupillary distance for the user [¶ 0029-0030 and ¶ 0044-0045]). Clemens further teaches the processor determines an inter-pupillary distance for the user (Clemens; the processor [as addressed within the parent claim(s)] determines an inter-pupillary distance for the user [Col. 15, lines 58-66 and Col. 18, lines 38-50]); and the maximum distance is determined based on the inter-pupillary distance (Clemens; the maximum distance (i.e. comfort boundary and/or clipping plane(s)) [as addressed within the parent claim(s)] is determined based on the inter-pupillary distance [Col. 15, lines 58-66 and Col. 18, lines 38-50]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, Ogata, and Song, to incorporate the processor determines an inter-pupillary distance for the user; and the maximum distance is determined based on the inter-pupillary distance (as taught by Clemens), in order to provide improved techniques of stereoscopic displaying/imaging that reduce discomfort and/or fatigue (Clemens; [Col. 1, lines 21-40 and Col. 1, line 56 to Col. 2, line 19]).
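Claim 6 ties the maximum distance to the inter-pupillary distance (IPD). One conventional way to make that link, offered here purely as an illustrative assumption rather than the application's disclosed method, is the viewing geometry of stereopsis: the angular disparity between the two eyes' views of a point at distance d is roughly IPD/d radians, so left and right images stop being distinguishable once that falls below the viewer's stereoacuity threshold.

```python
import math

def max_stereo_distance(ipd_m: float, stereoacuity_rad: float) -> float:
    # Small-angle geometry: disparity relative to infinity is ~ ipd / d,
    # so the crossover distance is d_max = ipd / stereoacuity.
    return ipd_m / stereoacuity_rad

ARCSEC = math.pi / (180 * 3600)  # one arcsecond in radians
d_max = max_stereo_distance(ipd_m=0.063, stereoacuity_rad=20 * ARCSEC)
# roughly 650 m for a 63 mm IPD and a 20 arcsec stereoacuity threshold
```

A wider IPD or a finer (smaller) stereoacuity threshold pushes the maximum distance out, which is consistent with the claim's per-user determination from measured IPD.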
Regarding claim 18, Beith in view of Clemens, Ogata, and Song further discloses the system of claim 6, wherein the processor determines the inter-pupillary distance for the user based on information collected from a gaze tracking system (Beith; the processor [as addressed within the parent claim(s)] determines the inter-pupillary distance for the user based on information collected from a gaze tracking system [¶ 0029-0030 and ¶ 0044-0045]), wherein the gaze tracking system is included within the housing (Beith; the gaze tracking system is included within the housing [¶ 0041-0042], as illustrated within Fig. 1A). Ogata further teaches determining the inter-pupillary distance for the user based on information collected from a gaze tracking system (Ogata; determining the inter-pupillary distance for the user based on information collected from a gaze tracking system [¶ 0152 and ¶ 0156]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, Ogata, and Song, to incorporate determining the inter-pupillary distance for the user based on information collected from a gaze tracking system (as taught by Ogata), in order to provide improved stereoscopic displaying/imaging techniques that reduce user fatigue (Ogata; [¶ 0003-0006]).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Beith in view of Clemens, Ogata, and Song as applied to claim 1 above, and further in view of Kim et al., US PGPUB No. 20140218472 A1, hereinafter Kim.
Regarding claim 2, Beith in view of Clemens, Ogata, and Song further discloses the display system of claim 1, wherein the rendering engine (Clemens; the rendering engine [as addressed within the parent claim(s)]): is to render portions of the digital scene having a virtual distance greater than the maximum distance as an orthographic image (Clemens; the rendering engine [as addressed above] is to render portions of the digital scene having a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e. comfort boundary and/or clipping plane(s)) as an orthographic image [Col. 14, lines 6-47 and Col. 16, lines 17-54]; additionally, eyepoint distance(s) in relation with one or more viewing region(s) [Col. 20, lines 22-47], as illustrated within Fig. 11; additionally, single point of view or eye image [Col. 5, lines 32-55 and Col. 6, lines 1-26] in relation with rendering a scene [Col. 10, lines 16-33]), wherein the orthographic image is from a perspective of either a left eye of the user or a right eye of the user and the orthographic image is presented to both the left eye of the user and the right eye of the user (Clemens; the orthographic image [as addressed above] is from a perspective of either a left eye of the user or a right eye of the user and the orthographic image is presented to both the left eye of the user and the right eye of the user [Col. 14, lines 6-47 and Col. 16, lines 17-67]); and is to render portions of the digital scene having a virtual distance less than the maximum distance stereoscopically (Clemens; the rendering engine [as addressed above] is to render portions of the digital scene having a virtual distance (as indicated by one or more frustums) less than the maximum distance (i.e. comfort boundary and/or clipping plane(s)) stereoscopically [Col. 14, lines 6-26]; moreover, generating parallax imaging within clipping planes [Col. 14, line 48 to Col. 15, line 30], as illustrated within Fig.
9; additionally, eyepoint distance(s) in relation with parallax region(s) [Col. 20, lines 22-47], as illustrated within Fig. 11).

Beith in view of Clemens, Ogata, and Song fails to explicitly disclose the single eye image is from a perspective of either a left eye of the user or a right eye of the user and the single eye image is presented to both the left eye of the user and the right eye of the user.

However, Kim teaches the rendering engine (Kim; the rendering engine (i.e. controller) [¶ 0043, ¶ 0048, and ¶ 0091]): is to render portions of the digital scene having a virtual distance greater than the maximum distance as a single eye image (Kim; the rendering engine (i.e. controller) [as addressed above]: is to render portions of the digital scene having a virtual distance greater than the maximum distance [¶ 0096-0100] as a single eye image (i.e. 2D left image (Di_1) or 2D right image (Di_2)) [¶ 0096-0100], as illustrated within Fig. 6; wherein, 2D/3D image selector determines which point the user's actual observation position is [¶ 0101, ¶ 0106-0108, and ¶ 0114-0115]; moreover, multi-viewpoint image mode in relation with gap, length, or distance [¶ 0056-0059]), wherein the single eye image is from a perspective of either a left eye of the user or a right eye of the user and the single eye image is presented to both the left eye of the user and the right eye of the user (Kim; the single eye image (i.e. 2D left image (Di_1) or 2D right image (Di_2)) is from a perspective of either a left eye of the user or a right eye of the user and the single eye image (i.e. 2D left image (Di_1) or 2D right image (Di_2)) is presented to both the left eye of the user and the right eye of the user (corresponding to a 2D image) [¶ 0101 and ¶ 0106-0108]); and is to render portions of the digital scene having a virtual distance less than the maximum distance stereoscopically (Kim; the rendering engine (i.e.
controller) [as addressed above]: is to render portions of the digital scene having a virtual distance less than the maximum distance stereoscopically [¶ 0097-0101], as illustrated within Fig. 6; moreover, when the user's actual observation position belongs to the range of the 3D area (Di_3) to display the 3D image [¶ 0102]).

Beith in view of Clemens, Ogata, and Song, and Kim, are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, Ogata, and Song, to incorporate the rendering engine: is to render portions of the digital scene having a virtual distance greater than the maximum distance as a single eye image, wherein the single eye image is from a perspective of either a left eye of the user or a right eye of the user and the single eye image is presented to both the left eye of the user and the right eye of the user; and is to render portions of the digital scene having a virtual distance less than the maximum distance stereoscopically (as taught by Kim), in order to provide improved realistic stereoscopic imaging based on viewpoints of a user (Kim; [¶ 0005-0006, ¶ 0009, and ¶ 0016-0017]).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Beith in view of Clemens, Ogata, and Song as applied to claim 6 above, and further in view of Han et al., US PGPUB No. 20110126160 A1, hereinafter Han.

Regarding claim 17, Beith in view of Clemens, Ogata, and Song further discloses the system of claim 6, wherein the processor determines the inter-pupillary distance for the user (Clemens; the processor [as addressed within the parent claim(s)] determines the inter-pupillary distance for the user [Col.
15, lines 58-66 and Col. 18, lines 38-50]).

Beith in view of Clemens, Ogata, and Song fails to explicitly disclose the inter-pupillary distance for the user based on a manual calibration performed by the user.

However, Han teaches wherein the processor determines the inter-pupillary distance for the user based on a manual calibration performed by the user (Han; the processor [¶ 0174-0176] determines the inter-pupillary distance for the user based on a manual calibration performed by the user [¶ 0285 and ¶ 0292-0294]; moreover, determining the inter-pupillary distance for the user based on a manual calibration [¶ 0404-0406 and ¶ 0416-0418]).

Beith in view of Clemens, Ogata, and Song, and Han, are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, Ogata, and Song, to incorporate wherein the processor determines the inter-pupillary distance for the user based on a manual calibration performed by the user (as taught by Han), in order to provide improved stereoscopic imaging through formatting (Han; [¶ 0009-0013]).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Beith in view of Clemens, Ogata, and Song as applied to claim 1 above, and further in view of Lanman et al., US PGPUB No. 20170160798 A1, hereinafter Lanman.
Regarding claim 19, Beith in view of Clemens, Ogata, and Song further discloses the display system of claim 1, wherein the display system is an extended reality headset configured to be worn by the user such that the extended reality headset covers the eyes of the user and presents the digital scene in an environment formed by the extended reality headset and a face of the user (Beith; the display system [as addressed within the parent claim(s)] is an extended reality headset (i.e. HMD, glasses) configured to be worn by the user such that the augmented/extended reality headset (i.e. HMD, glasses) covers the eyes of the user and presents the digital scene in an environment formed by the augmented/extended reality headset (i.e. HMD, glasses) and an implicit face of the user [¶ 0033 and ¶ 0041-0043]).

Beith in view of Clemens, Ogata, and Song fails to explicitly disclose an enclosed environment formed by the extended reality headset and a face of the user.

However, Lanman teaches wherein the display system is an extended reality headset configured to be worn by the user such that the extended reality headset covers the eyes of the user and presents the digital scene in an enclosed environment formed by the extended reality headset and a face of the user (Lanman; the display system is an extended reality headset configured to be worn by a user such that the extended reality headset covers the eyes of the user and presents the digital scene in an enclosed environment formed by the extended reality headset and a face of the user [¶ 0041-0043 and ¶ 0045], as illustrated within Fig. 3 and Fig. 5).

Beith in view of Clemens, Ogata, and Song, and Lanman, are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, Ogata, and Song, to incorporate wherein the display system is an extended reality headset configured to be worn by the user such that the extended reality headset covers the eyes of the user and presents the digital scene in environment formed by the extended reality headset and a face of the user (as taught by Lanman), in order to provide improved stereoscopic displaying/imaging by automatically adjusting focus based on a location/gaze within a scene while reducing fatigue and nausea (Lanman; [¶ 0001-0005]). Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Beith in view of Clemens, Ogata, and Song as applied to claim(s) 1 above, in view of Saigo and further in view of Kim et al., US PGPUB No. 20200326543 A1, hereinafter Kim-543. Regarding claim 20, Beith in view of Clemens, Ogata, and Song further discloses the display system of claim 1, the processor (Beith; the processor [as addressed within the parent claim(s)]). Saigo further teaches responsive to the user positioning the display device on the head of the user, identify the user, wherein the user is associated with a predetermined maximum distance, wherein the processor is to determine the maximum distance as the predetermined maximum distance associated with the user (Saigo; identify the user wherein the processor is to determine the maximum distance as the predetermined maximum distance associated with the user, wherein the user is associated with a predetermined maximum/larger distance, responsive to the user positioning the display device on the head of the user [¶ 0028 and ¶ 0035-0038], as illustrated within Fig. 2; wherein, processor corresponds to a controller [¶ 0029-0031]; moreover, multi-viewpoint image mode in relation with distance [¶ 0049]; additionally, minimum distance [¶ 0039]).
Beith in view of Clemens, Ogata, and Song and Saigo are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, Ogata, and Song, to incorporate responsive to the user positioning the display device on the head of the user, identify the user, wherein the user is associated with a predetermined maximum distance, wherein the processor is to determine the maximum distance as the predetermined maximum distance associated with the user (as taught by Saigo), in order to provide improved techniques of stereoscopic displaying/imaging using a head-mount-display (Saigo; [¶ 0014-0015]). Beith in view of Clemens, Ogata, Song, and Saigo fails to disclose wherein the predetermined maximum distance is a stored maximum distance at which the left eye image and the right eye image are distinguishable for the user, and to determine the maximum distance as the predetermined maximum distance associated with the user responsive to identification of the user.
However, Kim-543 teaches wherein the predetermined maximum distance is a stored maximum distance at which the left eye image and the right eye image are distinguishable for the user (Kim-543; the predetermined maximum distance is an implicitly stored maximum distance (given memory) at which the left eye image and the right eye image are distinguishable for the user [¶ 0029, ¶ 0087-0089, and ¶ 0100]; moreover, computer programs [¶ 0107 and ¶ 0115]), to determine the maximum distance as the predetermined maximum distance associated with the user responsive to identification of the user (Kim-543; to determine the maximum distance as the predetermined maximum distance associated with the user responsive to identification of the user [¶ 0037-0039, ¶ 0050, and ¶ 0058-0061], as illustrated within Fig. 15; wherein, various users have different interpupillary distance (IPD, 54-68 mm) and nose shapes, which raise the bar on eye box and eye relief coverage beyond the requirement for a single user [¶ 0024]). Beith in view of Clemens, Ogata, Song, and Saigo and Kim-543 are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Beith as modified by Clemens, Ogata, and Song, to incorporate wherein the predetermined maximum distance is a stored maximum distance at which the left eye image and the right eye image are distinguishable for the user, and to determine the maximum distance as the predetermined maximum distance associated with the user responsive to identification of the user (as taught by Kim-543), in order to provide augmented reality eyewear that allows for vision correction (Kim-543; [¶ 0001 and ¶ 0024-0025]).
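As an illustrative aside, the per-user selection the rejection maps across Saigo and Kim-543 — a predetermined maximum distance stored per user and selected responsive to identification of the user, with an inter-pupillary-distance (IPD) manual calibration as a fallback (claim 18) — can be sketched as follows. This is a minimal sketch under stated assumptions: all names, values, and the one-pixel-disparity fallback estimate are hypothetical; neither the claims nor the cited references define this code.

```python
# Illustrative sketch only: hypothetical names/values, not from the record.
# Models a per-user predetermined maximum distance (claims 19-20 discussion)
# with a manual IPD calibration fallback (claim 18 discussion).

DEFAULT_MAX_DISTANCE_M = 10.0  # assumed fallback value, not from the record

# Hypothetical store: user id -> stored maximum distance (meters) at which
# the left eye image and right eye image remain distinguishable for that user.
_user_max_distance = {"alice": 8.5, "bob": 12.0}

def max_distance_for(user_id, manual_ipd_m=None, fov_rad=1.5, res_px=2000):
    """Return the maximum stereo-distinguishable distance for a user."""
    if user_id in _user_max_distance:      # identified user: use stored value
        return _user_max_distance[user_id]
    if manual_ipd_m is not None:           # manual IPD calibration fallback:
        # farthest distance at which the binocular disparity angle (ipd/d)
        # still spans at least one display pixel (fov/res) -> d = ipd*res/fov
        return manual_ipd_m * res_px / fov_rad
    return DEFAULT_MAX_DISTANCE_M
```

With the assumed numbers, an unidentified user calibrated at a 63 mm IPD would get a far larger limit than the stored per-user entries, which is consistent with the maximum distance being both user specific and display specific.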
Claim(s) 7, 8, 10, 11, and 13-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Clemens, in view of Vesely et al., US Patent No. 8717360 B2, hereinafter Vesely, in view of Lanman, in view of Beith, and further in view of Song. Regarding claim 7, Clemens discloses a method (Clemens; a method [Col. 2, lines 26-40 and Col. 8, lines 21-67]), comprising: determining, with a processor, for a user, a maximum distance at which a left eye image and a right eye image are distinguishable (Clemens; the method [as addressed above] comprises determining a maximum distance (i.e. comfort boundary and/or clipping plane(s)) at which a left eye image and a right eye image are distinguishable for a user with a processor [Col. 8, line 55 to Col. 9, line 2, Col. 14, lines 6-47, and Col. 15, line 31 to Col. 16, line 26], as illustrated within Fig. 9; moreover, frustum characteristics associated with one or more viewing zones/areas further associated with negative, positive, and/or zero parallax [Col. 14, line 48 to Col. 15, line 30]; and moreover, disparity [Col. 7, lines 10-23 and Col. 13, lines 30-61]); determining, with the processor, when the user is looking at a location in a digital scene that has a virtual distance greater than the maximum distance (Clemens; the method [as addressed above] comprises determining when the user is looking at a location in a digital scene [Col. 10, lines 16-37, Col. 14, line 48 to Col. 15, line 30, and Col. 20, lines 22-47], that has a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e. comfort boundary and/or clipping plane(s)) with a processor [Col. 14, lines 6-26 and Col. 16, lines 17-54]; additionally, eyepoint distance(s) to display plane region(s) [Col. 20, lines 22-47], as illustrated within Fig. 11), wherein the digital scene is output via display device worn by the user (Clemens; the digital scene is output via display device worn (i.e. HMD) by the user [Col.
5, lines 20-31]; wherein, the functionality of the display is shared between two or more devices (e.g. display and eyewear) [Col. 8, line 55 to Col. 9, line 37 and Col. 11, lines 34-64]; moreover, eyewear [Col. 9, line 35 to Col. 10, line 33]); responsive to a determination that the user is looking at a location in the digital scene that has a virtual distance greater than the maximum distance: rendering, with the processor, portions of the digital scene that have a virtual distance greater than the maximum distance in a first format (Clemens; the method [as addressed above] comprises rendering portions of the digital scene that have a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e. (near) clipping plane) in a 1st format (corresponding to a stereo, or an aspect of producing a visual effect) with the processor [Col. 14, lines 6-47] via the display device worn by the user [Col. 8, line 55 to Col. 9, line 37, Col. 9, line 38 to Col. 10, line 33, and Col. 11, lines 34-64] responsive to an implicit determination that the user is looking at a location (given eye tracking) in the digital scene [Col. 10, lines 16-37 and Col. 20, lines 22-47] that has a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e. (near) clipping plane) [Col. 14, line 48 to Col. 15, line 30]; wherein, determination that the user is looking further corresponds to eye tracking [Col. 9, line 63 to Col. 10, line 39 and Col. 11, lines 9-32]; moreover, eyewear [Col. 9, line 35 to Col. 10, line 33 and Col. 15, line 31 to Col. 16, line 26]); and displaying, on the display device, the portions of the digital scene that have a virtual distance greater than the maximum distance in the first format (Clemens; the method [as addressed above] comprises displaying the portions of the digital scene that have a virtual distance greater than the maximum distance in the 1st format (e.g. left side) on the display device [Col.
8, line 55 to Col. 9, line 2, Col. 14, lines 6-47, and Col. 15, line 31 to Col. 16, line 26], as illustrated within Fig. 9); and responsive to a determination that the user is looking at a location in the digital scene that has a virtual distance less than the maximum distance: rendering, with the processor, portions of the digital scene that have a virtual distance greater than the maximum distance in a second format (Clemens; the method [as addressed above] comprises rendering portions of the digital scene that have a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e. comfort boundary and/or (far) clipping plane) in a 2nd format (corresponding to a constant disparity, or an aspect of controlling a visual effect) [Col. 16, lines 4-54] via the display device worn by the user [Col. 8, line 55 to Col. 9, line 37, Col. 9, line 38 to Col. 10, line 33, and Col. 11, lines 34-64] responsive to an implicit determination that the user is looking at a location (given eye tracking) in the digital scene [Col. 10, lines 16-37 and Col. 20, lines 22-47] that has a virtual distance (as indicated by one or more frustums) less than the maximum distance (i.e. comfort boundary and/or (far) clipping plane) [Col. 14, lines 6-38]; wherein, determination that the user is looking further corresponds to eye tracking [Col. 9, line 63 to Col. 10, line 39 and Col. 11, lines 9-32]; moreover, between rendering planes [Col. 14, line 48 to Col. 15, line 30]); and displaying, on the display device, the portions of the digital scene that have a virtual distance greater than the maximum distance in the second format (Clemens; displaying the portions of the digital scene that have a virtual distance greater than the maximum distance in the second format (e.g. right side) on the display device [Col. 8, line 55 to Col. 9, line 2, Col. 14, lines 6-47, and Col. 15, line 31 to Col. 16, line 26], as illustrated within Fig. 9).
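The gaze-contingent, two-branch rendering recited in claim 7 and mapped above onto Clemens' clipping planes reduces to a simple selection rule: far portions of the scene change format depending on whether the user's gaze rests beyond the maximum distance, while near portions stay stereoscopic. The sketch below is illustrative only, with hypothetical names; the format labels stand in for whatever first and second formats a given system uses.

```python
# Illustrative sketch only (hypothetical names): selects a rendering format
# for one portion of the digital scene, given the virtual distance of the
# gazed-at location, the portion's own virtual distance, and the per-user
# maximum distance at which left/right eye images are distinguishable.

def choose_format(gaze_distance, portion_distance, max_distance):
    """Pick a rendering format for one portion of the digital scene."""
    if portion_distance <= max_distance:
        return "stereoscopic"       # near portions: always rendered in stereo
    # far portions: the format depends on where the user is looking
    if gaze_distance > max_distance:
        return "first_format"       # e.g. constant-disparity stereo
    return "second_format"          # e.g. single-eye / flat rendering
```

Re-running this rule whenever gaze tracking reports a new fixation yields the format switching discussed for claim 12.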
Clemens fails to disclose wherein the display device and the processor are included within a housing; rendering via the display device worn by the user; and a determination that the user is looking at a location in the digital scene; and causing a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered. However, Vesely teaches wherein the digital scene is output via display device worn by the user (Vesely; the digital scene is output via display device worn by the user [Col. 7, lines 10-67 and Col. 12, lines 41-48]; wherein, the functionality of the display is shared between two or more devices (e.g. display and eyewear) [Col. 6, lines 17-24 and Col. 6, line 43 to Col. 7, line 42]); and responsive to a determination that the user is looking at a location in the digital scene (Vesely; responsive to a determination that the user is looking at a location in the digital scene [Col. 3, lines 4-11 and Col. 4, lines 3-15]; wherein, one or more viewpoints are determined [Col. 11, line 66 to Col. 12, line 40, Col. 12, line 60 to Col. 13, line 15, and Col. 13, line 58 to Col. 14, line 38]). Clemens and Vesely are considered to be analogous art because both pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Clemens, to incorporate wherein the digital scene is output via display device worn by the user; and responsive to a determination that the user is looking at a location in the digital scene (as taught by Vesely), in order to provide improved stereoscopic displaying/imaging techniques that reduce distortions (Vesely; [Col. 1, lines 20-30 and lines 37-51]). Clemens as modified by Vesely fails to disclose rendering via the display device worn by the user; and causing a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered. However, Lanman teaches rendering via the display device worn by the user (Lanman; rendering via the display device worn by the user [¶ 0033 and ¶ 0045-0046]; moreover, vergence rendering associated with user’s gaze [¶ 0026-0027 and ¶ 0047-0049]). Clemens in view of Vesely and Lanman are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Clemens as modified by Vesely, to incorporate rendering via the display device worn by the user (as taught by Lanman), in order to provide improved stereoscopic displaying/imaging by automatically adjusting focus based on a location/gaze within a scene while reducing fatigue and nausea (Lanman; [¶ 0001-0005]).
Clemens as modified by Vesely and Lanman fails to disclose wherein the display device and the processor are included within a housing; and causing a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered. Beith teaches wherein the display device and the processor are included within a housing (Beith; the display device and the processor are included within a housing [¶ 0041-0042], as illustrated within Fig. 1A). Clemens in view of Vesely and Lanman and Beith are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Clemens as modified by Vesely and Lanman, to incorporate the display device and the processor are included within a housing (as taught by Beith), in order to provide an improved head-mounted display that is easier or more comfortable to wear, increasing the user experience by reducing the impacts of movement and placement (Beith; [¶ 0023-0025]). Clemens as modified by Vesely, Lanman, and Beith fails to disclose causing a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered.
However, Song teaches causing a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered. Clemens in view of Vesely, Lanman, and Beith and Song are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Clemens as modified by Vesely, Lanman, and Beith, to incorporate causing a view of the digital screen to be displayed on the display device, wherein the view of the digital screen includes the portions of the digital scene that have the virtual distance less than the maximum distance as rendered and the portions of the digital scene that have the virtual distance greater than the maximum distance as rendered (as taught by Song), in order to provide improved stereoscopic displaying/imaging techniques that reduce user fatigue (Song; [¶ 0003-0006]). Regarding claim 8, Clemens in view of Vesely, Lanman, Beith, and Song further discloses the method of claim 7, wherein determining, for the user, the maximum distance at which the left eye image and the right eye image are distinguishable (Clemens; determining the maximum distance (i.e. comfort boundary and/or clipping plane(s)) at which the left eye image and the right eye image are distinguishable for the user [as addressed within the parent claim(s)]) comprises: determining a field of view and a resolution of the display device that projects the digital scene (Clemens; determining the maximum distance (i.e.
comfort boundary and/or clipping plane(s)) [as addressed above] comprises determining a FOV and a resolution of the display device that projects the digital scene [Col. 12, lines 22-37]; moreover, disparity [Col. 7, lines 10-23 and Col. 13, lines 30-61]); and determining an inter-pupillary distance for the user (Clemens; determining the maximum distance [as addressed above] comprises determining an inter-pupillary distance for the user [Col. 15, lines 58-66 and Col. 18, lines 38-50]), wherein the maximum distance is determined based on the field of view (Clemens; the maximum distance (i.e. comfort boundary and/or clipping plane(s)) [Col. 8, line 55 to Col. 9, line 2, Col. 9, line 63 to Col. 10, line 39, Col. 12, line 38 to Col. 13, line 29, and Col. 14, lines 6-47], as illustrated within Fig. 9, is determined based on the FOV [Col. 12, lines 9-37, Col. 13, lines 31-52, and Col. 14, line 65 to Col. 15, line 45]; moreover, focus accommodation/convergence [Col. 15, line 31 to Col. 16, line 26]; and wherein, frustum characteristics associated with one or more viewing zones/areas further associated with negative, positive, and/or zero parallax [Col. 14, line 48 to Col. 15, line 30]; and moreover, disparity [Col. 7, lines 10-23 and Col. 13, lines 30-61]). Lanman further teaches wherein the maximum distance is determined based on the field of view, the resolution, and the inter-pupillary distance (Lanman; the maximum/vergence distance is determined based on the implicit FOV (given gaze tracking) [¶ 0024-0026], the implicit resolution (given focus/blur) [¶ 0043-0044 and ¶ 0049], and the implicit inter-pupillary distance (given eye position) [¶ 0045-0048]; moreover, eye tracking [¶ 0024-0026] and vergence (associated with movement or rotation of both eyes) [¶ 0051-0052] in relation with comfortable viewing of an object [¶ 0053-0055]). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Clemens as modified by Vesely, Lanman, Beith, and Song, to incorporate wherein the maximum distance is determined based on the field of view, the resolution, and the inter-pupillary distance (as taught by Lanman), in order to provide improved stereoscopic displaying/imaging by automatically adjusting focus based on a location/gaze within a scene while reducing fatigue and nausea (Lanman; [¶ 0001-0005]). Regarding claim 10, Clemens in view of Vesely, Lanman, Beith, and Song further discloses the method of claim 7, further comprising stereoscopically rendering portions of the digital scene that have a virtual distance less than the maximum distance (Clemens; stereoscopically rendering portions of the digital scene that have a virtual distance (as indicated by one or more frustums) less than the maximum distance (i.e. comfort boundary and/or clipping plane(s)) [Col. 14, lines 6-26]; moreover, generating parallax imaging within clipping planes [Col. 14, line 48 to Col. 15, line 30], as illustrated within Fig. 9; additionally, eyepoint distance(s) in relation with parallax region(s) [Col. 20, lines 22-47], as illustrated within Fig. 11). Regarding claim 11, Clemens in view of Vesely, Lanman, Beith, and Song further discloses the method of claim 7, wherein determining when the user is looking at a location in a digital scene that has a virtual distance greater than the maximum distance (Clemens; determining when the user is looking at a location in a digital scene that has a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e.
comfort boundary and/or clipping plane(s)) [as addressed within the parent claim(s)]) comprises: tracking a gaze of the user to determine the location (Clemens; determining when the user is looking at a location [as addressed above] comprises tracking a gaze of the user to determine the location [Col. 9, line 63 to Col. 10, line 33 and Col. 11, lines 9-32]); and determining the virtual distance of the location (Clemens; determining when the user is looking at a location [as addressed above] comprises determining the virtual distance (as indicated by one or more frustums) of the location [Col. 10, lines 16-39 and Col. 14, line 39 to Col. 15, line 30], as illustrated within Fig. 9; additionally, eyepoint distance(s) [Col. 20, lines 22-47], as illustrated within Fig. 11; wherein, a far comfort stereo plane can be determined [Col. 15, lines 58-66]). Claim(s) 9 and 12-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Clemens in view of Vesely, Lanman, Beith, and Song as applied to claim(s) 7 above, and further in view of Saigo et al., US PGPUB No. 20130194663 A1, hereinafter Saigo. Regarding claim 9, Clemens in view of Vesely, Lanman, Beith, and Song further discloses the method of claim 7, wherein: the first format is a stereoscopic format (Clemens; the 1st format (corresponding to a manner in which to present data) is a stereoscopic format; moreover, parallax level/disparity is based in part on the location of a virtual object relative to the clipping planes [Col. 14, lines 6-47]; [Col. 14, line 48 to Col. 15, line 30]); and the second format comprises rendering the digital scene for a perspective (Clemens; the 2nd format (corresponding to another manner in which to present data) comprises rendering the digital scene for a perspective in relation with a constant image or non-rendering [Col. 14, lines 6-26 and Col. 16, lines 17-54]; additionally, single point of view or eye image [Col. 5, lines 32-55 and Col. 
6, lines 1-26] in relation with rendering a scene [Col. 10, lines 16-33]). Clemens as modified by Vesely, Lanman, Beith, and Song fails to disclose rendering the digital scene for a single eye. However, Saigo teaches rendering the digital scene for a single eye (Saigo; rendering the digital scene for a single eye [¶ 0039-0041]; moreover, turning off stereoscopic imaging in relation with a suitable distance [¶ 0045-0046]). Clemens in view of Vesely, Lanman, Beith, and Song and Saigo are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Clemens as modified by Vesely, Lanman, Beith, and Song, to incorporate rendering the digital scene for a single eye (as taught by Saigo), in order to provide improved techniques of stereoscopic displaying/imaging using a head-mount-display (Saigo; [¶ 0014-0015]). Regarding claim 12, Clemens in view of Vesely, Lanman, Beith, and Song discloses the method of claim 7, further comprising rendering based on the user changing the location at which the user is looking (Vesely; rendering based on the user changing the location at which the user is looking [Col. 3, lines 4-11 and Col. 4, lines 3-15]). Clemens in view of Vesely, Lanman, Beith, and Song fails to disclose switching the rendering format. However, Saigo teaches switching the rendering format based on the user changing the location at which the user is looking (Saigo; switching the rendering format based on the user changing the location at which the user is looking [¶ 0014-0015 and ¶ 0037-0041]; moreover, stereoscopic imaging in relation with distance [¶ 0045-0046]).
Clemens in view of Vesely, Lanman, Beith, and Song and Saigo are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Clemens as modified by Vesely, Lanman, Beith, and Song, to incorporate switching the rendering format based on the user changing the location at which the user is looking (as taught by Saigo), in order to provide improved techniques of stereoscopic displaying/imaging using a head-mount-display (Saigo; [¶ 0014-0015]). Regarding claim 13, the rejection of claim 13 is addressed within the rejection of claim 7, due to the similarities that claim 13 and claim 7 share; therefore, refer to the rejection of claim 7 regarding the rejection of claim 13; however, the subject matter/limitations not addressed by claim 7 is/are addressed below. Clemens discloses a non-transitory machine-readable storage medium encoded with instructions executable by a processor (Clemens; a non-transitory machine-readable storage medium encoded with instructions [Col. 2, lines 26-46, Col. 5, lines 9-19, and Col. 6, lines 48-51] executable by a processor [Col. 6, lines 59-67, Col. 7, lines 39-51, and Col. 8, lines 21-56]), the machine-readable storage medium comprising instructions (Clemens; the machine-readable storage medium comprising instructions [as addressed above]) to: render portions of the digital scene that have a virtual distance less than the maximum distance stereoscopically (Clemens; instructions [as addressed above] to render portions of the digital scene that have a virtual distance (as indicated by one or more frustums) less than the maximum distance (i.e. comfort boundary and/or (far) clipping plane) stereoscopically [Col.
16, lines 4-54] responsive to an implicit determination that the user is looking at a location (given eye tracking) in the digital scene [Col. 10, lines 16-37 and Col. 20, lines 22-47] that has a virtual distance (as indicated by one or more frustums) less than the maximum distance (i.e. comfort boundary and/or (far) clipping plane) [Col. 14, lines 6-38]; wherein, determination that the user is looking further corresponds to eye tracking [Col. 9, line 63 to Col. 10, line 39 and Col. 11, lines 9-32]; moreover, between rendering planes [Col. 14, line 48 to Col. 15, line 30]); render portions of the digital scene that have a virtual distance greater than the maximum distance stereoscopically (Clemens; instructions [as addressed above] to render portions of the digital scene that have a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e. (near) clipping plane(s)) stereoscopically [Col. 14, lines 6-47] responsive to an implicit determination that the user is looking at a location (given eye tracking) in the digital scene [Col. 10, lines 16-37 and Col. 20, lines 22-47] that has a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e. (near) clipping plane) [Col. 14, line 48 to Col. 15, line 30]; wherein, determination that the user is looking further corresponds to eye tracking [Col. 9, line 63 to Col. 10, line 39 and Col. 11, lines 9-32]); and render portions of the digital scene that have a virtual distance greater than the maximum distance as a perspective (Clemens; instructions [as addressed above] to render portions of the digital scene that have a virtual distance (as indicated by one or more frustums) greater than the maximum distance (i.e. (far) clipping plane) as a perspective in relation with a constant image or non-rendering [Col. 14, lines 6-26 and Col. 16, lines 4-54]; additionally, single point of view or eye image [Col. 5, lines 32-55 and Col.
6, lines 1-26] in relation with rendering a scene [Col. 10, lines 16-33]). Clemens as modified by Vesely fails to disclose a single eye image. Saigo further teaches rendering the digital scene as a single eye image (Saigo; rendering the digital scene as a single eye image [¶ 0039-0041]; moreover, turning off stereoscopic imaging in relation with a suitable distance [¶ 0045-0046]). Clemens in view of Vesely and Lanman and Saigo are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Clemens as modified by Vesely and Lanman, to incorporate rendering the digital scene as a single eye image (as taught by Saigo), in order to provide improved techniques of stereoscopic displaying/imaging using a head-mount-display (Saigo; [¶ 0014-0015]). (further refer to the rejection of claim 7) Regarding claim 14, Clemens in view of Vesely, Lanman, Beith, Song, and Saigo further discloses the non-transitory machine-readable storage medium of claim 13, wherein the maximum distance is user specific and display specific (Clemens; the maximum distance (i.e. comfort boundary and/or clipping plane(s)) is user specific and display specific [Col. 9, line 38 to Col. 10, line 39 and Col. 12, lines 9-52]; furthermore, display defined characteristic [Col. 7, lines 10-30], and user controlled view point [Col. 11, lines 34-48]).
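For context on the stereoscopic format discussed throughout, generating a left eye image and a right eye image amounts to rendering the scene from two viewpoints offset by the inter-pupillary distance and presenting each image to the corresponding eye. A minimal sketch under stated assumptions (hypothetical names; the record does not specify an implementation):

```python
# Illustrative sketch only (hypothetical names): stereoscopic rendering derives
# a left-eye and a right-eye camera position by offsetting the head position
# half the inter-pupillary distance to each side along the horizontal axis.

def eye_positions(head_pos, ipd_m):
    """Return per-eye camera positions from a head position and IPD (meters)."""
    x, y, z = head_pos
    half = ipd_m / 2.0
    return {"left": (x - half, y, z), "right": (x + half, y, z)}
```

Beyond the maximum distance at which the two resulting images remain distinguishable, the per-eye viewpoints produce sub-perceptible disparity, which is why a single-eye-image fallback format becomes viable.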
Regarding claim 15, Clemens in view of Vesely, Lanman, Beith, Song, and Saigo further discloses the non-transitory machine-readable storage medium of claim 13, wherein stereoscopically rendering comprises generating a left eye image and a right eye image and presenting the images to a corresponding eye (Clemens; stereoscopically rendering comprises generating a left eye image and a right eye image and presenting the images to a corresponding eye [Col. 8, line 55 to Col. 9, line 2 and Col. 14, lines 6-38]). Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Clemens in view of Vesely, Lanman, Beith, Song, and Saigo as applied to claim(s) 13 above, and further in view of Kim. Regarding claim 16, Clemens in view of Vesely, Lanman, Beith, Song, and Saigo further discloses the non-transitory machine-readable storage medium of claim 13, wherein single eye image rendering comprises generating an image from a viewpoint (Saigo; single eye (i.e. non-stereoscopic, 2D) image rendering comprises generating an image from a viewpoint [¶ 0039-0041]; moreover, turning off stereoscopic imaging in relation with a suitable distance [¶ 0045-0046]). Clemens in view of Vesely, Lanman, Beith, Song, and Saigo fails to explicitly disclose rendering comprises generating an image from a viewpoint of either a left eye of the user or a right eye of the user and displaying, on the display device, the image to both the left eye of the user and a right eye of the user. However, Kim teaches wherein single eye image rendering comprises generating an image from a viewpoint of either a left eye of the user or a right eye of the user and displaying, on the display device, the image to both the left eye of the user and a right eye of the user (Kim; the single eye image (i.e. 
2D left image (Di_1) or 2D right image (Di_2)) rendering comprises generating an image from a viewpoint of either a left eye of the user or a right eye of the user and displaying, on the display device, the image to both the left eye of the user and a right eye of the user (corresponding to a 2D image) [¶ 0101 and ¶ 0106-0108]). Clemens in view of Vesely, Lanman, Beith, Song, and Saigo, and Kim are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Clemens in view of Vesely, Lanman, Beith, Song, and Saigo to incorporate wherein single eye image rendering comprises generating an image from a viewpoint of either a left eye of the user or a right eye of the user and presenting the image to both the left eye of the user and a right eye of the user (as taught by Kim), in order to provide improved realistic stereoscopic imaging based on viewpoints of a user (Kim; [¶ 0005-0006, ¶ 0009, and ¶ 0016-0017]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to the PTO-892, Notice of References Cited, for a listing of analogous art. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Charles Lloyd Beard, whose telephone number is (571) 272-5735. The examiner can normally be reached Monday - Friday, 8:00 AM - 5:00 PM EST, alternate Fridays. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES L BEARD/
Primary Examiner, Art Unit 2611
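For orientation, the rendering-format selection recited in claims 13-16 can be sketched in a few lines: portions of the scene whose virtual distance is within the maximum distance are rendered stereoscopically (distinct left- and right-eye images per claim 15), while portions beyond it are rendered from a single eye's viewpoint and that one image is presented to both eyes (the 2D case of claim 16). This is a minimal illustrative sketch only; the `render_view` function, the placeholder strings standing in for rendered images, and the example distances are hypothetical and are not drawn from the application or the cited references.

```python
def render_view(portions, max_distance):
    """Select a rendering format per scene portion based on virtual distance.

    `portions` is a list of (name, virtual_distance) pairs; the returned dict
    maps each portion name to the image delivered to each eye. Strings stand
    in for rendered images in this sketch.
    """
    frame = {}
    for name, distance in portions:
        if distance <= max_distance:
            # Stereoscopic: distinct left-eye and right-eye images,
            # each presented to the corresponding eye (claim 15).
            frame[name] = {"left": f"{name}@left-eye", "right": f"{name}@right-eye"}
        else:
            # Single eye: one image generated from a single eye's viewpoint
            # (here the left eye) and shown to both eyes, i.e. a 2D image
            # beyond the comfort boundary (claim 16).
            mono = f"{name}@left-eye"
            frame[name] = {"left": mono, "right": mono}
    return frame

# A near object gets a stereo pair; a distant one gets the same 2D image twice.
view = render_view([("desk", 0.8), ("mountain", 50.0)], max_distance=10.0)
```

In this sketch the threshold comparison is the entire format decision; a user-specific or display-specific maximum distance (claim 14) would simply change the `max_distance` argument passed in.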

Prosecution Timeline

Apr 14, 2023
Application Filed
Dec 28, 2024
Non-Final Rejection — §103
Apr 02, 2025
Response Filed
Apr 14, 2025
Final Rejection — §103
Jun 18, 2025
Response after Non-Final Action
Jul 17, 2025
Request for Continued Examination
Jul 18, 2025
Response after Non-Final Action
Jul 26, 2025
Non-Final Rejection — §103
Oct 29, 2025
Response Filed
Feb 07, 2026
Final Rejection — §103
Mar 31, 2026
Applicant Interview (Telephonic)
Mar 31, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579729
VOLUMETRIC VIDEO SUPPORTING LIGHT EFFECTS
2y 5m to grant Granted Mar 17, 2026
Patent 12548225
AUDIO OR VISUAL INPUT INTERACTING WITH VIDEO CREATION
2y 5m to grant Granted Feb 10, 2026
Patent 12519924
MULTI-PERSPECTIVE AUGMENTED REALITY EXPERIENCE
2y 5m to grant Granted Jan 06, 2026
Patent 12511801
GENERATING VIDEO STREAMS TO DEPICT BOT PERFORMANCE DURING AN AUTOMATION RUN
2y 5m to grant Granted Dec 30, 2025
Patent 12513279
STEREOSCOPIC VIDEO DISPLAY DEVICE, STEREOSCOPIC VIDEO DISPLAY METHOD, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+36.1%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 350 resolved cases by this examiner. Grant probability derived from career allow rate.
