Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending and are examined below.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on April 7, 2025, is being considered by the examiner.
Claim Objections
Claims 1-2, 14-17, and 20 are objected to because of the following informalities; suggested corrections are shown below. Appropriate correction is required.
Claim 1, line 2: “an optical system forming a plurality of focal planes;”
Claim 1, line 3: “a sensor obtaining [[user]] gaze information of a user”
Claim 1, line 4: “a processor selecting one of the plurality of focal planes based on the gaze information;”
Claim 1, line 5: “changing [[the]]a focus of the optical system to form focus on [[the]]a selected focal plane, and”
Claim 1, line 7: “a display outputting the binocular disparity focal image under [[the]] control of the processor,”
Claim 2, line 2: “wherein the comfortable viewing zone is defined based on [[the]]a size of an allowable circle”
Claim 2, line 3: “of confusion that settles on [[the]]a retina of a human eye.”
Claim 14, line 2: “wherein the pre-trained deep learning model includes Z-buffer algorithms and ray tracing”
Claim 15, line 2: “wherein during training, the pre-trained deep learning model utilizes dynamic foveated rendering”
Claim 15, line 6: “[[the]]a center of the binocular disparity focal image with high resolution and [[the]]a periphery with”
Claim 16, line 3: “forming, by an optical system of the extended reality image device, a plurality of focal”
Claim 16, line 6: “selecting, by a processor of the extended reality image device, one of the plurality of focal planes”
Claim 16, line 7: “based on the gaze information of a user”
Claim 16, line 8: “changing, by the processor, [[the]]a focus of the optical system to form focus on [[the]]a”
Claim 17, line 2: “wherein the comfortable viewing zone is defined based on [[the]]a size of an allowable circle”
Claim 17, line 3: “of confusion that settles on [[the]]a retina of a human eye.”
Claim 20, line 5: “wherein during training, the pre-trained deep learning model uses dynamic foveated rendering”
Claim 20, line 9: “[[the]]a center of the binocular disparity focal image with high resolution and [[the]]a periphery with”
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-12, 16-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Held et al. (Pub. No.: US 2023/0103091 A1), hereinafter referred to as Held, in view of Ollila (Pub. No.: US 2021/0243384 A1).
With respect to Claim 1, Held teaches a mixed reality image device (figs. 1 and 22-23: mixed reality or virtual reality HMD; ¶38, “images may be rendered on the display device 105 along with computer-generated virtual images that augment the captured images of the physical environment” – augmented reality; ¶90), including: an optical system (fig. 7, items 705, 710, 715, 720, 730, and 735; ¶51; ¶61 or fig. 11, items 720, 730, 735, 1105, and 1120; ¶70-71) forming a plurality of focal planes; a sensor (fig. 1, item 135; ¶39) obtaining user gaze information; a processor (fig. 1, item 125; fig. 12, items 1210 and 715; fig. 23, item 2220; ¶97, “The HMD device 2200 can further include a controller 2220 such as one or more processors having a logic subsystem 2222 and a data storage subsystem 2224 in communication with the sensors, gaze detection subsystem 2210, display subsystem 2204, and/or other components through a communications subsystem 2226”) selecting one of the plurality of focal planes based on the gaze information, changing the focus of the optical system to form focus on the selected focal plane, and generating a binocular disparity focal image (figs. 9-10; ¶66, “The synchronization enables construction of temporally multiplexed scenes with correct focus cues so that focal distances in the scene are presented with the birefringent lens 710 in the correct state”; ¶67-68, selecting between the focal plane at d1 or d2); and a display outputting the binocular disparity focal image under the control of the processor (¶68, “The mixed-reality display system thus reproduces correct focus cues, including blur and binocular disparity, to thereby stimulate natural accommodation to converge to an appropriate focal distance to create sharp retinal images”), wherein a comfortable viewing zone for the user exists within a plurality of depth of fields by the plurality of focal planes (¶6, “The time response of the FLC modulator enables rapid state switching to construct a temporally multiplexed mixed-reality scene having appropriate focus cues to provide a comfortable visual experience no matter where in the scene the HMD user is accommodating”).
Held does not mention that the mixed or virtual reality image device can be implemented in an extended reality image device.
Ollila teaches an extended reality image device (fig. 1, item 100; ¶55; ¶170), including: an optical system (fig. 1, item 104; ¶170) forming a plurality of focal planes; a sensor (fig. 1, item 108); a processor (fig. 1, item 106); and a display (fig. 1, item 102).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the mixed reality image device of Held such that the mixed reality image device is implemented in an extended reality image device, as taught by Ollila, so as to be used in a variety of applications.
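As an illustrative aside, the claimed selection step (choosing one of a plurality of focal planes based on gaze information) may be sketched as follows; the focal plane spacing, vergence estimate, and function names are hypothetical and are not taken from Held or Ollila:

    import numpy as np

    # Hypothetical focal plane distances in diopters (not from the cited references).
    FOCAL_PLANES_D = np.array([0.25, 0.75, 1.5, 3.0])

    def vergence_depth_diopters(left_gaze, right_gaze, ipd_m=0.063):
        """Estimate fixation depth (in diopters) from the convergence angle
        between the two eyes' gaze direction vectors (simplified model)."""
        cos_a = np.clip(np.dot(left_gaze, right_gaze) /
                        (np.linalg.norm(left_gaze) * np.linalg.norm(right_gaze)),
                        -1.0, 1.0)
        angle = np.arccos(cos_a)
        if angle < 1e-6:              # parallel gaze -> fixation at infinity
            return 0.0
        distance_m = (ipd_m / 2.0) / np.tan(angle / 2.0)
        return 1.0 / distance_m

    def select_focal_plane(left_gaze, right_gaze):
        """Pick the focal plane nearest (in diopters) to the estimated
        fixation depth, mirroring the claimed selection step."""
        depth_d = vergence_depth_diopters(left_gaze, right_gaze)
        return int(np.argmin(np.abs(FOCAL_PLANES_D - depth_d)))

The optical system would then be switched to the returned plane index before the binocular disparity focal image is generated.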
With respect to Claim 2, claim 1 is incorporated; Held does not mention wherein the comfortable viewing zone is defined based on the size of an allowable circle of confusion that settles on the retina of a human eye.
Ollila teaches an extended reality image device (fig. 1, item 100; ¶55; ¶170), including: an optical system (fig. 1, item 104; ¶170) forming a plurality of focal planes; a sensor (fig. 1, item 108); a processor (fig. 1, item 106); and a display (fig. 1, item 102), wherein a comfortable viewing zone is defined based on the size of an allowable circle of confusion that settles on the retina of a human eye (¶86; ¶94, “an optimal step size that is required for a given speed of autofocusing, is dependent on focal length of camera optics (i.e., the different focal lengths of the optical element)”; ¶95, “the at least one focusing parameter is calculated based upon at least one of: a required blur value, a required final size of a circle of confusion, a focal length of the optical element, a required full displacement of the optical element”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the mixed reality image device of Held, wherein the comfortable viewing zone is defined based on the size of an allowable circle of confusion that settles on the retina of a human eye, as taught by Ollila, so as to provide an extended reality image device that has a higher autofocus speed and an improved output image (¶6).
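As technical background for this rationale, under a standard first-order model (illustrative values assumed; these figures are not taken from the cited references), a defocus of $\Delta$ diopters viewed through a pupil of diameter $p$ produces a retinal blur circle of diameter approximately $b \approx p \, f_{\mathrm{eye}} \, \Delta$, so an allowable circle of confusion $c$ bounds the tolerable defocus around each focal plane:

$$\Delta_{\max} \approx \frac{c}{p \, f_{\mathrm{eye}}} \approx \frac{12\ \mu\mathrm{m}}{(3\ \mathrm{mm})(17\ \mathrm{mm})} \approx 0.24\ \mathrm{D}.$$

On this model, focal planes spaced within roughly twice $\Delta_{\max}$ of one another in diopters would keep every fixation depth inside a comfortable viewing zone.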
With respect to Claim 3, claim 2 is incorporated; Held does not teach wherein the size of the allowable circle of confusion is calculated in advance based on physiological surveys or diffraction relationships.
Ollila teaches an extended reality image device (fig. 1, item 100; ¶55; ¶170), including: an optical system (fig. 1, item 104; ¶170) forming a plurality of focal planes; a sensor (fig. 1, item 108); a processor (fig. 1, item 106); and a display (fig. 1, item 102), wherein a comfortable viewing zone is defined based on the size of an allowable circle of confusion that settles on the retina of a human eye (¶86; ¶94, “an optimal step size that is required for a given speed of autofocusing, is dependent on focal length of camera optics (i.e., the different focal lengths of the optical element)”; ¶95, “the at least one focusing parameter is calculated based upon at least one of: a required blur value, a required final size of a circle of confusion, a focal length of the optical element, a required full displacement of the optical element”); wherein the size of the allowable circle of confusion is calculated in advance based on physiological surveys or diffraction relationships (¶95, “the at least one focusing parameter is calculated based upon at least one of: a required blur value, a required final size of a circle of confusion, a focal length of the optical element, a required full displacement of the optical element”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the mixed reality image device of Held, wherein the size of the allowable circle of confusion is calculated in advance based on physiological surveys or diffraction relationships, as taught by Ollila, so as to provide an extended reality image device that has a higher autofocus speed and an improved output image (¶6).
With respect to Claim 6, claim 1 is incorporated; Held teaches wherein the comfortable viewing zone exists within the plurality of depth of fields by the plurality of focal planes (¶6, “The time response of the FLC modulator enables rapid state switching to construct a temporally multiplexed mixed-reality scene having appropriate focus cues to provide a comfortable visual experience no matter where in the scene the HMD user is accommodating”; ¶48 – a comfortable viewing zone is a focus that provides a comfortable visual experience).
With respect to Claim 7, claim 6 is incorporated; Held teaches wherein the optical system is set to have the comfortable viewing zone exist within the plurality of depth of fields by the plurality of focal planes (¶6; ¶48, the varying focal cues correspond to the depth of fields of the plurality of focal planes).
With respect to Claim 8, claim 1 is incorporated; Held teaches wherein the optical system includes a depth-variable lens module for changing the focus (figs. 7 and 12, items 735, 705, and 710; ¶61; ¶63, “propagation of linearly polarized light through the FLC modulator 705 and birefringent lens 710 to focus light at different focal planes”; ¶66; ¶72).
With respect to Claim 9, claim 8 is incorporated; Held teaches wherein the depth-variable lens module includes at least one geometric phase lens that varies the focus of the optical system according to polarization control (¶62; ¶64; ¶70).
With respect to Claim 10, claim 9 is incorporated; Held teaches wherein each geometric phase lens is composed of a birefringence material and forms two focal planes (¶61; ¶63; ¶65-66).
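For context on the two-focal-plane behavior, a commonly used first-order model (illustrative; not quoted from Held) treats such a lens as contributing optical power $\pm 1/f_{GP}$ depending on the polarization state of the incident light (the handedness of circular polarization for a geometric phase lens, or the linear polarization state for a birefringent lens switched by an FLC modulator, as in Held at ¶63). Combined with a fixed lens of power $1/f_0$, the stack toggles between two powers,

$$\frac{1}{f_{\pm}} = \frac{1}{f_0} \pm \frac{1}{f_{GP}},$$

so switching the polarization selects which of the two focal planes is presented.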
With respect to Claim 11, claim 10 is incorporated; Held teaches wherein the optical system includes a visualization lens module, which is integrally formed with each geometric phase lens and visualizes the binocular disparity focal image on the selected focal plane (¶68, “The mixed-reality display system thus reproduces correct focus cues, including blur and binocular disparity, to thereby stimulate natural accommodation to converge to an appropriate focal distance to create sharp retinal images”; ¶75, “Proper continuous alignment of the user's eye with the display system can ensure that a display of virtual images in the different focal planes is correctly rendered with the appropriate focus cues including accurate binocular disparity and occlusion of real and virtual objects”).
With respect to Claim 12, claim 1 is incorporated; Held teaches wherein the processor generates the binocular disparity focal image by depth rendering (¶46; ¶68; ¶75).
With respect to Claim 16, Held teaches a method (fig. 17; ¶80; ¶98; ¶109) for providing an image in a mixed reality image device (figs. 1 and 22-23: mixed reality or virtual reality HMD; ¶38, “images may be rendered on the display device 105 along with computer-generated virtual images that augment the captured images of the physical environment” – augmented reality; ¶90) comprising: forming, by an optical system (fig. 7, items 705, 710, 715, 720, 730, and 735: optical system; ¶51; ¶61 or fig. 11, items 720, 730, 735, 1105, and 1120; ¶70-71) of the mixed reality image device, a plurality of focal planes; obtaining, by a sensor (fig. 1, item 135; ¶39) of the mixed reality image device, user gaze information; selecting, by a processor (fig. 1, item 125; fig. 12, items 1210 and 715; fig. 23, item 2220; ¶97, “The HMD device 2200 can further include a controller 2220 such as one or more processors having a logic subsystem 2222 and a data storage subsystem 2224 in communication with the sensors, gaze detection subsystem 2210, display subsystem 2204, and/or other components through a communications subsystem 2226”) of the mixed reality image device, one of the plurality of focal planes based on the gaze information (figs. 9-10; ¶66, “The synchronization enables construction of temporally multiplexed scenes with correct focus cues so that focal distances in the scene are presented with the birefringent lens 710 in the correct state”; ¶67-68, selecting between the focal plane at d1 or d2); changing, by the processor, the focus of the optical system to form focus on the selected focal plane (¶66-67); generating, by the processor, a binocular disparity focal image (¶68, “The mixed-reality display system thus reproduces correct focus cues, including blur and binocular disparity, to thereby stimulate natural accommodation to converge to an appropriate focal distance to create sharp retinal images”); and outputting, by a display of the mixed reality image device, the binocular disparity focal image, wherein a comfortable viewing zone for the user exists within a plurality of depth of fields by the plurality of focal planes (¶6, “The time response of the FLC modulator enables rapid state switching to construct a temporally multiplexed mixed-reality scene having appropriate focus cues to provide a comfortable visual experience no matter where in the scene the HMD user is accommodating”; ¶68).
Held does not mention that the mixed reality image device can be implemented in an extended reality image device.
Ollila teaches a method (fig. 12; ¶204) for providing an image in an extended reality image device (fig. 1, item 100; ¶55; ¶170), including: an optical system (fig. 1, item 104; ¶170) forming a plurality of focal planes; a sensor (fig. 1, item 108); a processor (fig. 1, item 106); and a display (fig. 1, item 102).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of Held such that the mixed reality image device is implemented in an extended reality image device, as taught by Ollila, so as to be used in a variety of applications.
With respect to Claim 17, claim 16 is incorporated; Held does not teach wherein the comfortable viewing zone is defined based on the size of an allowable circle of confusion that settles on the retina of a human eye.
Ollila teaches a method (fig. 12; ¶204) for providing an image in an extended reality image device (fig. 1, item 100; ¶55; ¶170), including: an optical system (fig. 1, item 104; ¶170) forming a plurality of focal planes; a sensor (fig. 1, item 108); a processor (fig. 1, item 106); and a display (fig. 1, item 102), wherein a comfortable viewing zone is defined based on the size of an allowable circle of confusion that settles on the retina of a human eye (¶86; ¶94, “an optimal step size that is required for a given speed of autofocusing, is dependent on focal length of camera optics (i.e., the different focal lengths of the optical element)”; ¶95, “the at least one focusing parameter is calculated based upon at least one of: a required blur value, a required final size of a circle of confusion, a focal length of the optical element, a required full displacement of the optical element”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of Held, wherein the comfortable viewing zone is defined based on the size of an allowable circle of confusion that settles on the retina of a human eye, as taught by Ollila, so as to provide an extended reality image device that has a higher autofocus speed and an improved output image (¶6).
With respect to Claim 19, claim 16 is incorporated; Held teaches wherein the comfortable viewing zone exists within the plurality of depth of fields by the plurality of focal planes (¶6, “The time response of the FLC modulator enables rapid state switching to construct a temporally multiplexed mixed-reality scene having appropriate focus cues to provide a comfortable visual experience no matter where in the scene the HMD user is accommodating”; ¶48 – a comfortable viewing zone is a focus that provides a comfortable visual experience).
Claims 4-5 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Held and Ollila as applied to claim 3 above, and further in view of Kim (Pub. No.: US 2019/0163133 A1).
With respect to Claim 4, claim 3 is incorporated; Held and Ollila combined do not teach wherein the size of the allowable circle of confusion is calculated in advance based on average human visual acuity and pupil size.
Kim teaches a hologram generating apparatus (fig. 3; ¶57), wherein the size of the allowable circle of confusion is calculated in advance based on average human visual acuity and pupil size (¶59, “In order to correspond to the human visual modeling system, the threshold value Δx of the size of the confusion circle of the sensor 330 can be set to a threshold value of the confusion circle size of the human eye”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the extended reality image device of Held and Ollila, wherein the size of the allowable circle of confusion is calculated in advance based on average human visual acuity and pupil size, as taught by Kim, so as to quickly acquire image information for a three-dimensional object by utilizing the concept of depth of field based on modeling of the human visual system (¶34).
With respect to Claim 5, claim 4 is incorporated; Held and Ollila combined do not teach wherein the size of the allowable circle of confusion is between 10 micrometers and 15 micrometers.
Kim teaches a hologram generating apparatus (fig. 3; ¶57), wherein the size of the allowable circle of confusion is calculated in advance based on average human visual acuity and pupil size (¶59, “In order to correspond to the human visual modeling system, the threshold value Δx of the size of the confusion circle of the sensor 330 can be set to a threshold value of the confusion circle size of the human eye”); wherein the size of the allowable circle of confusion is between 10 micrometers and 15 micrometers (¶20; ¶63-66, setting specific distances of the equation allows for the allowable size of the circle of confusion to be between 10 micrometers and 15 micrometers).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the extended reality image device of Held and Ollila, wherein the size of the allowable circle of confusion is between 10 micrometers and 15 micrometers, as taught by Kim, so as to quickly acquire image information for a three-dimensional object by utilizing the concept of depth of field based on modeling of the human visual system (¶34).
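As an arithmetic sanity check on the recited range (assumed nominal values; these figures are not taken from Kim): taking an eye focal length of about 17 mm and a resolvable-blur threshold of roughly 2 arcminutes of visual angle gives

$$c \approx f_{\mathrm{eye}} \tan\theta \approx (17\ \mathrm{mm}) \tan(2') \approx 10\ \mu\mathrm{m},$$

which falls at the lower end of the 10-15 micrometer range recited in claim 5.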
With respect to Claim 18, claim 17 is incorporated; Held and Ollila combined do not teach wherein the size of the allowable circle of confusion is calculated in advance based on physiological surveys or diffraction relationships, and the size of the allowable circle of confusion is between 10 micrometers and 15 micrometers.
Kim teaches a hologram generating apparatus (fig. 3; ¶57) and a method for providing an image (fig. 2; ¶51), wherein the size of the allowable circle of confusion is calculated in advance based on physiological surveys or diffraction relationships (¶59, “In order to correspond to the human visual modeling system, the threshold value Δx of the size of the confusion circle of the sensor 330 can be set to a threshold value of the confusion circle size of the human eye”), and the size of the allowable circle of confusion is between 10 micrometers and 15 micrometers (¶20; ¶63-66, setting specific distances of the equation allows for the allowable size of the circle of confusion to be between 10 micrometers and 15 micrometers).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Held and Ollila, wherein the size of the allowable circle of confusion is calculated in advance based on physiological surveys or diffraction relationships, and the size of the allowable circle of confusion is between 10 micrometers and 15 micrometers, as taught by Kim, so as to quickly acquire image information for a three-dimensional object by utilizing the concept of depth of field based on modeling of the human visual system (¶34).
Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Held and Ollila as applied to claim 12 above, and further in view of Valli et al. (Pub. No.: US 2024/0430394 A1), hereinafter referred to as Valli.
With respect to Claim 13, claim 12 is incorporated; Held and Ollila combined do not teach wherein the processor uses a pre-trained deep learning model to generate the binocular disparity focal image.
Valli teaches a system (fig. 1A, item 100; ¶68); a sensor (fig. 1A, item 122; ¶74) obtaining user gaze information; a processor (¶69, server has a processor); and a display (fig. 1A, HMD on user; ¶68; ¶81) outputting a binocular disparity focal image under the control of the processor (¶80; ¶128-129); wherein the processor generates the binocular disparity focal image by depth rendering (¶65, z-buffering; ¶76-77); wherein the processor uses a pre-trained deep learning model to generate the binocular disparity focal image (¶65; ¶76-77; ¶88, z-buffering and ray tracing).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined extended reality image device of Held and Ollila, wherein the processor uses a pre-trained deep learning model to generate the binocular disparity focal image, as taught by Valli, so as to support 3D motion parallax or natural relations between objects of a captured scene (¶66).
With respect to Claim 14, claim 13 is incorporated; Held and Ollila combined do not teach wherein the deep learning model includes Z-buffer algorithms and ray tracing algorithms.
Valli teaches a system (fig. 1A, item 100; ¶68); a sensor (fig. 1A, item 122; ¶74) obtaining user gaze information; a processor (¶69, server has a processor); and a display (fig. 1A, HMD on user; ¶68; ¶81) outputting a binocular disparity focal image under the control of the processor (¶80; ¶128-129); wherein the processor generates the binocular disparity focal image by depth rendering (¶65, z-buffering; ¶76-77); wherein the processor uses a pre-trained deep learning model to generate the binocular disparity focal image (¶65; ¶76-77; ¶88, z-buffering and ray tracing); wherein the deep learning model includes Z-buffer algorithms (¶57; ¶65; ¶76-77; ¶162) and ray tracing algorithms (¶88; ¶234-235).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined extended reality image device of Held and Ollila, wherein the deep learning model includes Z-buffer algorithms and ray tracing algorithms, as taught by Valli, so as to support 3D motion parallax or natural relations between objects of a captured scene (¶66).
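For reference, the Z-buffer technique recited in claim 14 is the classic per-pixel hidden-surface test; the following minimal sketch (hypothetical fragment data; this is the generic algorithm, not code from Valli) illustrates it:

    import numpy as np

    def zbuffer_composite(fragments, h, w):
        """Classic Z-buffer hidden-surface removal: for each pixel, keep
        the fragment with the smallest depth value."""
        color = np.zeros((h, w, 3))
        depth = np.full((h, w), np.inf)   # initialize to 'infinitely far'
        for y, x, z, rgb in fragments:
            if z < depth[y, x]:           # nearer fragment wins the pixel
                depth[y, x] = z
                color[y, x] = rgb
        return color, depth

    # Two fragments at the same pixel; the nearer one (z=1.0) survives.
    frags = [(1, 2, 3.0, (1.0, 0.0, 0.0)), (1, 2, 1.0, (0.0, 1.0, 0.0))]
    img, zbuf = zbuffer_composite(frags, h=4, w=6)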
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Held, Ollila, and Valli, as applied to claim 14 above, and further in view of Ratcliff et al. (Pub. No.: US 2019/0086679 A1), hereinafter referred to as Ratcliff.
With respect to Claim 15, claim 14 is incorporated; Held, Ollila, and Valli combined do not teach wherein during training, the deep learning model utilizes a first binocular disparity focal image produced by dynamic foveated rendering as input data and outputs a second binocular disparity focal image generated based on ray tracing algorithms, and wherein the dynamic foveated rendering includes a rendering operation that forms the center of the binocular disparity focal image with high resolution and the periphery with low resolution.
Ratcliff teaches a head mounted image device (fig. 1, item 100; ¶26-27), including: an optical system forming a plurality of focal planes (¶31; ¶69, “A focus-tunable design allows for presenting multiple image planes at different virtual distances creating an appearance of the volumetric image rather than just a single image surface”); a sensor (¶94; ¶112; ¶127) obtaining user gaze information; a processor (fig. 15, item 1502; ¶122) generating a binocular disparity focal image (¶69); and a display (fig. 15, item 5126) outputting the binocular disparity focal image under the control of the processor; wherein during training, the deep learning model utilizes a first binocular disparity focal image produced by dynamic foveated rendering as input data and outputs a second binocular disparity focal image generated based on ray tracing algorithms (¶94; ¶96-100), and wherein the dynamic foveated rendering includes a rendering operation that forms the center of the binocular disparity focal image with high resolution and the periphery with low resolution (¶112-113; ¶127).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined extended reality image device of Held, Ollila, and Valli, wherein during training, the deep learning model utilizes a first binocular disparity focal image produced by dynamic foveated rendering as input data and outputs a second binocular disparity focal image generated based on ray tracing algorithms, and wherein the dynamic foveated rendering includes a rendering operation that forms the center of the binocular disparity focal image with high resolution and the periphery with low resolution, as taught by Ratcliff, so as to save processing resources (¶112).
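To illustrate the recited dynamic foveated rendering operation, a generic sketch under assumed parameters (the render_fn callback, resolutions, and inset size are hypothetical and do not reflect Ratcliff's implementation) renders the periphery at low resolution and a gaze-centered region at full resolution:

    import numpy as np

    def foveated_composite(render_fn, gaze_xy, h=480, w=640, inset=128, scale=4):
        """Form an image with a high-resolution center at the gaze point and
        a low-resolution periphery, per the claimed rendering operation.
        render_fn(y0, x0, hh, ww, step) is an assumed renderer callback that
        returns an (hh, ww) image block sampled every `step` pixels."""
        # Low-resolution periphery: render sparsely, then upsample.
        low = render_fn(0, 0, h // scale, w // scale, scale)
        frame = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)[:h, :w]
        # Full-resolution inset centered on the gaze point.
        gy, gx = gaze_xy
        y0 = int(np.clip(gy - inset // 2, 0, h - inset))
        x0 = int(np.clip(gx - inset // 2, 0, w - inset))
        frame[y0:y0 + inset, x0:x0 + inset] = render_fn(y0, x0, inset, inset, 1)
        return frame

Rendering only the gaze-centered inset at full resolution is what yields the processing savings relied on in the rationale above.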
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Held and Ollila as applied to claim 16 above, and further in view of Valli and Ratcliff.
With respect to Claim 20, claim 16 is incorporated; Held and Ollila combined do not teach the method further comprising: a step of generating the binocular disparity focal image by depth rendering; and a step of generating the binocular disparity focal image using a pre-trained deep learning model that includes Z-buffer algorithms and ray tracing algorithms.
Valli teaches a method (fig. 30A; ¶289) for providing an image in an image device, comprising: obtaining, by a sensor (fig. 1A, item 122; ¶74), user gaze information (fig. 30A, item 3008; ¶293); generating, by a processor (¶69, the server has a processor), an image (fig. 30A, items 3010, 3012, and 3014; ¶294-296; ¶298); and outputting, by a display (fig. 1A, HMD on user; ¶68; ¶81; ¶298), a binocular disparity focal image; the method further comprising: a step of generating the binocular disparity focal image by depth rendering (¶65, z-buffering; ¶76-77; ¶294-296); and a step of generating the binocular disparity focal image (¶65; ¶76-77; ¶88, z-buffering and ray tracing) using a pre-trained deep learning model that includes Z-buffer algorithms (¶57; ¶65; ¶76-77; ¶162; ¶295-296) and ray tracing algorithms (¶88; ¶234-235).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Held and Ollila, further comprising: a step of generating the binocular disparity focal image by depth rendering; and a step of generating the binocular disparity focal image using a pre-trained deep learning model that includes Z-buffer algorithms and ray tracing algorithms, as taught by Valli, so as to support 3D motion parallax or natural relations between objects of a captured scene (¶66).
Held, Ollila, and Valli combined do not teach wherein during training, the deep learning model uses a first binocular disparity focal image produced by dynamic foveated rendering as input data and outputs a second binocular disparity focal image generated based on ray tracing algorithms, and wherein the dynamic foveated rendering includes a rendering operation that forms the center of the binocular disparity focal image with high resolution and the periphery with low resolution.
Ratcliff teaches a method (fig. 14; ¶106) for providing an image in a head mounted image device (fig. 1, item 100; ¶26-27) comprising: forming, by an optical system, a plurality of focal planes (¶31; ¶69, “A focus-tunable design allows for presenting multiple image planes at different virtual distances creating an appearance of the volumetric image rather than just a single image surface”); obtaining, by a sensor (¶94; ¶112; ¶127), eye tracking information; generating, by a processor (fig. 15, item 1502; ¶122), a binocular disparity focal image (¶69); and outputting, by a display (fig. 15, item 5126), the binocular disparity focal image; wherein during training, the deep learning model utilizes a first binocular disparity focal image produced by dynamic foveated rendering as input data and outputs a second binocular disparity focal image generated based on ray tracing algorithms (¶94; ¶96-100), and wherein the dynamic foveated rendering includes a rendering operation that forms the center of the binocular disparity focal image with high resolution and the periphery with low resolution (¶112-113; ¶127).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined method of Held, Ollila, and Valli, wherein during training, the deep learning model utilizes a first binocular disparity focal image produced by dynamic foveated rendering as input data and outputs a second binocular disparity focal image generated based on ray tracing algorithms, and wherein the dynamic foveated rendering includes a rendering operation that forms the center of the binocular disparity focal image with high resolution and the periphery with low resolution, as taught by Ratcliff, so as to save processing resources (¶112).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA V Bocar whose telephone number is (571)272-0955. The examiner can normally be reached Monday - Friday 8:30am to 5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr A Awad can be reached at (571)272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONNA V Bocar/ Examiner, Art Unit 2621