DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on February 19, 2026 was filed after the filing date of the application, May 23, 2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-3, 5-12, 14-17, and 19-22 have been considered but are moot because new grounds of rejection are made in view of Ninan (US 20240272712A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 5, 6, 8, 15, and 20-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Apodaca (US 20180286106A1) and Ninan (US 20240272712A1).
As per Claim 1, Apodaca teaches a method comprising: at a device (100) including a display (110A), one or more processors (102), and non-transitory memory (104) (computing system 100 includes one or more processor(s) 102 and a system memory 104, display device 110A, [0031]): obtaining gaze information; obtaining, based on the gaze information (focal point image plane 602 represents a plane in the scene that is perpendicular to the direction of view for the scene at a depth in the z-axis, [0129], a depth of field represents a range of distances along the z-axis both in front of and behind the focal point image plane 602, [0130]), a first resolution function and a second resolution function different than the first resolution function; rendering a first layer (601) based on first virtual content and the first resolution function (user interface (UI) image plane 601 to permit graphics objects that typically contain text-based objects be rendered at a higher resolution, [0134]; Fig. 6 shows that the user interface image plane 601 is perpendicular to the direction of view at a depth that is closest to the user); rendering a second layer (604) based on second virtual content and the second resolution function (graphical objects located behind the focal point image plane 602 at a depth along the z-axis that is beyond the depth of field are rendered into a background image plane 604 that utilizes a lower resolution for graphical objects, [0132]); compositing the first layer (601) and the second layer (604) into an image; and displaying, on the display, the image (Fig. 6 shows that a user views the user interface image plane 601 composited with the background image plane 604).
However, Apodaca does not teach wherein the first resolution function has a first value at a first location and a second value less than the first value at a second location. However, Ninan teaches the first resolution function has a first value at a first location (one or more screen image portions corresponding to the real-time foveal or focus vision field portion of the viewer in the screen images may be streamed with relatively high quality video/image data and rendered on the screen image display (104) with relatively high spatial resolution, [0112]) and a second value less than the first value at a second location (non-foveal-vision image portions corresponding to the real-time non-foveal or non-focus vision field portions of the viewer in the screen images may be streamed with relatively low quality video/image data and rendered on the screen image display (104) with relatively low spatial resolution, [0113]); rendering a first layer (screen image 104) based on first virtual content and the first resolution function (3D objects are rendered by the AR image display (102) and the screen image display (104), [0039], viewer can track any of these 3D objects as if such 2D object were actually present in the 3D physical space, as the viewer moves, all of the depicted objects in the rendered images may move with the viewer, through the combination of the AR images and the screen images, the viewer can get a psychovisual feeling of the 3D object being floating around, [0050], [0112, 0113]); rendering a second layer (AR image 102) based on second virtual content and the second resolution function (for example, the screen image display (104) may be capable of rendering finer spatial resolution as compared with the AR image display (102), [0088], [0039, 0050]); compositing the first layer and the second layer into an image (screen images can be rendered on the screen image display (104) while AR images can be rendered on the AR image display (102), the viewer can simultaneously watch the screen images and see additional 3D objects in the AR images rendered with the AR image display (102), [0049], Fig. 1A).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Apodaca so that the first resolution function has a first value at a first location and a second value less than the first value at a second location because Ninan suggests that this way, image portions corresponding to the real-time foveal or focus vision field portion of the viewer in the images are rendered on the display with high resolution [0112] and image portions corresponding to the real-time non-foveal or non-focus vision field portions of the viewer in the images are rendered on the display with low resolution in order to save processing power and render more quickly [0113].
As per Claim 5, Apodaca teaches wherein the first layer (601) is a foreground layer and the second layer is a background layer (604) (shown in Fig. 6).
As per Claim 6, Apodaca teaches wherein the first virtual content (601) includes a user interface element of an application [0134].
As per Claim 8, Apodaca teaches wherein the second virtual content (604) includes a virtual environment ([0132], experiencing an immersive environment such as a virtual reality environment, [0146]).
As per Claim 15, Claim 15 is similar in scope to Claim 1, except that Claim 15 has the additional limitation wherein the first resolution function is a function of location on the display. Apodaca does not teach wherein the first resolution function is a function of location on the display. However, Ninan teaches wherein the first resolution function is a function of location on the display [0112, 0113]. This would be obvious for the reasons given in the rejection for Claim 1. Thus, Claim 15 is rejected under the same rationale as Claim 1 along with this teaching from Ninan.
As per Claim 20, Claim 20 is similar in scope to Claim 1, except that Claim 20 is directed to a non-transitory memory storing one or more programs, which, when executed by one or more processors of a device, cause the device to perform the method, the gaze information indicating a gaze location; wherein the first resolution function varies as a function of distance from the gaze location. Apodaca teaches a non-transitory memory storing one or more programs, which, when executed by one or more processors of a device, cause the device to perform the method (the illustrated method may be implemented as modules in a set of logic instructions stored in a computer-readable storage medium, [0139]).
However, Apodaca does not teach wherein the first resolution function varies as a function of distance from the gaze location. However, Ninan teaches obtaining gaze information indicating a gaze location; wherein the first resolution function varies as a function of distance from the gaze location [0112, 0113]. This would be obvious for the reasons given in the rejection for Claim 1. Thus, Claim 20 is rejected under the same rationale as Claim 1 along with this teaching from Ninan.
As per Claim 21, Apodaca does not teach wherein one or more processors are to obtain the first resolution function using a function of location on the display with a set of parameters set to a first set of values and to obtain the second resolution function using the function with the set of parameters set to a second set of values different than the first set of values. However, Ninan teaches screen image portions corresponding to the real-time foveal or focus vision field portion of the viewer in the screen images are rendered on the screen image display (104) with high spatial resolution [0112]. Non-foveal-vision image portions corresponding to the real-time non-foveal or non-focus vision field portions of the viewer in the screen images are rendered on the screen image display (104) with low spatial resolution [0113]. Ninan teaches for example, the screen image display (104) may be capable of rendering finer spatial resolution as compared with the AR image display (102) [0088]. Thus, for the first resolution function, the foveal vision field portions are set to high resolution and the non-foveal-vision image portions are set to low resolution, and this is a set of parameters set to a first set of values. For the second resolution function, the resolution is set to lower resolution, and this is the set of parameters set to a second set of values (lower resolution) different than the first set of values, because the second set of values is a lower resolution than the first set of values. Thus, Ninan teaches wherein one or more processors are to obtain the first resolution function using a function of location on the display with a set of parameters set to a first set of values [0112, 0113] and to obtain the second resolution function using the function with the set of parameters set to a second set of values different than the first set of values [0088]. This would be obvious for the reasons given in the rejection for Claim 1.
As per Claim 22, Apodaca does not teach wherein one of the second set of values is different than a corresponding one of the first set of values and others of the second set of values are the same as corresponding others of the first set of values. However, Ninan teaches screen image portions corresponding to the real-time foveal or focus vision field portion of the viewer in the screen images are rendered on the screen image display (104) with high spatial resolution [0112]. Non-foveal-vision image portions corresponding to the real-time non-foveal or non-focus vision field portions of the viewer in the screen images are rendered on the screen image display (104) with low spatial resolution [0113]. Ninan teaches for example, the screen image display (104) may be capable of rendering finer spatial resolution as compared with the AR image display (102) [0088]. Thus, for the first resolution function, the foveal vision field portions are set to high resolution and the non-foveal-vision image portions are set to low resolution, and this is a set of parameters set to a first set of values. For the second resolution function, the resolution is set to lower resolution, and this is the set of parameters set to a second set of values. Thus, for the foveal vision field portions, the second set of values is a lower resolution than a corresponding one of the first set of values. Thus, it would have been obvious to one of ordinary skill in the art that for the non-foveal-vision image portions, the second set of values could be the same low resolution as corresponding others of the first set of values. Thus, Ninan teaches wherein one of the second set of values is different than a corresponding one of the first set of values and others of the second set of values are the same as corresponding others of the first set of values [0112, 0113, 0088]. This would be obvious for the reasons given in the rejection for Claim 1.
Claim(s) 1 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brannan (US 20210278678A1) and Ninan (US 20240272712A1).
As per Claim 1, Brannan teaches a method comprising: at a device (300) including a display (330), one or more processors (310), and non-transitory memory (340) (computer system 300 comprising hardware elements, the hardware elements include one or more central processing units 310, output devices 330 (display device), computer system 300 may also include storage devices 340, [0029]): obtaining gaze information (positioning of the foveal region within the full image is dictated by a gaze point obtained from an eye tracker, [0019]); obtaining, based on the gaze information, a first resolution function and a second resolution function different than the first resolution function; rendering a first layer based on first virtual content and the first resolution function; rendering a second layer based on second virtual content and the second resolution function (background image has a lower resolution than the foreground image, foreground image has second dimensions that are equal to a foveated region located within the image display region, [0015]); compositing the first layer and the second layer into an image (generates a composite image by scaling up the background image to the image display region and by overlaying the foreground image, [0015]); and displaying, on the display, the image (presents the composite image in the image display region of the display, [0015]).
However, Brannan does not teach wherein the first resolution function has a first value at a first location and a second value less than the first value at a second location. However, Ninan teaches the first resolution function has a first value at a first location [0112] and a second value less than the first value at a second location [0113]; rendering a first layer (screen image 104) based on first virtual content and the first resolution function [0039, 0050, 0112, 0113]; rendering a second layer (AR image 102) based on second virtual content and the second resolution function [0088, 0039, 0050]; compositing the first layer and the second layer into an image [0049] (Fig. 1A).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brannan so that the first resolution function has a first value at a first location and a second value less than the first value at a second location because Ninan suggests that this way, image portions corresponding to the real-time foveal or focus vision field portion of the viewer in the images are rendered on the display with high resolution [0112] and image portions corresponding to the real-time non-foveal or non-focus vision field portions of the viewer in the images are rendered on the display with low resolution in order to save processing power and render more quickly [0113].
As per Claim 10, Brannan teaches further comprising: rendering a third layer (foreground for right eye) based on the first virtual content and a third resolution function (high resolution); rendering a fourth layer (background for right eye) based on the second virtual content and a fourth resolution function (low resolution); compositing the third layer and the fourth layer into an additional image (right eye image); and displaying, on the display, the additional image concurrently with the image (left eye image) (foveated rendering can be performed per eye gazing at a separate display (for stereoscopic display, such as in a virtual reality headset, where a left image and a right image are displayed from the left eye and right eye, respectively), the foveated rendering involves a low resolution background image, a high resolution foreground, and a corresponding composite image per eye (a total of two background images, two foreground images, and two composite images), in the case of stereoscopic display, the embodiments similarly apply by using a second set of images for the second eye, [0036]).
Claim(s) 2, 3, 16, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Apodaca (US 20180286106A1) and Ninan (US 20240272712A1) in view of Nagamine (US 20160366425A1).
As per Claim 2, Apodaca and Ninan are relied upon for the teachings as discussed above relative to Claim 1.
However, Apodaca and Ninan do not teach wherein the second resolution function has a lower maximum than the first resolution function. However, Nagamine teaches wherein the second resolution function has a lower maximum than the first resolution function (changes the resolution of the high-quality video image to 640x360, and changes the resolution of the low-quality video image to 160x90, i.e., changes the setting so as to lower the maximum resolution, [0113]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Apodaca and Ninan so that the second resolution function has a lower maximum than the first resolution function because Nagamine suggests that this ensures that resolution does not exceed the maximum resolution, and thus the resolution is not more than necessary, which reduces power consumption and reduces the bandwidth that is used [0003, 0113].
As per Claim 3, Apodaca and Ninan do not teach wherein the second resolution function has a lower minimum, faster drop-off, or narrower width than the first resolution function. However, Nagamine teaches wherein the second resolution function has a lower minimum, faster drop-off, or narrower width than the first resolution function (changes the resolution of the high-quality video image to 640x360, and changes the resolution of the low-quality video image to 160x90, [0113]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Apodaca and Ninan so that the second resolution function has a lower minimum, faster drop-off, or narrower width than the first resolution function because Nagamine suggests that this ensures the resolution is not more than necessary, which reduces power consumption and reduces the bandwidth that is used [0003, 0113].
As per Claims 16-17, these claims are similar in scope to Claims 2-3 respectively, and therefore are rejected under the same rationale.
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Apodaca (US 20180286106A1) and Ninan (US 20240272712A1) in view of Kim (US 20230222957A1).
Apodaca and Ninan are relied upon for the teachings as discussed above relative to Claim 6.
However, Apodaca and Ninan do not teach wherein at least one of the first resolution function and the second resolution function is based on a type of the application. However, Kim teaches wherein at least one of the first resolution function and the second resolution function is based on a type of the application (determine, to the higher resolution, the resolution of an application, which needs to be displayed on the display with the higher image quality in which the application includes an application (document application) in a type requiring higher readability due to a large number of texts provided in an execution screen, or an application (video application) in a type requiring higher visibility, determine, to a lower resolution, the resolution of an application in a type, in which image quality (resolution) displayed on the display is relatively less important (bank application, navigation application, music applications), [0072]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Apodaca and Ninan so that at least one of the first resolution function and the second resolution function is based on a type of the application because Kim suggests that this way, applications in a type requiring higher readability or higher visibility have the higher resolution, and applications in a type in which image quality is relatively less important have the lower resolution to reduce power consumption [0008, 0072].
Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Apodaca (US 20180286106A1) and Ninan (US 20240272712A1) in view of Abrams (US 20180165596A1).
As per Claim 9, Apodaca and Ninan are relied upon for the teachings as discussed above relative to Claim 1.
However, Apodaca and Ninan do not teach wherein at least one of the first resolution function and the second resolution function is based on a power consumption. However, Abrams teaches wherein at least one of the first resolution function and the second resolution function is based on a power consumption (rendering computer graphics avatar during a teleconference, tailor the resolution of the avatar based on the graphical processing power associated with the user platform, [0068]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Apodaca and Ninan so that at least one of the first resolution function and the second resolution function is based on a power consumption because Abrams suggests that this optimizes the resolution based on the graphical processing power associated with the user platform [0068].
As per Claim 19, Claim 19 is similar in scope to Claim 9, and therefore is rejected under the same rationale.
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brannan (US 20210278678A1) and Ninan (US 20240272712A1) in view of Hamilton (US 20200275076A1).
Brannan and Ninan are relied upon for the teachings as discussed above relative to Claim 10.
However, Brannan and Ninan do not expressly teach wherein the third resolution function and the fourth resolution function are a common resolution function. However, Hamilton teaches wherein the third resolution function and the fourth resolution function are a common resolution function (based on a physical observer’s distance to the display we can create an equation which estimates the observed resolution as a function of distance of the objects from the observer, [0156]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brannan and Ninan so that the third resolution function and the fourth resolution function are a common resolution function because Hamilton suggests that the resolutions can be calculated by using this common resolution function, so that the closer the objects are, the higher the calculated resolution is, so that the objects that the user does not care about that are far away have a lower resolution, and thus this reduces the power consumption [0156].
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brannan (US 20210278678A1) and Ninan (US 20240272712A1) in view of Kamariotis (US 20060150224A1) and Nagamine (US 20160366425A1).
Brannan and Ninan are relied upon for the teachings as discussed above relative to Claim 10.
However, Brannan and Ninan do not teach wherein the third resolution function and the fourth resolution function are different resolution functions. However, Kamariotis teaches wherein the resolution function is obtained with a formula with the maximum resolution and the minimum resolution (subtract max resolution from the min resolution, [0054]). This would be obvious for the reasons given in the rejection for Claim 4.
However, Brannan, Ninan, and Kamariotis do not teach wherein the third resolution function and the fourth resolution function are different resolution functions. However, Nagamine teaches the third resolution function and the fourth resolution function have different maximum resolutions (changes the resolution of the high-quality video image to 640x360, and changes the resolution of the low-quality video image to 160x90, i.e., changes the setting so as to lower the maximum resolution, [0113]). Since Kamariotis teaches wherein the resolution function is obtained with a formula with the maximum resolution and the minimum resolution [0054], this teaching from Nagamine can be implemented into the device of Kamariotis so that the third resolution function and the fourth resolution function are different resolution functions, because they have different maximum resolutions, and thus putting those different maximum resolutions into the formula would result in different resolution functions. This would be obvious for the reasons given in the rejection for Claim 2.
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Apodaca (US 20180286106A1) and Ninan (US 20240272712A1) in view of Schluessler (US 20180284872A1).
Apodaca and Ninan are relied on for the teachings as discussed above relative to Claim 1.
However, Apodaca and Ninan do not teach further comprising: rendering a third layer based on third virtual content; extrapolating a fourth layer based on the second layer; compositing the third layer and the fourth layer into an additional image; and displaying, on the display after displaying the image, the additional image. However, Schluessler teaches further comprising: rendering a third layer (foreground) based on third virtual content; extrapolating (predicting) a fourth layer based on the second layer (background); compositing the third layer and the fourth layer into an additional image; and displaying, on the display after displaying the image (prior frame), the additional image (current frame that is predicted based on the prior frame) (decouple and separate the foreground and the background to render the foreground and the background at different frame rates, apply predictive techniques to reduce motion artifacts, [0178]; prior frame may be predictive of the current frame rendering behavior, [0234]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Apodaca and Ninan to include rendering a third layer based on third virtual content; extrapolating a fourth layer based on the second layer; compositing the third layer and the fourth layer into an additional image; and displaying, on the display after displaying the image, the additional image as suggested by Schluessler. Schluessler suggests that by decoupling rendering of the foreground from the background, the whole frame buffer does not need to be rendered at the same frame rate. The objects in front may be rendered at a higher frequency and the objects at the back may be rendered at a lower frequency, based on an assumption that the objects at the front are what the user cares about, and thus this reduces the power consumption [0181].
Allowable Subject Matter
Claim 13 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONI HSU whose telephone number is (571)272-7785. The examiner can normally be reached M-F 10am-6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JH
/JONI HSU/Primary Examiner, Art Unit 2611