Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/12/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claim 20 is objected to because it should depend from independent claim 19, which is directed to a computer-readable storage medium, rather than from independent claim 10, which is directed to a system.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “sparsely” in claim 1 is a relative term which renders the claim indefinite. The term “sparsely” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. First, no indication is provided in the specification that the term “sparsely” is a term of art. Next, a review of the specification reveals that the term “sparsely” is used in the context of populating data, but nowhere does the specification provide any standard or test for determining when data are populated “sparsely.” For this reason, claim 1 is indefinite.
Independent claims 10 and 19 recite the same term and are rejected for the same reason. Dependent claims 2-9, 11-18, and 20 are rejected because they depend from independent claims 1, 10, and 19, respectively.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claims 19-20 recite "A computer-readable storage medium …”. The broadest reasonable interpretation of a claim drawn to a computer readable medium or a variation thereof covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009; p. 2. Applicant is advised to narrow the claims to cover only statutory embodiments. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, for example by adding the limitation “non-transitory” to the claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4-5, 9-11, 13-14, and 18-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Turner et al. (US 10553016 B2) (Hereinafter referred to as Turner).
Regarding claim 1, Turner discloses A method for selectively upsampling sensor data for display via an artificial reality (XR) system, the method comprising: (Abstract, “A display device, such as a head mounted device (HMD), displays a virtual scene…. The processor is further configured to render pixels of the set of those world-aligned viewing frustums that overlap with an output field of view and upsample the rendered pixels to generate values of display pixels for presentation by the display device.")
capturing, using one or more image capturing devices, visual data of real-world surroundings of the XR system; (Col 3 Lines 3-6, “The image acquisition and display system 100 includes an image acquisition device 104 that is used to acquire two-dimensional (2-D) images of a scene for presentation to a user via the electronic device 102.”)
assigning portions of the captured visual data to two or more categories;
selectively upsampling, for portions of the captured visual data assigned to a first category of the two or more categories, a resolution of the captured visual data; (Col 4 Line 45-53, “Pixels are rendered at high resolution within the high-acuity regions 210, 218, e.g., by rendering the pixels at a resolution that is equal to the native resolution supported by the display. Pixels in the low-acuity regions 214, 216, 222, 224 are rendered at lower resolutions, thereby reducing the power and computing resources needed to render the pixels. The rendered pixels in the low-acuity regions 214, 216, 222, 224 are subsequently upsampled to generate display pixels at the native resolution of the display,”)
populating a data structure by: sparsely populating a high-quality level of the data structure with the selectively upsampled data; and populating one or more other levels of the data structure using the captured visual data; and (Col 13 Lines 23-37, “the processor of the electronic device upsamples the rendered array of pixels of block 712 to generate values of display pixels for presentation in the final output display image by a display in the processing unit. For example, the rendered pixels can be upsampled to the native resolution of the display…. In this manner, the low acuity screens that make up the low acuity regions are upsampled and the display pixel values are merged with display pixels of a high acuity screen (not discussed herein) to generate a merged, full-resolution image.”)
rendering, by the XR system, a passthrough visual display of the captured real-world surroundings of the XR system, wherein portions of the passthrough visual display that correspond to the captured visual data assigned to the first category are rendered using the sparsely populated high-quality level of the data structure. (Block 712 in Fig 7; Col 13 Lines 4-8, “At block 712, a processor of the electronic device (e.g., a graphics processor unit) renders an array of pixels for each of the subset of the plurality of fixed viewing frustums identified at block 706 to overlap with the final output display image.")
[media_image1.png: greyscale image, 671 × 428]
Regarding claim 2, Turner discloses The method of claim 1, wherein portions of the passthrough visual display that correspond to the portions of the captured visual data not assigned to the first category are rendered using the populated one or more other levels of the data structure. (Col 4 Lines 48-50, “Pixels in the low-acuity regions 214, 216, 222, 224 are rendered at lower resolutions, thereby reducing the power and computing resources needed to render the pixels.”)
Regarding claim 4, Turner discloses The method of claim 1, further comprising: performing eye tracking of a user of the XR system, wherein the portions of the captured visual data are assigned to the two or more categories based on the performed eye tracking. (Col 4 Lines 29-35, “The display 208 is therefore subdivided into different regions based on a distance from the user's center of gaze, e.g., the eccentricity. For example, the field-of-view for the user's left eye can be subdivided into a high-acuity region 210 that surrounds a central gaze direction 212. The field-of-view for the user's left eye is further subdivided into lower-acuity regions 214, 216 in the visual periphery.”)
Regarding claim 5, Turner discloses The method of claim 4, wherein:
the portions of the captured visual data assigned to the first category correspond to a center region of a user’s field of view relative to the performed eye tracking, the portions of the captured visual data assigned to another category of the two or more categories correspond to a peripheral region of the user’s field of view relative to the performed eye tracking, and the rendered passthrough visual display comprises a foveated display comprising portions with different resolution levels. (FIG. 2; Col 4 Lines 27-43, “The electronic device 202 implements foveated rendering to present images to the user. The display 208 is therefore subdivided into different regions based on a distance from the user's center of gaze, e.g., the eccentricity. For example, the field-of-view for the user's left eye can be subdivided into a high-acuity region 210 that surrounds a central gaze direction 212. The field-of-view for the user's left eye is further subdivided into lower-acuity regions 214, 216 in the visual periphery. Similarly, the field-of-view for the user's right eye can be subdivided into a high acuity region 218 that surrounds a central gaze direction 220 and lower acuity regions 222, 224 in the visual periphery. The central gaze directions 212, 220 can be set equal to the center of a current field-of-view or they can be determined on the basis of eye tracking measurements that detect the central gaze direction of the user's eyes.”)
[media_image2.png: greyscale image, 463 × 650]
Regarding claim 9, Turner discloses The method of claim 1, wherein the passthrough visual display is rendered according to a frequency, and the selective upsampling is performed for a subset of instances rendered according to the frequency or each of the instances rendered according to the frequency. (Col 13 Lines 42-46, “Because the pixel rendering of block 712 is performed from only a limited number of perspectives (i.e., six for a world-aligned cube), upsampling the values of those rendered pixels will create the same pattern of aliasing artifacts within each low acuity screens.”)
Regarding claim 10, claim 10 is similar in scope to claim 1 and is rejected under the same rationale.
Regarding claim 11, claim 11 is similar in scope to claim 2 and is rejected under the same rationale.
Regarding claim 13, claim 13 is similar in scope to claim 4 and is rejected under the same rationale.
Regarding claim 14, claim 14 is similar in scope to claim 5 and is rejected under the same rationale.
Regarding claim 18, claim 18 is similar in scope to claim 9 and is rejected under the same rationale.
Regarding claim 19, claim 19 is similar in scope to claim 1 and is rejected under the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 7-8, 12, 16-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Turner et al. (US 10553016 B2) (Hereinafter referred to as Turner) in view of Szabolcs et al. (GB 2605472 A) (Hereinafter referred to as Szabolcs).
Regarding claim 3, Turner discloses The method of claim 1, to selectively upsample the captured visual data. (Col 3 Lines 3-6 of Turner, “The image acquisition and display system 100 includes an image acquisition device 104 that is used to acquire two-dimensional (2-D) images of a scene for presentation to a user via the electronic device 102.”; Abstract, “The processor is further configured to render pixels of the set of those world-aligned viewing frustums that overlap with an output field of view and upsample the rendered pixels to generate values of display pixels for presentation by the display device.”)
However, Turner does not explicitly disclose wherein bicubic sampling is performed.
Szabolcs more explicitly teaches, in the context of upsampling sensor data, wherein bicubic sampling is performed. (para. [00131] of Szabolcs, “the mipmaps may be sampled using bicubic sampling. Bilinear, trilinear and bicubic sampling are commonly used in texture-sampling applications of mipmaps”)
As both Turner and Szabolcs are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include wherein bicubic sampling is performed, in the context of upsampling sensor data, by Turner according to the teaching of Szabolcs in order to conserve power and computing resources used to generate digital images (Background of Turner).
Regarding claim 7, Turner discloses The method of claim 1, for the portions of the captured visual data assigned to the first category. (Col 4 Lines 29-35 of Turner, “The display 208 is therefore subdivided into different regions based on a distance from the user's center of gaze, e.g., the eccentricity. For example, the field-of-view for the user's left eye can be subdivided into a high-acuity region 210 that surrounds a central gaze direction 212.”)
However, Turner does not explicitly disclose wherein the data structure comprises a hierarchical Mipmap, and the high-quality level of the hierarchical Mipmap is sparsely populated via the selective upsampling.
Szabolcs more explicitly teaches, in the context of upsampling sensor data, wherein the data structure comprises a hierarchical Mipmap, and the high-quality level of the hierarchical Mipmap is sparsely populated via the selective upsampling. (para. [00131] of Szabolcs, “the mipmap pyramid may be sampled bilinearly at a single level of detail. In still other examples, the mipmaps may be sampled between levels using trilinear sampling.”)
As both Turner and Szabolcs are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the data structure comprises a hierarchical Mipmap, and the high-quality level of the hierarchical Mipmap is sparsely populated via the selective upsampling, in the context of upsampling sensor data, by Turner according to the teaching of Szabolcs in order to conserve power and computing resources used to generate digital images (Background of Turner).
Regarding claim 8, Turner discloses using the captured visual data and/or (Col 4 Lines 29-35 of Turner, “The display 208 is therefore subdivided into different regions based on a distance from the user's center of gaze, e.g., the eccentricity. For example, the field-of-view for the user's left eye can be subdivided into a high-acuity region 210 that surrounds a central gaze direction 212.”)
However, Turner does not explicitly disclose wherein the one or more other levels of the hierarchical Mipmap are populated using the captured visual data after downsampling.
Szabolcs more explicitly teaches, in the context of upsampling sensor data, wherein the one or more other levels of the hierarchical Mipmap are populated the captured visual data after downsampling. (para. [00127] of Szabolcs, “Mipmapping as such is known in the computer graphics literature, where it has been applied in the context of texture sampling. It uses a scale space pyramid, in which each level of the pyramid (sometimes referred to in the art as a "chain") is produced by downsampling the preceding level.”)
As both Turner and Szabolcs are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the one or more other levels of the hierarchical Mipmap are populated using the captured visual data after downsampling, in the context of upsampling sensor data, by Turner according to the teaching of Szabolcs in order to conserve power and computing resources used to generate digital images (Background of Turner).
Regarding claim 12, claim 12 is similar in scope to claim 3 and is rejected under the same rationale.
Regarding claim 16, claim 16 is similar in scope to claim 7 and is rejected under the same rationale.
Regarding claim 17, claim 17 is similar in scope to claim 8 and is rejected under the same rationale.
Regarding claim 20, claim 20 is similar in scope to claim 7 and is rejected under the same rationale.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Turner et al. (US 10553016 B2) (Hereinafter referred to as Turner) in view of Price et al. (US 12080012 B2) (Hereinafter referred to as Price).
Regarding claim 6, Turner does not explicitly disclose further comprising:
displaying, via display hardware of the XR system, the rendered passthrough visual display, wherein a resolution of the one or more image capturing devices is less than a resolution of the display hardware.
However, Price more explicitly teaches, in the context of upsampling sensor data, further comprising: displaying, via display hardware of the XR system, the rendered passthrough visual display, wherein a resolution of the one or more image capturing devices is less than a resolution of the display hardware. (Col 5 Lines 38-47 of Price, “a high-resolution parallax-corrected image may be generated...In this regard, an HMD may implement a low-resolution stereo camera pair for capturing low-resolution images (for generating low-resolution depth maps)”)
As both Turner and Price are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include further comprising: displaying, via display hardware of the XR system, the rendered passthrough visual display, wherein a resolution of the one or more image capturing devices is less than a resolution of the display hardware, in the context of upsampling sensor data, by Turner according to the teaching of Price in order to conserve power and computing resources used to generate digital images (Background of Turner).
Regarding claim 15, claim 15 is similar in scope to claim 6 and is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hyorim Park whose telephone number is (571)272-3859. The examiner can normally be reached Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Hyorim Park/Examiner, Art Unit 2619
/JASON CHAN/Supervisory Patent Examiner, Art Unit 2619