Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is responsive to the application filed July 26, 2024. Claims 1-15 are presented for examination. Claims 1, 14, and 15 are independent claims.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d), based on Application No. 2312638.6 filed in the United Kingdom on August 18, 2023, which papers have been placed of record in the file.
Oath/Declaration
The Office acknowledges receipt of a properly signed Oath/Declaration submitted August 21, 2024.
Information Disclosure Statement
The Applicant’s Information Disclosure Statements filed July 26, 2024 and May 29, 2025 have been received, entered into the record, and considered.
Drawings
The drawings filed July 26, 2024 are accepted by the examiner.
Abstract
An abstract has not been filed. The abstract should be limited to 150 words. Correction is required. See MPEP § 608.01(b).
Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words. It is important that the abstract not exceed 150 words in length since the space provided for the abstract on the computer tape used by the printer is limited. The form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 14 recites a computer program comprising computer executable instructions; a computer program per se, not claimed in combination with a non-transitory computer-readable medium, does not fall within any of the four statutory categories of invention (process, machine, manufacture, or composition of matter). See MPEP § 2106.01.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claim 15 in this application is given its broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the terms “an identification processor configured to,” “a first geometry processor configured to,” “a storage processor configured to,” “a second geometry processor configured to,” and “a lighting processor configured to” in claim 15, with functional language, creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Aksoy et al. (US 11721064 B1, cited in an IDS) in view of Fursund (US 20210383599 A1, cited in an IDS).
As to Claim 1:
Aksoy et al. discloses a method for rendering a virtual environment (Aksoy, see Abstract, where Aksoy discloses that a method includes a server generating first shading information for visible portions of objects relative to a first viewpoint, storing the first shading information in a texture atlas, and sending the texture atlas to a client device. The method also includes determining a first subset of the visible portions of the objects for which shading information is to be re-generated and a second subset for which elements of the first shading information are to be reused, generating second shading information for the first subset relative to a second viewpoint, updating the texture atlas to include the second shading information for the first subset and the elements of the first shading information for the second subset, and sending the updated texture atlas to the client device. The updated texture atlas is configured for rendering images of the visible portions of the objects from multiple viewpoints), the method comprising:
identifying one or more static elements in the virtual environment (Aksoy, see column 16, lines 43-54, where Aksoy discloses that the application may override the recommended re-shading list in response to detecting a motion in the application that will affect the shading of one or more visible portions of the objects or triangles that were not recommended for re-shading. In particular embodiments, a motion vector may be generated from the contents of the texture as it is being rendered and, based on the magnitude of the motion vector (e.g., if the magnitude is above or below a predetermined threshold), the shading of the affected portion of the scene may be updated at a higher or lower rate. For example, if the magnitude of the motion vector is very small, any change in shading may be imperceptible);
determining, for a first frame having a first camera position, a geometry of the static elements in the virtual environment (Aksoy, see column 15, lines 31-39, where Aksoy discloses that one factor that contributes to a determination about which triangles to re-shade and which triangles to reuse is geometry, e.g., the relative geometries of two or more of the visible objects, or respective portions thereof. A list of triangles that were visible in the previous frame is saved. For the current frame, depending on changes to the user's viewpoint and object changes, a new list of triangles that are visible may be estimated);
storing the geometry of the static elements for the first frame (Aksoy, see column 15, lines 31-39, as quoted above, where Aksoy discloses that a list of triangles that were visible in the previous frame is saved);
determining, for a second frame having the first camera position, a geometry of at least part of the virtual environment based, at least in part, on the stored geometry of the static elements for the first frame (Aksoy, see column 1, lines 43-51, where Aksoy discloses reducing the computational burden of real-time graphics rendering by eliminating the need to perform a shading operation on every object, or portion thereof, separately for each video frame in which it is visible, resulting in smoother videos and reduced power consumption. The method determines which objects that are visible in a scene, or portions thereof, should be re-shaded for a current frame and which can be rendered using shading results from a previous frame); and
determining, for the second frame, lighting for the at least part of the virtual environment at least in part based on the geometry of the at least part of the virtual environment determined for the second frame, to render the at least part of the virtual environment (Aksoy, see column 15, lines 46-59, where Aksoy discloses that when the relative positions of two or more visible objects, or portions thereof, change, this might also trigger a re-shading of one or more of the objects, or portions thereof. Similarly, even if a particular object is stationary, there might be another object affecting its shading, such as by casting a shadow onto the surface of the stationary object that might trigger an update. For example, portions of the stationary object may need to be re-shaded if the silhouette or edge of the shadow moves across the surface of the stationary object to make sure the shadow appears to be moving across the surface relatively smoothly. However, if the stationary object is completely in shadow or completely not in shadow, there would be no need for re-shading).
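For illustration only, the re-shading policy described in the Aksoy passages quoted above (saving the list of previously visible triangles and deciding, per triangle, whether to re-shade or to reuse prior shading based on motion-vector magnitude) can be sketched in Python as follows. All names, the threshold value, and the data layout are hypothetical and are not drawn from the Aksoy reference itself.

    # Hypothetical sketch: partition visible triangles into a re-shade list and
    # a reuse list, per the motion-vector policy quoted from Aksoy above.
    from dataclasses import dataclass

    @dataclass
    class Triangle:
        tri_id: int
        motion_magnitude: float  # magnitude of the motion vector at this triangle

    def partition_for_shading(visible, prev_visible_ids, reshade_threshold=0.05):
        """Return (reshade, reuse) lists of triangle ids for the current frame."""
        reshade, reuse = [], []
        for tri in visible:
            newly_visible = tri.tri_id not in prev_visible_ids
            # A large motion vector means the shading change is perceptible, so
            # the triangle is re-shaded; otherwise prior shading is reused.
            if newly_visible or tri.motion_magnitude > reshade_threshold:
                reshade.append(tri.tri_id)
            else:
                reuse.append(tri.tri_id)
        return reshade, reuse

    # Example: triangle 2 moved noticeably and triangle 3 is newly visible.
    prev_ids = {1, 2}
    visible = [Triangle(1, 0.01), Triangle(2, 0.2), Triangle(3, 0.0)]
    print(partition_for_shading(visible, prev_ids))  # ([2, 3], [1])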
Aksoy differs from the claimed subject matter in that, while Aksoy discloses a camera (Aksoy, see column 4, lines 19-25, where Aksoy discloses that the headset 104 may include an audio device that may provide audio artificial reality content to the user 102. The headset 104 may include one or more cameras which can capture images and videos of environments. The headset 104 may include an eye tracking system to determine the vergence distance of the user 102. The headset 104 may be referred to as a head-mounted display (HMD). The controller 106 may comprise a trackpad and one or more buttons. The controller 106 may receive inputs from the user 102 and relay the inputs to the computing system 108), Aksoy does not explicitly disclose a virtual camera.
However, in an analogous art, Fursund discloses a virtual camera (Fursund, see paragraph [0041], where Fursund discloses that the lighting indication for the visible surface for a pixel may be determined by interpolating in 2 dimensions between the directional representations of lighting for a set of low-level probe positions for the pixel. Selection of valid local probes for interpolation may involve determining whether they are similar to the pixel's visible surface with regard to world-space position or depth from the virtual camera).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Aksoy with the teachings of Fursund. One would have been motivated to modify Aksoy to include a virtual camera as taught by Fursund because the accuracy of the determination of the lighting on the surfaces of the geometry of interest can thereby be improved (Fursund, see paragraph [0085]).
As to Claim 2:
Aksoy in view of Fursund discloses the method of claim 1, wherein the method is for generating a light probe in a virtual environment, and wherein the at least part of the virtual environment is rendered to one or more faces of the light probe (Fursund, see paragraphs [0007] and [0069], where Fursund discloses that "light probes" are directional representations of lighting at particular probe positions in the space of a scene which is being rendered. For example, a directional representation of lighting may be implemented as a spherical harmonic function which describes the lighting at the corresponding probe position in terms of spherical harmonic components. Once the light probes have been determined for a frame, the lighting at a surface which is visible in a pixel can be determined based on the lighting from the light probes that are near the point in space visible in the pixel. For example, an indication of the lighting for a particular pixel may be determined based on a weighted average of a set of nearest light probes to the visible point or object. Multiple probes may contribute values to the pixel through interpolation or a weighted average. The weights for the weighted average may be based on the distance between the pixel's visible position and the respective light probe position. The set of nearest light probes may include light probes for which the distance to the pixel position in world space is below a threshold value. The set of nearest light probes may include a predetermined number of light probes; for example, the ten nearest light probes to the pixel position may be included in the set. The set of light probes might only include light probes which are visible from the pixel position using line-of-sight calculations. The selection scheme may take into account an interval (e.g., an amount of time, or a number of frames) since the rays were most recently selected for a probe position. This can ensure that some ray directions are not neglected for a long time. Furthermore, the selection scheme may select directions for rays to be traced for a probe position based on directions selected for nearby probe positions).
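For illustration only, the distance-weighted light-probe interpolation described in the Fursund passages quoted above (a weighted average over the nearest probes within a world-space distance threshold, capped at a fixed number such as ten) can be sketched as follows. The function and variable names, the inverse-distance weighting, and the example values are hypothetical.

    # Hypothetical sketch: lighting at a pixel's visible world-space position as
    # a distance-weighted average of nearby light probes.
    import math

    def probe_weighted_lighting(position, probes, max_distance=10.0, k=10):
        """probes: list of (probe_position, rgb_lighting) tuples."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        # Keep probes within the distance threshold, then take the k nearest.
        nearby = sorted(
            (p for p in probes if dist(position, p[0]) < max_distance),
            key=lambda p: dist(position, p[0]),
        )[:k]
        if not nearby:
            return (0.0, 0.0, 0.0)

        # Inverse-distance weights; epsilon guards against division by zero.
        weights = [1.0 / (dist(position, p[0]) + 1e-6) for p in nearby]
        total = sum(weights)
        return tuple(
            sum(w * p[1][c] for w, p in zip(weights, nearby)) / total
            for c in range(3)
        )

    # Example: the pixel position is nearer to the first (warmer) probe.
    probes = [((0, 0, 0), (1.0, 0.9, 0.8)), ((4, 0, 0), (0.2, 0.2, 0.4))]
    print(probe_weighted_lighting((1, 0, 0), probes))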
As to Claim 3:
Aksoy in view of Fursund discloses the method of claim 1, further comprising identifying one or more dynamic elements in the virtual environment (Aksoy, see column 15, line 63 through column 16, line 5, where Aksoy discloses an animated character with head rotation, and column 16, lines 47-52, where Aksoy discloses motion detection).
As to Claim 4:
Aksoy in view of Fursund discloses the method of claim 3, wherein determining the geometry of the at least part of the virtual environment for the second frame comprises: determining a geometry of the dynamic elements for the second frame, and combining the stored geometry of the static elements for the first frame and the determined geometry of the dynamic elements for the second frame (Aksoy, see column 16, lines 47-57, where Aksoy discloses that portions of the scene with high motion are rendered with a higher update rate than static portions).
As to Claim 5:
Aksoy in view of Fursund discloses the method of claim 4, wherein combining the stored geometry of the static elements and the determined geometry of the dynamic elements is in dependence on the relative depth, from the first virtual camera position, of the static and dynamic elements (Aksoy, see column 12, lines 15-20, where Aksoy discloses rendering of subsequent meshes into an initial depth map, and column 13, lines 24-31, where Aksoy discloses relying on the previous depth map for rendering).
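For illustration only, a depth-dependent combination of stored static-element geometry with newly determined dynamic-element geometry, consistent with the depth-map passages of Aksoy cited above, can be sketched as follows. The per-pixel buffer layout and sample values are hypothetical.

    # Hypothetical sketch: per-pixel depth test that keeps whichever element
    # (cached static or current dynamic) is nearer to the first virtual camera.
    def combine_geometry(static_depth, static_ids, dynamic_depth, dynamic_ids):
        """Each argument is a flat list of per-pixel values; smaller depth = nearer."""
        out_depth, out_ids = [], []
        for sd, sid, dd, did in zip(static_depth, static_ids, dynamic_depth, dynamic_ids):
            if dd < sd:  # dynamic element occludes the cached static element
                out_depth.append(dd)
                out_ids.append(did)
            else:        # reuse the stored static geometry for this pixel
                out_depth.append(sd)
                out_ids.append(sid)
        return out_depth, out_ids

    static_depth  = [5.0, 5.0, 2.0]
    static_ids    = ["wall", "wall", "floor"]
    dynamic_depth = [3.0, 9.0, 9.0]   # 9.0 here stands in for "no dynamic element"
    dynamic_ids   = ["char", None, None]
    # Pixel 0 shows the dynamic character; the others reuse cached static geometry:
    print(combine_geometry(static_depth, static_ids, dynamic_depth, dynamic_ids))
    # ([3.0, 5.0, 2.0], ['char', 'wall', 'floor'])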
As to Claim 6:
Aksoy in view of Fursund discloses the method of claim 3, wherein the step of determining geometry for the second frame is performed in dependence on detecting movement of at least one of the dynamic elements between the first frame and the second frame (Aksoy, see column 15, line 63 through column 16, line 5, where Aksoy discloses an animated character with head rotation, and column 16, lines 47-52, where Aksoy discloses motion detection).
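For illustration only, gating the second-frame geometry determination on detected movement of a dynamic element, as recited in claim 6 and mapped above, can be sketched as follows; the movement threshold and data layout are hypothetical.

    # Hypothetical sketch: recompute dynamic geometry only when some dynamic
    # element has moved between the first and second frames.
    def needs_geometry_update(prev_positions, curr_positions, epsilon=1e-4):
        """True if any dynamic element moved more than epsilon between frames."""
        return any(
            sum((a - b) ** 2 for a, b in zip(prev_positions[k], curr_positions[k]))
            > epsilon ** 2
            for k in prev_positions
        )

    prev = {"char": (0.0, 0.0, 0.0)}
    curr = {"char": (0.1, 0.0, 0.0)}
    print(needs_geometry_update(prev, curr))  # True: re-determine dynamic geometry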
As to Claim 14:
Aksoy et al. discloses a computer program comprising computer executable instructions adapted to cause a computer system to perform a method for rendering a virtual environment (Aksoy, see Abstract, as quoted in the rejection of claim 1 above). The remaining limitations of claim 14 recite the method of claim 1, and claim 14 is therefore rejected on the same grounds and citations set forth above with respect to claim 1, including Fursund's disclosure of a virtual camera (Fursund, see paragraph [0041]) and the same motivation to combine (Fursund, see paragraph [0085]).
As to Claim 15:
Aksoy et al. discloses a system for rendering a virtual environment (Aksoy, see Abstract, as quoted in the rejection of claim 1 above), the system comprising an identification processor, a first geometry processor, a storage processor, a second geometry processor, and a lighting processor configured to perform, respectively, the identifying, determining, storing, determining, and lighting steps recited in claim 1. These limitations are taught by Aksoy in view of Fursund for the reasons and on the citations set forth above with respect to claim 1, including Fursund's disclosure of a virtual camera (Fursund, see paragraph [0041]) and the same motivation to combine (Fursund, see paragraph [0085]).
Allowable Subject Matter
Claims 7-13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Referring to claim 7 and dependent claims 8-9, the following is a statement of reasons for the indication of allowable subject matter: the prior art fails to teach or suggest the limitations “determining a geometry of the at least part of the virtual environment for the first frame; wherein, upon detecting that the dynamic elements have not moved between the first frame and the second frame, determining lighting for the second frame is at least partly based on the determined geometry of the at least part of the virtual environment for the first frame.”
Referring to claim 10, the following is a statement of reasons for the indication of allowable subject matter: the prior art fails to teach or suggest the limitations “wherein determining lighting for the at least part of the virtual environment for the second frame based on the determined geometry of the at least part of the virtual environment comprises: determining one or more output pixels for output to a display, and performing a lighting operation for each output pixel.”
Referring to claim 11 and dependent claim 12, the following is a statement of reasons for the indication of allowable subject matter: the prior art fails to teach or suggest the limitations “determining a virtual camera position for the second frame, and upon detecting that the second frame has a second, different, virtual camera position: determining, for the second frame, a geometry of the static elements in the virtual environment, and storing the geometry of the static elements for the second frame.”
Referring to claim 13, the following is a statement of reasons for the indication of allowable subject matter: the prior art fails to teach or suggest the limitations “identifying one or more fixed elements in the virtual environment, a fixed element being fixed relative to a virtual camera irrespective of the position of the virtual camera; determining, for the first frame, a geometry of the fixed elements in the virtual environment; storing the geometry of the fixed elements for the first frame; and determining the geometry for the second frame at least partly based on the stored geometry of the fixed elements for the first frame.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bradley (US 12236517 B2) discloses techniques for generating photorealistic images of objects, such as heads, from multiple viewpoints. In some embodiments, a morphable radiance field (MoRF) model that generates images of heads includes an identity model that maps an identifier (ID) code associated with a head into two codes: a deformation ID code encoding a geometric deformation from a canonical head geometry, and a canonical ID code encoding a canonical appearance within a shape-normalized space. The MoRF model also includes a deformation field model that maps a world space position to a shape-normalized space position based on the deformation ID code. Further, the MoRF model includes a canonical neural radiance field (NeRF) model that includes a density multi-layer perceptron (MLP) branch, a diffuse MLP branch, and a specular MLP branch that output densities, diffuse colors, and specular colors, respectively. The MoRF model can be used to render images of heads from various viewpoints.
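For illustration only, the three-branch canonical NeRF architecture described in Bradley (a shared feature trunk feeding separate density, diffuse, and specular MLP heads) can be sketched schematically as follows. The layer sizes are hypothetical, the weights are random placeholders rather than trained parameters, and applying ReLU on the output layers is a simplification.

    # Hypothetical, schematic sketch of a density/diffuse/specular MLP split.
    import random

    def mlp(dims):
        """Build random weights for a small fully connected network."""
        return [[[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
                for m, n in zip(dims, dims[1:])]

    def forward(net, x):
        for layer in net:
            # Each layer: dot product per output unit, then ReLU (schematic).
            x = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in layer]
        return x

    trunk    = mlp([3, 16, 16])   # shared features from a shape-normalized position
    density  = mlp([16, 1])       # density branch
    diffuse  = mlp([16, 3])       # diffuse-color branch
    specular = mlp([16, 3])       # specular-color branch

    features = forward(trunk, [0.1, -0.2, 0.3])
    print(forward(density, features), forward(diffuse, features), forward(specular, features))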
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NELSON ROSARIO whose telephone number is (571) 270-1866. The examiner can normally be reached Monday through Friday, 7:30 am - 5:00 pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason, can be reached at (571) 270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NELSON M ROSARIO/Primary Examiner, Art Unit 2624