Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This action is in response to Applicant’s amendments/remarks received on October 23, 2025.
3. Claims 1-7 and 14-26 are pending in this application.
4. Claims 1 and 14 have been amended. Claims 8-13 have been canceled, and new claims 21-26 are presented for examination.
Response to Arguments
5. Applicant's arguments filed October 23, 2025 have been fully considered, but they are moot in view of the new grounds of rejection necessitated by Applicant's amendments.
Double Patenting
6. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
7. Claims 1-7 and 14-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 and 14-20 of U.S. Patent No. 12,075,018 B2 in view of Molyneaux et al. (US 2021/0142581 A1) (hereinafter Molyneaux).
With regard to claims 1 and 14, U.S. Patent No. 12,075,018 B2 recites, in claims 1 and 14, a system comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, causes the system to perform a set of operations, the set of operations comprising: receiving, from a computing device, virtual environment information associated with a virtual environment rendered by a remote virtual environment renderer of the computing device, wherein the virtual environment information comprises a depth buffer and a color buffer associated with a presenter perspective in the virtual environment, wherein the depth buffer is processed based on at least one of a projection matrix or a view matrix; generating a mesh for the virtual environment based on the depth buffer and the color buffer, wherein the mesh is generated based on scaling information of the depth buffer and textured according to the color buffer; and rendering, for display to a user, the generated mesh according to a viewer perspective of the virtual environment; and a method for generating a virtual environment based on virtual environment information, the method comprising: receiving, from a computing device, virtual environment information associated with a virtual environment rendered by a remote virtual environment renderer of the computing device, wherein the virtual environment information comprises a depth buffer and a color buffer associated with a presenter perspective in the virtual environment, wherein the depth buffer is processed based on at least one of a projection matrix or a view matrix; generating a mesh for the virtual environment based on the depth buffer and the color buffer, wherein the mesh is generated based on scaling information of the depth buffer and textured according to the color buffer; and rendering, for display to a user, the generated mesh according to a viewer perspective of the virtual environment.
U.S. Patent No. 12,075,018 B2 is silent as to wherein the depth buffer further comprises a plurality of pixels and an associated plurality of values in which the associated value is indicative of a coordinate value of each of the plurality of pixels such that the plurality of pixels is processed to generate a three-dimensional coordinate value.
However, Molyneaux, from the same field of endeavor, discloses an electronic system that comprises a sensor configured to capture depth information about one or more physical objects in a scene, and an application configured to execute computer executable instructions to render a virtual object in the scene. The depth information indicates distance between the user and the one or more physical objects. The depth information is transmitted to a remote service. The application receives from the remote service a depth buffer of a surface of the one or more physical objects in the scene, and portions of the virtual object are occluded by the surface. In some embodiments, the depth buffer of the surface is generated by the remote service based on the depth information and low-level data of a 3D reconstruction of the scene. In some embodiments, the depth information comprises a depth image having a plurality of pixels. The depth image may be stored in computer memory in any convenient way that captures distance between some reference point and surfaces in the scene 400. In some embodiments, the depth image may be represented as values in a plane parallel to an x-axis and y-axis, as illustrated in FIG. 9, with the reference point being the origin of the coordinate system. Locations in the X-Y plane may correspond to directions relative to the reference point, and values at those pixel locations may indicate distance from the reference point to the nearest surface in the direction indicated by the coordinate in the plane. Such a depth image may include a grid of pixels (not shown) in the plane parallel to the x-axis and y-axis [See Molyneaux: at least Figs. 3-36, 38-54B, par. 23-26, 117, 160, 162-164, 170, 185, 246, 262-265, 304, 346, 353, 365-371].
One of ordinary skill in the art would have been motivated to combine the system and method claimed in U.S. Patent No. 12,075,018 B2 with Molyneaux’s characterization of depth-buffer pixels and their three-dimensional coordinates because this combination has the benefit of incorporating depth-buffer pixel values to generate the mesh for rendering a virtual object in a scene in accordance with a user or viewer perspective.
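For illustration only, the pixel-to-coordinate processing characterized above can be sketched as follows. This is a minimal example assuming a simple pinhole-camera model; the function name and the intrinsic parameters (fx, fy, cx, cy) are assumptions of the sketch, not disclosures of the cited references.

```python
import numpy as np

def unproject_depth_buffer(depth, fx, fy, cx, cy):
    """Map each pixel (u, v) and its stored value d to a 3D point (x, y, z).

    depth: (H, W) array of per-pixel distance values.
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Returns an (H, W, 3) array of camera-space coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # X-Y plane pixel locations
    z = depth                                       # each value indicates distance
    x = (u - cx) * z / fx                           # direction relative to the
    y = (v - cy) * z / fy                           # reference point (the origin)
    return np.stack([x, y, z], axis=-1)             # three-dimensional coordinates

# Example: a 4x4 depth image whose pixels all lie 2.0 units from the camera.
points = unproject_depth_buffer(np.full((4, 4), 2.0), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```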
With regard to claim 2, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 2, wherein the set of operations further comprises: receiving updated virtual environment information from the computing device, wherein the updated virtual environment information corresponds to a changed presenter perspective of three-dimensional geometry of the virtual environment, wherein the updated virtual environment information comprises at least one of an updated depth buffer or an updated color buffer; and rendering an updated mesh according to the updated virtual environment information.
With regard to claim 3, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 3, wherein the set of operations further comprises: receiving an indication of user input to change the viewer perspective; and rendering the generated mesh according to the changed viewer perspective.
With regard to claim 4, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 4, wherein: the virtual environment information further comprises a projection matrix associated with the computing device; and the mesh is generated based on the projection matrix associated with the computing device.
With regard to claim 5, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 4, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 5, wherein the mesh is generated further based on a projection matrix different from the projection matrix associated with the computing device.
With regard to claim 6, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 6, wherein: the virtual environment information further comprises additional information of at least one of audio, a chat feed, or a player activity feed; and the set of operations further comprises processing the additional information of the virtual environment information to provide the additional information to the user.
With regard to claim 7, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 7, wherein: the player perspective and the viewer perspective are different; and the depth buffer and the color buffer are of a remote virtual environment renderer of the computing device.
With regard to claim 15, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 14, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 15, further comprising: receiving updated virtual environment information from the computing device, wherein the updated virtual environment information corresponds to a changed presenter perspective of three-dimensional geometry of the virtual environment, wherein the updated virtual environment information comprises at least one of an updated depth buffer or an updated color buffer; and rendering an updated mesh according to the updated virtual environment information.
With regard to claim 16, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 14, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 16, further comprising: receiving an indication of user input to change the viewer perspective; and rendering the generated mesh according to the changed viewer perspective.
With regard to claim 17, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 14, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 17, wherein: the virtual environment information further comprises a projection matrix associated with the computing device; and the mesh is generated based on the projection matrix associated with the computing device.
With regard to claim 18, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 17, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 18, wherein the mesh is generated further based on a projection matrix different from the projection matrix associated with the computing device.
With regard to claim 19, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 14, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 19, wherein: the virtual environment information further comprises additional information of at least one of audio, a chat feed, or a player activity feed; and the method further comprises processing the additional information of the virtual environment information to provide the additional information to the user.
With regard to claim 20, U.S. Patent No. 12,075,018 B2 and Molyneaux teach all of the limitations of claim 14, and are analyzed as previously discussed with respect to that claim. Further, U.S. Patent No. 12,075,018 B2 recites, in claim 20, wherein: the player perspective and the viewer perspective are different; and the depth buffer and the color buffer are of a remote virtual environment renderer of the computing device.
Claim Rejections - 35 USC § 103
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
10. Claims 1-5, 7, 14-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pajarola et al. (PAJAROLA, et al., "Depth-Mesh Objects: Fast Depth-Image Meshing and Warping," In UCI-ICS Technical Report No. 3-02, February 1, 2003, pp. 1-11) (hereinafter Pajarola) in view of Molyneaux et al. (US 2021/0142581 A1) (hereinafter Molyneaux).
Regarding claims 1 and 14, Pajarola discloses a system and a method for generating a virtual environment based on virtual environment information [See Pajarola: abstract, sections 3-5 regarding a depth-image meshing and rendering technique for hardware systems that supports per-pixel weighted blending of multiple depth-images in real time] comprising / the method comprising:
at least one processor; and memory storing instructions that, when executed by the at least one processor, causes the system to perform a set of operations [See Pajarola: section 5 and Tables 1-2 regarding “Table 1 shows timing results for the generation of depth meshes with our approach given a depth-image size of 513² = 263169 pixels. The timing was performed on a Dell 2.2GHz Pentium4 using a nVIDIA GeForce4 4600Ti with the Detonator 4.0 drivers…”], the set of operations comprising:
receiving, from a computing device, virtual environment information associated with a virtual environment, wherein the virtual environment information comprises a depth buffer and a color buffer associated with a presenter perspective [See Pajarola: section 3.1 regarding “Given the depth values in the z buffer for a particular reference image…”. Further, section 4.1, second paragraph, regarding “The depth-mesh generation and segmentation is performed in the reference view coordinate system.” (the reference view represents the presenter perspective)] in the virtual environment [See Pajarola: section 1, second paragraph, regarding “We present an improved depth image warping technique based on adaptive triangulation and simplification of the depth-buffer, and rendering this depth-mesh with the color texture of the depth-image (see also [MMB97])…” (It is implicit that the color texture is contained within a color buffer associated with the depth buffer)];
generating a mesh for the virtual environment based on the depth buffer and the color buffer [See Pajarola: section 3.1 regarding where a depth mesh is generated: “a quadtree based multiresolution triangulation hierarchy [Paj02] can be constructed on the grid of pixels of the reference depth-image. We call this triangulation of a depth-image a depth-mesh, and the representation of an object by multiple depth-image triangulations a depth-mesh object. In the following we explain how a single depth-mesh is initialized from a given depth-image, and how an adaptively triangulated depth-mesh is generated at rendering time…”. Further, section 3.2, first paragraph, regarding “We use the restricted quadtree triangulation method presented in [Paj98] to generate a simplified triangulation of the z-buffer, called a depth-mesh…”. Also section 4.1 regarding step 1: “Select n reference depth-meshes Mi (i = 1…n) and textures Ti to be used for the current view, and calculate their positional blending weights wi with respect to the current viewpoint e…” (Textures are the content of the color buffer, and the depth meshes are associated with corresponding textures; that is, the generation of the depth meshes is based on the depth buffer and the color buffer)]; and
rendering, for display to a user, the generated mesh according to a viewer perspective [See Pajarola: section 4.1, third paragraph, regarding “Blending of n reference depth-meshes to synthesize a new view…” (the new view represents the viewer perspective)] of the virtual environment [See Pajarola: section 4.1, first paragraph, regarding “The approximate depth-image consisting of a segmented triangulation of the depth-buffer, as outlined in the previous section, is rendered using the color values of the reference frame-buffer as texture”. Further, refer to Figure 8 regarding the depth-mesh rendering and blending stages, where depth-meshes are rendered for display.].
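For illustration only, the depth-image meshing mapped above can be sketched as follows. This is a minimal, assumption-laden simplification (a uniform grid triangulation rather than Pajarola's restricted-quadtree method); the function name and array layout are inventions of the sketch, not of the reference.

```python
import numpy as np

def depth_mesh(depth):
    """Triangulate the pixel grid of an (H, W) depth buffer into a depth-mesh.

    Returns (vertices, triangles, uvs); the uvs index into the color buffer
    that accompanies the depth buffer, so the mesh can be textured with the
    color texture of the depth-image.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # One vertex per pixel; the depth value supplies the third coordinate.
    vertices = np.stack([u, v, depth], axis=-1).reshape(-1, 3).astype(float)
    uvs = np.stack([u / (w - 1), v / (h - 1)], axis=-1).reshape(-1, 2)
    triangles = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            triangles.append([i, i + 1, i + w])          # upper-left triangle
            triangles.append([i + 1, i + w + 1, i + w])  # lower-right triangle
    return vertices, np.array(triangles), uvs
```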
Pajarola does not explicitly disclose wherein the depth buffer further comprises a plurality of pixels and an associated plurality of values in which the associated value is indicative of a coordinate value of each of the plurality of pixels such that the plurality of pixels is processed to generate a three-dimensional coordinate value.
However, Molyneaux, from the same field of endeavor, teaches wherein the depth buffer further comprises a plurality of pixels and an associated plurality of values in which the associated value is indicative of a coordinate value of each of the plurality of pixels such that the plurality of pixels is processed to generate a three-dimensional coordinate value [See Molyneaux: at least Figs. 3-36, 38-54B, par. 23-26, 117, 160, 162-164, 170, 185, 246, 262-265, 304, 346, 353, 365-371 regarding an electronic system that comprises a sensor configured to capture depth information about one or more physical objects in a scene, and an application configured to execute computer executable instructions to render a virtual object in the scene. The depth information indicates distance between the user and the one or more physical objects. The depth information is transmitted to a remote service. The application receives from the remote service a depth buffer of a surface of the one or more physical objects in the scene, and portions of the virtual object are occluded by the surface. In some embodiments, the depth buffer of the surface is generated by the remote service based on the depth information and low-level data of a 3D reconstruction of the scene. In some embodiments, the depth information comprises a depth image having a plurality of pixels. The depth image may be stored in computer memory in any convenient way that captures distance between some reference point and surfaces in the scene 400. In some embodiments, the depth image may be represented as values in a plane parallel to an x-axis and y-axis, as illustrated in FIG. 9, with the reference point being the origin of the coordinate system. Locations in the X-Y plane may correspond to directions relative to the reference point, and values at those pixel locations may indicate distance from the reference point to the nearest surface in the direction indicated by the coordinate in the plane. Such a depth image may include a grid of pixels (not shown) in the plane parallel to the x-axis and y-axis…].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Pajarola with Molyneaux’s teachings by including “wherein the depth buffer further comprises a plurality of pixels and an associated plurality of values in which the associated value is indicative of a coordinate value of each of the plurality of pixels such that the plurality of pixels is processed to generate a three-dimensional coordinate value” because this combination has the benefit of incorporating depth-buffer pixel values to generate the mesh for rendering a virtual object in a scene in accordance with a user or viewer perspective.
Regarding claims 2 and 15, Pajarola and Molyneaux teach all the limitations of claims 1 and 14, and are analyzed as previously discussed with respect to those claims. Further, Pajarola teaches wherein the set of operations further comprises: / further comprising: receiving updated virtual environment information from the computing device [See Pajarola: section 3.1 regarding “Given the depth values in the z buffer for a particular reference image…”. Also section 1.1 regarding “Due to its efficiency, our approach is applicable in various rendering systems such as [RP94, SLS+96, SS96, ACW+99] which update image-based scene representations frequently at run-time”. (That is, an updated depth buffer and color buffer are received)]; and rendering an updated mesh according to the updated virtual environment information [See Pajarola: section 4.1, third paragraph, regarding “Blending of n reference depth-meshes to synthesize a new view…” (the new view can represent the viewer perspective or whoever is the origin of the new view). Section 4.1, first paragraph, regarding “The approximate depth-image consisting of a segmented triangulation of the depth-buffer, as outlined in the previous section, is rendered using the color values of the reference frame-buffer as texture”. Also section 1.1 regarding “Due to its efficiency, our approach is applicable in various rendering systems such as [RP94, SLS+96, SS96, ACW+99] which update image-based scene representations frequently at run-time”.].
Regarding claims 3 and 16, Pajarola and Molyneaux teach all the limitations of claims 1 and 14, and are analyzed as previously discussed with respect to those claims. Further, Pajarola teaches wherein the set of operations further comprises: / further comprising: receiving an indication of user input to change the viewer perspective [See Pajarola: section 1.1 regarding “Due to its efficiency, our approach is applicable in various rendering systems such as [RP94, SLS+96, SS96, ACW+99] which update image-based scene representations frequently at run-time”. Further, section 4.1: “Depth-image warping can efficiently be performed by hardware supported rendering of textured polygons instead of projecting every single pixel from a reference depth-image to new views. The approximate depth-image consisting of a segmented triangulation of the depth-buffer, as outlined in the previous section, is rendered using the color values of the reference frame-buffer as texture…” (A new view represents the viewer perspective, and an indication of a change in view is implicit because the image-based scene representations are updated at run-time)]; and rendering the generated mesh according to the changed viewer perspective [See Pajarola: section 4.1, third paragraph, regarding “Blending of n reference depth-meshes to synthesize a new view…” (the new view represents the viewer perspective). Section 4.1, first paragraph, regarding “The approximate depth-image consisting of a segmented triangulation of the depth-buffer, as outlined in the previous section, is rendered using the color values of the reference frame-buffer as texture”. Also section 1.1 regarding “Due to its efficiency, our approach is applicable in various rendering systems such as [RP94, SLS+96, SS96, ACW+99] which update image-based scene representations frequently at run-time”.].
Regarding claims 4 and 17, Pajarola and Molyneaux teach all the limitations of claims 1 and 14, and are analyzed as previously discussed with respect to those claims. Further, Pajarola teaches wherein: the virtual environment information further comprises a projection matrix associated with the computing device; and the mesh is generated based on the projection matrix associated with the computing device [See Pajarola: section 4.1, second paragraph, regarding “The depth-mesh generation and segmentation is performed in the reference view coordinate system. Whenever a depth-mesh has to be rendered, the coordinate system transformation of that reference view is used as model-view transformation to place the depth-mesh correctly in the world coordinate system” (the coordinate system transformation is a projection matrix)].
Regarding claims 5 and 18, Pajarola and Molyneaux teach all the limitations of claims 4 and 17, and are analyzed as previously discussed with respect to those claims. Further, Pajarola teaches wherein the mesh is generated further based on a projection matrix different from the projection matrix associated with the computing device [See Pajarola: section 4.1, second paragraph, regarding “The depth-mesh generation and segmentation is performed in the reference view coordinate system. Whenever a depth-mesh has to be rendered, the coordinate system transformation of that reference view is used as model-view transformation to place the depth-mesh correctly in the world coordinate system” (the coordinate system transformation is a projection matrix, and the transformation used at rendering differs from the one used at generation)].
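For illustration only, the use of distinct matrices at generation and at rendering can be sketched as follows. This is a minimal example; the function and matrix names are assumptions of the sketch, and the 4x4 row-vector convention is a choice of the example, not of the references.

```python
import numpy as np

def project_to_current_view(vertices_ref, ref_view_to_world, world_to_view, projection):
    """Place reference-view vertices in the world, then project them with a
    projection matrix different from the one of the reference view.

    vertices_ref: (N, 3) depth-mesh vertices in the reference view's coordinates.
    The three 4x4 matrices are the model-view placement of the reference view
    and the current view's view and projection transformations.
    """
    n = vertices_ref.shape[0]
    hom = np.hstack([vertices_ref, np.ones((n, 1))])  # homogeneous coordinates
    world = hom @ ref_view_to_world.T                 # model-view placement
    clip = world @ world_to_view.T @ projection.T     # current-view projection
    return clip[:, :3] / clip[:, 3:4]                 # perspective divide
```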
Regarding claims 7 and 20, Pajarola and Molyneaux teach all the limitations of claims 1 and 14, and are analyzed as previously discussed with respect to those claims. Further, Pajarola teaches wherein: the player perspective and the viewer perspective are different [See Pajarola: section 4.1, third paragraph: “We present a novel and highly efficient blending algorithm that exploits graphics hardware acceleration and that supports per-pixel weighted blending of reference depth-images. Blending of n reference depth-meshes to synthesize a new view.” Also, in Figure 7, multiple reference views are used. (One new view can be associated with a different viewer or player, or whoever is the origin of the new view)]; and the depth buffer and the color buffer are of a remote virtual environment renderer of the computing device [See Pajarola: section 3.1 regarding “Given the depth values in the z buffer for a particular reference image…”. Also section 1.1 regarding “Due to its efficiency, our approach is applicable in various rendering systems such as [RP94, SLS+96, SS96, ACW+99] which update image-based scene representations frequently at run-time”. Also, in Figure 7, multiple reference views are used. (That is, a depth buffer and color buffer can be received for every reference view)].
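For illustration only, the per-pixel weighted blending of n reference views cited above can be sketched in software as follows (a minimal stand-in for the hardware-accelerated algorithm; the function name and array shapes are assumptions of the sketch).

```python
import numpy as np

def blend_views(colors, weights, eps=1e-8):
    """Per-pixel weighted blend of n reference renderings into a new view.

    colors: (n, H, W, 3) renderings of the n reference depth-meshes.
    weights: (n, H, W) per-pixel blending weights, e.g. positional weights
    computed with respect to the current viewpoint.
    """
    numerator = (colors * weights[..., None]).sum(axis=0)
    denominator = weights.sum(axis=0)[..., None] + eps  # avoid divide-by-zero
    return numerator / denominator
```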
11. Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pajarola et al. (PAJAROLA, et al., "Depth-Mesh Objects: Fast Depth-Image Meshing and Warping," In UCI-ICS Technical Report No. 3-02, February 1, 2003, pp. 1-11) (hereinafter Pajarola) in view of Molyneaux et al. (US 2021/0142581 A1) (hereinafter Molyneaux) and further in view of Swann et al. (US 2021/0178266 A1) (hereinafter Swann).
Regarding claims 6 and 19, Pajarola and Molyneaux teach all the limitations of claims 1 and 14 and are analyzed as previously discussed with respect to those claims.
Pajarola and Molyneaux do not explicitly disclose wherein: the virtual environment information further comprises additional information of at least one of audio, a chat feed, or a player activity feed; and the set of operations further comprises / the method further comprises processing the additional information of the virtual environment information to provide the additional information to the user.
However, Swann teaches wherein: the virtual environment information further comprises additional information of at least one of audio, a chat feed, or a player activity feed; and the set of operations further comprises / the method further comprises processing the additional information of the virtual environment information to provide the additional information to the user [See Swann: at least Figs. 4A-4B, 6, 8-11, 13A-13C, par. 31, 33-35, 233, 243-250 regarding Video and audio may likewise be presented to a head mounted display unit 53 worn by a user 60. The operating system provides the user with a graphical user interface such as the PlayStation Dynamic Menu. The menu allows the user to access operating system features and to select games and optionally other content. As was described previously herein in relation to the 2D map, a set of transformations transcribe 3D points within the game world to 2D points within the captured image, according to a standard pipeline for such games. These typically include transforming co-ordinates of elements in the game environment through a camera matrix (or ‘view matrix’), and typically also through a perspective projection matrix (or ‘clip matrix’). Other transforms may also be performed, for example placing local model co-ordinates within a world co-ordinate system as a preparatory step. (This transformation of coordinates is associated with the head mounted display unit 53 and user 60 game interaction.) (Thus, additional information obtained by the rendering of the virtual environment may include audio along with video, player activity, and a view matrix)].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Pajarola and Molyneaux with Swann teachings by including “wherein: the virtual environment information further comprises additional information of at least one of audio, a chat feed, or a player activity feed; and the set of operations further comprises / the method further comprises processing the additional information of the virtual environment information to provide the additional information to the user” because this combination has the benefit of providing additional information to improve the rendering of a virtual environment.
12. Claims 21, 23 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Molyneaux et al. (US 2021/0142581 A1) (hereinafter Molyneaux) in view of Du et al. (US 2020/0066026 A1) (hereinafter Du).
Regarding claim 21, Molyneaux discloses a method for generating virtual environment information [See Molyneaux: at least Figs. 3-36, 38-54B, par. 6-26 regarding a method of operating a computing system to render a virtual object in a scene by generating surface information from the depth information, the generating comprising updating the surface information in real time as the scene and field of view changes; and computing, from the surface information and information about a location of the virtual object in the scene, portions of the virtual object to render.], the method comprising:
selecting a set of pixels from a depth buffer [See Molyneaux: at least Figs. 3-36, 38-54B, par. 6-26, 99, 102, 158-161, 165, 370 regarding generating the surface information comprises filtering the depth information to generate a depth map, the depth map comprising a plurality of pixels, each pixel indicating a distance to a point of the physical object; selectively acquiring low-level data of a 3D reconstruction of the physical object; and generating the surface information based on the depth map and the selectively-acquired low-level data of the 3D reconstruction of the physical object. The depth buffer of the surface is generated by the remote service based on the depth information and low-level data of a 3D reconstruction of the scene.];
generating associated three-dimensional coordinates associated with the selected set of pixels [See Molyneaux: at least Figs. 3-36, 38-54B, par. 6-26, 99, 102, 117, 158-165, 185, 246, 262-265, 304, 346, 353, 365-371 regarding the depth buffer of the surface is generated by the remote service based on the depth information and low-level data of a 3D reconstruction of the scene. In some embodiments, the depth image may be represented as values in a plane parallel to an x-axis and y-axis, as illustrated in FIG. 9, with the reference point being the origin of the coordinate system. Locations in the X-Y plane may correspond to directions relative to the reference point and values at those pixel locations may indicate distance from the reference point to the nearest surface in the direction indicated by the coordinate in the plane. Such a depth image may include a grid of pixels (not shown) in the plane parallel to the x-axis and y-axis…];
generating a geometry for the set of pixels associated with the three-dimensional coordinates [See Molyneaux: at least Figs. 3-36, 38-54B, par. 6-26, 99, 102, 117, 158-165, 185, 188-189, 200-203, 214, 246, 262-265, 304, 346, 353, 365-371 regarding Geometries (e.g., planes) in a scene may be obtained in XR systems to support applications, for example, a wall to place a virtual screen, and/or a floor to navigate a virtual robot. A common representation of a scene's geometry is a mesh, which may comprise groups of connected triangles having vertices and edges. Conventionally, a geometry in a scene is obtained by generating a mesh for the scene and searching the geometry in the mesh, which takes time to process, e.g., a few seconds, and doesn't indicate relationships among geometries requested by different queries…]; and
generating a mesh comprising the generated geometry [See Molyneaux: at least Figs. 3-36, 38-54B, par. 6-26, 99, 102, 117, 126-129, 146-147, 155, 158-165, 185, 188-189, 200-203, 212-250, 262-265, 304, 346, 353, 365-371 regarding Geometries (e.g., planes) in a scene may be obtained in XR systems to support applications, for example, a wall to place a virtual screen, and/or a floor to navigate a virtual robot. A common representation of a scene's geometry is a mesh, which may comprise groups of connected triangles having vertices and edges. Conventionally, a geometry in a scene is obtained by generating a mesh for the scene and searching the geometry in the mesh, which takes time to process, e.g., a few seconds, and doesn't indicate relationships among geometries requested by different queries…].
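For illustration only, the four recited steps can be sketched end to end as follows. This is a minimal example of the claim language, not of Molyneaux's implementation; the stride-based pixel selection, the pinhole intrinsics, and all names are assumptions of the sketch.

```python
import numpy as np

def generate_environment_info(depth, stride=2, fx=1.0, fy=1.0, cx=0.0, cy=0.0):
    """Sketch of: select pixels -> 3D coordinates -> geometry -> mesh."""
    sel = depth[::stride, ::stride]            # 1) select a set of pixels
    h, w = sel.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u * stride - cx) * sel / fx           # 2) generate associated
    y = (v * stride - cy) * sel / fy           #    three-dimensional coordinates
    vertices = np.stack([x, y, sel], axis=-1).reshape(-1, 3)
    triangles = []                             # 3) geometry: connected triangles
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            triangles.append([i, i + 1, i + w])
            triangles.append([i + 1, i + w + 1, i + w])
    # 4) the mesh comprising the generated geometry
    return {"vertices": vertices, "triangles": np.array(triangles)}
```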
Molyneaux does not explicitly disclose generating a mesh comprising the generated geometry, in which the mesh is subsequently textured and presented to a viewer.
However, generating a mesh and providing texture for viewer presentation was well known in the art before the effective filing date of the claimed invention, as evidenced by the teaching of Du [See Du: at least Figs. 1-18, par. 6-12, 29-34, 59, 69-77, 91-109, 115, 118, 139-145, 154-159, 161, 182-187, 189-191 regarding a component of a computer system uses texture maps for a current frame, depth maps for the current frame, and/or model data for a dynamic 3D model. Each of the texture maps for the current frame can include texture values (e.g., color values) captured from a different input viewpoint in a computer-represented environment. Each of the depth maps for the current frame can include depth values captured from one of the different input viewpoints. The model data can include points (e.g., vertices of triangles of a mesh) of the dynamic 3D model of the computer-represented environment. A rendering component of a computer system performs operations to texture a dynamic 3D model of a computer-represented environment. The rendering component receives texture maps and model data for a current frame… Finally, the rendering component renders a view of the textured (with applied texture values), dynamic 3D model from an output viewpoint. As used herein, the term “dynamic 3D model” encompasses triangular meshes of vertices and other deformable, volumetric representations in a 3D computer graphics environment or other computer-represented environment.].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Molyneaux with Du teachings by including “generating a mesh comprising the generated geometry, in which the mesh is subsequently textured and presented to a viewer” because this combination has the benefit of providing texture details to improve the rendering of a virtual scene by reducing blurring and avoiding noticeable seams [See Du: at least par. 5-8, 29, 91, 98-101].
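For illustration only, the texturing step supplied by Du's teaching can be sketched as follows (a minimal example assuming pinhole intrinsics for the input viewpoint; the function name and nearest-pixel sampling are assumptions of the sketch, not Du's renderer).

```python
import numpy as np

def texture_vertices(vertices, texture_map, fx, fy, cx, cy):
    """Assign a color value to each mesh vertex by projecting it into a
    texture map captured from one input viewpoint.

    vertices: (N, 3) points in the input camera's coordinate system.
    texture_map: (H, W, 3) color values captured from that viewpoint.
    """
    h, w, _ = texture_map.shape
    # Project each vertex onto the image plane and sample the nearest pixel.
    u = np.clip((vertices[:, 0] * fx / vertices[:, 2] + cx).astype(int), 0, w - 1)
    v = np.clip((vertices[:, 1] * fy / vertices[:, 2] + cy).astype(int), 0, h - 1)
    return texture_map[v, u]  # (N, 3) per-vertex texture values
```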
Regarding claim 23, Molyneaux and Du teach all of the limitations of claim 21, and are analyzed as previously discussed with respect to that claim. Further, Molyneaux and Du teach wherein updated virtual environment information is transmitted based on receiving the user input to change the presenter perspective [See Molyneaux: at least par. 6, 97-99, 112, 126, 134, 140-142 regarding The user 30 positions the AR display system at positions 34, and the AR display system records ambient information of a passable world (e.g., a digital representation of the real objects in the physical world that can be stored and updated with changes to the real objects in the physical world) relative to the positions 34 such as pose relation to mapped features or directional audio inputs. The positions 34 are aggregated to data inputs 36 and processed at least by a passable world module 38, which may be implemented, for example, by processing on a remote processing module 72 of FIG. 3… See Du: at least par. 11, 59, 69, 76-86, 99, 103-113, 182-187 regarding Specifically, for a given frame (e.g., associated with a timestamp or time slice), the fusion component (280) combines depth maps from different input viewpoints for the given frame to generate a 3D model for the given frame, which may include updating a reference 3D model (which is based on 3D model data for one or more previous frames) and estimating a current 3D model (based on the depth maps for the given frame). For example, the 3D model is a mesh of vertices for triangles or other volumetric representation. In some example implementations, the fusion component (280) runs as software on a separate computer system or separate virtual machine. Model data for the dynamic 3D model generated by the fusion component is transmitted (e.g., over a network) to one or more rendering components (290), which can also be called renderers or rendering engines… The rendering component receives a stream of model data (601) for a dynamic 3D model, with the model data (601) being updated on a frame-by-frame basis.].
Regarding claim 24, Molyneaux and Du teach all of the limitations of claim 21, and are analyzed as previously discussed with respect to that claim. Further, Molyneaux and Du teach further comprising: obtaining additional depth buffer information associated with a perspective other than the presenter perspective; and transmitting, to the computing device, the additional depth buffer information [See Molyneaux: at least Figs. 3-36, 38-54B, par. 23-26, 112, 117, 160, 162-164, 170, 185, 246, 262-265, 304, 346, 353, 365-371 regarding an electronic system that comprises a sensor configured to capture depth information about one or more physical objects in a scene, and an application configured to execute computer executable instructions to render a virtual object in the scene. The depth information indicates distance between the user and the one or more physical objects. The depth information is transmitted to a remote service. The reconstruction filter 4902 may provide the updated depth map to an occlusion service 4910. The occlusion service 4910 may compute occlusion data based on the updated depth map and information about a location of a virtual object in the scene. The occlusion data may be depth buffers of surfaces in the physical world. The depth buffers may store depths of pixels… See Du: at least par. 205-206 regarding After projecting at least some points of a dynamic 3D model to locations in the view from the perspective of the output viewpoint, and assigning corresponding texture values (applied to the points of the 3D model) to the locations in the view, the rendering component can add a background image to the texture values of the view. For example, the rendering component can store the background image in a depth buffer and layer the texture values of the view, in a screen buffer, over the background image. The rendering component can select the background image from a library of background images, which can include paintings, stock scenery, or other imagery. Or, the rendering component can select the background image from a video sequence (e.g., part of an animation sequence or film)…].
13. Claims 22 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Molyneaux et al. (US 2021/0142581 A1) (hereinafter Molyneaux) in view of Du et al. (US 2020/0066026 A1) (hereinafter Du) and further in view of Swann et al. (US 2021/0178266 A1) (hereinafter Swann).
Regarding claim 22, Molyneaux and Du teach all the limitations of claim 21 and are analyzed as previously discussed with respect to that claim.
Molyneaux and Du do not explicitly disclose wherein the generated virtual environment information further comprises additional information obtained from the virtual environment renderer of at least one of a view matrix, audio, a chat feed, or a player activity feed.
However, Swann teaches wherein the generated virtual environment information further comprises additional information obtained from the virtual environment renderer of at least one of a view matrix, audio, a chat feed, or a player activity feed [See Swann: at least Figs. 4A-4B, 6, 8-11, 13A-13C, par. 31, 33-35, 233, 243-250 regarding Video and audio may likewise be presented to a head mounted display unit 53 worn by a user 60. The operating system provides the user with a graphical user interface such as the PlayStation Dynamic Menu. The menu allows the user to access operating system features and to select games and optionally other content. As was described previously herein in relation to the 2D map, a set of transformations transcribe 3D points within the game world to 2D points within the captured image, according to a standard pipeline for such games. These typically include transforming co-ordinates of elements in the game environment through a camera matrix (or ‘view matrix’), and typically also through a perspective projection matrix (or ‘clip matrix’). Other transforms may also be performed, for example placing local model co-ordinates within a world co-ordinate system as a preparatory step. (This transformation of coordinates is associated with the head mounted display unit 53 and user 60 game interaction.) (Thus, additional information obtained by the rendering of the virtual environment may include audio along with video, player activity, and a view matrix)].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Molyneaux and Du with Swann teachings by including “wherein the generated virtual environment information further comprises additional information obtained from the virtual environment renderer of at least one of a view matrix, audio, a chat feed, or a player activity feed” because this combination has the benefit of providing additional information to improve the rendering of a virtual environment.
Regarding claim 25, Molyneaux and Du teach all the limitations of claim 21 and are analyzed as previously discussed with respect to that claim.
Molyneaux and Du do not explicitly disclose wherein a projection matrix is associated with at least one of a virtual reality headset or a user-configurable field of view.
However, associating the projection matrix with a virtual reality headset or with a user-configurable field of view was well known in the art before the effective filing date of the claimed invention, as evidenced by the teaching of Swann [See Swann: at least par. 34, 35, 233, 243-250 regarding Video and audio may likewise be presented to a head mounted display unit 53 worn by a user 60. The operating system provides the user with a graphical user interface such as the PlayStation Dynamic Menu. The menu allows the user to access operating system features and to select games and optionally other content. As was described previously herein in relation to the 2D map, a set of transformations transcribe 3D points within the game world to 2D points within the captured image, according to a standard pipeline for such games. These typically include transforming co-ordinates of elements in the game environment through a camera matrix (or ‘view matrix’), and typically also through a perspective projection matrix (or ‘clip matrix’). Other transforms may also be performed, for example placing local model co-ordinates within a world co-ordinate system as a preparatory step. (This transformation of coordinates is associated with the head mounted display unit 53 and user 60 game interaction)].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Molyneaux and Du with Swann teachings by including “wherein a projection matrix is associated with at least one of a virtual reality headset or a user-configurable field of view” because this combination has the benefit of providing coordinate transformation corrections based on a virtual reality headset when generating a virtual environment.
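For illustration only, a projection matrix derived from a user-configurable field of view (e.g., for a virtual reality headset eye buffer) can be sketched as follows. The OpenGL-style symmetric frustum below is a common convention assumed for the example, not a construction taken from Swann.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Standard perspective projection matrix from a vertical field of view."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# Example: a user-configured 90-degree field of view for a headset eye.
P = perspective(90.0, aspect=1.0, near=0.1, far=100.0)
```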
14. Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Molyneaux et al. (US 2021/0142581 A1) (hereinafter Molyneaux) in view of Du et al. (US 2020/0066026 A1) (hereinafter Du) and further in view of Venshtain et al. (US 2021/0248819 A1) (hereinafter Venshtain).
Regarding claim 26, Molyneaux and Du teach all of the limitations of claim 23, and are analyzed as previously discussed with respect to that claim.
Molyneaux and Du do not explicitly disclose wherein the user input to change the presenter perspective is received from a presenter computing device.
However, Venshtain, from the same field of endeavor, teaches wherein the user input to change the presenter perspective is received from a presenter computing device [See Venshtain: at least Fig. 15 and par. 186-187 regarding In the particular example that is shown in FIG. 15, there is a depiction 1502 in which the only virtual element is the 3D presenter persona 116. Certainly one or more additional virtual elements could be depicted in various different embodiments. In this example, then, the 3D presenter persona 116 is depicted as standing on the (real) ground 1504 in front of some (real) trees 1510 and some (real) clouds 1508 against the backdrop of the (real) sky 1506. In this simple example, the viewer has chosen to view the lecture by the presenter 102 from a location out in nature, but of course this is presented merely by way of example and not limitation…].
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Molyneaux and Du with Venshtain teachings by including “wherein the user input to change the presenter perspective is received from a presenter computing device” because this combination has the benefit of providing an alternate presenter view or perspective when rendering the virtual environment to a user.
Conclusion
15. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
16. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANA J PICON-FELICIANO whose telephone number is (571) 272-5252. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Ana Picon-Feliciano/Examiner, Art Unit 2482
/CHRISTOPHER S KELLEY/Supervisory Patent Examiner, Art Unit 2482