Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Status of Claims
Claims 1-19 are currently pending in this application.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on March 20, 2024, is hereby acknowledged. All references have been considered by the examiner. Initialed copies of the PTO-1449 are included in this correspondence.
Specification
The specification is objected to because of the following minor informalities in the abstract and paragraphs [0010], [0012], and [0013]:
a). Abstract, line 7: “rendering in real-time a second object of the plurality of graphic” should read “rendering in real-time a second object of the plurality of graphic objects”;
b). Paragraph [0010], lines 7-8: “rendering in real-time a second object of the plurality of graphic” should read “rendering in real-time a second object of the plurality of graphic objects”;
c). Paragraph [0012], lines 7-8: “render in real-time a second object of the plurality of graphic” should read “render in real-time a second object of the plurality of graphic objects”; and
d). Paragraph [0013], lines 9-10: “render in real-time a second object of the plurality of graphic” should read “render in real-time a second object of the plurality of graphic objects”.
Corrections are required.
Claim Objections
Claims 1, 10 and 11 are objected to due to minor informalities:
a). Claim 1, line 9: “rendering in real-time a second object of the plurality of graphic” should read “rendering in real-time a second object of the plurality of graphic objects”;
b). Claim 10, line 12: “render in real-time a second object of the plurality of graphic” should read “render in real-time a second object of the plurality of graphic objects”; and
c). Claim 11, line 12: “render in real-time a second object of the plurality of graphic” should read “render in real-time a second object of the plurality of graphic objects”.
Corrections are required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3 and 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Sandrew et al. (2015/0249815) in view of Dunn et al. (2019/0295310) and further in view of Xie et al. (2025/0142039).
Regarding claim 1, Sandrew teaches a method for rendering a stereoscopic virtual reality environment (e.g., A method that enables creation of a 3D virtual reality environment from a series of 2D images of a scene. Embodiments map 2D images onto a sphere to create a composite spherical image, divide the composite image into regions, and add depth information to the regions. Sandrew: Abstract L.1-5. The final composite image and depth information are projected back onto one or more spheres, and then projected onto left and right eye image planes to form a 3D stereoscopic image for a viewer of the virtual reality environment. Sandrew: Abstract L.9-13), comprising:
receiving a digital three-dimensional environment, the digital three dimensional environment including a plurality of graphic objects (e.g., In step 101, multiple 2D images are obtained of environment 100, yielding a set of 2D images 102. Environment 100 may for example be a room, an office, a building, a floor, a house, a factory, a landscape, or any other scene or combination of scenes for which a virtual reality experience is to be created. This environment may be real, or it may itself be virtual or computer generated, or it may be a mix of real elements and computer generated elements. Sandrew: [0038] L.4-11. Table 3 shows that the room can include a chair and a table);
selecting a first location in the digital three dimensional environment, the first location represented by a set of unique coordinates (e.g., In FIG. 4 2D images 401 and 402 are projected onto the sphere 403. Sphere 403 has center point c 404, and radius R 405. Image 401 was obtained using a camera with a viewer located at point v1 406; image 402 was obtained using a camera with a viewer located at point v2 410. The orientation of the planes of images 401 and 402 correspond to the orientation of the cameras used to capture those images. Each point on the 2D images 401 and 402 is projected to the sphere along a ray from the camera's viewer. For example, point p 407 is projected onto point q 408 on sphere 403. Since the ray from v1 through p is parameterized as {v1+t(p-v1): t ≥ 0}, point q can be obtained easily by finding the parameter t such that |v1+t(p-v1)-c|=R. Sandrew: [0053] and Fig. 4; reproduced below for reference.
[Sandrew Fig. 4 reproduced here in greyscale (media_image1.png).]
Point p 407 is taken as a first location);
determining a first spherical projection plane from the first location; pre-rendering a first object of the plurality of graphic objects on the first spherical projection plane (e.g., In FIG. 4 2D images 401 and 402 are projected onto the sphere 403. Sphere 403 has center point c 404, and radius R 405. Image 401 was obtained using a camera with a viewer located at point v1 406; image 402 was obtained using a camera with a viewer located at point v2 410. The orientation of the planes of images 401 and 402 correspond to the orientation of the cameras used to capture those images. Each point on the 2D images 401 and 402 is projected to the sphere along a ray from the camera's viewer. For example, point p 407 is projected onto point q 408 on sphere 403. Since the ray from v1 through p is parameterized as {v1+t(p-v1): t ≥ 0}, point q can be obtained easily by finding the parameter t such that |v1+t(p-v1)-c|=R. Sandrew: [0053] and Fig. 4. The first object is projected on the sphere at point q 408; an illustrative numerical sketch of this ray-to-sphere projection is set out following the rejection of claim 1 below);
rendering in real-time a second object of the plurality of graphic objects (e.g., It can be seen from Fig. 4 that the “?” symbol is a second object projected onto the sphere. These stereo images must be generated dynamically in approximately real-time as the viewer moves through the virtual reality environment. Sandrew: [0008] L.4-6); and
updating a framebuffer (see 1_1 below) with the second object and the first object, based on a shader output (e.g., FIG. 6 illustrates an unwrapped image obtained from a spherical projection via step 107--unwrap onto plane image. Converting the spherical image to a plane unwrapped image amounts to reversing the projections illustrated in FIG. 4 using a single projection of the sphere onto a plane. Sandrew: [0055] L.1-5. FIG. 9 illustrates an embodiment of step 114--generating stereo images. In this example the unwrapped image from FIG. 6 is combined with the depth map from FIG. 8 to generate left and right eye images, which are superimposed here on the same anaglyph image. This anaglyph image provides a 3D stereoscopic view of the scene when viewed through anaglyph glasses with different color filters in the two lenses. The amount of shift between left eye and right eye images for each pixel is a function of the depth map for that pixel. Sandrew: [0058]. See 1_2 below).
While Sandrew does not explicitly teach the following, Dunn teaches:
(1_1). updating a framebuffer (e.g., The resultant cube map with nearly uniform pixel density may be stored in computer memory and used to render one or more geometric primitives. The primitives are rasterized to create the frame buffer image (e.g., a bitmap) for display using rasterizer 540. The rasterized image is output to display device 550 for display. Dunn: [0051] L.16-21. Therefore, the unwrapped image of Sandrew is updated into the frame buffer for display);
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Dunn into the teaching of Sandrew so that unwrapped images are updated into the frame buffer for display on a display device.
While the combined teaching of Sandrew and Dunn does not explicitly teach the following, Xie teaches:
(1_2). based on a shader output (e.g., all shaders that involve coordinates of a vertex of the object in the raster rendering engine, including a shadow rendering component and a related screen post-treatment shader, need to be modified. Furthermore, it needs to ensure that a mesh model of the object does not include a triangular patch with an extremely large area. In a case that a user is not going to modify or is unable to modify all the shaders that involve the coordinates of the vertex in the raster rendering engine; Xie: [0006] L.1-9; without modifying all shaders that involve the coordinates of the vertex of the object in the raster rendering engine, the omni-directional stereo panoramic image can be obtained by performing the image deformation process and splicing process on the original images based on the depth information and the internal and external parameters of the cameras. Therefore, tensile distortion defects in a depth-image-based rendering process are reduced, a high-quality real-time rendering for omni-directional stereo videos is achieved, the operation efficiency is further improved, the operation cost is reduced, and the user experience is enhanced. Xie: [0043] L.13-24. Therefore, images to be displayed are processed output of shaders);
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Xie into the combined teaching of Sandrew and Dunn because the images to be displayed are the processed output of shaders.
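For clarity of the record, the ray-to-sphere projection quoted above from Sandrew [0053], in which point q is obtained by finding the parameter t such that |v1+t(p-v1)-c|=R, may be restated as the following minimal numerical sketch. The sketch is illustrative only; the function name, the example coordinate values, and the choice of the larger quadratic root are assumptions made for illustration and are not asserted to be Sandrew's implementation.

```python
import numpy as np

def project_to_sphere(v1, p, c, R):
    """Project image point p onto the sphere (center c, radius R) along the ray
    from viewer v1 through p, i.e. find t >= 0 such that |v1 + t(p - v1) - c| = R."""
    d = p - v1                        # ray direction
    m = v1 - c                        # viewer position relative to the sphere center
    # |m + t*d| = R  expands to  (d.d) t^2 + 2 (m.d) t + (m.m - R^2) = 0
    a = np.dot(d, d)
    b = 2.0 * np.dot(m, d)
    k = np.dot(m, m) - R * R
    disc = b * b - 4.0 * a * k
    if disc < 0:
        return None                   # the ray misses the sphere
    t = (-b + np.sqrt(disc)) / (2.0 * a)   # larger root: forward intersection q
    return v1 + t * d                 # point q on the sphere

# Assumed example values: unit sphere centered at the origin, viewer near its center.
c = np.array([0.0, 0.0, 0.0])
R = 1.0
v1 = np.array([0.1, 0.0, 0.0])        # camera viewer location (assumed)
p = np.array([0.3, 0.2, 0.5])         # point on the 2D image plane (assumed)
q = project_to_sphere(v1, p, c, R)
print(q, np.linalg.norm(q - c))       # |q - c| equals R, so q lies on the sphere
```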
Regarding claim 2, the combined teaching of Sandrew, Dunn and Xie teaches the method of claim 1, further comprising: determining a second spherical projection plane from the first location, wherein the second spherical projection plane is closer than the first spherical projection to the first location; and rendering a third object on the second spherical projection plane (e.g., FIG. 8 illustrates an embodiment of step 111 - generating depth information for the points of the regions defined in step 109. In the example shown in FIG. 8, the depth information is encoded as a depth map, with points closer to the viewer shown with darker shades of grey, and points further from the viewer shown with lighter shades of grey. For example, the front edge 801 of the table in the center of the room has a dark shade since it is close to a viewer in or near the center of the room; the wall 802 behind the couch has a lighter shade since it is further from the viewer. Operators may assign depth information to individual pixels, or they may use the region masks to assist in defining depth information by positioning and rotating the regions in three-dimensional space. Numerical depth information that is not visible, for example compressed or encoded may also be utilized. Sandrew: [0088]).
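The depth-map convention quoted above from Sandrew [0088], together with the stereo generation of Sandrew [0058] in which the shift between left eye and right eye images for each pixel is a function of the depth map, may likewise be illustrated by the following sketch. The linear shift function, the normalized depth convention, and the example arrays are assumptions made for illustration only and are not asserted to be Sandrew's implementation; hole filling and filtering are omitted.

```python
import numpy as np

def stereo_from_depth(image, depth, max_shift=8):
    """Generate left/right eye images by shifting each pixel horizontally by an
    amount that is a function of its depth value, with closer pixels shifted more.
    depth is assumed normalized to [0, 1], where 0 = closest and 1 = farthest."""
    h, w = depth.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            s = int(round(max_shift * (1.0 - depth[y, x])))   # assumed linear mapping
            left[y, min(w - 1, x + s)] = image[y, x]          # shift right for left eye
            right[y, max(0, x - s)] = image[y, x]             # shift left for right eye
    return left, right

# Assumed example data: a small greyscale image whose depth increases left to right.
img = np.arange(32, dtype=float).reshape(4, 8)
dep = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
left_eye, right_eye = stereo_from_depth(img, dep, max_shift=2)
```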
Regarding claim 3, the combined teaching of Sandrew, Dunn and Xie teaches the method of claim 1, further comprising: pre-rendering the first object as a texture map projected onto the first spherical projection plane (e.g., In step 109, the unwrapped plane image 108 is divided into regions 110. This step may be done by one or more operators, or it may be assisted by software. For example, software may tentatively generate region boundaries based on shapes, colors, or textures of objects in the unwrapped image 108. Sandrew: [0045]).
Regarding claim 10, the claim is a computer-readable medium claim corresponding to method claim 1. The claim is similar in scope to claim 1 and is rejected under a similar rationale.
Dunn teaches that “In the example of FIG. 1, the exemplary computer system 112 includes a central processing unit (CPU) 101 for running software applications and optionally an operating system. Random access memory 102 and read-only memory 103 store applications and data for use by the CPU 101. Data storage device 104 provides non-volatile storage for applications and data and may include fixed disk drives, removable disk drives, flash memory devices, and CD-ROM, DVD-ROM or other optical storage devices. The optional user inputs 106 and 107 comprise devices that communicate inputs from one or more users to the computer system 112 (e.g., mice, joysticks, cameras, touch screens, and/or microphones).” (Dunn: [0031]).
Regarding claims 11-13, the claims are system claims corresponding to method claims 1-3, respectively. The claims are similar in scope to claims 1-3 and are rejected under a similar rationale.
Regarding the hardware recited in claims 11-13, Dunn teaches the computer system set forth in the quotation of Dunn [0031] reproduced above with respect to claim 10.
Allowable Subject Matter
Claims 4-9 and 14-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter in claim 4: The prior art of record, either individually or in combination, fails to teach the following claimed limitations:
providing depth information of the second object to a shader processing circuitry; and
configuring the shader processing circuitry to determine occlusion of the second object based on depth information of the first spherical projection plane.
as recited in claim 4.
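For context only, and without characterizing the full scope of claim 4, a conventional depth-comparison test of the general kind recited (determining occlusion of the second object from depth information of the first spherical projection plane) may be sketched as follows; the function name, the depth convention, and the example values are illustrative assumptions only.

```python
def is_occluded(object_depth, plane_depth):
    """Illustrative depth comparison: a fragment of the second object is treated as
    occluded when the pre-rendered spherical projection plane is closer to the viewer
    at that pixel (assumed convention: smaller depth value means closer)."""
    return plane_depth < object_depth

# Assumed example: the plane at depth 2.0 occludes an object fragment at depth 3.5.
print(is_occluded(3.5, 2.0))   # True
```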
The following is a statement of reasons for the indication of allowable subject matter in claim 5: The prior art of record, either individually or in combination, fails to teach the following claimed limitations:
providing depth information of the second object to a shader processing circuitry; and
configuring the shader processing circuitry to determine a shadow of the second object based on depth information of the first spherical projection plane.
as recited in claim 5.
The following is a statement of reasons for the indication of allowable subject matter in claim 6: The prior art of record, either individually or in combination, fails to teach the following claimed limitations:
determining a third spherical projection plane between the first spherical projection plane and a second spherical projection plane; rendering in real-time a third object of the plurality of graphic objects;
configuring a shader processing circuitry to determine a second shader output based on the third object, the second object, the first object; and
updating the framebuffer based on the second shader output.
as recited in claim 6.
Claims 7-9 depend from claim 6 and are objected to under a similar rationale.
Regarding claims 14-19, the claims are similar in scope to claims 4-9, respectively, and are objected to under a similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
a). Pitts (2016/0307372) teaches that “A capture system may capture light-field data representative of an environment for use in virtual reality, augmented reality, and the like. The system may have a plurality of light-field cameras arranged to capture a light-field volume within the environment, and a processor. The processor may use the light-field volume to generate a first virtual view depicting the environment from a first virtual viewpoint. The light-field cameras may be arranged in a tiled array to define a capture surface with a ring-shaped, spherical, or other arrangement. The processor may map the pixels captured by the image sensors to light rays received in the light-field volume, and store data descriptive of the light rays in a coordinate system representative of the light-field volume.” (Pitts: Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SING-WAI WU whose telephone number is (571)270-5850. The examiner can normally be reached 9:00am - 5:30pm (Central Time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SING-WAI WU/Primary Examiner, Art Unit 2611