Prosecution Insights
Last updated: April 19, 2026
Application No. 18/582,835

ENVIRONMENT CAPTURE AND RENDERING

Status: Final Rejection (§103)
Filed: Feb 21, 2024
Examiner: TRUONG, KARL DUC
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 2 (Final)

Grant Probability: 52% (Moderate)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 52% (grants 52% of resolved cases; 15 granted / 29 resolved; -10.3% vs TC avg)
Interview Lift: +31.0% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 7m (typical timeline); 45 applications currently pending
Total Applications: 74 across all art units (career history)
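How the headline numbers fit together can be checked with a few lines of arithmetic. A minimal sketch in Python, assuming the interview lift is simply added on top of the career allow rate (the report does not state its model, so the additive assumption is mine):

granted, resolved = 15, 29
allow_rate = granted / resolved                          # 0.517..., shown as 52%
interview_lift = 0.31                                    # +31.0% lift reported above
with_interview = round(allow_rate, 2) + interview_lift   # 0.52 + 0.31 = 0.83
print(f"career allow rate: {allow_rate:.1%}")            # 51.7%
print(f"with interview:    {with_interview:.0%}")        # 83%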

Statute-Specific Performance

§101:  3.2% (-36.8% vs TC avg)
§102:  9.5% (-30.5% vs TC avg)
§103: 85.3% (+45.3% vs TC avg)
§112:  2.1% (-37.9% vs TC avg)
TC avg = Tech Center average estimate. Based on career data from 29 resolved cases.
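The deltas above are consistent with a single flat baseline. A quick check, assuming each delta is the examiner's rate minus the Tech Center average estimate (an assumption about how the chart is built):

rows = {"§101": (3.2, -36.8), "§102": (9.5, -30.5),
        "§103": (85.3, 45.3), "§112": (2.1, -37.9)}
for statute, (rate, delta) in rows.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% in every row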

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to the amendment filed on February 26, 2026. Claims 1, 4, 10, 20, and 39 have been amended. Claims 9, 13, 15, 19, 21-38, and 40-57 have been cancelled. Claims 58-60 have been added. Claims 1-8, 10-12, 14, 16-18, 20, 39, and 58-60 remain rejected. Applicant's amendments to the claims have overcome each and every objection previously set forth in the non-final office action mailed October 28, 2025.

Response to Arguments

Applicant's arguments filed on February 26, 2026 with respect to the rejection of Claims 1, 20, and 39 under 35 U.S.C. § 103, asserting that the prior art does not teach the limitations "obtaining a 3D mesh corresponding to the 3D point cloud, wherein the 3D mesh is different than the 3D point cloud" and "selecting a subset of points of the 3D point cloud based on the 3D mesh," have been fully considered but are moot in view of the new grounds of rejection: those limitations are now taught by the combination of Ponto and Anthony.

Regarding the arguments for Claims 2-8, 10-12, 14, 16-18, and 58-60: these claims depend, directly or indirectly, from independent Claims 1, 20, and 39, and Applicant presents no arguments beyond those made for the independent claims. The limitations of those claims are taught by the cited combinations as explained below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 8, 20, 39, and 58 are rejected under 35 U.S.C. 103 as being unpatentable over Ponto et al. (US 20210209738 A1, previously cited), hereinafter referenced as Ponto, in view of Anthony et al. (US 20180025530 A1), hereinafter referenced as Anthony.
Regarding Claim 1, Ponto discloses a method (Ponto, [0054]: teaches a process <read on method> for hierarchical progressive point cloud rendering) comprising: at a processor (Ponto, [0048]: teaches system 100 including central processing unit (CPU) 106 <read on processor>): obtaining a three-dimensional (3D) point cloud of a physical environment (Ponto, [0067]: teaches generating various synthetic 3D point clouds of a scene of varying resolution; Note: although "physical environment" is not expressly stated, it is common in the art to generate 3D point clouds of real-world environments), the 3D point cloud comprising points each having a 3D location and representing an appearance of a portion of the physical environment (Ponto, [0053]: teaches each node of an octree can represent an area of space <read on appearance of portion of physical environment> within a 3D point cloud, where "if the viewing angle and/or position of a virtual camera (or cameras for applications such as a CAVE), some nodes represent portions of the scene that do not intersect the view frustum of the virtual camera, and many of the points associated with these nodes can be excluded from consideration for inclusion in the image being rendered for presentation"; Note: although "3D location" is not expressly stated, one skilled in the art would understand that points in 3D point clouds have (x, y, z) coordinates, which is a 3D position/location); [[obtaining a 3D mesh corresponding to the 3D point cloud, wherein the 3D mesh is different than the 3D point cloud;]] selecting a subset of points of the 3D point cloud [[based on the 3D mesh]] (Ponto, [0043]: teaches selecting subsets of points from representations of the 3D point cloud at various resolutions); and generating a two-dimensional (2D) view of the 3D point cloud from a viewpoint using the subset of the points of the 3D point cloud (Ponto, [0102]: teaches projecting a grid <read on using subset of points> of an evaluated octant from 3D to 2D on a plane parallel to a plane represented by the frame buffer object, where "based on the orientation of the octant with respect to the viewing frustum <read on viewpoint>, process 1000 can determine how many exterior voxels of the octant are visible <read on generating 2D view>").

However, Ponto does not expressly disclose obtaining a 3D mesh corresponding to the 3D point cloud, wherein the 3D mesh is different than the 3D point cloud; and selecting a subset of points of the 3D point cloud based on the 3D mesh.

Anthony discloses obtaining a 3D mesh corresponding to the 3D point cloud (Anthony, [0097]: teaches comparing mesh model 240 <read on 3D mesh> to a point cloud <read on 3D point cloud> that corresponds to object 216 to determine regions of misalignment/inaccuracy), wherein the 3D mesh is different than the 3D point cloud (Anthony, [0097]: teaches the point cloud <read on 3D point cloud> being generated separately from the mesh model <read on 3D mesh>); and selecting a subset of points of the 3D point cloud based on the 3D mesh (Anthony, [0118]: teaches selecting points <read on subset of points> of point cloud 740 <read on 3D point cloud> that are to be connected to form a mesh <read on 3D mesh>, which represents real-world object 216).

Anthony is analogous art with respect to Ponto because they are from the same field of endeavor, namely handling 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a system that generates low and high quality models from 3D point clouds as taught by Anthony into the teaching of Ponto. Doing so would allow the user to compare the generated models with the point clouds to deduce imperfections, resulting in more accurate 3D mesh generation. Therefore, it would have been obvious to combine Anthony with Ponto.

Regarding Claim 20, it recites limitations similar in scope to Claim 1, but in a system. As shown in the rejection above, the combination of Ponto and Anthony discloses the limitations of Claim 1. Additionally, Ponto discloses a system (Ponto, [0048]: teaches system 100) comprising: memory (Ponto, [0048]: teaches system 100 including cache memory 108 and main memory/non-volatile storage 104); and one or more processors at a device coupled to the memory (Ponto, [0048]: teaches system 100 including a central processing unit (CPU) 106 <read on processor> that accesses cache memory 108), wherein the memory comprises program instructions that, when executed on the one or more processors, cause the system to perform operations (Ponto, [0110]: teaches a non-transitory computer readable media being used for storing instructions that perform/execute functions/processes <read on operations>; [0107]: teaches process 1000 being performed by a CPU <read on processor>) comprising:… Thus, Claim 20 is met by Ponto according to the mapping presented in the rejection of Claim 1, given that the method corresponds to a system.

Regarding Claim 39, it recites limitations similar in scope to Claim 1, but in a non-transitory computer-readable storage medium. As shown in the rejection above, the combination of Ponto and Anthony discloses the limitations of Claim 1. Additionally, Ponto discloses a non-transitory computer-readable storage medium storing program instructions executable via one or more processors to perform operations (Ponto, [0110]: teaches a non-transitory computer readable media being used for storing instructions that perform/execute functions/processes <read on operations>; [0107]: teaches process 1000 being performed by a CPU <read on processor>) comprising:… Thus, Claim 39 is met by Ponto according to the mapping presented in the rejection of Claim 1, given that the method corresponds to a non-transitory computer-readable storage medium.

Regarding Claim 2, the combination of Ponto and Anthony discloses the method of Claim 1. Additionally, Ponto further discloses wherein the subset of points is selected by excluding points of the 3D point cloud that are determined to be occluded [[based on the 3D mesh]] (Ponto, [0053]: teaches nodes (or octants) <read on points> in an octree representing an area of space within a point cloud, where nodes that represent portions of the scene that do not intersect the view frustum of the virtual camera are excluded <read on determined occluded points>).

However, Ponto does not expressly disclose excluding points of the 3D point cloud that are determined to be occluded based on the 3D mesh. Anthony discloses excluding points of the 3D point cloud that are determined to be occluded based on the 3D mesh (Anthony, [0118]: teaches selecting points <read on subset of points> of point cloud 740 <read on 3D point cloud> that are to be connected to form a mesh <read on 3D mesh>, which represents real-world object 216).

Anthony is analogous art with respect to Ponto because they are from the same field of endeavor, namely handling 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a system that generates low and high quality models from 3D point clouds as taught by Anthony into the teaching of Ponto. Doing so would allow the user to compare the generated models with the point clouds to deduce imperfections, resulting in more accurate 3D mesh generation. Therefore, it would have been obvious to combine Anthony with Ponto.

Regarding Claim 3, the combination of Ponto and Anthony discloses the method of Claim 1. Additionally, Ponto further discloses wherein generating the 2D view comprises projecting the subset of points into a 2D display space to generate the 2D view (Ponto, [0102]: teaches projecting a grid <read on projecting subset of points> of an evaluated octant from 3D to 2D on a plane parallel to a plane <read on 2D display space> represented by the frame buffer object, where "based on the orientation of the octant with respect to the viewing frustum, process 1000 can determine how many exterior voxels of the octant are visible <read on generate 2D view>").

Regarding Claim 5, the combination of Ponto and Anthony discloses the method of Claim 1. Additionally, Ponto further discloses enhancing the 2D view by replacing corresponding 2D points representing an identified 2D surface using a planar element in the 2D view (Ponto, [0061]: teaches for a given frame <read on 2D view>, "process 300 <read on enhancing> can replace (e.g., by overwriting) the point <read on replacing corresponding 2D points> corresponding to each pixel in a particular frame buffer object corresponding to a particular level when a new point copied from the CPU cache for that level is closer to the camera than the point from the last frame," where "in order to avoid introducing noise when reprojecting these points (e.g., if the offset were recursed upon), the stored world coordinates for the point are not changed to add the random offset"; [0061]: further teaches "when the camera is close to a surface <read on identified 2D surface using planar element>, the random offset can allow walls to appear filled in by causing some points that would otherwise correspond to the same pixel to correspond to different pixels, whereas without the offset these walls may appear to be semitransparent"; Note: "enhancing the 2D view" is being interpreted as inpainting/recovering holes in the 2D view after replacing the 2D planes (i.e., walls, floors, etc.)).

Regarding Claim 8, the combination of Ponto and Anthony discloses the method of Claim 1. Additionally, Ponto further discloses wherein the 2D view of the 3D point cloud is separately generated for each frame of an extended reality experience (Ponto, [0109]: teaches rendering two images of differing viewpoints in parallel <read on separate 2D view generation for each frame> for a head mounted display <read on extended reality experience>).
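For orientation on the technique the examiner maps in the Ponto citations above (frustum-based point selection and 3D-to-2D projection), the following is an illustrative sketch only: generic view-frustum culling and pinhole projection in NumPy. It is not Ponto's octree renderer and not the claimed method; the function name and camera parameters are hypothetical.

import numpy as np

def select_and_project(points, f=500.0, near=0.1, far=100.0, w=640, h=480):
    """points: (N, 3) array in camera coordinates, +z pointing into the scene."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    in_depth = (z > near) & (z < far)            # drop points behind or too far away
    u = f * x[in_depth] / z[in_depth] + w / 2    # pinhole projection to pixel coords
    v = f * y[in_depth] / z[in_depth] + h / 2
    on_screen = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return np.stack([u[on_screen], v[on_screen]], axis=1)   # the 2D view

pts = np.random.uniform(-5.0, 5.0, size=(1000, 3))
pts[:, 2] += 6.0                                 # toy cloud placed in front of the camera
print(select_and_project(pts).shape)             # (N_visible, 2)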
Regarding Claim 58, the combination of Ponto and Anthony discloses the method of Claim 1. Additionally, Ponto further discloses wherein selecting the subset of points of the 3D point cloud is based on depth information [[of 3D mesh]] (Ponto, [0058]: teaches "the points can be selected for each frame from the whole dataset of points included in visible octants (i.e., selection with replacement)"; [0059]: teaches the points in the frame buffer including a first binding point for geometry <read on 3D mesh> and color, and a second binding point for depth <read on depth information>; Note: the binding points are being interpreted as used for a 3D model, as the first binding point is for geometry and color).

However, Ponto does not expressly disclose depth information of 3D mesh. Anthony discloses depth information of 3D mesh (Anthony, [0118]: teaches selecting points of point cloud 740 that are to be connected to form a mesh <read on 3D mesh>, which represents real-world object 216; Note: it is common in the art for captured point clouds to include depth data/information).

Anthony is analogous art with respect to Ponto because they are from the same field of endeavor, namely handling 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a system that generates low and high quality models from 3D point clouds as taught by Anthony into the teaching of Ponto. Doing so would allow the user to compare the generated models with the point clouds to deduce imperfections, resulting in more accurate 3D mesh generation. Therefore, it would have been obvious to combine Anthony with Ponto.

Claims 4, 6-7, 18, and 59-60 are rejected under 35 U.S.C. 103 as being unpatentable over Ponto et al. (US 20210209738 A1, previously cited), hereinafter referenced as Ponto, in view of Anthony et al. (US 20180025530 A1), hereinafter referenced as Anthony, as applied to Claims 1 and 5 above respectively, and further in view of Chupeau et al. (US 20220159231 A1, previously cited), hereinafter referenced as Chupeau.

Regarding Claim 4, the combination of Ponto and Anthony discloses the method of Claim 1. Additionally, Ponto further discloses wherein generating the 2D view comprises: [[removing the occluded points of the 3D point cloud to obtain the subset of points; and]] generating the 2D view by projecting the subset of points based on the viewpoint (Ponto, [0092]: teaches, for each octant that is in the viewing frustum, process 900 determines "how many cells map to each pixel in the frame buffer if that octant were used to fill a portion of the frame buffer object," where visible octants are selected based on the projected points per pixel).

However, the combination of Ponto and Anthony does not expressly disclose removing the occluded points of the 3D point cloud to obtain the subset of points. Chupeau discloses removing the occluded points of the 3D point cloud to obtain the subset of points (Chupeau, [0089]: teaches a second part <read on subset of points> of a 3D scene being obtained by removing from the 3D scene the points <read on removing occluded points of 3D point cloud> that are visible from the first viewpoint and by projecting the remaining points according to the same point of view, where "second parts 62 comprise texture information of parts of the 3D scene that are complementary <read on occluded points of 3D point cloud> to the part visible from the point of view").

Chupeau is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a video encoder-decoder architecture for projecting 3D scene sequences in a point cloud format as taught by Chupeau into the teaching of Ponto, in view of Anthony. Doing so would allow the system to obtain 3D model data based on given viewpoints, which can be incorporated with the LOD octree architecture for efficient mesh generation, improving overall graphics rendering performance. Therefore, it would have been obvious to combine Chupeau with Ponto, in view of Anthony.
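In generic form, the occlusion-removal limitation discussed above is a depth-buffer test. A hedged sketch, assuming a depth map rendered from the mesh at the same viewpoint; cull_occluded and its parameters are hypothetical, not the application's algorithm:

import numpy as np

def cull_occluded(points_uvz, mesh_depth, eps=1e-2):
    """points_uvz: (N, 3) rows of (u, v, z), already projected into the image;
    mesh_depth: (H, W) depth map rendered from the 3D mesh at the same viewpoint."""
    u = points_uvz[:, 0].astype(int)
    v = points_uvz[:, 1].astype(int)
    h, w = mesh_depth.shape
    keep = np.zeros(len(points_uvz), dtype=bool)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(inside)
    # keep a point only if it is not behind the mesh surface at its pixel
    keep[idx] = points_uvz[idx, 2] <= mesh_depth[v[idx], u[idx]] + eps
    return points_uvz[keep]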
Regarding Claim 6, the combination of Ponto and Anthony discloses the method of Claim 5. The combination of Ponto and Anthony does not expressly disclose the limitations of Claim 6; however, Chupeau discloses inpainting the enhanced 2D view to modify color information of the enhanced 2D view (Chupeau, [0095]: teaches an inpainting algorithm using a warped patch, where "a patch 91 is image data representative of a projection 92 of a part 90 of the 3D scene onto an image plane" and "only the color component of points of part 90 are projected onto patch 91"; [0096]: teaches texture patches (i.e., color patches) <read on color information> being post-processed <read on modify> during decoding; FIG. 9 teaches patch 91 being a projection 92 of part 90 of the 3D scene onto an image plane <read on enhanced 2D view>; Note: inpainting is defined in the art as filling in or restoring missing portions of an image).

Chupeau is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a video encoder-decoder architecture for projecting 3D scene sequences in a point cloud format as taught by Chupeau into the teaching of Ponto, in view of Anthony. Doing so would allow the system to obtain 3D model data based on given viewpoints, which can be incorporated with the LOD octree architecture for efficient mesh generation, improving overall graphics rendering performance. Therefore, it would have been obvious to combine Chupeau with Ponto, in view of Anthony.

Regarding Claim 7, the combination of Ponto and Anthony discloses the method of Claim 1. The combination of Ponto and Anthony does not expressly disclose the limitations of Claim 7; however, Chupeau discloses inpainting the 2D view to modify color information of the 2D view (Chupeau, [0095]: teaches the inpainting algorithm using the warped patch, where "a patch 91 is image data representative of a projection 92 of a part 90 of the 3D scene onto an image plane" and "only the color component of points of part 90 are projected onto patch 91"; [0096]: teaches texture patches (i.e., color patches) <read on color information> being post-processed <read on modify> during decoding).

Chupeau is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a video encoder-decoder architecture for projecting 3D scene sequences in a point cloud format as taught by Chupeau into the teaching of Ponto, in view of Anthony. Doing so would allow the system to obtain 3D model data based on given viewpoints, which can be incorporated with the LOD octree architecture for efficient mesh generation, improving overall graphics rendering performance. Therefore, it would have been obvious to combine Chupeau with Ponto, in view of Anthony.

Regarding Claim 18, the combination of Ponto and Anthony discloses the method of Claim 1. The combination of Ponto and Anthony does not expressly disclose the limitations of Claim 18; however, Chupeau discloses wherein a portion of the 2D view of the 3D mesh is removed or visually modified based on the 3D mesh (Chupeau, [0093]: teaches generating a viewport image according to the location and direction of the current point of view, where dis-occluded parts of the scene (i.e., holes) <read on portion of 2D view of 3D mesh> are visible in the viewport image; [0093]: further teaches filling <read on visually modifying portion of 2D view> the dis-occluded holes using a patch-based inpainting algorithm, which uses color patches 62).

Chupeau is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a video encoder-decoder architecture for projecting 3D scene sequences in a point cloud format as taught by Chupeau into the teaching of Ponto, in view of Anthony. Doing so would allow the system to obtain 3D model data based on given viewpoints, which can be incorporated with the LOD octree architecture for efficient mesh generation, improving overall graphics rendering performance. Therefore, it would have been obvious to combine Chupeau with Ponto, in view of Anthony.

Regarding Claim 59, the combination of Ponto and Anthony discloses the method of Claim 1. The combination of Ponto and Anthony does not expressly disclose the limitations of Claim 59; however, Chupeau discloses wherein selecting the subset of points of the 3D point cloud is based on distances of points of the 3D point cloud from a surface of the 3D mesh (Chupeau, [0037]: teaches points of point cloud 11 being points spread <read on distances of points> on the surface of faces of the mesh <read on 3D mesh>).

Chupeau is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a video encoder-decoder architecture for projecting 3D scene sequences in a point cloud format as taught by Chupeau into the teaching of Ponto, in view of Anthony. Doing so would allow the system to obtain 3D model data based on given viewpoints, which can be incorporated with the LOD octree architecture for efficient mesh generation, improving overall graphics rendering performance. Therefore, it would have been obvious to combine Chupeau with Ponto, in view of Anthony.
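Chupeau's patch-based inpainting is more involved than can be shown briefly. As a stand-in, here is a naive diffusion-style hole filler illustrating what "filling dis-occluded holes" means in practice; all names are hypothetical, and np.roll wraps at image borders, a simplification a real implementation would avoid:

import numpy as np

def inpaint_holes(image, hole_mask, iters=200):
    """image: (H, W) float array; hole_mask: (H, W) bool, True where dis-occluded."""
    img = np.where(hole_mask, 0.0, image).astype(float)
    filled = (~hole_mask).astype(float)        # 1.0 where a pixel holds valid data
    for _ in range(iters):
        num = np.zeros_like(img)               # sum of valid 4-neighbor values
        den = np.zeros_like(img)               # count of valid 4-neighbors
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            num += np.roll(img * filled, (dy, dx), axis=(0, 1))
            den += np.roll(filled, (dy, dx), axis=(0, 1))
        upd = hole_mask & (den > 0)            # hole pixels with valid data nearby
        img[upd] = (num / np.maximum(den, 1.0))[upd]
        filled[upd] = 1.0                      # newly filled pixels join the pool
    return img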
Regarding Claim 60, the combination of Ponto and Anthony discloses the method of Claim 1. Additionally, Ponto further discloses wherein selecting the subset of points of the 3D point cloud comprises identifying 3D points [[for inpainting]] based on depth information in the 3D mesh (Ponto, [0058]: teaches "the points <read on 3D points> can be selected for each frame from the whole dataset of points included in visible octants (i.e., selection with replacement)"; [0059]: teaches the points in the frame buffer including a first binding point for geometry and color, and a second binding point for depth <read on depth information>).

However, the combination of Ponto and Anthony does not expressly disclose identifying 3D points for inpainting based on depth information in the 3D mesh. Chupeau discloses identifying 3D points for inpainting based on depth information in the 3D mesh (Chupeau, [0036]: teaches using an inpainting algorithm to paint in occluded regions from a central point of view).

Chupeau is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement a video encoder-decoder architecture for projecting 3D scene sequences in a point cloud format as taught by Chupeau into the teaching of Ponto, in view of Anthony. Doing so would allow the system to obtain 3D model data based on given viewpoints, which can be incorporated with the LOD octree architecture for efficient mesh generation, improving overall graphics rendering performance. Therefore, it would have been obvious to combine Chupeau with Ponto, in view of Anthony.

Claims 10-12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ponto et al. (US 20210209738 A1, previously cited), hereinafter referenced as Ponto, in view of Anthony et al. (US 20180025530 A1), hereinafter referenced as Anthony, as applied to Claim 1 above, and further in view of Kamaraju et al. (US 20220385721 A1, previously cited), hereinafter referenced as Kamaraju.

Regarding Claim 10, the combination of Ponto and Anthony discloses the method of Claim 1. Additionally, Ponto further discloses wherein: [[the 3D point cloud and the 3D mesh corresponding to the 3D point cloud are generated in a previous capture session and stored; obtaining the 3D point cloud comprises accessing the stored 3D point cloud; obtaining the 3D mesh comprises accessing the stored 3D mesh; and]] the method further comprises rendering the 2D view of the 3D point cloud at a prescribed frame rate in an extended reality environment based on the viewpoint (Ponto, [0045]: teaches rendering point cloud data of a scene at relatively high frame rates, where rendering the images <read on rendering 2D view of 3D point cloud> of detailed portions of a point cloud can be between 70-120 FPS <read on prescribed frame rate> on a display; [0049]: teaches the display being a head mounted display (HMD) that provides a VR/AR experience to the user).

However, the combination of Ponto and Anthony does not expressly disclose the 3D point cloud and the 3D mesh corresponding to the 3D point cloud are generated in a previous capture session and stored; obtaining the 3D point cloud comprises accessing the stored 3D point cloud; and obtaining the 3D mesh comprises accessing the stored 3D mesh.

Kamaraju discloses the 3D point cloud and the 3D mesh corresponding to the 3D point cloud are generated in a previous capture session and stored (Kamaraju, [0038]: teaches captured video data being merged with previously captured AR data <read on previous captured session>, where "the video and associated AR data may be captured at a previous time, and stored into an appropriate file format that captures the video along with the raw feature points and motion data"; [0024]: teaches a stored and expanded point cloud <read on 3D point cloud> combined with captured images and/or video and any AR information; [0061]: teaches a processed 3D mesh being stored for later retrieval); obtaining the 3D point cloud comprises accessing the stored 3D point cloud (Kamaraju, [0024]: teaches a stored and expanded point cloud <read on 3D point cloud> combined with captured images and/or video and any AR information); and obtaining the 3D mesh comprises accessing the stored 3D mesh (Kamaraju, [0061]: teaches a processed 3D mesh being stored for later retrieval).

Kamaraju is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling point cloud data in an AR environment. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to merge previously captured AR data with video as taught by Kamaraju into the teaching of Ponto, in view of Anthony. Doing so would allow for more accurate real-time updates of the AR environment, thereby yielding predictable results. Therefore, it would have been obvious to combine Kamaraju with Ponto, in view of Anthony.

Regarding Claim 11, the combination of Ponto and Anthony discloses the method of Claim 1. Additionally, Ponto further discloses wherein the 3D point cloud is captured at a frame rate at a first electronic device located in the physical environment (Ponto, [0045]: teaches rendering 3D point cloud data of a scene at relatively high frame rates, where rendering the images of detailed portions of a point cloud can be between 70-120 FPS <read on frame rate at first electronic device> on a display; [0049]: teaches the display being a head mounted display (HMD) <read on first electronic device located in physical environment> that provides a VR/AR experience to the user), and wherein [[the 3D mesh corresponding to the 3D point cloud is generated at the frame rate by the first electronic device.]]

However, the combination of Ponto and Anthony does not expressly disclose the 3D mesh corresponding to the 3D point cloud is generated at the frame rate by the first electronic device. Kamaraju discloses the 3D mesh corresponding to the 3D point cloud is generated at the frame rate by the first electronic device (Kamaraju, [0038]: teaches a capture device <read on first electronic device> using AR data 202 to generate a 3D mesh, which correlates to a captured point cloud of 3D objects, where "the AR data may be associated with each frame of the video or with a group of frames <read on frame rate by first electronic device>").

Kamaraju is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling point cloud data in an AR environment. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to merge previously captured AR data with video as taught by Kamaraju into the teaching of Ponto, in view of Anthony. Doing so would allow for more accurate real-time updates of the AR environment, thereby yielding predictable results. Therefore, it would have been obvious to combine Kamaraju with Ponto, in view of Anthony.

Regarding Claim 12, the combination of Ponto, Anthony, and Kamaraju discloses the method of Claim 11. Additionally, Ponto further discloses wherein: [[obtaining the 3D point cloud comprises receiving, by a second electronic device, the 3D point cloud from the first electronic device; obtaining the obtained 3D mesh comprises receiving, by the second electronic device, the 3D mesh from the first electronic device; and]] the method further comprises [[concurrently]] rendering, by the second electronic device, the 2D view of the obtained 3D point cloud at the frame rate (Ponto, [0102]: teaches projecting a grid of an evaluated octant from 3D to 2D on a plane parallel to a plane represented by the frame buffer object, where "based on the orientation of the octant with respect to the viewing frustum, process 1000 can determine how many exterior voxels of the octant are visible"; [0112]: teaches the generation of an initial display based on the initial camera viewpoint being performed in parallel <read on concurrent rendering>).

However, the combination of Ponto and Anthony does not expressly disclose obtaining the 3D point cloud comprises receiving, by a second electronic device, the 3D point cloud from the first electronic device; obtaining the obtained 3D mesh comprises receiving, by the second electronic device, the 3D mesh from the first electronic device; and the method further comprises concurrently rendering, by the second electronic device, the 2D view of the obtained 3D point cloud at the frame rate.

Kamaraju discloses obtaining the 3D point cloud comprises receiving, by a second electronic device, the 3D point cloud from the first electronic device (Kamaraju, [0072]: teaches "the data associated with a physical environment <read on 3D point cloud> may be similar to the data captured by the consumer device 704 <read on first electronic device> of the physical space 706, that is then transmitted via data path 708 to the cloud service 710," such as a server <read on second electronic device>); obtaining the obtained 3D mesh comprises receiving, by the second electronic device, the 3D mesh from the first electronic device (Kamaraju, [0072]: teaches "the data associated with a physical environment <read on 3D mesh> may be similar to the data captured by the consumer device 704 <read on first electronic device> of the physical space 706, that is then transmitted via data path 708 to the cloud service 710," such as a server <read on second electronic device>); and the method further comprises concurrently rendering, by the second electronic device, the 2D view of the obtained 3D point cloud at the frame rate (Kamaraju, [0026]: teaches the server <read on second electronic device> displaying the rendered 3D model on a device).

Kamaraju is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling point cloud data in an AR environment. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to merge previously captured AR data with video as taught by Kamaraju into the teaching of Ponto, in view of Anthony. Doing so would allow for more accurate real-time updates of the AR environment, thereby yielding predictable results. Therefore, it would have been obvious to combine Kamaraju with Ponto, in view of Anthony.
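The store-then-retrieve pattern recited in Claim 10 (capture in one session, obtain by accessing storage in a later one) can be illustrated in a few lines. The .npz container and function names are hypothetical, as neither the claims nor the references specify a storage format:

import numpy as np

def save_capture(path, points, colors, mesh_vertices, mesh_faces):
    # capture session: persist both the point cloud and its corresponding mesh
    np.savez(path, points=points, colors=colors,
             mesh_vertices=mesh_vertices, mesh_faces=mesh_faces)

def load_capture(path):
    # later session: "obtaining" amounts to accessing the stored representations
    data = np.load(path)
    return (data["points"], data["colors"],
            data["mesh_vertices"], data["mesh_faces"])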
Regarding Claim 14, the combination of Ponto, Anthony, and Kamaraju discloses the method of Claim 12. The combination of Ponto and Anthony does not expressly disclose the limitations of Claim 14; however, Kamaraju discloses wherein the 2D view of the 3D point cloud further comprises a virtual representation of a user of the first electronic device for a multi-user communication session (Kamaraju, [0027]: teaches system 100 capturing image/video that includes AR data, where the captured environment may include one or more 3D objects 108, such as a person <read on virtual representation of user of first electronic device> as shown in FIG. 1).

Kamaraju is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling point cloud data in an AR environment. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to merge previously captured AR data with video as taught by Kamaraju into the teaching of Ponto, in view of Anthony. Doing so would allow for more accurate real-time updates of the AR environment, thereby yielding predictable results. Therefore, it would have been obvious to combine Kamaraju with Ponto, in view of Anthony.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Ponto et al. (US 20210209738 A1, previously cited), hereinafter referenced as Ponto, in view of Anthony et al. (US 20180025530 A1), hereinafter referenced as Anthony, as applied to Claim 1 above, and further in view of Thomas et al. (US 20210142577 A1, previously cited), hereinafter referenced as Thomas.

Regarding Claim 16, the combination of Ponto and Anthony discloses the method of Claim 1. The combination of Ponto and Anthony does not expressly disclose the limitations of Claim 16; however, Thomas discloses generating surface normals for polygons identified using vertices in the 3D mesh (Thomas, [0082]: teaches determining surface normals of 3D planar surfaces <read on identified polygons using vertices in 3D mesh> associated with a closed boundary, such as a roof facet of a house, depicted in a 2D image as shown in FIG. 11); and modifying the 2D view of the 3D point cloud and virtual content for visual effects or user interactions based on the surface normals (Thomas, [0032]: teaches the computer system detecting a specific scene effect <read on visual effects> associated with the original 2D image, where "the detected scene effect can be applied to the 2D image augmented <read on modifying 2D view of 3D point cloud and virtual content> by the synthetic image data to generate the photorealistic image"; [0033]: teaches light source estimation techniques for specific scene effects based on a given set of known surface normals and corresponding luminance values).

Thomas is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely handling point clouds through point reprojection. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to detect scene effects from the original input data as taught by Thomas into the teaching of Ponto, in view of Anthony. Doing so would allow the system to apply lighting effects based on calculated light source estimations, thereby yielding predictable results. Therefore, it would have been obvious to combine Thomas with Ponto, in view of Anthony.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Ponto et al. (US 20210209738 A1, previously cited), hereinafter referenced as Ponto, in view of Anthony et al. (US 20180025530 A1), hereinafter referenced as Anthony, as applied to Claim 1 above, and further in view of Owechko (US 20190035150 A1).

Regarding Claim 17, the combination of Ponto and Anthony discloses the method of Claim 1. The combination of Ponto and Anthony does not expressly disclose the limitations of Claim 17; however, Owechko discloses wherein the 3D mesh is generated by executing a meshing algorithm on the 3D point cloud (Owechko, [0061]: teaches the process <read on meshing algorithm> of generating a resolution adaptive mesh <read on 3D mesh> from point clouds), and wherein the 3D mesh is a low-resolution mesh with vertices between 1-6 centimeters apart (Owechko, [0059]: teaches generating a surface representation, including the resolution adaptive mesh <read on low-resolution mesh>, of the object using point clouds, where the triangular mesh V is fitted to the point clouds by finding positions of vertices 606a-606c of each mesh triangle 602a-602n that minimize the objective function E(V,PC) <read on vertices between 1-6 cm apart>; Note: although the distances between vertices (i.e., 1-6 cm) are not explicitly stated, it would be obvious to one skilled in the art to scale a mesh such that the vertices are 1-6 cm apart).

Owechko is analogous art with respect to Ponto, in view of Anthony, because they are from the same field of endeavor, namely rendering 3D meshes from 3D point clouds. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to generate a resolution adaptive mesh as taught by Owechko into the teaching of Ponto, in view of Anthony. Doing so would allow for dynamic mesh rendering to save on rendering performance, thereby yielding improved results. Therefore, it would have been obvious to combine Owechko with Ponto, in view of Anthony.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Anderberg (US 20210072394 A1) discloses enabling colorization and color adjustments on 3D point clouds;
Chien et al. (US 20180300937 A1) discloses restoring an occluded background region, which includes detecting surfaces of a point cloud;
Kohli et al. (US 20120257814 A1) discloses completing an image using scene geometry data, such as depth information;
Li et al. (US 20160078676 A1) discloses receiving and converting a point cloud into a mesh model; and
Zhang et al. (US 20180225866 A1) discloses a merged, fused 3D point cloud that includes acquiring multiple sets of images of a scene from varying viewing angles.
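The Claim 16 limitation of generating surface normals for polygons identified by mesh vertices is textbook geometry: the cross product of two triangle edges. A minimal sketch (generic computation, not Thomas's light-estimation pipeline; names are hypothetical):

import numpy as np

def face_normals(vertices, faces):
    """vertices: (V, 3) float array; faces: (F, 3) integer vertex indices."""
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    n = np.cross(v1 - v0, v2 - v0)             # face normal via edge cross product
    return n / np.maximum(np.linalg.norm(n, axis=1, keepdims=True), 1e-12)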
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARL TRUONG, whose telephone number is (703) 756-5915. The examiner can normally be reached 10:30 AM - 7:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.D.T./
Examiner, Art Unit 2614

/KENT W CHANG/
Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

Feb 21, 2024
Application Filed
Oct 20, 2025
Non-Final Rejection — §103
Jan 22, 2026
Interview Requested
Jan 29, 2026
Examiner Interview Summary
Jan 29, 2026
Applicant Interview (Telephonic)
Feb 26, 2026
Response Filed
Mar 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573149
DATA PROCESSING METHOD AND APPARATUS, DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561875
ANIMATION FRAME DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12494013
AUTODECODING LATENT 3D DIFFUSION MODELS
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12456258
SYSTEMS AND METHODS FOR GENERATING A SHADOW MESH
Granted Oct 28, 2025 (2y 5m to grant)
Patent 12444020
FLEXIBLE IMAGE ASPECT RATIO USING MACHINE LEARNING
Granted Oct 14, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 83% (+31.0%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 29 resolved cases by this examiner. Grant probability derived from career allow rate.
