Prosecution Insights
Last updated: April 19, 2026
Application No. 18/502,306

Systems and Methods For Content Delivery Acceleration of Virtual Reality and Augmented Reality Web Pages

Status: Non-Final OA (§102, §103)
Filed: Nov 06, 2023
Examiner: WILSON, NICHOLAS R
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Akamai Technologies, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 1y 12m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (467 granted / 537 resolved) — above average, +25.0% vs TC avg
Interview Lift: +12.1% among resolved cases with an interview (moderate lift)
Avg Prosecution: 1y 12m (fast prosecutor); 25 applications currently pending
Career History: 562 total applications across all art units

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§102: 24.0% (-16.0% vs TC avg)
§103: 41.1% (+1.1% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 537 resolved cases.

Office Action

Rejections asserted: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The specification is objected to because it includes example code. The applicant can file a compact disc with the code to be compliant with 37 CFR 1.52 and 1.96, but the code needs to be removed from the specification.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 23-25, 28-31, 34-37, and 40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Aggarwal et al. (US 2019/0188828) (hereinafter referred to as Aggarwal).

Regarding claim 23, Aggarwal teaches a method performed by one or more computers, each of which comprises circuitry forming one or more processors and memory storing instructions for execution on the one or more processors (The delivery and display of 360-degree video on HMDs present many technical challenges. 360-degree videos are ultra-high resolution spherical videos, which contain an omni-directional view of a scene. While the 360-degree video demands high bandwidth, at any instant of time users are only viewing a small portion of the video according to the HMD field of view (FOV). See paragraph [0045]), the method comprising: generating image data for each of a plurality of concentric spheres representing a sky, each of the plurality of concentric spheres associated with a respective different resolution (The 360-degree VR videos are created by mapping a raw 360-degree video as a 3D texture onto a 3D geometry mesh (e.g., a sphere), with the user at the center of that spherical geometry. See paragraph [0048]) (Referring to FIG. 1A, a left subfigure shows an example 90-degree viewport as projected on a spherical 3D geometry, while the right subfigure shows how the mapping of the viewport corresponds to that of a given frame on a raw 360-degree video. Further, there is a technique of assigning higher quality to parts within the user's viewport, and lower quality to parts that are not within the immediate viewport of the user. This approach also makes it possible to stream tiles inside the viewport at a highest resolution, at or near the native resolution of the HMD. See paragraph [0049]) (The first sphere represents a first frame at a first time); sending first image data for a first sphere of the plurality of concentric spheres to a user device for creating the sky in a VR/AR experience, which comprises any of a virtual reality (VR) and augmented reality (AR) experience (The 360-degree video is captured in every direction from a unique point, so it is essentially a spherical video. Since related art video encoders operate on a 2D rectangular image, a key step of the encoding process is to project the spherical video onto a planar surface. It is possible to generate the viewport for any position and angle in the sphere without any information loss. The different projections are used to map data for different applications as every projection has its own level of importance and characteristics. See paragraph [0050]); and after sending the first image data, sending second image data for a second sphere of the plurality of concentric spheres to the user device for creating the sky in the VR/AR experience (The method according to one or more embodiments can also be used to provide adaptive streaming video playback over a network. See paragraph [0080]) (The second sphere is represented by a frame transferred at a second time later than the first).

Regarding claim 24, Aggarwal teaches the method of claim 23, wherein the first sphere of the plurality of concentric spheres has a higher resolution in an initial field of view in the VR/AR experience than in other fields of view in the VR/AR experience (citing paragraphs [0048] and [0049], quoted in the rejection of claim 23 above).

Regarding claim 25, Aggarwal teaches the method of claim 24, wherein the second sphere of the plurality of concentric spheres has a higher resolution in the other fields of view in the VR/AR experience relative to the first sphere (again citing paragraphs [0048] and [0049], quoted above) (The user turns their head; the second sphere is represented as a second time frame with the user looking in a different direction.).

Regarding claim 28, Aggarwal teaches the method of claim 23, wherein the sending to the first sphere is in response to a request from the user device ((a) Hardcoded 3D geometry mesh at the immersive client 300: In this case, the 3D mesh information is hardcoded or pre-stored at the immersive client 300 in advance and the immersive client 300 simply obtains or receives the stream from the server 200 and maps the stream to the 3D mesh. (b) The 3D geometry mesh is streamed to the immersive client 300 in a separate connection: In this case, one connection is made to fetch 3D mesh information from the server 300 and another for the media streams (Texture). (c) 3D geometry mesh as a track in the media stream: In this case, the 3D mesh information is a part of a media stream itself, e.g., as a different track. See paragraphs [0124]-[0126]) (In an embodiment, the server 200 may obtain the FOV input from the immersive client 300. The server 200 may obtain quaternion input from an HMD and convert the quaternion input to a unit vector. The server 200 may generate new vectors by rotating the current vector by an angle of FOV/2 in both clockwise and counter-clockwise directions. The server 200 may find or determine the intersection point of these vectors with the projection geometry. The server 200 may find or determine an equation of a truncated plane using the above points and the normal vector. The equation and corresponding calculation according to an embodiment are explained in further detail below. The server 200 may apply the desired projection geometry and use the truncated plane equation to find the correct remapping of the existing data, in such a way that all points exceeding the specified threshold are mapped to the plane. The server 200 may send the above remapped data to the client 300. The client 300 may apply the remapped textures on the required or utilized mesh for the display on the display device 360. See paragraph [0127]).

Regarding claim 29, Aggarwal teaches a system, comprising one or more computers in a content delivery network, each of the one or more computers comprising circuitry forming at least one processor and memory storing instructions for execution on the at least one processor to operate the system (citing paragraph [0045], quoted in the rejection of claim 23 above) (FIG. 4 illustrates an overview of a system 1000 for managing the immersive data, according to an embodiment. Referring to FIG. 4, the system 1000 may include the immersive client 300 and the server 200. The immersive client 300 can be, but is not limited to, a cellular phone, a smart phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, an HMD device, smart glasses, or the like. See paragraph [0119] and FIG. 4), as set forth below: generate image data for each of a plurality of concentric spheres representing a sky, each of the plurality of concentric spheres associated with a respective different resolution (citing paragraphs [0048] and [0049], quoted above) (The first sphere represents a first frame at a first time); send first image data for a first sphere of the plurality of concentric spheres to a user device for creating the sky in a VR/AR experience, which comprises any of a virtual reality (VR) and augmented reality (AR) experience (citing paragraph [0050], quoted above); and after sending the first image data, send second image data for a second sphere of the plurality of concentric spheres to the user device for creating the sky in the VR/AR experience (citing paragraph [0080], quoted above) (The second sphere is represented by a frame transferred at a second time later than the first).

Regarding claim 30, Aggarwal teaches the system of claim 29, wherein the first sphere of the plurality of concentric spheres has a higher resolution in an initial field of view in the VR/AR experience than in other fields of view in the VR/AR experience (citing paragraphs [0048] and [0049], quoted above).

Regarding claim 31, Aggarwal teaches the system of claim 30, wherein the second sphere of the plurality of concentric spheres has a higher resolution in the other fields of view in the VR/AR experience relative to the first sphere (citing paragraphs [0048] and [0049], quoted above) (The user turns their head; the second sphere is represented as a second time frame with the user looking in a different direction.).

Regarding claim 34, Aggarwal teaches the system of claim 29, wherein the sending to the first sphere is in response to a request from the user device (citing paragraphs [0124]-[0126] and [0127], quoted in the rejection of claim 28 above).

Regarding claim 35, Aggarwal teaches a non-transitory computer readable medium holding computer program instructions for execution on at least one hardware processor (The memory 230 also stores instructions to be executed by the processor 240. The memory 230 may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory 230 may, in some examples, be considered a non-transitory storage medium. The term "non-transitory" may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted as meaning that the memory 230 is non-movable. In some examples, the memory 230 can be configured to store larger amounts of information than the memory. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache). See paragraph [0117]), the computer program instructions comprising instructions to: generate image data for each of a plurality of concentric spheres representing a sky, each of the plurality of concentric spheres associated with a respective different resolution (citing paragraphs [0048] and [0049], quoted above) (The first sphere represents a first frame at a first time); send first image data for a first sphere of the plurality of concentric spheres to a user device for creating the sky in a VR/AR experience, which comprises any of a virtual reality (VR) and augmented reality (AR) experience (citing paragraph [0050], quoted above); and after sending the first image data, send second image data for a second sphere of the plurality of concentric spheres to the user device for creating the sky in the VR/AR experience (citing paragraph [0080], quoted above) (The second sphere is represented by a frame transferred at a second time later than the first).

Regarding claim 36, Aggarwal teaches the non-transitory computer readable medium of claim 35, wherein the first sphere of the plurality of concentric spheres has a higher resolution in an initial field of view in the VR/AR experience than in other fields of view in the VR/AR experience (citing paragraphs [0048] and [0049], quoted above).
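An editorial aside before the remaining claims: the viewport pipeline of Aggarwal's paragraph [0127], quoted in the rejection of claim 28 above and relied on repeatedly below, is concrete enough to sketch. The TypeScript fragment below is our illustration of that recipe, not code from Aggarwal or the application, and every name in it is invented. It converts an HMD orientation quaternion to a gaze unit vector, then rotates that vector by plus and minus FOV/2 to obtain the viewport's edge directions.

```typescript
// Editorial sketch (not from Aggarwal or the application): derive viewport
// edge vectors from an HMD orientation quaternion and a horizontal FOV.

type Vec3 = { x: number; y: number; z: number };
type Quat = { w: number; x: number; y: number; z: number };

// Rotate a vector by a unit quaternion: v' = v + 2w(q x v) + 2 q x (q x v).
function rotateByQuat(v: Vec3, q: Quat): Vec3 {
  // t = 2 * cross(q.xyz, v)
  const tx = 2 * (q.y * v.z - q.z * v.y);
  const ty = 2 * (q.z * v.x - q.x * v.z);
  const tz = 2 * (q.x * v.y - q.y * v.x);
  // v' = v + w*t + cross(q.xyz, t)
  return {
    x: v.x + q.w * tx + (q.y * tz - q.z * ty),
    y: v.y + q.w * ty + (q.z * tx - q.x * tz),
    z: v.z + q.w * tz + (q.x * ty - q.y * tx),
  };
}

// Rotate a vector about the Y (up) axis by `angle` radians.
function rotateAboutY(v: Vec3, angle: number): Vec3 {
  const c = Math.cos(angle);
  const s = Math.sin(angle);
  return { x: c * v.x + s * v.z, y: v.y, z: -s * v.x + c * v.z };
}

// Quaternion input -> gaze unit vector -> edge vectors at +/- FOV/2,
// mirroring the sequence paragraph [0127] recites.
function viewportEdges(hmd: Quat, fovRadians: number): [Vec3, Vec3] {
  const forward = rotateByQuat({ x: 0, y: 0, z: -1 }, hmd); // -Z is "ahead"
  return [
    rotateAboutY(forward, +fovRadians / 2), // clockwise edge
    rotateAboutY(forward, -fovRadians / 2), // counter-clockwise edge
  ];
}
```

Intersecting those edge vectors with the projection geometry, as the quoted passage goes on to do, yields the truncated-plane equation used for the remapping.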
Regarding claim 37, Aggarwal teaches the non-transitory computer readable medium of claim 36, wherein the second sphere of the plurality of concentric spheres has a higher resolution in the other fields of view in the VR/AR experience relative to the first sphere (citing paragraphs [0048] and [0049], quoted in the rejection of claim 23 above) (The user turns their head; the second sphere is represented as a second time frame with the user looking in a different direction.).

Regarding claim 40, Aggarwal teaches the non-transitory computer readable medium of claim 35, wherein the sending to the first sphere is in response to a request from the user device (citing paragraphs [0124]-[0126] and [0127], quoted in the rejection of claim 28 above).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 26, 27, 32, 33, 38, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal et al. (US 2019/0188828) (Aggarwal) in view of Kuzyakov et al. (US 2018/0302590) (hereinafter referred to as Kuzyakov).

Regarding claim 26, Aggarwal teaches the method of claim 23, further comprising: instructing the user device to load the first sphere to create the sky, and then replace it with at least a portion of the second sphere (citing paragraphs [0124]-[0126] and [0127], quoted in the rejection of claim 28 above), but is silent as to "serving a script to the user device to execute, the script" performing that instructing. Kuzyakov teaches presenting virtual spherical 360-degree videos (Some examples of virtual reality content items include spherical videos, half sphere videos (e.g., 180 degree videos), arbitrary partial spheres, 225 degree videos, and 3D 360 videos. Such virtual reality content items need not be limited to videos that are formatted using a spherical shape but may also be applied to immersive videos formatted using other shapes including, for example, cubes, pyramids, and other shape representations of a video recorded three-dimensional world. See paragraph [0044]) and that JavaScript can be used to format social networking systems for virtual content presentation (In one embodiment, the user device 710 may display content from the external system 720 and/or from the social networking system 730 by processing a markup language document 714 received from the external system 720 and from the social networking system 730 using a browser application 712. The markup language document 714 identifies content and one or more instructions describing formatting or presentation of the content. By executing the instructions included in the markup language document 714, the browser application 712 displays the identified content using the format or presentation described by the markup language document 714. For example, the markup language document 714 includes instructions for generating and displaying a web page having multiple frames that include text and/or image data retrieved from the external system 720 and the social networking system 730. In various embodiments, the markup language document 714 comprises a data file including extensible markup language (XML) data, extensible hypertext markup language (XHTML) data, or other markup language data. Additionally, the markup language document 714 may include JavaScript Object Notation (JSON) data, JSON with padding (JSONP), and JavaScript data to facilitate data-interchange between the external system 720 and the user device 710. The browser application 712 on the user device 710 may use a JavaScript compiler to decode the markup language document 714. See paragraph [0081]). Aggarwal and Kuzyakov both teach presenting spherical videos to users, and Kuzyakov teaches that JavaScript can be utilized to present information in a browser-based application; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Aggarwal with the JavaScript capabilities of Kuzyakov such that the virtual information can be deployed utilizing standardized scripting and application tools.

Regarding claim 27, Aggarwal in view of Kuzyakov teaches the method of claim 26, wherein the script comprises JavaScript inserted into a markup language document before it is served to the user device (Kuzyakov; citing paragraphs [0044] and [0081], quoted in the rejection of claim 26 above).

Regarding claim 32, Aggarwal teaches the system of claim 29, wherein operation of the system further comprises: instructing the user device to load the first sphere to create the sky, and then replace it with at least a portion of the second sphere (citing paragraphs [0124]-[0126] and [0127], quoted in the rejection of claim 28 above), but is silent as to "serve a script to the user device to execute, the script" performing that instructing. Kuzyakov teaches presenting virtual spherical 360-degree videos and the use of JavaScript as discussed for claim 26 (see paragraphs [0044] and [0081], quoted above), and the same rationale for combining the references applies.

Regarding claim 33, Aggarwal in view of Kuzyakov teaches the system of claim 32, wherein the script comprises JavaScript inserted into a markup language document before it is served to the user device (Kuzyakov; citing paragraphs [0044] and [0081], quoted above).

Regarding claim 38, Aggarwal teaches the non-transitory computer readable medium of claim 35, the instructions further comprising instructions to: serve a script to the user device to execute, the script instructing the user device to load the first sphere to create the sky, and then replace it with at least a portion of the second sphere (citing paragraphs [0124]-[0126] and [0127], quoted in the rejection of claim 28 above), but is silent as to "serve a script to the user device to execute, the script". Kuzyakov teaches presenting virtual spherical 360-degree videos and the use of JavaScript as discussed for claim 26 (see paragraphs [0044] and [0081], quoted above), and the same rationale for combining the references applies.
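The behavior these §103 rejections target (claims 26, 32, and 38: load the first sphere, then replace it with at least a portion of the second) is easy to picture as client-side script. The sketch below is a hedged editorial illustration under assumed names: `applySkyTexture` and both texture URLs are hypothetical placeholders, and nothing here is taken from the application or either reference.

```typescript
// Editorial sketch of progressive sky loading: paint a low-resolution
// sphere texture immediately, then swap in a higher-resolution one.
// `applySkyTexture` and both URLs are hypothetical placeholders.

declare function applySkyTexture(bitmap: ImageBitmap): void;

async function fetchBitmap(url: string): Promise<ImageBitmap> {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Failed to fetch ${url}: ${response.status}`);
  }
  return createImageBitmap(await response.blob());
}

async function loadSkyProgressively(): Promise<void> {
  // First sphere: small and fast, establishes the sky right away.
  applySkyTexture(await fetchBitmap("/sky/sphere-1-low.jpg"));
  // Second sphere: higher resolution, replaces the first once it arrives.
  applySkyTexture(await fetchBitmap("/sky/sphere-2-high.jpg"));
}
```

The point of the pattern is the one the claims recite: the low-resolution sphere gives the user a sky immediately, and the higher-resolution sphere replaces it as bandwidth allows.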
Regarding claim 39, Aggarwal in view of Kuzyakov teaches the non-transitory computer readable medium of claim 38, wherein the script comprises JavaScript inserted into a markup language document before it is served to the user device (Kuzyakov; citing paragraphs [0044] and [0081], quoted in the rejection of claim 26 above).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS R WILSON whose telephone number is (571) 272-0936. The examiner can normally be reached M-F 7:30-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS R WILSON/
Primary Examiner, Art Unit 2611
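Claims 27, 33, and 39 recite JavaScript "inserted into a markup language document before it is served to the user device." As a rough sketch of what such server-side insertion could look like (the loader path and the origin fetch are assumptions for illustration, not the applicant's or Kuzyakov's implementation):

```typescript
// Editorial sketch: an edge/proxy handler that injects a script tag into
// an HTML document before forwarding it to the user device. The loader
// path "/vr-sky-loader.js" is a hypothetical placeholder.

async function serveWithInjectedScript(originUrl: string): Promise<string> {
  const response = await fetch(originUrl);
  const html = await response.text();
  const tag = '<script src="/vr-sky-loader.js" defer></script>';
  // Insert just before </head> when present; otherwise prepend.
  return html.includes("</head>")
    ? html.replace("</head>", `${tag}</head>`)
    : tag + html;
}
```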

Prosecution Timeline

Nov 06, 2023: Application Filed
Jan 13, 2024: Response after Non-Final Action
Oct 30, 2025: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602869: APPARATUS, SYSTEMS AND METHODS FOR PROCESSING IMAGES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602891: TELEPORTATION SYSTEM COMBINING VIRTUAL REALITY AND AUGMENTED REALITY (granted Apr 14, 2026; 2y 5m to grant)
Patent 12579605: INFORMATION PROCESSING DEVICE AND METHOD OF CONTROLLING DISPLAY DEVICE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567215: SYSTEM AND METHOD OF CONTROLLING SYSTEM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561911: 3D CAGE GENERATION USING SIGNED DISTANCE FUNCTION APPROXIMANT (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+12.1%)
Median Time to Grant: 1y 12m
PTA Risk: Low
Based on 537 resolved cases by this examiner. Grant probability derived from career allow rate.
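As a check, the headline figures reduce to simple arithmetic on the career record shown above; treating the interview lift as additive is our reading of the "+12.1%" line, not something the page states:

```latex
P(\text{grant}) \approx \frac{467}{537} \approx 87.0\%,
\qquad
P(\text{grant} \mid \text{interview}) \approx 87.0\% + 12.1\% = 99.1\% \approx 99\%
```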
