Prosecution Insights
Last updated: April 19, 2026
Application No. 18/883,108

DISPLAYING APPLICATIONS IN 3D WITHIN AN EXTENDED REALITY ENVIRONMENT

Status: Non-Final OA (§102)
Filed: Sep 12, 2024
Examiner: USSERY, CAIDEN ALEXANDER
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Typical Timeline: 2y 9m average prosecution
Career History: 8 total applications across all art units, 8 currently pending
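For readers unfamiliar with these metrics, the two headline figures reduce to simple ratios over an examiner's resolved cases. A minimal sketch of how such metrics are typically computed, using a hypothetical case-record structure (the field and helper names are assumptions for illustration, not this tool's actual schema):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool          # application ended in a grant
    had_interview: bool    # at least one examiner interview of record

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Career allow rate: grants as a share of resolved cases."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Difference in allow rate between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# With 0 resolved cases, both metrics degenerate to 0.0, matching the panel above.
print(allow_rate([]), interview_lift([]))
```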

Statute-Specific Performance

§103: 50.0% allowance (+10.0% vs TC avg)
§102: 44.4% allowance (+4.4% vs TC avg)
§112: 5.6% allowance (-34.4% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 0 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Babu Praveen et al. (Pat. Pub. CN-111684393-A, hereinafter "Praveen").

Regarding claims 1, 15, & 25, Praveen teaches [a] method comprising: at a head mounted device (HMD) having a processor and a display: "Some embodiments relate to a display system for displaying 3D video beyond a display screen surface, the system comprising an augmented reality head mounted display system" (Praveen, Page 5);

obtaining content to render within an extended reality (XR) environment: "a method for displaying 3D video beyond a display screen surface in a virtual and/or augmented reality environment, the method comprising identifying the 3D video" (Praveen, Page 5), where it is understood that augmented or virtual reality can be considered extended reality;

generating, via a rendering framework ("… rendering the one or more 3D models along with the rendering of the 3D video at an appropriate trigger time" (Praveen, Page 5)), a two-dimensional (2D) rendering of the content ("… the first 2D stereoscopic image and the second 2D stereoscopic image originate from two virtual rendering cameras located within different parts of the 3D rendered world" (Praveen, Page 6)), wherein the rendering framework generates three-dimensional (3D) information based on the content: "… the method may include placing one or more pairs of 2D images in a location within a final stage scene (also sometimes referred to as a final 3D rendered world). Also, the method may include rendering the final scene with two final cameras" (Praveen, Page 6), where a final stage scene, or final 3D rendered world, contains the 2D image content;

generating a 3D effect for the rendering of the content based on the 3D information: "… the 3D video is a stereoscopic 3D video in which the one or more 3D models are generated with animation" (Praveen, Page 5), where an animation of the 3D video is considered a dynamic effect;

determining a location of a display region for the content within the XR environment: "A user 50 using, for example, a display system 104 of the AR system 100 may be looking at the user's physical environment/landscape 105. The user's physical environment/landscape 105 may include a virtual television 120 displayed on a vertical wall 125.
The vertical wall 125 may be any vertical wall in a room in which the user 50 is located" (Praveen, Page 13), where the location of the display region is determined to be a vertical wall on which the virtual television is rendered;

and presenting a view of the XR environment: "the virtual television 120 may be anchored and/or fixed to a blank vertical wall 125, or displayed over a picture frame (not shown) hanging on the vertical wall in the user's physical environment/landscape" (Praveen, Page 13), wherein the rendering of the content is presented with the 3D effect at the location in the view of the XR environment: "The virtual television 120 may be a virtual object on or in which the AR system 100 may display the 3D video 115. Virtual television 120 may be a portal within the user's physical environment/landscape 105" (Praveen, Page 13), where the captured content can be displayed to the user, by the worn device, on the displayed virtual television, which is mounted on the wall. The content may then have a 3D animated effect, visible from the virtual television.

[Figure: Praveen, Fig. 1. Frame structure (item 102) placed on the head of the user, in front of the eyes, used to display the virtual/augmented reality to the user. The virtual television (item 120) is shown to the user wearing the frame structure, displaying content such as the 3D video (item 115).]

In regard to claim 15: claim 15 is substantially similar to claim 1, so the rejection analysis for claim 1 also applies to claim 15. Praveen teaches the additional limitations of [a] head mounted device (HMD) comprising: a non-transitory computer-readable storage medium; and one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the HMD device to perform operations comprising ("According to some embodiments, computing system 1400 performs certain operations by processor 1407 executing one or more sequences of one or more instructions contained in main memory 1408. Such instructions may be read into main memory 1408 from another computer-readable/usable medium, such as static storage device 1409 or disk drive 1410" (Praveen, Page 41)): obtaining content to render within an extended reality (XR) environment; generating, via a rendering framework, a two-dimensional (2D) rendering of the content, wherein the rendering framework generates three-dimensional (3D) information based on the content; generating a 3D effect for the rendering of the content based on the 3D information; determining a location of a display region for the content within the XR environment; and presenting a view of the XR environment, wherein the rendering of the content scene is presented with the 3D effect at the location in the view of the XR environment.

In regard to claim 25: claim 25 is substantially similar to claim 1, so the rejection analysis for claim 1 also applies to claim 25. Praveen teaches the additional limitations of [a] non-transitory computer-readable storage medium storing program instructions executable via one or more processors of a head mounted device (HMD) to perform operations comprising: obtaining content to render within an extended reality (XR) environment; generating, via a rendering framework, a two-dimensional (2D) rendering of the content, wherein the rendering framework generates three-dimensional (3D) information based on the content; generating a 3D effect for the rendering of the content based on the 3D information; determining a location of a display region for the content within the XR environment; and presenting a view of the XR environment, wherein the rendering of the content scene is presented with the 3D effect at the location in the view of the XR environment.
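The claim 1 elements mapped above describe a render-then-composite flow: render the content to a 2D image plus 3D information, derive a depth-based effect, then composite the result at a chosen location in the XR view. A minimal sketch of that flow (the function names, numpy buffers, and the particular depth effect are illustrative assumptions, not code from the application or from Praveen):

```python
import numpy as np

def render_content(content, width=640, height=480):
    """Stand-in for the rendering framework: produce a 2D color rendering
    plus per-pixel 3D information (here, a depth buffer)."""
    rgb = np.zeros((height, width, 3), dtype=np.float32)   # 2D rendering
    depth = np.ones((height, width), dtype=np.float32)     # 3D information
    return rgb, depth

def apply_3d_effect(rgb, depth):
    """Derive a depth-enhanced presentation from the 2D rendering + depth."""
    # Illustrative effect: attenuate color with depth so nearer pixels read closer.
    return rgb * (1.0 - depth)[..., None]

def present(xr_view, rendering, location):
    """Composite the effect-enhanced rendering at the display region."""
    x, y = location
    h, w, _ = rendering.shape
    xr_view[y:y + h, x:x + w] = rendering
    return xr_view

rgb, depth = render_content(content=None, width=64, height=48)
enhanced = apply_3d_effect(rgb, depth)
view = present(np.zeros((480, 640, 3), np.float32), enhanced, location=(100, 50))
```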
Regarding claims 2 & 16, Praveen teaches [t]he method of claim 1, wherein the content is 3D content: "The display system 104 is configured to present to the eyes of the user 50 a photo-based radiation pattern that can be comfortably perceived as an augmentation to a physical reality having two-dimensional and three-dimensional content" (Praveen, Page 11). In regard to claim 16: claim 16 is substantially similar to claim 2, so the rejection analysis for claim 2 also applies to claim 16. Praveen teaches the additional limitations of [t]he HMD of claim 15, wherein the content is 3D content.

Regarding claims 3 & 17, Praveen teaches [t]he method of claim 1, wherein the content is 2.5D content: "The display system 104 is configured to present to the eyes of the user 50 a photo-based radiation pattern that can be comfortably perceived as an augmentation to a physical reality having two-dimensional and three-dimensional content" (Praveen, Page 11). Since the content may be 2D or 3D, it may be 2.5D, which is known in the art to be a combination of 2D and 3D display. In regard to claim 17: claim 17 is substantially similar to claim 3, so the rejection analysis for claim 3 also applies to claim 17. Praveen teaches the additional limitations of [t]he HMD of claim 15, wherein the content is 2.5D content.

Regarding claims 4 & 18, Praveen teaches [t]he method of claim 1, wherein the 3D information comprises depth information ("The rendering of the 3D video may include first depth information obtained from the 3D video" (Praveen, Page 14)), obtained from a depth buffer ("the final rendered 3D video displayed to the user 50 may include depth information that helps alleviate the user's accommodation-convergence (accommodation-divergence) problem when the user views the 3D video using the display system 104. By collecting depth information from stereo images and adding it to the depth buffer of the screen, the quality of the generated depth information will be greatly improved based at least in part on the scene and the algorithms that can determine depth at runtime" (Praveen, Page 14)), used by the rendering framework to render a 3D scene to a viewing frustum corresponding to the 2D rendering: "depth information may be included during rendering of the final 3D video to accommodate the vergence experienced by the user's own visual system (e.g., the user's eyes)" (Praveen, Page 15). In regard to claim 18: claim 18 is substantially similar to claim 4, so the rejection analysis for claim 4 also applies to claim 18. Praveen teaches the additional limitations of [t]he HMD of claim 15, wherein the 3D information comprises depth information, obtained from a depth buffer, used by the rendering framework to render a 3D scene to a viewing frustum corresponding to the 2D rendering.
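Claims 4 and 18 tie a depth buffer to a viewing frustum. One standard way to relate the two is to unproject each pixel and its stored depth back through the inverse projection matrix into eye space; a minimal sketch under common OpenGL-style conventions (the projection parameters below are illustrative assumptions):

```python
import numpy as np

def unproject(px, py, depth_ndc, inv_proj, width, height):
    """Map a pixel + normalized depth back into the viewing frustum (eye space)."""
    # Pixel -> normalized device coordinates in [-1, 1].
    ndc = np.array([2.0 * px / width - 1.0,
                    1.0 - 2.0 * py / height,   # flip y: screen y grows downward
                    2.0 * depth_ndc - 1.0,
                    1.0])
    eye = inv_proj @ ndc
    return eye[:3] / eye[3]                    # perspective divide

# Simple symmetric perspective projection (90 degree fov, aspect 1, near 0.1, far 100).
near, far = 0.1, 100.0
proj = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
                 [0, 0, -1, 0]], dtype=np.float64)
point_eye = unproject(320, 240, 0.5, np.linalg.inv(proj), 640, 480)
print(point_eye)  # 3D position of the pixel inside the frustum
```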
Regarding claims 5 & 19, Praveen teaches [t]he method of claim 1, wherein the 3D information comprises color information, obtained from an RGB buffer, used by the rendering framework to apply various colors to differing portions associated with differing depth-enhanced views of the content presented with the 3D effect: "The scanning assembly includes one or more light sources (e.g., emitting light of multiple colors in a defined pattern) that generate one or more light beams. The light source may take any of a variety of forms, such as a set of RGB light sources (e.g., laser diodes capable of outputting red, green, and blue light) operable to produce red, green, and blue coherent, collimated light, respectively, according to a defined pixel pattern specified in a corresponding frame of pixel information or data. Lasers offer high color saturation and high energy efficiency" (Praveen, Page 11). Additionally: "The 3D model may correspond to a 3D video such that if the 3D video scene is some blue color and the 3D model of the 3D object has the same or substantially similar blue color, the 3D model may not be visible to the user. Thus, the 3D model may be slightly adjusted in terms of color, texture, contrast, or other characteristics to facilitate user detection of the 3D model displayed with the 3D video" (Praveen, Page 17), where color information is used to contrast objects or object depth. It is known that an RGB buffer is used to store information regarding color. The object information is stored and contains data regarding location, animation, color, and other relevant features. In regard to claim 19: claim 19 is substantially similar to claim 5, so the rejection analysis for claim 5 also applies to claim 19. Praveen teaches the additional limitations of [t]he HMD of claim 15, wherein the 3D information comprises color information, obtained from an RGB buffer, used by the rendering framework to apply various colors to differing portions associated with differing depth-enhanced views of the content presented with the 3D effect.

Regarding claims 6 & 20, Praveen teaches [t]he method of claim 1, wherein the 3D information comprises depth information, obtained from a geometry buffer, used by the rendering framework to generate lighting effects within the 2D rendering of the content: "the projection subsystem 110 may take the form of a scanning-based projection device, and the eyepiece may take the form of a waveguide-based display into which light from the projection subsystem 110 is injected to produce, for example, images located at a single optical viewing distance (e.g., arm length) closer than infinity, images located at multiple optical viewing distances or focal planes, and/or image layers stacked at multiple viewing distances or focal planes to represent a volumetric 3D object. The layers in the light field may be stacked close enough together to appear continuous to the human visual system (e.g., one layer is within a cone-shaped interference zone of an adjacent layer). The layers in the light field may be stacked at predetermined depth intervals to produce depth planes at discrete viewing distances, which may be used one at a time or in combination" (Praveen, Page 11), where lighting is based on proximity of the user to the displayed content. It is known that a geometry buffer is used to analyze object features regarding shape or proximity. The object data contains information relating to size or scaling, distance from the viewer, and distance from the portal. In regard to claim 20: claim 20 is substantially similar to claim 6, so the rejection analysis for claim 6 also applies to claim 20. Praveen teaches the additional limitations of [t]he HMD of claim 15, wherein the 3D information comprises depth information, obtained from a geometry buffer, used by the rendering framework to generate lighting effects within the 2D rendering of the content.
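Claims 5 and 19 pair an RGB buffer with depth so that differing portions of the depth-enhanced view receive differing colors. A minimal sketch of one such mapping (the specific tinting scheme is an assumption chosen for illustration):

```python
import numpy as np

def depth_tinted(rgb, depth, near_tint=(1.0, 0.9, 0.9), far_tint=(0.9, 0.9, 1.0)):
    """Blend per-pixel tints by normalized depth so near and far portions
    of the content read differently in the depth-enhanced view."""
    t = (depth - depth.min()) / max(np.ptp(depth), 1e-6)     # 0 = near, 1 = far
    tint = (1 - t)[..., None] * near_tint + t[..., None] * far_tint
    return np.clip(rgb * tint, 0.0, 1.0)

rgb = np.random.rand(48, 64, 3).astype(np.float32)           # RGB buffer
depth = np.random.rand(48, 64).astype(np.float32)            # depth buffer
out = depth_tinted(rgb, depth)
```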
Regarding claims 7 & 21, Praveen teaches [t]he method of claim 1, wherein the display region for the content comprises a portal structure formed within a portion of the XR environment ("At 220, a volumetric space for displaying the 3D video in the user's physical environment/landscape may be identified. The volumetric space may be a portal for displaying 3D objects (e.g., 3D video)" (Praveen, Page 2)), and wherein the rendering of the content presented with the 3D effect is presented within the portal structure: "At 230, the 3D video may be rendered into the volumetric space (e.g., virtual television 120). The virtual television may include a planar surface with a portal within which 3D video may be rendered and ultimately displayed. For example, virtual television may include a boundary that separates a portal (e.g., a virtual television screen) from the television framework itself." (Praveen, Page 2). In regard to claim 21: claim 21 is substantially similar to claim 7, so the rejection analysis for claim 7 also applies to claim 21. Praveen teaches the additional limitations of [t]he HMD of claim 15, wherein the display region for the content comprises a portal structure formed within a portion of the XR environment, and wherein the rendering of the content presented with the 3D effect is presented within the portal structure.

Regarding claims 8 & 22, Praveen teaches [t]he method of claim 7, wherein the portal structure is formed from a plurality of portals each placed with respect to a differing point of view of a user such that the portal structure comprises a non-planar structure: "The display screen/planar surface 320 may be one or more of: a television, a computer monitor, a display screen of a theater, or any planar or non-planar surface for displaying 3D video thereon, or any combination thereof" (Praveen, Page 16). Additionally: "The progression may correspond to a number of input sources of 3D video to be rendered in the final 3D rendered world, and the progression may determine a number of 3D videos to display to the user from a plurality of locations (e.g., portals) within the user's 3D environment" (Praveen, Page 16). More than one display portal can be shown to the user, and the portals are not confined to being flat. In regard to claim 22: claim 22 is substantially similar to claim 8, so the rejection analysis for claim 8 also applies to claim 22. Praveen teaches the additional limitations of [t]he HMD of claim 21, wherein the portal structure is formed from a plurality of portals each placed with respect to a differing point of view of a user such that the portal structure comprises a non-planar structure.
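Claims 8 and 22 form a non-planar portal structure from several portals, each placed with respect to a differing user viewpoint. One way to sketch this is to orient one flat portal quad toward each expected viewpoint, so the quads together form a non-planar structure (the geometry helpers below are illustrative assumptions, not a disclosed implementation):

```python
import numpy as np

def portal_facing(center, viewpoint, size=1.0):
    """Return a square portal quad centered at `center`, oriented to face `viewpoint`."""
    normal = viewpoint - center
    normal /= np.linalg.norm(normal)
    # Build an orthonormal basis (right, up) spanning the portal plane.
    up_hint = np.array([0.0, 1.0, 0.0])
    right = np.cross(up_hint, normal)
    right /= np.linalg.norm(right)
    up = np.cross(normal, right)
    h = size / 2.0
    return [center + dx * right + dy * up
            for dx, dy in ((-h, -h), (h, -h), (h, h), (-h, h))]

center = np.array([0.0, 1.5, -2.0])
viewpoints = [np.array([0.0, 1.6, 0.0]), np.array([1.0, 1.6, 0.0])]
# One portal per viewpoint; together the differently oriented quads are non-planar.
structure = [portal_facing(center, vp) for vp in viewpoints]
```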
Regarding claims 9 & 23, Praveen teaches [t]he method of claim 7, wherein the content scene presented with the 3D effect is formed by placing voxels at different depths within the portal structure with respect to a front surface of the portal structure: "In some embodiments, rendering one or more 3D models may be based, at least in part, on: voxel-based video stream. Voxels represent values on a regular grid in three-dimensional space" (Praveen, Page 18). Additionally: "The object may be a rendered virtual object placed within the user's physical environment with the purpose of displaying the object outside of a traditional video screen. The 3D video may be 3D stereoscopic video, voxel-based video, and/or volumetric video." (Praveen, Page 3). Voxels are used to render the presented 3D model, which includes depth information. The object may be outside or inside the virtual screen, and since voxels are volumetric pixels, the object will therefore contain different depths. In regard to claim 23: claim 23 is substantially similar to claim 9, so the rejection analysis for claim 9 also applies to claim 23. Praveen teaches the additional limitations of [t]he HMD of claim 21, wherein the content presented with the 3D effect is formed by placing voxels at different depths within the portal structure with respect to a front surface of the portal structure.

Regarding claims 10 & 24, Praveen teaches [t]he method of claim 7, wherein the content presented with the 3D effect is formed by placing voxels at different depths extending into the XR environment from a front surface of the portal structure: "Figures 3A-3B illustrate examples of 3D images and/or 3D animations escaping from a screen according to some embodiments. Fig. 3A illustrates an intended 3D effect 305 and an actual 3D effect 310 of a conventional stereoscopic 3D video" (Praveen, Page 16), where voxels may be used to display the 3D content: "In some embodiments, rendering one or more 3D models may be based, at least in part, on: voxel-based video stream" (Praveen, Page 18).

[Figure: Praveen, Fig. 3A, animated 3D content (item 330a) leaving the planar surface (item 320).]

[Figure: Praveen, Fig. 3B, animated 3D content (item 330a) leaving the planar surface (item 320) at different depths.]

In regard to claim 24: claim 24 is substantially similar to claim 10, so the rejection analysis for claim 10 also applies to claim 24. Praveen teaches the additional limitations of [t]he HMD of claim 21, wherein the content presented with the 3D effect is formed by placing voxels at different depths extending into the XR environment from a front surface of the portal structure.
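Claims 9/23 and 10/24 place voxels at different depths measured from the portal's front surface, either behind it (within the portal) or in front of it (extending into the XR environment). A minimal sketch of that placement (the coordinate conventions and sign choices are assumptions):

```python
import numpy as np

def place_voxel(portal_origin, portal_normal, u, v, depth):
    """Position a voxel relative to the portal's front surface.

    depth > 0 places the voxel behind the surface (inside the portal);
    depth < 0 places it in front, extending into the XR environment.
    """
    # u, v are offsets within the portal plane; assume the plane spans x/y
    # and the unit normal points out of the portal toward the viewer.
    in_plane = np.array([u, v, 0.0])
    return portal_origin + in_plane - depth * portal_normal

origin = np.array([0.0, 1.5, -2.0])
normal = np.array([0.0, 0.0, 1.0])                            # faces the user at +z
inside = place_voxel(origin, normal, 0.1, 0.2, depth=0.5)     # behind the screen
escaping = place_voxel(origin, normal, 0.1, 0.2, depth=-0.3)  # out into the room
```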
Regarding claim 11, Praveen teaches [t]he method of claim 7, wherein the content scene presented with the 3D effect is formed by placing voxels at different depths within the portal structure and at different depths extending into the XR environment from the front surface of the portal structure: "The desired effect of the object 330a may show a 3D animated object displayed outside the planar surface 320 that may animate/move around the user's environment such that if the user moves to a second position having a different perspective of the planar surface 320, the user may see a complete (or relevant portion) 3D representation of the object 330a displayed and positioned outside the planar surface 320" (Praveen, Page 17). Further: "according to some embodiments of the present disclosure, at some portions of the 3D video, a 3D model of one of the fish may be generated for display relative to the 3D video. At some suitable trigger time within the 3D video, the 3D model of the fish may be displayed as swimming within the 3D video, and then the 3D model of the fish may begin to leave the surface of the display screen and swim into the user's physical environment/landscape" (Praveen, Page 17). The depth of the object can thus vary both within the portal and outside of it, and voxels may be used to display the 3D model (Praveen, Page 18).

Regarding claim 12, Praveen teaches [t]he method of claim 7, wherein the content scene presented with the 3D effect is formed by reprojecting each frame of the content for each eye of a user using the 3D information: "… a pair of input images corresponding to an image captured for the left eye and an image captured for the right eye is identified. The pair of input images may be designated to be rendered into designated locations within a scene to be rendered, wherein once the scene is rendered, the pair of input images may be displayed as 3D video within the scene." (Praveen, Page 16). The images for a 3D output are captured by two cameras, one for each eye, and output through the user device with respect to each eye's perspective.
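Claim 12 reprojects each frame once per eye using the 3D information. A common approximation is to shift each pixel horizontally by a disparity proportional to the eye offset and inversely proportional to depth; a minimal sketch (the camera parameters are illustrative assumptions, not values from the application):

```python
import numpy as np

def reproject_for_eye(rgb, depth, eye_offset, focal=500.0):
    """Shift pixels by screen-space disparity ~ focal * eye_offset / depth.

    eye_offset is half the interpupillary distance: negative for the left
    eye, positive for the right, in the same units as `depth`.
    """
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    disparity = (focal * eye_offset / np.maximum(depth, 1e-3)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = rgb[y, x]
    return out

rgb = np.random.rand(48, 64, 3).astype(np.float32)
depth = np.full((48, 64), 2.0, dtype=np.float32)              # depth in meters
left = reproject_for_eye(rgb, depth, eye_offset=-0.032)       # ~64 mm IPD
right = reproject_for_eye(rgb, depth, eye_offset=+0.032)
```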
Regarding claim 13, Praveen teaches [t]he method of claim 7, wherein the rendering of the content presented with the 3D effect provides a depth-enhanced view of the content within the portal structure: "As described above, depth information may be included during rendering of the final 3D video to accommodate the vergence experienced by the user's own visual system (e.g., the user's eyes)" (Praveen, Page 15). Additionally: "Depth information may be factored into the rendering of 3D video and one or more 3D models to address the accommodation-convergence coordination problem typically associated with legacy VR systems. The distance from the user using the display system 104 and the 3D model may be taken into account when determining how to display the 3D video and the image or video of the 3D model to the user" (Praveen, Page 17).

Regarding claim 14, Praveen teaches [t]he method of claim 13, wherein the 3D effect is provided by providing altered views of the image based on differing viewpoints within the portal structure to provide a parallax effect: "if a user were to view an object from a first perspective having a direct frontal view position, the object may appear to be a planar 2D frontal view of the object with the portal framing a border around the object. From this first perspective, the portal may appear to be any shape, such as circular, rectangular, square, polygonal, and the like. Continuing with the example, if the user views the object from a second perspective having a side view position, some portions of the object may be visible to the user and other portions of the object may be occluded or invisible depending on the side perspective of the second perspective and based on the position of the object rendered and/or displayed relative to the front surface of the planar surface, such that a larger portion of the object may be seen if the object is placed toward the front of the planar surface, and conversely, a smaller portion of the object may be seen if the object is placed toward the back or rear of the planar surface" (Praveen, Page 14). As the user's perspective or viewpoint changes, the visible portion of the object changes accordingly.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAIDEN ALEXANDER USSERY, whose telephone number is (571) 272-1192. The examiner can normally be reached Monday - Friday* 7:30AM - 5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CAIDEN ALEXANDER USSERY/
Examiner, Art Unit 2611

/TAMMY PAIGE GODDARD/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Sep 12, 2024: Application Filed
Mar 17, 2026: Non-Final Rejection under §102 (current)

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner; grant probability is derived from the career allow rate.
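The note above says grant probability is derived from the career allow rate. A minimal sketch of one plausible derivation (the zero-case fallback to the Tech Center average, the 0.62 figure inferred from the "-62.0% vs TC avg" stat, and the "Favorable" threshold are all assumptions about how a tool like this might behave, not documented behavior):

```python
def grant_probability(granted: int, resolved: int, tc_average: float) -> float:
    """Estimate grant probability from the career allow rate, falling back to
    the Tech Center average when the examiner has no resolved cases yet."""
    return granted / resolved if resolved else tc_average

def label(p: float) -> str:
    # Threshold is an assumption for illustration.
    return "Favorable" if p >= 0.5 else "Unfavorable"

# With 0 resolved cases the estimate must come from somewhere other than the
# 0% career rate, consistent with the "Favorable" call shown above.
p = grant_probability(granted=0, resolved=0, tc_average=0.62)
print(label(p))  # Favorable
```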
