Prosecution Insights
Last updated: April 19, 2026
Application No. 18/367,660

TECHNIQUES FOR VIEWING 3D PHOTOS AND 3D VIDEOS

Status: Final Rejection (§103)
Filed: Sep 13, 2023
Examiner: BEARD, CHARLES LLOYD
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (235 granted / 350 resolved; +5.1% vs TC avg), above average
Interview Lift: +36.1%, strong (allowance lift for resolved cases with an interview vs. without)
Avg Prosecution: 2y 11m typical timeline; 37 applications currently pending
Total Applications: 387 career filings, across all art units
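The dashboard figures above are internally consistent; a quick sanity check of the arithmetic (the 0.051 delta is taken directly from the "+5.1% vs TC avg" figure):

```python
# Verify the career allow rate and the implied Tech Center baseline.
granted, resolved = 235, 350
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")  # 67.1%, shown rounded as 67%

# "+5.1% vs TC avg" implies a Tech Center average of roughly:
tc_avg = allow_rate - 0.051
print(f"implied TC average: {tc_avg:.1%}")
```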

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 70.2% (+30.2% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)
Based on career data from 350 resolved cases
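Each per-statute rate comes with a delta against the Tech Center average, so the baseline can be recovered from the table itself. A short check (rates as fractions):

```python
# Per-statute rejection rates and deltas vs the Tech Center average,
# copied from the table above.
stats = {
    "§101": (0.055, -0.345),
    "§103": (0.702, +0.302),
    "§102": (0.062, -0.338),
    "§112": (0.154, -0.246),
}
for statute, (rate, delta) in stats.items():
    tc_estimate = rate - delta  # examiner rate minus delta = implied TC average
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {tc_estimate:.1%}")
```

Every row implies the same baseline of roughly 40%, consistent with a single Tech Center average estimate behind all four figures.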

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment Received 12/18/2025

Claim(s) 1-22 is/are pending. Claim(s) 1, 12, 19, and 20 has/have been amended. Claim(s) 22 has/have been added. The objections to the Specification have been maintained in view of the amendments received on 12/18/2025. The objections to claim(s) 18 have been withdrawn in view of the amendments received on 12/18/2025. The 35 U.S.C. § 112(b) rejection of claim(s) 12 and 13 has been withdrawn in view of the amendments received on 12/18/2025. The 35 U.S.C. § 103 rejection of claim(s) 1-21 has been fully considered in view of the amendments received on 12/18/2025 and is fully addressed in the prior art rejection below.

Response to Arguments Received 12/18/2025

Regarding independent claim(s) 1, 19, and 20: Applicant’s arguments (Remarks, Page 8: ¶ 4 to Page 12: ¶ 1), filed 12/18/2025, with respect to the rejection(s) of claim(s) 1, 19, and 20 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn, necessitated by Applicant's amendments. However, upon further consideration, a new ground(s) of rejection is made in view of Saito (US PGPUB No. 20130293691 A1), in view of Mitani (US PGPUB No. 20220385885 A1), in view of Aga et al. (US PGPUB No. 20210377515 A1), in view of Son et al. (US PGPUB No. 20100171697 A1), and further in view of Yeatman, JR. et al. (US PGPUB No. 20110222757 A1).

Applicant’s arguments (Remarks, Page 12: ¶ 2), filed 12/18/2025, with respect to the rejection(s) of claim(s) 19 and 20 under 35 U.S.C. § 103 have been fully considered and are persuasive due to claim 19's and claim 20's similarity to claim 1. Therefore, the rejection has been withdrawn, necessitated by Applicant's amendments.
However, upon further consideration, a new ground(s) of rejection is made in view of the prior art as mentioned above.

Regarding dependent claim(s) 10 and 14-16: Applicant’s arguments (Remarks, Page 12: ¶ 3-4), filed 12/18/2025, with respect to the rejection(s) of claim(s) 10 and 14-16 under 35 U.S.C. § 103 have been fully considered and are persuasive due to the dependency upon claims 1, 19, and 20, respectively. Therefore, the rejection has been withdrawn, necessitated by Applicant's amendments. However, upon further consideration, a new ground(s) of rejection is made in view of the prior art as mentioned above.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-13 and 17-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Saito, US PGPUB No. 20130293691 A1, hereinafter Saito, in view of Mitani, US PGPUB No. 20220385885 A1, hereinafter Mitani, in view of Aga et al., US PGPUB No. 20210377515 A1, hereinafter Aga, in view of Son et al., US PGPUB No. 20100171697 A1, hereinafter Son, and further in view of Yeatman, JR. et al., US PGPUB No. 20110222757 A1, hereinafter Yeatman.

Regarding claim 19, Saito discloses a device (Saito; a device (i.e. stereo display apparatus) [¶ 0043 and ¶ 0073]) comprising: to perform operations comprising:

obtaining a three-dimensional (3D) content item (Saito; performing operations [as addressed above] comprises obtaining a 3D content item (i.e. 3D object data) [¶ 0073-0074], as illustrated within Fig. 11; additionally, adjacent viewpoint images at the time of imaging [¶ 0053-0054], and an imaging arrangement for left and right eye angular imaging [¶ 0055-0059], as illustrated within Fig. 6), the 3D content item comprising left eye content and right eye content generated based on camera images corresponding to left and right eye viewpoints (Saito; the 3D content item (i.e. 3D object data) [as addressed above] comprises left eye content and right eye content generated based on camera images [¶ 0055-0057] corresponding to left and right eye viewpoints [¶ 0049, ¶ 0051-0052, and ¶ 0061-0062], as illustrated by Figs. 3A-B; moreover, angular regions [¶ 0050 and ¶ 0064-0066] and rendering 3D object data [¶ 0070-0072]);

providing a first view of the 3D content item at a position within a 3D environment (Saito; performing operations [as addressed above] comprises providing a 1st view of the 3D content item (i.e. 3D object data) at a position within a 3D environment [¶ 0067-0069]; moreover, a 1st view corresponds to at least one viewpoint from “i” viewpoints [¶ 0050 and ¶ 0074-0075], as illustrated within Fig. 4 and Fig. 12, further corresponding to viewing angles / viewing directions [¶ 0048], as illustrated within Fig. 2), wherein the first view comprises a left eye view of the 3D content item that is based on the left eye content and a right eye view of the 3D content item that is based on the right eye content (Saito; the 1st view [as addressed above] comprises a left eye view of the 3D content item that is based on the left eye content and a right eye view of the 3D content item that is based on the right eye content [¶ 0057, ¶ 0065, and ¶ 0070-0071]);

determining from the first view to a second view of the 3D content item based on a criterion (Saito; determining from the 1st view to a 2nd view (i.e. modified position and/or perspective) [¶ 0044, ¶ 0046, and ¶ 0048] of the 3D content item based on a criterion (i.e. angular region) [¶ 0050, ¶ 0070-0072, and ¶ 0075]; wherein, the display provides a plurality of viewpoints [¶ 0044, ¶ 0046, and ¶ 0048]); and

providing the second view of the 3D content item within the 3D environment (Saito; providing the 2nd view (i.e. modified position and/or perspective) of the 3D content item within the 3D environment [¶ 0050, ¶ 0070-0072, and ¶ 0075]), wherein the second view comprises the left eye view of the 3D content item that is based on a shared content item and the right eye view of the 3D content item that is based on the shared content item (Saito; the 2nd view (i.e. modified position and/or perspective) [as addressed above] comprises the left eye view of the 3D content item that is based on a shared content item and the right eye view of the 3D content item that is based on the shared content item [¶ 0070-0072 and ¶ 0075]; moreover, stereoscopic imaging of a shared content item [¶ 0003 and ¶ 0050] from multiple angled viewpoints of a shared point/position [¶ 0055-0056 and ¶ 0058-0059]), wherein the shared content item comprises at least one of the left eye content and the right eye content (Saito; the shared content item [as addressed above] comprises at least one of the left eye content and the right eye content (i.e. stereoscopic imaging) [¶ 0050, ¶ 0055-0056, and ¶ 0058-0059]).

Saito fails to explicitly disclose a non-transitory computer-readable storage medium; one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the one or more processors; and transitioning from one view to another view.
However, Mitani teaches determining to transition from the first view to a second view of the 3D content item based on a criterion (Mitani; determining to transition from the 1st view to a 2nd view of the 3D content item based on a criterion [¶ 0101-0103]; wherein, criterion corresponds to user movement within a viewing environment [¶ 0095-0098]; additionally, coefficients of a left and right side view [¶ 0099]; moreover, multi-viewpoint [¶ 0063-0064] holographic optical element [¶ 0057-0058], and viewpoint switching [¶ 0071, ¶ 0073, and ¶ 0082-0084]); and providing the second view of the 3D content item within the 3D environment (Mitani; providing the 2nd view of the 3D content item within the 3D environment [¶ 0101-0103]), wherein the second view comprises the left eye view of the 3D content item that is based on a shared content item and the right eye view of the 3D content item that is based on the shared content item (Mitani; the 2nd view comprises the left eye view of the 3D content item that is based on a shared content item and the right eye view of the 3D content item that is based on the shared content item [¶ 0076 and ¶ 0091]; wherein, left and right eye view corresponds to a parallax viewpoint for both eyes in relation to a stereoscopical view [¶ 0076 and ¶ 0091]; moreover, motion parallax associated with switching viewpoints of a shared virtual image [¶ 0069-0071 and ¶ 0091]), wherein the shared content item comprises at least one of the left eye content and the right eye content (Mitani; the shared content item [as addressed above] implicitly comprises at least one of the left eye content and the right eye content (given the stereoscopic imaging involving a parallax) [¶ 0076 and ¶ 0091]).

Saito and Mitani are considered to be analogous art because both pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito, to incorporate determining to transition from the first view to a second view of the 3D content item based on a criterion; and providing the second view of the 3D content item within the 3D environment, wherein the second view comprises the left eye view of the 3D content item that is based on a shared content item and the right eye view of the 3D content item that is based on the shared content item, wherein the shared content item comprises at least one of the left eye content and the right eye content (as taught by Mitani), in order to provide enhanced realism by improving (motion) parallax associated with viewpoint movement (Mitani; [¶ 0004-0005 and ¶ 0010]).

Saito as modified by Mitani fails to disclose a non-transitory computer-readable storage medium; and one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the one or more processors to perform operations.

However, Aga teaches a non-transitory computer-readable storage medium (Aga; a non-transitory computer-readable storage medium (i.e. a stored program via a storage unit) [¶ 0123 and ¶ 0127]; moreover, memory based program or CRM [¶ 0225-0227 and ¶ 0236]); and one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the one or more processors to perform operations (Aga; one or more processors coupled to the non-transitory computer-readable storage medium wherein the non-transitory computer-readable storage medium comprises program instructions that cause the one or more processors to perform operations when executed on the one or more processors [¶ 0225, ¶ 0227, and ¶ 0236]; moreover, implemented functions [¶ 0127 and ¶ 0231]).

Saito in view of Mitani and Aga are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, to incorporate a non-transitory computer-readable storage medium; and one or more processors coupled to the non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium comprises program instructions that, when executed on the one or more processors, cause the one or more processors to perform operations (as taught by Aga), in order to provide an improved rendering and presentation of visualizations within an augmented environment that appear three-dimensional to a user (Aga; [¶ 0002-0005 and ¶ 0007]).

Saito in view of Mitani and Aga fails to disclose implementing stereoscopic imaging using multi techniques.
However, Son teaches implementing stereoscopic imaging using multi techniques (Son; implementing stereoscopic imaging using multi techniques [¶ 0053-0054 and ¶ 0059], as illustrated within Fig. 10A; moreover, glasses method and non-glasses method [¶ 0006]).

Saito in view of Mitani, Aga, and Son are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani and Aga, to incorporate implementing stereoscopic imaging using multi techniques (as taught by Son), in order to provide an improved visual range associated with multiple viewpoints for presenting dynamic/adaptive three-dimensional data to an observer (Son; [¶ 0008-0010 and ¶ 0051]).

Saito in view of Mitani, Aga, and Son fails to disclose determining to transition from the first view that is based on rendering the eye-specific content to provide each of the left and right eye views to a second view of the 3D content item that is based on rendering a single shared point cloud representation to provide both of the left and right eye views, wherein determining to transition is based on a criterion; and providing the second view of the 3D content item within the 3D environment, wherein the second view comprises the left eye view of the 3D content item that is based on the single shared point cloud representation and the right eye view of the 3D content item that is based on the single shared point cloud representation, wherein the single shared point cloud representation is derived from at least one of the left eye content and the right eye content.
However, Yeatman teaches obtaining a three-dimensional (3D) content item, the 3D content item comprising eye-specific content comprising left eye content and right eye content generated based on camera images corresponding to left and right eye viewpoints (Yeatman; obtaining a 3D content item, the 3D content item comprising eye-specific content comprising left eye content and right eye content generated based on camera images corresponding to left and right eye viewpoints [¶ 0139 and ¶ 0144]);

providing a first view of the 3D content item at a position within a 3D environment (Yeatman; providing a 1st view of the 3D content item at a position within a 3D environment [¶ 0140-0143]), wherein the first view comprises a left eye view of the 3D content item that is based on rendering the left eye content and a right eye view of the 3D content item that is based on rendering the right eye content (Yeatman; the 1st view comprises a left eye view of the 3D content item that is based on rendering the left eye content and a right eye view of the 3D content item that is based on rendering the right eye content [¶ 0140-0143]);

determining to transition from the first view that is based on rendering the eye-specific content to provide each of the left and right eye views to a second view of the 3D content item that is based on rendering a single shared point cloud representation to provide both of the left and right eye views (Yeatman; determining to transition from the 1st view that is based on rendering the eye-specific content to provide each of the left and right eye views to a 2nd view of the 3D content item that is based on rendering a single shared point cloud representation to provide both of the left and right eye views [¶ 0140-0143]; moreover, point cloud data is embedded in each frame [¶ 0142], in relation with a disparity map [¶ 0009 and ¶ 0114-0115]; and moreover, transition [¶ 0084-0085]), wherein determining to transition is based on a criterion (Yeatman; determining to transition is based on a criterion [¶ 0141-0142]; additionally, displacement information [¶ 0140 and ¶ 0143] and stereo-related decisions [¶ 0144]); and

providing the second view of the 3D content item within the 3D environment (Yeatman; providing the 2nd view of the 3D content item within the 3D environment [¶ 0139]), wherein the second view comprises the left eye view of the 3D content item that is based on the single shared point cloud representation and the right eye view of the 3D content item that is based on the single shared point cloud representation (Yeatman; the 2nd view comprises the left eye view of the 3D content item that is based on the single shared point cloud representation and the right eye view of the 3D content item that is based on the single shared point cloud representation [¶ 0141-0142]; moreover, combined point cloud [id.]), wherein the single shared point cloud representation is derived from at least one of the left eye content and the right eye content (Yeatman; the single shared point cloud representation is derived from at least one of the left eye content and the right eye content [¶ 0139 and ¶ 0141-0142]).

Saito in view of Mitani, Aga, Son, and Yeatman are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, and Son, to incorporate obtaining a three-dimensional (3D) content item, the 3D content item comprising eye-specific content comprising left eye content and right eye content generated based on camera images corresponding to left and right eye viewpoints; providing a first view of the 3D content item at a position within a 3D environment, wherein the first view comprises a left eye view of the 3D content item that is based on rendering the left eye content and a right eye view of the 3D content item that is based on rendering the right eye content; determining to transition from the first view that is based on rendering the eye-specific content to provide each of the left and right eye views to a second view of the 3D content item that is based on rendering a single shared point cloud representation to provide both of the left and right eye views, wherein determining to transition is based on a criterion; and providing the second view of the 3D content item within the 3D environment, wherein the second view comprises the left eye view of the 3D content item that is based on the single shared point cloud representation and the right eye view of the 3D content item that is based on the single shared point cloud representation, wherein the single shared point cloud representation is derived from at least one of the left eye content and the right eye content (as taught by Yeatman), in order to provide an improved conversion for dynamic/adaptive three-dimensional data to an observer (Yeatman; [¶ 0003-0008]).
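As a mental model of the claim 19 technique at issue (eye-specific stereo rendering that transitions to a single shared point cloud driving both eye views), the following is a minimal sketch. All names and data here are hypothetical illustrations, not code from the application or from any cited reference:

```python
from dataclasses import dataclass

@dataclass
class StereoContent:
    # Eye-specific content generated from left/right camera viewpoints.
    left_eye: list
    right_eye: list

def shared_point_cloud(content: StereoContent) -> list:
    # A single shared representation derived from at least one of the
    # left eye content and the right eye content (here: both, merged).
    return content.left_eye + content.right_eye

def render(content: StereoContent, use_shared: bool):
    # First view: each eye renders its own eye-specific content.
    # Second view: both eyes render the same shared point cloud.
    if use_shared:
        cloud = shared_point_cloud(content)
        return cloud, cloud
    return content.left_eye, content.right_eye

content = StereoContent(["L0", "L1"], ["R0", "R1"])
left1, right1 = render(content, use_shared=False)  # eye-specific first view
left2, right2 = render(content, use_shared=True)   # shared second view
```

In the claim, the switch from `use_shared=False` to `use_shared=True` is what "determining to transition ... based on a criterion" gates.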
Regarding claim 1, the rejection of claim 1 is addressed within the rejection of claim 19, due to the similarities claim 1 and claim 19 share; therefore, refer to the rejection of claim 19 regarding the rejection of claim 1. Although claim 19 and claim 1 may not be identical, it is reasonable to reject claim 1 based on the prior art teachings and rationale within the rejection of claim 19.

Regarding claim 2, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the at least one of the left eye content and the right eye content (Saito; the at least one of the left eye content and the right eye content [as addressed within the parent claim(s)]) comprises: the left eye content without the right eye content; the right eye content without the left eye content; or content generated based on the left eye content and the right eye content (Saito; the at least one of the left eye content and the right eye content [as addressed above] comprises content generated based on the left eye content and the right eye content [¶ 0070-0072]; moreover, parallax view [¶ 0073]).

Regarding claim 3, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the left eye content and the right eye content comprise a stereo camera image or a stereo point cloud generated based on a stereo camera image (Saito; the left eye content and the right eye content comprise a stereo camera image (i.e. imaging devices at multiple points) [¶ 0057 and ¶ 0059]). Aga further teaches a stereo camera (Aga; stereo camera [¶ 0057 and ¶ 0059]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate a stereo camera (as taught by Aga), in order to provide an improved rendering and presentation of visualizations within an augmented environment that appear three-dimensional to a user (Aga; [¶ 0002-0005 and ¶ 0007]).

Regarding claim 4, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, further comprising providing a transitioning effect while transitioning from the first view to the second view (Son; providing a transitioning effect (i.e. motion effect) while transitioning from the 1st view to the 2nd view [¶ 0051]), wherein providing the transitioning effect includes providing animated content when transitioning between the first view and the second view (Son; wherein providing the transitioning effect (i.e. motion effect) includes implicitly providing animated content (given real-time parallax changes, real-time rendering) when transitioning/moving between the 1st view and the 2nd view [¶ 0051-0053], as illustrated within Fig. 9; moreover, view changes of the left-eye and right-eye (camera positions) in real-time [¶ 0050]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate providing a transitioning effect while transitioning from the first view to the second view, wherein providing the transitioning effect includes providing animated content when transitioning between the first view and the second view (as taught by Son), in order to provide an improved visual range associated with multiple viewpoints for presenting dynamic/adaptive three-dimensional data to an observer (Son; [¶ 0008-0010 and ¶ 0051]).
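Claim 4's transitioning effect (animated content shown between the two views) amounts to blending the views over several frames. A minimal sketch; the frame count and the linear ramp are arbitrary choices for illustration, not details from the claim or any reference:

```python
def transition_weights(num_frames: int) -> list:
    # Blend weights for an animated transition effect: 0.0 means
    # fully the first (eye-specific) view, 1.0 means fully the
    # second (shared-representation) view.
    return [i / (num_frames - 1) for i in range(num_frames)]

weights = transition_weights(5)
print(weights)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```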
Regarding claim 5, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 4, the electronic device (Saito; the electronic device (i.e. stereo display apparatus) [as addressed within the parent claim(s)]). Aga further teaches wherein the electronic device includes sensor data of a physical environment proximate the electronic device and providing the transitioning effect includes providing a view of the physical environment (Aga; the electronic device includes sensor data of a physical environment proximate the electronic device and providing the transitioning/movement effect includes providing a view of the physical environment [¶ 0059-0060 and ¶ 0063-0064]; moreover, superimposing virtual and real data [¶ 0068]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate the electronic device includes sensor data of a physical environment proximate the electronic device and providing the transitioning effect includes providing a view of the physical environment (as taught by Aga), in order to provide an improved rendering and presentation of visualizations within an augmented environment that appear three-dimensional to a user (Aga; [¶ 0002-0005 and ¶ 0007]).

Regarding claim 6, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the criterion for determining from the first view to the second view is based on a position of the electronic device or a user of the electronic device relative to a portion of the position of the 3D content item within the 3D environment (Saito; the criterion for determining from the 1st view to the 2nd view (i.e. modified position and/or perspective) is based on a position of the electronic device or a user of the electronic device relative to a portion of the position of the 3D content item within the 3D environment [¶ 0046 and ¶ 0051]; wherein, the 3D content item within the 3D environment [as addressed within the parent claim(s)]; moreover, detecting a position of a user/observer [¶ 0097-0100] in relation with determining the angular regions and presenting a parallax image [¶ 0109-0111 and ¶ 0113-0114]; and moreover, after the position of the observer's face in the horizontal direction of the naked-eye stereoscopic display apparatus is detected, the plurality of viewpoint images in which viewpoints are adjusted are shifted in correspondence with the detected position of the face and formatted into a predetermined format [¶ 0003 and ¶ 0117]). Mitani further teaches determining to transition from the first view to the second view (Mitani; determining to transition from the 1st view to the 2nd view [¶ 0101-0103] in relation with a criterion (i.e. user movement within a viewing environment) [¶ 0095-0098]; additionally, coefficients of a left and right side view [¶ 0099]; moreover, multi-viewpoint [¶ 0063-0064] holographic optical element [¶ 0057-0058], and viewpoint switching [¶ 0071, ¶ 0073, and ¶ 0082-0084]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate determining to transition from the first view to the second view (as taught by Mitani), in order to provide enhanced realism by improving (motion) parallax associated with viewpoint movement (Mitani; [¶ 0004-0005 and ¶ 0010]).
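The position- and distance-based criteria the examiner maps here (and again for claim 9) can be pictured as a simple threshold test on viewer position relative to the content. A minimal sketch, where the threshold value and function names are invented for illustration:

```python
import math

def should_transition(device_pos, content_pos, threshold=2.0):
    # Criterion sketch: switch from the eye-specific first view to
    # the shared-representation second view once the device (or the
    # user) is farther than `threshold` from the 3D content item's
    # position within the 3D environment.
    return math.dist(device_pos, content_pos) > threshold

near = should_transition((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))  # False: within range
far = should_transition((0.0, 0.0, 0.0), (0.0, 0.0, 3.0))   # True: beyond range
```

A gaze-direction criterion (claim 7) would take the same shape, comparing the angle between the gaze vector and the content direction against a threshold instead of a distance.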
Regarding claim 7, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the criterion for determining from the first view to the second view is based on a change in a gaze direction of a user of the electronic device relative to the position of the 3D content item within the 3D environment (Saito; the criterion for determining from the 1st view to the 2nd view (i.e. modified position and/or perspective) is based on a change in a gaze direction (i.e. face and eye detection) of a user of the electronic device relative to the position of the 3D content item within the 3D environment [¶ 0049, ¶ 0051, ¶ 0097-0098, and ¶ 0100], as illustrated within Figs. 3A-B and Fig. 22; moreover, an observer’s face relative to an implicit gaze/line-of-sight [¶ 0089] associated with a viewpoint image [¶ 0090-0092], as illustrated within Figs. 18 and 19). Mitani further teaches determining to transition from the first view to the second view (Mitani; determining to transition from the 1st view to the 2nd view [¶ 0101-0103] in relation with a criterion (i.e. user movement within a viewing environment) [¶ 0095-0098]; moreover, multi-viewpoint [¶ 0063-0064] holographic optical element [¶ 0057-0058], and viewpoint switching [¶ 0071, ¶ 0073, and ¶ 0082-0084]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate determining to transition from the first view to the second view (as taught by Mitani), in order to provide enhanced realism by improving (motion) parallax associated with viewpoint movement (Mitani; [¶ 0004-0005 and ¶ 0010]). Aga further teaches determining a gaze direction (Aga; determining a gaze direction (i.e. line-of-sight) [¶ 0061]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate determining a gaze direction (as taught by Aga), in order to provide an improved rendering and presentation of visualizations within an augmented environment that appear three-dimensional to a user (Aga; [¶ 0002-0005 and ¶ 0007]).

Regarding claim 8, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the left eye content and right eye content are based on depth data and the criterion for determining to transition from the first view to the second view is based on a change in quality of the depth data (Saito; the left eye content and right eye content are based on depth data and the criterion for determining to transition from the 1st view to the 2nd view (i.e. modified position and/or perspective) is based on a change in quality/status of the depth data [¶ 0049-0051]; moreover, depth corresponding to distance towards/near (the display) or away/far (from the display) correlating to quality/status thereof [¶ 0107 and ¶ 0109-0113]; wherein, distances are an amount indicating large (i.e. far), medium, or small (i.e. near) in relation to a display [¶ 0109-0110]; additionally, a change in quality also corresponds to high and low possibility of parallax [¶ 0076-0079]).

Regarding claim 9, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the criterion for determining from the first view to the second view is based on a distance between the electronic device and the position of the 3D content item within the 3D environment (Saito; the criterion for determining to transition from the 1st view to the 2nd view (i.e. modified position and/or perspective) is based on a distance between the electronic device and the position of the 3D content item within the 3D environment [¶ 0076], as illustrated within Fig. 13; moreover, a parallax level [¶ 0077-0079] and the angular regions [¶ 0081-0082]). Mitani further teaches determining to transition from the first view to the second view (Mitani; determining to transition from the 1st view to the 2nd view [¶ 0101-0103] in relation with a criterion (i.e. user movement within a viewing environment) [¶ 0095-0098]; moreover, multi-viewpoint [¶ 0063-0064] holographic optical element [¶ 0057-0058], and viewpoint switching [¶ 0071, ¶ 0073, and ¶ 0082-0084]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman to incorporate determining to transition from the first view to the second view (as taught by Mitani), in order to provide enhanced realism by improving (motion) parallax associated with viewpoint movement (Mitani; [¶ 0004-0005 and ¶ 0010]).

Regarding claim 10, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, the at least one of the left eye content and the right eye content comprises a combination of the left eye and right eye pixelations (Aga; (the at least one of) the left eye content and the right eye content comprises a combination of the left eye pixelation and right eye pixelation [¶ 0157-0161], as illustrated within Figs. 10 and 11; moreover, creating depth map(s) [¶ 0213-0214]).
Yeatman further teaches wherein the at least one of the left eye content and the right eye content comprises a left eye point cloud of a stereo point cloud, a right eye point cloud of the stereo point cloud, or a combination of the left eye point cloud and right eye point cloud (Yeatman; (the at least one of) the left eye content and the right eye content comprises a left eye point cloud of a stereo point cloud, a right eye point cloud of the stereo point cloud, or a combination of the left eye point cloud and right eye point cloud [¶ 0139-0142]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate wherein the at least one of the left eye content and the right eye content comprises a left eye point cloud of a stereo point cloud, a right eye point cloud of the stereo point cloud, or a combination of the left eye point cloud and right eye point cloud (as taught by Yeatman), in order to provide an improved conversion for dynamic/adaptive three-dimensional data to an observer (Yeatman; [¶ 0003-0008]).

Regarding claim 11, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the left eye view of the 3D content item is not based on the right eye content (Saito; the left eye view of the 3D content item is implicitly not based on the right eye content (given left perspective imaging) [¶ 0055-0057, ¶ 0067-0069, and ¶ 0073-0074]), and the right eye view of the 3D content item is not based on the left eye content when providing the first view (Saito; the right eye view of the 3D content item is implicitly not based on the left eye content (given right perspective imaging) [¶ 0055-0057, ¶ 0067-0069, and ¶ 0073-0074] when providing the 1st view [as addressed within the parent claim(s)]).
Regarding claim 12, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, further comprising: determining from the second view to a third view of the 3D content based on an additional criterion (Saito; determining from the 2nd view (i.e. modified position and/or perspective) to a 3rd view (i.e. further modified position and/or perspective) [¶ 0044, ¶ 0046, and ¶ 0048] of the 3D content item based on an additional criterion (i.e. another angular region) [¶ 0050, ¶ 0070-0072, and ¶ 0075]; wherein, the display provides a plurality of viewpoints [¶ 0044, ¶ 0046, and ¶ 0048]); and providing the third view of the 3D content in which the left eye view and right eye view are based on content that is sparser than the at least one of the left eye content and the right eye content (Saito; providing the 3rd view (i.e. further modified position and/or perspective) of the 3D content in which the left eye view and right eye view are based on content that is (subjectively) sparser than the at least one of the left eye content and the right eye content [¶ 0076-0077]; moreover, parallax level [¶ 0078-0080] in relation with changes in viewpoints [¶ 0081-0083]).

Mitani further teaches determining to transition from the second view to a third view (Mitani; determining to transition from the 2nd view to a 3rd view [¶ 0101-0103] in relation with a criterion (i.e. user movement within a viewing environment) [¶ 0095-0098]; moreover, multi-viewpoint [¶ 0063-0064] holographic optical element [¶ 0057-0058], and viewpoint switching [¶ 0071, ¶ 0073, and ¶ 0082-0084]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate determining to transition from the second view to a third view (as taught by Mitani), in order to provide enhanced realism by improving (motion) parallax associated with viewpoint movement (Mitani; [¶ 0004-0005 and ¶ 0010]).

Yeatman further teaches providing the third view of the 3D content in which the left eye view and the right eye view are based on more abstract content than the at least one of the left eye content and the right eye content (Yeatman; providing the third view of the 3D content in which the left eye view and the right eye view are based on more abstract content than the at least one of the left eye content and the right eye content [¶ 0108-0110 and ¶ 0112]; additionally, point cloud disparity [¶ 0113-0115]), wherein the more abstract content comprises a point cloud representation that is more sparse than a point cloud representation associated with the at least one of the left eye content and the right eye content (Yeatman; the more abstract content comprises a point cloud representation that is (subjectively) more sparse than a point cloud representation associated with the at least one of the left eye content and the right eye content [¶ 0108-0110 and ¶ 0141-0142]; moreover, forming a disparity map from the at least first and second 2D images, wherein the disparity map has a gray scale that corresponds to distance information of the at least one object relative to the reference coordinate system [¶ 0009]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate providing the third view of the 3D content in which the left eye view and the right eye view are based on more abstract content than the at least one of the left eye content and the right eye content, wherein the more abstract content comprises a point cloud representation that is more sparse than a point cloud representation associated with the at least one of the left eye content and the right eye content (as taught by Yeatman), in order to provide an improved conversion for dynamic/adaptive three-dimensional data to an observer (Yeatman; [¶ 0003-0008]).

Regarding claim 13, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 12, wherein the additional criterion used to determine from the second view to the third view is based on a same type of criterion used to determine from the first view to the second view (Saito; the additional criterion (i.e. another angular region) [as addressed within the parent claim(s)] used to determine from the 2nd view to the 3rd view is based on a same type of criterion used to determine from the 1st view to the 2nd view [¶ 0044, ¶ 0046, ¶ 0048, and ¶ 0073-0075]).

Mitani further teaches determining to transition from the first view to a second view (Mitani; determining to transition from the 1st view to a 2nd view [¶ 0101-0103] in relation with a criterion (i.e. user movement within a viewing environment) [¶ 0095-0098]; moreover, multi-viewpoint [¶ 0063-0064] holographic optical element [¶ 0057-0058], and viewpoint switching [¶ 0071, ¶ 0073, and ¶ 0082-0084]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate determining to transition from the first view to a second view (as taught by Mitani), in order to provide enhanced realism by improving (motion) parallax associated with viewpoint movement (Mitani; [¶ 0004-0005 and ¶ 0010]).

Regarding claim 17, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the 3D environment is an extended reality (XR) environment (Saito; the 3D environment is an XR environment (i.e. naked-eye stereoscopic display environment) [¶ 0008]; moreover, presentation viewpoints [¶ 0048] associated with 3D rendering [¶ 0070-0071 and ¶ 0073-0074] with (real) world coordinates [¶ 0076]).

Regarding claim 18, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, the electronic device (Saito; the electronic device [as addressed within the parent claim(s)]). Aga further teaches wherein the electronic device is a head-mounted device (HMD) (Aga; the electronic device is an HMD [¶ 0057]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate the electronic device is a head-mounted device (as taught by Aga), in order to provide an improved rendering and presentation of visualizations within an augmented environment that appear three-dimensional to a user (Aga; [¶ 0002-0005 and ¶ 0007]).

Regarding claim 20, the rejection of claim 20 is addressed within the rejection of claim 19, due to the similarities claim 20 and claim 19 share; therefore, refer to the rejection of claim 19 regarding the rejection of claim 20.
Although claim 19 and claim 20 may not be identical, it is reasonable to reject claim 20 based on the prior art teachings and rationale within the rejection of claim 19.

Regarding claim 21, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the criterion is based on at least one of a change in quality of depth data, a gaze convergence angle threshold, and a distance threshold between the electronic device and the 3D content item (Yeatman; the criterion is based on at least one of a change in quality of depth data, a gaze convergence angle threshold, and a distance threshold between the electronic device and the 3D content item [¶ 0142]; moreover, convergence and setting of the interaxial distance between virtual cameras [id.]; additionally, disparity map [¶ 0109-0110] in relation with distance handling [¶ 0009-0011 and ¶ 0063]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate the criterion is based on at least one of a change in quality of depth data, a gaze convergence angle threshold, and a distance threshold between the electronic device and the 3D content item (as taught by Yeatman), in order to provide an improved conversion for dynamic/adaptive three-dimensional data to an observer (Yeatman; [¶ 0003-0008]).

Claim(s) 14-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Saito in view of Mitani, Aga, Son, and Yeatman as applied to claim(s) 1 above, and further in view of Bleyer et al., US PGPUB No. 20190297316 A1, hereinafter Bleyer.
Regarding claim 14, Saito in view of Mitani, Aga, Son, and Yeatman further discloses the method of claim 1, wherein the left eye content and right eye content are obtained based on depth data (Saito; the left eye content and right eye content are obtained based on depth data [¶ 0049-0051]; moreover, depth corresponding to distance towards/near (the display) or away/far (from the display) correlating to quality/status thereof [¶ 0107 and ¶ 0109-0113]; wherein, distances are an amount indicating large (i.e. far), medium, or small (i.e. near) in relation to a display [¶ 0109-0110]).

Mitani further teaches controlling light (Mitani; controlling light in relation with a holographic optical element [¶ 0057-0059]; moreover, light diffusion [¶ 0087]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate controlling light (as taught by Mitani), in order to provide enhanced realism by improving (motion) parallax associated with viewpoint movement (Mitani; [¶ 0004-0005 and ¶ 0010]).

Aga further teaches wherein the left eye content and right eye content are obtained based on depth data and image data (Aga; the left eye content and right eye content are obtained based on depth data [¶ 0157-0161], as illustrated within Figs. 10 and 11; wherein, depth data is obtained [¶ 0060 and ¶ 0129]; additionally, depth sensing and depth map [¶ 0213-0214], as illustrated within Fig. 21, as well as SLAM [¶ 0075-0077]; wherein, projection surface associated with viewpoints involve depth/distance information [¶ 0112-0114 and ¶ 0119] and implicitly involve light intensities (given pixels/color) [¶ 0121 and ¶ 0129]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate wherein the left eye content and right eye content are obtained based on depth data and image data (as taught by Aga), in order to provide an improved rendering and presentation of visualizations within an augmented environment that appear three-dimensional to a user (Aga; [¶ 0002-0005 and ¶ 0007]).

Son further teaches the left eye content and right eye content are obtained based on depth data and light intensity image data (Son; the left eye content and right eye content are obtained based on depth data and image data [¶ 0050-0052], as illustrated within Fig. 8; wherein, data from CAM1 and CAM2 determine depth information [¶ 0050-0051]; and wherein, images implicitly correspond to a representation of light intensity data given the properties of a camera [¶ 0058]; additionally, polarizing light corresponds to controlling light [¶ 0054-0057], as illustrated within Figs. 10A-D).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate the left eye content and right eye content are obtained based on depth data and light intensity image data (as taught by Son), in order to provide an improved visual range associated with multiple viewpoints for presenting dynamic/adaptive three-dimensional data to an observer (Son; [¶ 0008-0010 and ¶ 0051]).

Saito as modified by Mitani, Aga, Son, and Yeatman fails to explicitly disclose light intensity image data. However, Bleyer teaches the left eye content and right eye content are obtained based on depth data and light intensity image data (Bleyer; the left eye content and right eye content (i.e. stereo images) [¶ 0071-0074] are obtained based on depth data and light intensity (i.e. spectrums of light, IR light, visible light) image data [¶ 0053-0055]; wherein, determining/tracking depth (and/or position) [¶ 0037-0040 and ¶ 0046-0048] with a stereo camera system [¶ 0065, ¶ 0080, and ¶ 0089-0090] using light (intensities corresponding to spectrums of light associated with IR light and/or visible light) [¶ 0008-0012]).

Saito in view of Mitani, Aga, Son, and Yeatman, and Bleyer are considered to be analogous art because they pertain to generating and/or managing data in relation with providing media data to a user using an imaging system, wherein one or more computerized units are utilized in order to produce a visualization effect.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate the left eye content and right eye content are obtained based on depth data and light intensity image data (as taught by Bleyer), in order to provide an improved immersive experience for a user by enhancing information integration, depth (i.e. three-dimensional spatial information) detection, and movement tracking within a visualization (Bleyer; [¶ 0001-0006 and ¶ 0008]).

Regarding claim 15, Saito in view of Mitani, Aga, Son, Yeatman, and Bleyer further discloses the method of claim 14, wherein the light intensity image data is based on light intensity image data for the right eye viewpoint from a first light intensity image sensor and light intensity image data for the left eye viewpoint from a second light intensity image sensor (Son; the image data is based on image data for the right eye viewpoint from a 1st light intensity image sensor (i.e. 1st camera, wherein a camera is a light-sensitive sensor that measures light intensities) and image data for the left eye viewpoint from a 2nd light intensity image sensor (i.e. 2nd camera, wherein a camera is a light-sensitive sensor that measures light intensities) [¶ 0050-0051], as illustrated within Fig. 8; wherein, data from CAM1 and CAM2 determine depth information [¶ 0050-0051]; and wherein, images implicitly correspond to a representation of light intensity data given the properties of a camera [¶ 0058]; additionally, IR light tracking [¶ 0048-0049]).

Aga further teaches the image data is based on image data for the right eye viewpoint from a first light intensity image sensor and image data for the left eye viewpoint from a second light intensity image sensor (Aga; the image data is based on image data for the right eye viewpoint from a 1st light intensity image sensor and image data for the left eye viewpoint from a 2nd light intensity image sensor [¶ 0059-0060 and ¶ 0125]; wherein, data from cameras determine depth information [¶ 0060]; and wherein, images implicitly correspond to a representation of light intensity data given the properties of a camera [¶ 0060]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, Yeatman, and Bleyer, to incorporate the image data is based on image data for the right eye viewpoint from a first light intensity image sensor and image data for the left eye viewpoint from a second light intensity image sensor (as taught by Aga), in order to provide an improved rendering and presentation of visualizations within an augmented environment that appear three-dimensional to a user (Aga; [¶ 0002-0005 and ¶ 0007]).

Bleyer further teaches the light intensity image data is based on light intensity image data for the right eye viewpoint from a first light intensity image sensor and light intensity image data for the left eye viewpoint from a second light intensity image sensor (Bleyer; the light intensity (i.e. spectrums of light, IR light, visible light) image data is based on light intensity (i.e. spectrums of light, IR light, visible light) image data for the right eye viewpoint from a 1st light intensity image sensor (i.e. 1st camera) and light intensity (i.e. spectrums of light, IR light, visible light) image data for the left eye viewpoint from a 2nd light intensity (i.e. spectrums of light, IR light, visible light) image sensor (i.e. 2nd camera) [¶ 0053-0055], as illustrated within Fig. 2; wherein, determining/tracking depth (and/or position) [¶ 0037-0040 and ¶ 0046-0048] with a stereo camera system [¶ 0065, ¶ 0080, and ¶ 0089-0090] using light (intensities corresponding to spectrums of light associated with IR light and/or visible light) [¶ 0008-0012]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, Yeatman, and Bleyer, to incorporate the light intensity image data is based on light intensity image data for the right eye viewpoint from a first light intensity image sensor and light intensity image data for the left eye viewpoint from a second light intensity image sensor (as taught by Bleyer), in order to provide an improved immersive experience for a user by enhancing information integration, depth (i.e. three-dimensional spatial information) detection, and movement tracking within a visualization (Bleyer; [¶ 0001-0006 and ¶ 0008]).

Regarding claim 16, Saito in view of Mitani, Aga, Son, Yeatman, and Bleyer further discloses the method of claim 14, wherein the depth data is based on depth data from a single depth sensor, depth data from a first depth sensor for the left eye and a second depth sensor for the right eye, or depth data determined from the light intensity image data (Saito; the depth/distance data is based on depth/distance data from a single depth/distance sensor [¶ 0109], as illustrated within Fig. 22; additionally, camera and/or infrared sensor [¶ 0097-0098]).

Son further teaches the depth data is based on depth data from a first depth sensor for the left eye and a second depth sensor for the right eye (Son; the depth data is based on depth data from a 1st depth sensor for the left eye and a 2nd depth sensor for the right eye [¶ 0050-0052], as illustrated within Fig. 8).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Saito as modified by Mitani, Aga, Son, and Yeatman, to incorporate the depth data is based on depth data from a first depth sensor for the left eye and a second depth sensor for the right eye (as taught by Son), in order to provide an improved visual range associated with multiple viewpoints for presenting dynamic/adaptive three-dimensional data to an observer (Son; [¶ 0008-0010 and ¶ 0051]).

Conclusion

Item(s) of prior art of interest: Cheng et al. (US Patent No. 8704879 B1), “… the scene could of course be animated and changing in real time based on game play or simulation for example, and at the same time the scene can be re-rendered from different viewpoints in real time based on changing tracking information” ([Col. 7, line 36 to Col. 8, line 11]).

Item(s) of prior art of note: Cha et al. (US PGPUB No. 20060125917 A1), Suzuki et al. (US PGPUB No. 20220191455 A1), and Yamamoto et al. (US PGPUB No. 20120062556 A1). The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited, for a listing of analogous art.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Charles Lloyd Beard whose telephone number is (571) 272-5735. The examiner can normally be reached Monday - Friday, 8:00 AM - 5:00 PM EST, alternate Fridays.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

CHARLES LLOYD BEARD
Primary Examiner
Art Unit 2611

/CHARLES L BEARD/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Sep 13, 2023: Application Filed
Jul 26, 2024: Response after Non-Final Action
Sep 18, 2025: Non-Final Rejection — §103
Dec 16, 2025: Examiner Interview Summary
Dec 16, 2025: Applicant Interview (Telephonic)
Dec 18, 2025: Response Filed
Apr 04, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579729: VOLUMETRIC VIDEO SUPPORTING LIGHT EFFECTS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12548225: AUDIO OR VISUAL INPUT INTERACTING WITH VIDEO CREATION (granted Feb 10, 2026; 2y 5m to grant)
Patent 12519924: MULTI-PERSPECTIVE AUGMENTED REALITY EXPERIENCE (granted Jan 06, 2026; 2y 5m to grant)
Patent 12511801: GENERATING VIDEO STREAMS TO DEPICT BOT PERFORMANCE DURING AN AUTOMATION RUN (granted Dec 30, 2025; 2y 5m to grant)
Patent 12513279: STEREOSCOPIC VIDEO DISPLAY DEVICE, STEREOSCOPIC VIDEO DISPLAY METHOD, AND COMPUTER-READABLE STORAGE MEDIUM (granted Dec 30, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 99% (+36.1%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 350 resolved cases by this examiner. Grant probability derived from career allow rate.