Prosecution Insights
Last updated: April 19, 2026
Application No. 18/429,138

DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR CAPTURING MEDIA WITH A CAMERA APPLICATION

Non-Final OA — §103
Filed: Jan 31, 2024
Examiner: GRAY, RYAN M
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
Projected OA Rounds: 1-2
Projected Time to Grant: 2y 2m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 88% — above average (589 granted / 672 resolved; +25.6% vs TC avg)
Interview Lift: +10.9% — moderate (among resolved cases with interview)
Avg Prosecution: 2y 2m typical timeline; 18 applications currently pending
Total Applications: 690 across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 672 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 329 is objected to because of the following informalities: Claim 329 recites a sixth set of alignment criteria without an antecedent first through fifth set of alignment criteria in the parent claim. Further, the sixth set of alignment criteria is not defined in the claim as different from the first through fifth sets of alignment criteria. Accordingly, the scopes of the claims with first through sixth criteria can overlap entirely. Applicant gives examples of the sixth criteria at ¶ 595: a sixth set of alignment criteria (e.g., criteria defining a minimal misalignment from an established stable/target viewpoint at which the viewpoint is considered substantially aligned (e.g., a misalignment margin of error); in some embodiments, the sixth set of alignment criteria includes a criterion that is satisfied when a distance between a current location representing the current viewpoint and the anchor location falls below an alignment threshold distance (in some embodiments, an alignment margin of error (e.g., when movement remains within the minimum threshold distance from the anchor location, the viewpoint is considered substantially aligned); in some embodiments, a minimum angular distance (e.g., 1°, 1.5°, and/or 3° yaw and/or pitch rotation); in some embodiments, a minimum cartesian distance (e.g., 1, 2, or 5 cm vertical or horizontal translation); in some embodiments, the alignment threshold distance is the same as the second threshold distance and/or the minimum threshold distance); in some embodiments, the sixth set of alignment criteria includes a criterion that is satisfied when the movement of the viewpoint has stabilized (e.g., remains below one or more threshold velocities and/or accelerations for at least a respective duration)), displays a graphical alignment indication (e.g., as illustrated in FIGS. 17B and 17Q) (in some embodiments, the graphical alignment indication includes a change to the appearance of the virtual indicator element, the virtual alignment element, and/or the virtual boundary element (e.g., the behaviors described above, such as a change in opacity, movement, appearance, and/or disappearance)).

Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In this instance, and similarly with regard to the first, second, third, fourth and fifth sets, because the language is exemplary (“e.g., criteria defining a minimal misalignment from an established stable/target viewpoint”), the claims are interpreted as a set of alignment criteria generally. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Use of italics indicates a limitation is not explicitly disclosed by the reference alone.

Claim(s) 295-299, 301, 319-321, 329-330, 332-333 is/are rejected under 35 U.S.C.
103 as being unpatentable over Tsukahara (US 2015/0226969) in view of Ghaly (US 2017/0287221).

Claim 295

Examiner’s Interpretation: Spatial video is interpreted as video that is ultimately captured as a left/right pair as part of the capture process (whether from a stereo or monocular camera) and does not include non-stereo video/imagery (see Specification at ¶ 41) (“while capturing spatial video media of an environment using the one or more cameras, wherein the spatial video media includes a first video component corresponding to a viewpoint of a right eye and a second video component, different from the first video component, corresponding to a viewpoint of a left eye that when viewed concurrently create an illusion of a spatial representation of the environment”). Anchor location, defined as “Anchor location 1706 serves as a reference or target point for HMD X700 to define ‘stable’ or ‘low-motion’ spatial video capture, e.g., video capture with only minimal yaw rotation, pitch rotation, vertical translation, and horizontal translation movement of the viewpoint (e.g., actual and/or apparent camera motion)” (see Specification ¶ 525), can include the depth direction (such as in Tsukahara below, which uses it to stabilize the capturing) or rotational positions (such as in Ghaly below).

Claim Mapping: Tsukahara discloses a computer system configured to communicate with a display generation component and one or more cameras, the computer system comprising (Tsukahara, Figs.
2, 7, 8, ¶ 6: “an imaging apparatus including: an imaging unit; a display unit…generate the left eye image and the right eye image such that a display object indicating the focal distance is seen at a depth position corresponding to the focal distance and cause the display unit to display the left eye image and the right eye image.”): one or more processors (Tsukahara, ¶ 78: “The system controller 211 is, for example, configured by a micro computer including a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), a non-volatile memory unit, an interface, and the like and controls the respective sections of the see-through HMD 100.”); and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for (Tsukahara, ¶ 78: “The system controller 211 is, for example, configured by a micro computer including a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), a non-volatile memory unit, an interface, and the like and controls the respective sections of the see-through HMD 100.”):

[Tsukahara Figs. 7 and 8 reproduced here.]

while capturing spatial video media of an environment using the one or more cameras (e.g. shot composition setup with a marked focal distance indicator; Tsukahara, ¶ 19, 174: “A left eye image and a right eye image are generated…a stereoscopic image formed of the left eye image and the right eye image.
A subject is imaged by the imaging unit….The user can more freely set the composition of the captured image while more easily maintaining the focal distance.”), wherein the spatial video media includes a first video component corresponding to a viewpoint of a right eye and a second video component, different from the first video component, corresponding to a viewpoint of a left eye that when viewed concurrently create an illusion of a spatial representation of the environment (Tsukahara, ¶ 19, 43: “A left eye image and a right eye image are generated…a stereoscopic image formed of the left eye image and the right eye image. A subject is imaged by the imaging unit…The stereoscopic image is an image that looks three-dimensional to a user who views the image and recognizes a positional relationship in a depth direction (depth feeling). The stereoscopic image is formed of, for example, images with a plurality of viewpoints. Disparity and convergence angle between the images express the depth feeling. The stereoscopic image is, for example, formed of a left eye image and a right eye image mainly viewed by the left and right eyes of the user, respectively.”), displaying, via the display generation component, a virtual indicator element of an anchor location in the environment that represents a respective viewpoint corresponding to the spatial video media, wherein the virtual indicator element is displayed while the environment is visible via the display generation component (See Fig. 7-8 above; Tsukahara, ¶¶ 175-176: “Note that the focus icon 311 can have any shape. For example, as shown in "E" of FIG. 7, the focus icon 311 may be rectangular. Further, for example, as shown in "F" of FIG. 7, not only the focus icon 311 but also a rectangular frame 331 that is an image showing the field angle of the captured image may be displayed on the display unit 112…the display position of the focus icon 311 in the display region may be variable. 
For example, the display position of the focus icon 311 (position of object as target to which focal distance is adjusted) may be linked with the line-of-sight of the user (focus is achieved on subject in direction of line-of-sight of user”)); while displaying the virtual indicator element while the environment is visible via the display generation component, detecting a first change in a viewpoint from which the spatial video media is being captured (Tsukahara, ¶ 47, 51: “For example, the size of the focus icon may be reduced as the focal distance increases (in other words, the size of the focus icon may be increased as the focal distance decreases). That is, the focal distance may be expressed using the depth position and size of the focus icon”); and (Tsukahara, ¶ 47, 51: “For example, the size of the focus icon may be reduced as the focal distance increases (in other words, the size of the focus icon may be increased as the focal distance decreases). That is, the focal distance may be expressed using the depth position and size of the focus icon…Specifically, in this case, for example, when the user manually adjusts the focal distance, the depth position of the focus icon is changed according to the focal distance changed by the operation. By performing an operation while viewing the depth position of the focus icon, the user can adjust the focal distance while checking the focal distance. Thus, the user can adjust the focal distance to a desired depth position more easily and more accurately.”). 
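The Tsukahara passages cited above express the focal distance through both the depth position and the size of the focus icon, with size shrinking as focal distance grows. A minimal sketch of that inverse mapping follows; the scaling constants and function name are hypothetical, not taken from the reference:

```python
def focus_icon_params(focal_distance_m, base_size_px=64.0, min_size_px=8.0):
    """Map a focal distance to a focus-icon size and rendered depth.

    Per the cited Tsukahara passages (¶¶ 47, 51), icon size is reduced
    as focal distance increases, and the icon is rendered at a depth
    position corresponding to the focal distance. The pixel constants
    here are illustrative only.
    """
    if focal_distance_m <= 0:
        raise ValueError("focal distance must be positive")
    size = max(min_size_px, base_size_px / focal_distance_m)  # inverse scaling
    depth = focal_distance_m  # icon drawn at the focal plane
    return size, depth
```

A nearby subject thus gets a large icon at a shallow depth, a distant one a small icon at a far depth, which matches the behavior the quoted paragraphs describe.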
Tsukahara does not explicitly disclose, but Ghaly also discloses capturing spatial video media of an environment using the one or more cameras (Ghaly, ¶¶ 14, 55: “a user may desire to capture an augmented-reality image while a camera has a particular predetermined pose (e.g., in 6 degrees of freedom)… The wearer's perception of distance to virtual display imagery is affected by the positional disparity between the right and left display images”) and additionally discloses in response to detecting the first change in the viewpoint from which the spatial video media is being captured, changing an appearance of the virtual indicator element to indicate the respective viewpoint corresponding to the spatial video media (Ghaly, ¶ 27: “the virtual cues may include alignment of a set of device-locked markers with a set of world-locked markers on a plane. In some implementations, the virtual cues may include a world-locked “ghost” image (e.g., at least partially translucent) of a previously-captured image positioned at a predetermined pose at which the previously-captured image was captured. In some implementations, the virtual cues may include a ghost scene of previous hologram poses. In some implementations, the virtual cues may include 3D/2D alignment of different axes. In some implementations, various characteristics of the virtual cues may change as a current pose moves closer to a predetermined pose. For example, a brightness, transparency, color, or sound of a virtual cue may change as the current pose moves closer to the predetermined pose.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to adjust the indicator relative to the viewpoint. 
One of ordinary skill in the art would have motivation to align the viewpoint (“without proper pose alignment for each captured augmented-reality image in the sequence, the augmented-reality video may appear to “jump around” or otherwise be distorted as the camera pose changes from one captured image to the next. In general, a user may desire to capture an image from any predetermined pose for any reason.”)(Ghaly, ¶ 14). One of ordinary skill in the art would have had a reasonable expectation of success because alignment would be another improvement to disparity setting in order to stabilize the final image for improved quality.

Claim 296

Examiner’s Interpretation: Relative to the environment is defined in Applicant’s specification at ¶ 563: “relative to the environment visible via the display generation component (e.g., as illustrated in FIGS. 17C-17K and 17N-17Q) (e.g., changing a color (e.g., increasing or decreasing visual prominence with respect to the environment), transparency (e.g., allowing more or less of the environment to be seen underneath), size (e.g., growing or shrinking relative to the appearance of the environment), and/or location (e.g., displaying the virtual indicator element at a location that appears to change with respect to the environment)).” Changing the size of the icon would fall within the scope of the examples given (e.g. growing or shrinking the icon).

Claim Mapping: Tsukahara discloses wherein changing the appearance of the virtual indicator element to indicate the respective viewpoint corresponding to the spatial video media includes changing the appearance of the virtual indicator element relative to the environment visible via the display generation component (Tsukahara, ¶ 47, 51: “For example, the size of the focus icon may be reduced as the focal distance increases (in other words, the size of the focus icon may be increased as the focal distance decreases).
That is, the focal distance may be expressed using the depth position and size of the focus icon”) Claim 297 Examiner’s Interpretation: Virtual location is defined in Applicant’s specification at ¶ 564 as “virtual indicator element at a virtual location (e.g., the virtual indicator element is rendered at a particular location (in some embodiments, at the focal/convergence location of the user's viewpoint; in some embodiments, at a predetermined location (e.g., 0.5, 1, or 2 meters away from the user's viewpoint)”) Claim Mapping: Tsukahara discloses wherein: displaying the virtual indicator element includes displaying the virtual indicator element at a virtual location in the environment (e.g. at apparent distance, or a specific x-y location corresponding to the user’s view or target selection; Tsukahara, ¶ 135: “controller 234 sets the disparity and convergence angle of the focus icon such that a virtual image position of the focus icon indicating the focal distance is a depth position corresponding to the focal distance as in the manual focus”); and the virtual indicator element is not included in the spatial video media (graphic is an indicator for disparity and would not be stored as the final image; Tsukahara, ¶ 73: “The image data of the captured image obtained by imaging may be supplied to the control box 152 and stored in the storage unit.”); Claim 298 Examiner’s Interpretation: Environment-locked is interpreted as partially stabilized to an environment position based on Applicant’s usage at ¶ 565: anchor region is an environment-locked region (e.g., a region based on a location in the three-dimensional environment, such that, as the user's viewpoint shifts, the region shifts with respect to the viewport through which the environment is visible) that includes the anchor location (e.g., 1706 and/or 1736) (e.g., the steady target location; in some embodiments, the anchor region is centered around the anchor location) (e.g., the virtual indicator element is 
partially environment-locked, such that the virtual indicator element may appear to move somewhat relative the environment, but remains displayed within the same general region of the environment (e.g., near the anchor location)) Claim Mapping: Tsukahara does not disclose, but Ghaly discloses wherein changing the appearance of the virtual indicator element includes moving the virtual indicator element to a respective location within an anchor region of the environment, wherein the anchor region is an environment-locked region that includes the anchor location (Ghaly, ¶ 27: “virtual cues may include separate rotation and direction visual alignment indicators. In some implementations, the virtual cues may include alignment of a continuous field of view frame with a world-locked continuous frame. In some implementations, the virtual cues may include alignment of a set of device-locked markers with a set of world-locked markers on a plane…. In some implementations, various characteristics of the virtual cues may change as a current pose moves closer to a predetermined pose. For example, a brightness, transparency, color, or sound of a virtual cue may change as the current pose moves closer to the predetermined pose.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use a locked region. One of ordinary skill in the art would have motivation to align the viewpoint (“without proper pose alignment for each captured augmented-reality image in the sequence, the augmented-reality video may appear to “jump around” or otherwise be distorted as the camera pose changes from one captured image to the next. In general, a user may desire to capture an image from any predetermined pose for any reason.”)(Ghaly, ¶ 14). 
One of ordinary skill in the art would have had a reasonable expectation of success because alignment would be another improvement to disparity setting in order to stabilize the final image for improved quality. Claim 299 Tsukahara does not disclose, but Ghaly discloses wherein moving the virtual indicator element to the respective location within the anchor region of the environment includes moving the virtual indicator element according to one or more simulated physical properties (e.g. simulating movement responsive to user motion; Ghaly, ¶ 63: “Further, the predetermined pose cue may be a virtual indicator that is visually presented relative to the permanent current pose cue.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to adjust the indicator based on simulated physical properties. One of ordinary skill in the art would have motivation to align the viewpoint (“without proper pose alignment for each captured augmented-reality image in the sequence, the augmented-reality video may appear to “jump around” or otherwise be distorted as the camera pose changes from one captured image to the next. In general, a user may desire to capture an image from any predetermined pose for any reason.”)(Ghaly, ¶ 14). One of ordinary skill in the art would have had a reasonable expectation of success because alignment would be another improvement to disparity setting in order to stabilize the final image for improved quality. 
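Claims 298-299, as mapped above, concern an indicator that moves within an environment-locked anchor region according to one or more simulated physical properties. Neither reference recites an algorithm; the following is an illustrative sketch only, with a hypothetical region radius and smoothing constant standing in for the claimed physical simulation:

```python
import math

def step_indicator(indicator_xy, anchor_xy, viewpoint_offset_xy,
                   region_radius=0.3, smoothing=0.2):
    """Advance a partially environment-locked indicator by one frame.

    Sketch of the claimed behavior: the indicator drifts toward a
    target derived from the viewpoint change with damped, spring-like
    motion (a simulated physical property), but is clamped so it never
    leaves an anchor region centered on the anchor location. All
    constants are hypothetical.
    """
    # Target position: anchor shifted by the viewpoint change.
    tx = anchor_xy[0] + viewpoint_offset_xy[0]
    ty = anchor_xy[1] + viewpoint_offset_xy[1]
    # Damped motion toward the target (simple exponential smoothing).
    x = indicator_xy[0] + smoothing * (tx - indicator_xy[0])
    y = indicator_xy[1] + smoothing * (ty - indicator_xy[1])
    # Clamp to the environment-locked anchor region.
    dx, dy = x - anchor_xy[0], y - anchor_xy[1]
    dist = math.hypot(dx, dy)
    if dist > region_radius:
        scale = region_radius / dist
        x = anchor_xy[0] + dx * scale
        y = anchor_xy[1] + dy * scale
    return (x, y)
```

Under this sketch the indicator appears to move somewhat with the viewpoint but remains within the same general region of the environment, consistent with the "partially environment-locked" interpretation quoted from ¶ 565.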
Claim 301 Examiner’s Interpretation: Environment-locked is interpreted as partially stabilized to an environment position based on Applicant’s usage at ¶ 565: anchor region is an environment-locked region (e.g., a region based on a location in the three-dimensional environment, such that, as the user's viewpoint shifts, the region shifts with respect to the viewport through which the environment is visible) that includes the anchor location (e.g., 1706 and/or 1736) (e.g., the steady target location; in some embodiments, the anchor region is centered around the anchor location) (e.g., the virtual indicator element is partially environment-locked, such that the virtual indicator element may appear to move somewhat relative the environment, but remains displayed within the same general region of the environment (e.g., near the anchor location)) Claim Mapping: Tsukahara does not disclose, but Ghaly discloses wherein the virtual indicator element is environment-locked (Ghaly, ¶ 27: “Any suitable type of virtual cue may be presented by an augmented-reality device. In some implementations, the virtual cues may include separate rotation and direction visual alignment indicators. In some implementations, the virtual cues may include alignment of a continuous field of view frame with a world-locked continuous frame. In some implementations, the virtual cues may include alignment of a set of device-locked markers with a set of world-locked markers on a plane.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use an environment locked indicator. One of ordinary skill in the art would have motivation to align the viewpoint (“without proper pose alignment for each captured augmented-reality image in the sequence, the augmented-reality video may appear to “jump around” or otherwise be distorted as the camera pose changes from one captured image to the next. 
In general, a user may desire to capture an image from any predetermined pose for any reason.”)(Ghaly, ¶ 14). One of ordinary skill in the art would have had a reasonable expectation of success because alignment would be another improvement to disparity setting in order to stabilize the final image for improved quality.

Claim 319

Examiner’s Interpretation: Virtual plane can correspond to any 2D plane, including the capture depth plane (“One or more planes of capture (e.g., planes that are substantially perpendicular to the principal axes of the cameras and/or parallel to outward facing lenses of the cameras”)(Specification, ¶ 425). Examiner interprets virtual plane as including the depth direction.

Claim Mapping: Tsukahara does not disclose, but Ghaly discloses wherein displaying the virtual indicator element includes positioning the virtual indicator element within a virtual plane in the environment, wherein the virtual plane in the environment is spaced at least a threshold depth away from a user (Ghaly, ¶ 21: “The current pose cue 118 includes a set of four virtual coplanar markers useable to aim the camera of the augmented-reality device. The four virtual coplanar markers of the current pose cue 118 at least partially indicate a field of view of the current pose.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use a planar reference. One of ordinary skill in the art would have motivation to align the viewpoint (“without proper pose alignment for each captured augmented-reality image in the sequence, the augmented-reality video may appear to “jump around” or otherwise be distorted as the camera pose changes from one captured image to the next. In general, a user may desire to capture an image from any predetermined pose for any reason.”)(Ghaly, ¶ 14).
One of ordinary skill in the art would have had a reasonable expectation of success because alignment would be another improvement to disparity setting in order to stabilize the final image for improved quality. Claim 320 Tsukahara does not disclose, but Ghaly discloses wherein the one or more programs include instructions for: detecting a gaze of the user; and positioning the virtual plane based on the gaze of the user (Ghaly, ¶ 59: “For each locus of the viewable surface, two line segments are constructed-a first line segment to the pupil position of the wearer's right eye and a second line segment to the pupil position of the wearer's left eye. The pixel Ri of the right display image, which corresponds to locus i, is taken to be the intersection of the first line segment in right image frame 48R. Likewise, the pixel Li of the left display image is taken to be the intersection of the second line segment in left image frame 48L. This procedure automatically provides the appropriate amount of shifting and scaling to correctly render the viewable surface, placing every locus i at the required distance from the wearer.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to position the plane based on the user’s gaze. One of ordinary skill in the art would have motivation to indicate the plane of focus. One of ordinary skill in the art would have had a reasonable expectation of success because Tsukahara considers the disparity at particular depths in order to indicate to the user where the plane of focus is. 
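Claim 320, as mapped above, positions the virtual plane based on the user's gaze, and claim 319 requires at least a threshold depth. As a purely geometric illustration (not drawn from either reference), the convergence depth of the two eyes' lines of sight can be estimated from the interpupillary distance and the convergence angle; every parameter value below is hypothetical:

```python
import math

def plane_depth_from_gaze(ipd_m=0.063, convergence_angle_rad=0.063,
                          min_depth_m=0.5):
    """Estimate where the gaze rays converge and place the plane there.

    Thin-triangle geometry: the eyes sit ipd_m apart and each rotates
    inward by half the convergence angle, so the rays meet at roughly
    (ipd / 2) / tan(angle / 2). The plane is placed at that depth but
    no closer than a threshold depth. Illustrative values only.
    """
    if convergence_angle_rad <= 0:
        return float("inf")  # parallel gaze: no finite convergence
    depth = (ipd_m / 2.0) / math.tan(convergence_angle_rad / 2.0)
    return max(depth, min_depth_m)
```

With a 63 mm interpupillary distance and a convergence angle of about 0.063 rad, the estimate lands near 1 m, which is the kind of predetermined distance (0.5, 1, or 2 meters) the Applicant's ¶ 564 gives as examples.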
Claim 321

Tsukahara does not disclose, but Ghaly discloses wherein positioning the virtual plane based on the gaze of the user includes determining a convergence location of the gaze of the user, wherein the virtual plane includes the convergence location (Ghaly, ¶ 59: “For each locus of the viewable surface, two line segments are constructed-a first line segment to the pupil position of the wearer's right eye and a second line segment to the pupil position of the wearer's left eye. The pixel Ri of the right display image, which corresponds to locus i, is taken to be the intersection of the first line segment in right image frame 48R. Likewise, the pixel Li of the left display image is taken to be the intersection of the second line segment in left image frame 48L. This procedure automatically provides the appropriate amount of shifting and scaling to correctly render the viewable surface, placing every locus i at the required distance from the wearer.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to position the plane based on the user’s gaze. One of ordinary skill in the art would have motivation to indicate the plane of focus. One of ordinary skill in the art would have had a reasonable expectation of success because Tsukahara considers the disparity at particular depths in order to indicate to the user where the plane of focus is.

Claim 329

Examiner’s Interpretation: The sixth set of alignment criteria is interpreted as similar in scope to the other sets of alignment criteria (see objection above). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
In this instance, and similarly with regard to the first, second, third, fourth and fifth sets, because the language is exemplary (“e.g., criteria defining a minimal misalignment from an established stable/target viewpoint”), the claims are interpreted as a set of alignment criteria generally.

Claim Mapping: Tsukahara does not disclose, but Ghaly discloses wherein the one or more programs include instructions for: while capturing the spatial video media of the environment and in accordance with a determination that the viewpoint from which the spatial video media is being captured satisfies a sixth set of alignment criteria, displaying a graphical alignment indication (Ghaly, ¶ 27: “the virtual cues may include alignment of a set of device-locked markers with a set of world-locked markers on a plane. In some implementations, the virtual cues may include a world-locked “ghost” image (e.g., at least partially translucent) of a previously-captured image positioned at a predetermined pose at which the previously-captured image was captured. In some implementations, the virtual cues may include a ghost scene of previous hologram poses. In some implementations, the virtual cues may include 3D/2D alignment of different axes. In some implementations, various characteristics of the virtual cues may change as a current pose moves closer to a predetermined pose. For example, a brightness, transparency, color, or sound of a virtual cue may change as the current pose moves closer to the predetermined pose.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider additional alignment indicators. One of ordinary skill in the art would have motivation to align the viewpoint (“without proper pose alignment for each captured augmented-reality image in the sequence, the augmented-reality video may appear to “jump around” or otherwise be distorted as the camera pose changes from one captured image to the next.
In general, a user may desire to capture an image from any predetermined pose for any reason.”)(Ghaly, ¶ 14). One of ordinary skill in the art would have had a reasonable expectation of success because alignment would be another improvement to disparity setting in order to stabilize the final image for improved quality.

Claim 330

Examiner’s Interpretation: Playable representation would correspond to generic playback of captured image/video.

Claim Mapping: Tsukahara discloses wherein the one or more programs include instructions for: after capturing the spatial video media of the environment, displaying a playable representation of the spatial video media (Tsukahara, ¶ 124: “In addition to the operations of the display system and the operations related to the imaging function, the system controller has to determine triggers for operation control for play, cueing, fast-forwarding/rewinding, pause, recording, and the like in the storage unit 261 and operation control related to the transmission and reception”)

Claim 332

Examiner’s Interpretation: Scope of Machine Readable Media. Machine readable media can encompass forms of signal transmission media that fall outside of the four statutory categories of invention. MPEP 2106; citing In re Nuijten, 500 F.3d 1346, 84 USPQ2d 1495 (Fed. Cir. 2007). A claim whose BRI covers both statutory and non-statutory embodiments embraces subject matter that is not eligible for patent protection and therefore is directed to non-statutory subject matter. MPEP 2106. Claim 332 as drafted recites a non-transitory computer-readable storage medium… Because the use of “non-transitory” excludes subject matter ineligible for patent protection, the broadest reasonable interpretation of the claimed medium in view of Applicant’s specification covers only eligible subject matter.

Relation to Claim 295: Claim 332 overlaps substantially in scope with system claim 295.
Claim Mapping: The same teachings and rationales in claim 295 are applicable to claim 332, as Tsukahara additionally teaches an embodiment including a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more cameras, the one or more programs including instructions for… (Tsukahara, ¶ 184: “The storage medium is, in addition to the apparatus main body, configured by, for example, the removable medium 264 in which the programs are stored, which is distributed for delivering the programs to the user as shown in FIG. 2”)

Claim 333

Examiner’s Interpretation: Relation to Claim 295: Claim 333 overlaps substantially in scope with system claim 295.

Claim Mapping: The same teachings and rationales in claim 295 are applicable to claim 333.

Claim(s) 331 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsukahara (US 2015/0226969) in view of Ghaly (US 2017/0287221) in view of Anvaripour (US 2021/0303855).

Claim 331

Tsukahara does not disclose, but Anvaripour discloses wherein the one or more programs include instructions for: while displaying the playable representation of the spatial video media: in accordance with a determination that a respective playback mode is available for playing the playable representation of the spatial video media, displaying an indication of the respective playback mode with a first appearance; and in accordance with a determination that a respective playback mode is not available for playing the playable representation of the spatial video media, foregoing displaying the indication of the respective playback mode with the first appearance (Anvaripour, ¶ 23: “To increase the overall efficiency and reduce bandwidth consumed by a client device 102, the messaging client application 104 may only load the graphical elements for a subset of the augmented reality items that
are presented. For example, if a given augmented reality item collection includes 10 different augmented reality items, the messaging client application 104 retrieves from a server the graphical elements, metadata and information for augmenting an image or video using the augmented reality items for the first three of the 10 different augmented reality items. The remaining seven augmented reality items may be represented in the display using respective icons that may be greyed out. When a user input, such as tapping a given one of the seven icons, is received that selects a given augmented reality item for which the graphical elements, metadata and information has not yet been loaded, the messaging client application 104 then communicates with the server to retrieve the graphical elements, metadata and information.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to change the appearance of the indication to signal availability. One of ordinary skill in the art would have had motivation to indicate which items are available for playback, and differing icons are one common means of indicating the status of a file. One of ordinary skill in the art would have had a reasonable expectation of success because the technique is applied the same way regardless of application.

Allowable Subject Matter
Claim(s) 300, 302-318, and 322-328 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim(s) 300: Ghaly teaches away from “moving the virtual indicator element to the respective location within the anchor region of the environment includes moving the virtual indicator element a second distance in the first direction, wherein the second distance is shorter than the first distance,” because in Ghaly the degree of movement increases as the user gets closer to the target (see Ghaly Figs. 5-6).
[Images omitted: Ghaly Figs. 5 and 6 (media_image3.png, media_image4.png), greyscale.]
Regarding claim(s) 302: Ghaly, which comes closest to claim 302’s limitations, suggests a single set of criteria that can be processed to provide the indicator (see Ghaly Figs. 5 and 6 above); Tsukahara considers only the depth indication. This teaches away from the subsequent alignment criteria as claimed.
Regarding claim(s) 303-318: Parent claim 302 would be allowable, and all claims in this set ultimately depend from claim 302.
Regarding claim(s) 322: Ghaly considers transparency but does not suggest the “determination that a distance between a current location in the environment that represents the viewpoint from which the spatial video media is being captured and the anchor location exceeds a second predetermined threshold distance, ceasing displaying the virtual indicator element.”
Regarding claim(s) 323-326: Parent claim 322 would be allowable, and all claims in this set ultimately depend from claim 322.
Regarding claim(s) 327: Ghaly does not suggest the claimed maintenance threshold distance in the context of ceasing display of the indicator.
Regarding claim(s) 328: Parent claim 327 would be allowable, and claim 328 ultimately depends from claim 327.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY, whose telephone number is (571) 272-4582.
The examiner can normally be reached Monday through Friday, 9:00am-5:30pm (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN M GRAY/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Jan 31, 2024
Application Filed
Dec 03, 2024
Response after Non-Final Action
Jan 07, 2026
Non-Final Rejection — §103
Apr 06, 2026
Applicant Interview (Telephonic)
Apr 06, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597216
ARTIFICIAL INTELLIGENCE VIRTUAL MAKEUP METHOD AND DEVICE USING MULTI-ANGLE IMAGE RECOGNITION
2y 5m to grant Granted Apr 07, 2026
Patent 12586252
METHOD FOR ENCODING THREE-DIMENSIONAL VOLUMETRIC DATA
2y 5m to grant Granted Mar 24, 2026
Patent 12572892
SYSTEMS AND METHODS FOR VISUALIZATION OF UTILITY LINES
2y 5m to grant Granted Mar 10, 2026
Patent 12561928
SYSTEMS AND METHODS FOR CALCULATING OPTICAL MEASUREMENTS AND RENDERING RESULTS
2y 5m to grant Granted Feb 24, 2026
Patent 12542946
REMOTE PRESENTATION WITH AUGMENTED REALITY CONTENT SYNCHRONIZED WITH SEPARATELY DISPLAYED VIDEO CONTENT
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
88%
Grant Probability
98%
With Interview (+10.9%)
2y 2m
Median Time to Grant
Low
PTA Risk
Based on 672 resolved cases by this examiner. Grant probability derived from career allow rate.
