Prosecution Insights
Last updated: April 19, 2026
Application No. 18/750,769

Displaying Representations of Environments

Non-Final OA: §103, §DP

Filed: Jun 21, 2024
Examiner: NADKARNI, SARVESH J
Art Unit: 2629
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 3 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 72%, above average (354 granted / 494 resolved; +9.7% vs TC avg)
Interview Lift: +13.7% (moderate lift of roughly +14%; allow rate with vs. without an interview, across resolved cases with an interview)
Typical Timeline: 3y 0m average prosecution; 37 applications currently pending
Career History: 531 total applications across all art units
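
The headline numbers above reduce to simple arithmetic. Here is a minimal sketch of it: the granted/resolved counts come from the cards, while the Tech Center average allow rate and the with/without-interview subgroup rates are assumptions backed out from the displayed +9.7% and +13.7% deltas.

```python
# Minimal sketch of the examiner-metric arithmetic shown above.
# Assumed inputs (not shown on the page) are marked as such.

granted, resolved = 354, 494            # career counts from the card
tc_avg_allow_rate = 0.620               # assumption: backed out from "+9.7% vs TC avg"

allow_rate = granted / resolved         # 0.7166... -> displayed as 72%
print(f"Career allow rate: {allow_rate:.0%}")               # 72%
print(f"vs TC avg: {allow_rate - tc_avg_allow_rate:+.1%}")  # +9.7%

# Interview lift = allow rate among resolved cases with an interview minus
# the rate among those without. Subgroup rates are illustrative assumptions.
rate_with, rate_without = 0.850, 0.713
print(f"Interview lift: {rate_with - rate_without:+.1%}")   # +13.7%
```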

Statute-Specific Performance

§101:  1.1% (-38.9% vs TC avg)
§102: 11.3% (-28.7% vs TC avg)
§103: 72.6% (+32.6% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 494 resolved cases.
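
One plausible reading of this chart is a per-case tally: for each of the examiner's resolved cases, record which rejection statutes appeared, then express each statute as a share of cases. A hedged sketch under that assumption follows; the case rows below are invented for illustration, not real file-wrapper data.

```python
# Hypothetical per-statute tally; the two case rows are invented examples.
from collections import Counter

cases = [
    {"app": "18/750,769", "statutes": {"103", "DP"}},
    {"app": "17/123,456", "statutes": {"103", "112"}},  # hypothetical case
]

counts = Counter(s for case in cases for s in case["statutes"])
for statute, n in sorted(counts.items()):
    print(f"§{statute}: {n / len(cases):.1%} of resolved cases")
```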

Office Action

Rejections: §103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed July 21, 2025 have been fully considered but they are not persuasive.

First, regarding Applicant's Remarks Concerning Double Patenting at page 10 of the Remarks, Examiner respectfully maintains the Double Patenting rejection below and acknowledges Applicant's willingness to file a Terminal Disclaimer upon agreement regarding claim language.

With regard to amended independent claims 1, 25 and 26, at pages 10-11 of the Remarks, Applicant alleges "Bradski is entirely silent with respect to 'a spatial relationship' among ER objects inside a diorama-view representation", and further alleges "Bradski cannot disclose 'maintaining display of the first one or more ER objects arranged in the spatial relationship along at least one plane of the first set of ER world coordinates'". Examiner respectfully disagrees. At FIGS. 78A-78B and 86C, and at [1390]-[1394] and [1251]-[1253], Bradski clearly discloses the layout of a diorama-view representation of objects, and suggests a spatial relationship along at least one plane, particularly in FIG. 86C. As such, Examiner respectfully submits Bradski clearly discloses "changing display of the first one of the plurality of diorama-view representations while maintaining display of the first one or more ER objects arranged in a spatial relationship along at least one plane of the first set of ER world coordinates" (FIGS. 78A-78B and [1390]-[1394] describing user selection and rendering of the virtual content positioned into the field of view; further illustrating the coplanar spatial relationship of objects at FIG. 86C and [1251]-[1253], with object size/rendering changed while other rendered objects remain the same in a spatial relationship). Therefore, Examiner respectfully submits the rejection of these claims and all claims depending therefrom stands as properly addressed below.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-26 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4-10, 12-17, 19 and 22 of U.S. Patent No. 11,768,576 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the currently examined application read on the claims of the issued patent, as described below.

Currently Examined Application 18/750,769, Claim 1: A method comprising: at an electronic device including one or more processors, a non-transitory memory, one or more input devices, and a display device: displaying, via the display device, a plurality of diorama-view representations from a corresponding plurality of viewing vectors (different coordinate systems being displayed of the issued patent includes different view vectors being displayed), wherein the plurality of diorama-view representations corresponds to a plurality of enhanced reality (ER) environments, wherein each of the plurality of diorama-view representations is associated with a respective set of ER world coordinates that characterizes a respective ER environment, wherein the plurality of diorama-view representations includes a first one of the plurality of diorama-view representations displayed from a first viewing vector, and wherein the first one of the plurality of diorama-view representations includes a first one or more of ER objects arranged according to a first set of ER world coordinates; detecting, via the one or more input devices, an input associated with the first one of the plurality of diorama-view representations; and in response to detecting the input, changing display of the first one of the plurality of diorama-view representations from the first viewing vector to a second viewing vector while maintaining the first one or more ER objects arranged according to the first set of ER world coordinates.

U.S. Patent No. 11,768,576 B2, Claim 1: A method comprising: at an electronic device including one or more processors, a non-transitory memory, one or more input devices, and a display device: displaying, via the display device, a home enhanced reality (ER) environment characterized by home ER world coordinates, wherein the home ER environment includes a first diorama-view representation of a first ER environment different from the home ER environment, wherein the first diorama-view representation includes one or more of ER objects arranged in a spatial relationship according to first ER world coordinates different from the home ER world coordinates, and wherein the one or more of ER objects are associated with a first appearance while displayed within the first diorama-view representation of the first ER environment; detecting, via the one or more input devices, a first input that is directed to the first diorama-view representation; and in response to detecting the first input, transforming the home ER environment by: ceasing to display the first diorama-view representation within the home ER environment, transforming the spatial relationship between a subset of the one or more ER objects as a function of the home ER world coordinates and the first ER world coordinates, and displaying, via the display device, the subset of the one or more ER objects within the home ER environment based on the transformation, and wherein the subset of one or more of ER objects are associated with a second appearance different from the first appearance while displayed within the home ER environment.

U.S. Patent No. 11,768,576 B2, Claim 4: in response to detecting the first input, displaying the subset of the one or more ER objects from a first viewing vector, the method further comprising: while displaying the subset of the one or more ER objects within the home ER environment based on the transformation, detecting, via the one or more input devices, a second input; and in response to detecting the second input, changing display of the subset of the one or more ER objects from the first viewing vector to a second viewing vector while maintaining the subset of the one or more ER objects arranged according to the first ER world coordinates.
Claim mapping, examined claim → patent claim of U.S. Patent No. 11,768,576 B2 (with the examiner's rationale in parentheses where given):

Claim 2 → Claim 1
Claim 3 → Claim 6
Claim 4 → Claim 7
Claim 5 → Claim 9
Claim 6 → Claim 9 (sensor to determine the positional change would be obvious to one of ordinary skill)
Claim 7 → Claim 4
Claim 8 → Claim 1 ("diorama view representation" commonly understood to mean a scaled, smaller version)
Claim 9 → Claim 1 (commonly understood in the art that virtual reality is synonymous with augmented and extended reality)
Claim 10 → Claim 1 (commonly understood in the art that augmented reality is synonymous with extended reality)
Claim 11 → Claim 8 (movement would be considered a form of animation as viewed by the user)
Claim 12 → Claim 18
Claim 13 → Claim 1 ("diorama view representation" commonly understood to mean a scaled, smaller version)
Claim 14 → Claim 1
Claim 15 → Claim 10
Claim 16 → Claim 15 as dependent on claim 14
Claim 17 → Claim 16 (level of access would be a subset)
Claim 18 → Claim 16
Claim 19 → Claim 17
Claim 20 → Claim 1 (commonly known in the art to save and retrieve historical data of the user)
Claim 21 → Claim 13
Claim 22 → Claim 12
Claim 23 → Claim 1 (commonly known in the art to save and retrieve historical recorded data of the user)
Claim 24 → Claim 16 (control level is similar to access level)
Claim 25 → Claim 19 (similarly analyzed as claim 1 above)
Claim 26 → Claim 22 (similarly analyzed as claim 1 above)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-26 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski et al., US 2016/0026253 A1 (hereinafter "Bradski") in view of Bastov et al., US 2018/0315248 A1 (hereinafter "Bastov").

Regarding claim 1, Bradski discloses a method ([0017]) comprising: at an electronic device (FIGS. 1-3, AR system 10 with user device 12, and at FIG. 3, head mounted display system 30, at [0170]-[0171] and [0180]-[0195] and [0206]) including one or more processors (FIG. 3 with processor 38 described at [0195] and [0206]-[0212]), a non-transitory memory (FIGS. 1-3 with memory described at least at [0190]-[0194]), one or more input devices (FIGS. 1-3 with [0200]-[0201] describing at least some forms of inputs including voice, cameras, etc.), and a display device (FIGS. 1-3 describing display component 33 at [0196]-[0198]): displaying, via the display device, a plurality of diorama-view representations (FIGS. 78A and 78B describing augmented reality (AR) in a home setting, 7802 and 7804, as described at [1390]-[1394], augmenting the home room within which the user is interacting with the device; virtual construct 7808 provides a set of small virtual representations of a variety of virtual rooms), wherein the plurality of diorama-view representations corresponds to a plurality of enhanced reality (ER) environments (FIGS. 78A and 78B with [1390]-[1394], small virtual representations of a variety of virtual rooms as disclosed therein), wherein each of the plurality of diorama-view representations is associated with a respective set of ER world coordinates (scaled versions of the rooms as illustrated in FIGS. 78A-78B and [1390]-[1394]) that characterizes a respective ER environment (FIGS. 78A-78B and [1390]-[1394], different room constructs), wherein the plurality of diorama-view representations includes a first one (FIGS. 78A-78B with a single room at [1390]-[1394]) of the plurality of diorama-view representations displayed (FIGS. 78A-78B and [1390]-[1394], different room constructs with each room having an independent viewable angle as would be understood by one of ordinary skill), and wherein the first one of the plurality of diorama-view representations includes a first one or more of ER objects arranged in a spatial relationship according to a first set of ER world coordinates (FIGS. 78A-78B and [1390]-[1394], virtual content associated with the room); detecting, via the one or more input devices, an input associated with the first one of the plurality of diorama-view representations (FIGS. 78A-78B and [1390]-[1394] describing input or swiping or selecting gestures therein); and in response to detecting the input, changing display of the first one of the plurality of diorama-view representations while maintaining display of the first one or more ER objects arranged in a spatial relationship along at least one plane of the first set of ER world coordinates (FIGS. 78A-78B and [1390]-[1394] describing user selection and rendering of the virtual content positioned into the field of view; further illustrating the coplanar spatial relationship of objects at FIG. 86C and [1251]-[1253], with object size/rendering changed while other rendered objects remain the same in a spatial relationship).

However, although Bradski clearly discloses various angles of representing various virtual elements (Bradski at FIG. 132 and [1317]), Bradski does not explicitly disclose that the representations are displayed from a corresponding plurality of viewing vectors, displaying from a first viewing vector, and changing the representation from the first viewing vector to a second viewing vector. In the same field of endeavor, Bastov discloses that the representations are displayed from a corresponding plurality of viewing vectors ([0115] and [0135]), a first viewing vector ([0115] and [0135]), and changing the representation from the first viewing vector to a second viewing vector ([0115] and [0135]).

Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the heads-up VR system and device of Bradski to incorporate the adjusting of the viewing vectors as disclosed by Bastov because the references are within the same field of endeavor, namely, virtual/augmented reality systems with mixed real and virtual imagery. The motivation to combine these references would have been to improve 3D rendering of virtual objects in the real world space (Bastov at least at [0005]-[0007]).
Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and there would have been a reasonable expectation of success.

Regarding claim 2, Bradski in view of Bastov discloses the method of claim 1 (see above), wherein the input is directed to the first one of the plurality of diorama-view representations (FIGS. 74A and 74B and further at least at FIGS. 75-77 describing selection via a physical button or inputs at [1387] and [1391], and [1397] describing swipe gestures detected).

Regarding claim 3, Bradski in view of Bastov discloses the method of claim 2 (see above), wherein the one or more input devices includes a hand tracking sensor (Bradski at [0784] and FIG. 51), the method further comprising: detecting the input via the hand tracking sensor (Bradski at [1392]-[1394] describing the second gesture therein); obtaining hand tracking data from the hand tracking sensor based on the input (Bradski at [0784]); and determining, from the hand tracking data, that the input is directed to the first one of the plurality of diorama-view representations (FIGS. 74A and 74B and further at least at FIGS. 75-77 describing selection gesture inputs at [1387] and [1391], and [1397] describing swipe gestures detected).

Regarding claim 4, Bradski in view of Bastov discloses the method of claim 2 (see above), wherein the one or more input devices includes an eye tracking sensor (Bradski at [1005]-[1011] and FIG. 117 describing gaze detection and tracking), the method further comprising: detecting the input via the eye tracking sensor (Bradski at [1005] and FIGS. 117-120, further describing a broad understanding of eye tracking as used for various inputs); obtaining eye tracking data from the eye tracking sensor based on the input (Bradski at [1005] and FIGS. 117-120); and determining, from the eye tracking data, that the input is directed to the first one of the plurality of diorama-view representations (Bradski at [1005] and FIGS. 117-120 in view of [1392]-[1394]; it would be obvious to one of ordinary skill in the art to use eye gaze tracking as a form of input for the purpose of selecting a visually discernable virtual element).

Regarding claim 5, Bradski in view of Bastov discloses the method of claim 1 (see above), wherein the input corresponds to a change in position of the electronic device from a first pose (Bastov, head position at [0115]; Bradski at [0603]) to a second pose (Bastov, head position at [0115]; Bradski, head movement at [0603]) relative to the first one of the plurality of diorama-view representations (FIGS. 74A and 74B and [1390]-[1394]).
Regarding claim 6, Bradski in view of Bastov discloses the method of claim 5 (see above), wherein the one or more input devices includes a positional-change sensor (Bradski at [0200]), the method further comprising: detecting the input via the positional-change sensor (Bradski at [0200]; Bastov describing movement of the head and head mounted display device at least at [0112]-[0114]); obtaining positional-change data from the positional-change sensor based on the input (Bradski at [0603], [0672], [0975]); and determining the change in position of the electronic device based on the positional-change data (Bradski at [0603], [0672], [0975]; Bastov with coordinates determination at least at [0112]-[0114] and [0078]).

Regarding claim 7, Bradski in view of Bastov discloses the method of claim 1 (see above), wherein the input is also associated with a second one of the plurality of diorama-view representations (Bradski at FIGS. 78A-B and [1392]-[1394], and further at Bastov at least at [0112]-[0114]), and wherein the second one of the plurality of diorama-view representations is displayed from a third viewing vector (Bradski at FIGS. 78A-B and further at [1393]; Bastov at [0115] and [0135], changing the representation and vector), and wherein the second one of the plurality of diorama-view representations includes a second one or more of ER objects arranged according to a second set of ER world coordinates (Bradski at FIGS. 78A-B and further at [1390]-[1394]), the method further comprising, in response to detecting the input, changing display of the second one of the plurality of diorama-view representations from the third viewing vector to a fourth viewing vector while maintaining the second one or more ER objects arranged according to the second set of ER world coordinates (Bradski at [1392]-[1394]; Bastov with coordinates determination at least at [0112]-[0115] and [0078], with vector determination therein, and [0135]).

Regarding claim 8, Bradski in view of Bastov discloses the method of claim 1 (see above), wherein the plurality of diorama-view representations corresponds to reduced-sized representations of the corresponding plurality of ER environments (Bradski at FIG. 78B and [1390]-[1394]).

Regarding claim 9, Bradski in view of Bastov discloses the method of claim 1 (see above), wherein the first one of the plurality of diorama-view representations corresponds to a virtual reality (VR) representation of a first ER environment (Bradski at FIG. 78B and [1390]-[1394]).

Regarding claim 10, Bradski in view of Bastov discloses the method of claim 1 (see above), wherein the first one of the plurality of diorama-view representations corresponds to an augmented reality (AR) representation of a first ER environment (Bradski at FIG. 78B and [1390]-[1394] and [0957] and [1363]), and wherein the first one of the plurality of diorama-view representations includes AR content overlaid on environmental data that is associated with physical features of the first ER environment (Bradski at FIG. 78B and [1390]-[1394] and [0957] and [1363] and [1410], diorama representation in FIG. 78B of a room representation with virtual elements therein).

Regarding claim 11, Bradski in view of Bastov discloses the method of claim 1 (see above), wherein displaying the plurality of diorama-view representations includes animating a portion of the plurality of diorama-view representations (Bradski at FIGS. 78A-B and further at [1390]-[1394], swiping left to right moves the representations like scrolling, which is a form of animation, and [0714] describing animation of various gaming elements or other environments at [1447]; adding animation to a representation would be obvious to one of ordinary skill).

Regarding claim 12, Bradski in view of Bastov discloses the method of claim 1 (see above), further comprising: obtaining a plurality of characterization vectors that respectively provide a plurality of spatial characterizations of the corresponding plurality of ER environments (Bradski at [1217]-[1220] and Bastov at [0112]-[0114]), wherein each of the plurality of characterization vectors includes a plurality of object label values that respectively identify one or more ER objects (Bradski at [1217]-[1220] and [0942]-[0947]), and wherein each of the plurality of characterization vectors also includes a plurality of relative position values providing respective positions of the one or more ER objects relative to each other (Bradski at [0942]-[0947] with Bastov at [0112]-[0114]); and generating, from the corresponding plurality of ER environments and the plurality of characterization vectors, the plurality of diorama-view representations of the corresponding plurality of ER environments according to the plurality of relative position values (Bradski at [1392]-[1397] and FIGS. 78A-79B describing configuration of the icons to represent the environment therein; it would be obvious to one of ordinary skill to combine the orientation vectors to produce the virtual representations of FIG. 78B in the proper orientation for the user).

Regarding claim 13, Bradski in view of Bastov discloses the method of claim 12 (see above), wherein generating the plurality of diorama-view representations includes scaling down the corresponding plurality of ER environments by a scaling factor (a scaled representation of the room would be obvious as discussed at least at Bradski FIG. 78B and [1390]-[1394] and [0946]).

Regarding claim 14, Bradski in view of Bastov discloses the method of claim 13 (see above), further comprising obtaining, via the one or more input devices, a scaling request input that specifies the scaling factor (Bradski at [0946]-[0947] and FIGS. 74A-79E, various scaling factors to produce the rendering).

Regarding claim 15, Bradski in view of Bastov discloses the method of claim 13 (see above), wherein the electronic device includes an image sensor, the method further comprising: obtaining, via the image sensor, pass-through image data bounded by a field-of-view of a physical environment associated with the image sensor (Bastov at [0122] with description of surfaces considered therein); identifying, within the pass-through image data, one or more physical objects within the physical environment (Bastov at [0122] and FIGS. 6 and 7, determination of rendering virtual objects); and determining the scaling factor based on the one or more physical objects (Bastov at [0108]-[0109], [0122] and [0162] and FIGS. 6 and 7, determination of rendering virtual objects, in view of Bradski at [1392]-[1394]).

Regarding claim 16, Bradski in view of Bastov discloses the method of claim 1 (see above), wherein at least a subset of the corresponding plurality of ER environments is respectively associated with a plurality of ER environments corresponding to a plurality of ER sessions that are distinct from each other (Bradski at FIGS. 78A-79B and [1392]-[1397]), and wherein each of the plurality of ER environments enables graphical representations of individuals to be concurrently within the ER environment (Bradski FIG. 78B as illustrated with the first environment associated with the user as described at [1392]-[1397], further describing customization per user at [0877] and [0983], an avatar at [1279]-[1280] in a gaming environment, and further FIG. 89D at [1445]-[1476]).

Regarding claim 17, Bradski in view of Bastov discloses the method of claim 16 (see above), wherein each of at least a subset (Bradski at parental control [1621]) of the plurality of diorama-view representations corresponding to at least the subset of the corresponding plurality of ER environments includes one or more ER representations of one or more corresponding individuals that are associated with a respective ER session (Bradski at [1621] describing a parental guidance/control application and additionally at [1721]).

Regarding claim 18, Bradski in view of Bastov discloses the method of claim 17 (see above), wherein each of the one or more corresponding individuals is associated with a respective access level that satisfies an access level criterion that is associated with the respective ER session (Bradski at [1621] describing a parental guidance/control application and additionally at [1721]).

Regarding claim 19, Bradski in view of Bastov discloses the method of claim 16 (see above), further comprising: playing, via a speaker of the electronic device (Bradski at [1504]-[1505]), a first set of speech data (Bradski at [1504]-[1505]) while displaying the plurality of diorama-view representations (Bradski at FIGS. 78A-79B and [1392]-[1397]), wherein the first set of speech data is associated with one or more corresponding individuals that are associated with a particular ER session associated with a respective ER environment (Bradski disclosing avatar communication within a session/space at [0602], audio communication described at least at [1503]-[1506] in a collaborative environment described at FIG. 93L); obtaining, via an audio sensor of the electronic device, a second set of speech data from a user associated with the electronic device (Bradski at least at [1503]-[1506] describing translation data therein); and providing the second set of speech data to the respective ER environment so that the second set of speech data is audible to the one or more corresponding individuals that are associated with the particular ER session (Bradski at [1503]-[1508] describing language translation therein).

Regarding claim 20, Bradski in view of Bastov discloses the method of claim 16 (see above), wherein each of at least a subset of the plurality of diorama-view representations corresponding to at least the subset of the corresponding plurality of ER environments is associated with a respective ER session (Bradski at [0847], personal data used and retrievable, and FIGS. 74A-79E at [1390]-[1409]), and wherein displaying the plurality of diorama-view representations is based on historical data about the electronic device joining the plurality of ER sessions (Bradski at [0847], personal data used and retrievable, and FIGS. 74A-79E at [1390]-[1409]).

Regarding claim 21, Bradski in view of Bastov discloses the method of claim 1 (see above), further comprising obtaining, via an eye tracking sensor (Bradski at [1005]-[1011] and FIG. 117 describing gaze detection and tracking), eye gaze data indicative of an eye gaze location (Bradski at [1005]-[1011] and FIG. 117 describing gaze detection and tracking), wherein displaying the plurality of diorama-view representations is based on the eye gaze data (Bradski at [1005] and FIGS. 117-120 describing a broad understanding of eye tracking as used for various inputs, in view of [1392]-[1394]; it would be obvious to one of ordinary skill in the art to use eye gaze tracking as a form of input for the purpose of selecting a visually discernable virtual element).

Regarding claim 22, Bradski in view of Bastov discloses the method of claim 1 (see above), further comprising: receiving, via the one or more input devices, a diorama-selection input that selects the first one of the plurality of diorama-view representations (Bradski disclosing input detected and selection of one of the representations at [1394]); and in response to receiving the diorama-selection input: displaying, via the display device, an ER environment associated with the first one of the plurality of diorama-view representations; and ceasing to display the plurality of diorama-view representations (see Bradski at least at [1397] and FIG. 79A with a prompt to stop displaying the current virtual environment described therein, and [1392]-[1394]).

Regarding claim 23, Bradski in view of Bastov discloses the method of claim 1 (see above), further comprising displaying, within a particular one of the plurality of diorama-view representations, a recording of activity within a respective ER environment associated with the particular one of the plurality of diorama-view representations (Bradski at [0847], personal data used and retrievable, and FIGS. 74A-79E at [1390]-[1409] and [1450] and [1538]).

Regarding claim 24, Bradski in view of Bastov discloses the method of claim 1 (see above), wherein displaying the plurality of diorama-view representations is based on control values (Bradski at [1621] describing a parental guidance/control application and additionally at [1721], in view of Bradski at FIGS. 78A-79B and [1392]-[1397]).

Regarding claim 25, it is similar in scope to claim 1 above, the only difference being that claim 25 is additionally directed to one or more programs (Bradski at [0174]), wherein the one or more programs are stored in the non-transitory memory (Bradski at [0174] and [0175]) and configured to be executed by the one or more processors (Bradski at [0174] and [0175]), the one or more programs including instructions (Bradski at [0174] and [0175]). Therefore, claim 25 is similarly analyzed and rejected as claim 1.

Regarding claim 26, it is similar in scope to claim 1 above, the only difference being that claim 26 is directed to a non-transitory computer readable storage medium storing one or more programs (Bradski at [0537]; Bastov at [0181]), the one or more programs comprising instructions, which, when executed by an electronic device (Bradski at [0537]; Bastov at [0181]). Therefore, claim 26 is similarly analyzed and rejected as claim 1.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Christen et al., US 10,373,392 B2; Pahud et al., US 2019/0005724 A1; Veiga et al., US 12,254,244 B1; Khalid et al., US 11,288,875 B2.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARVESH J NADKARNI whose telephone number is (571) 270-7562. The examiner can normally be reached 8AM-5PM M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LunYi Lao, can be reached at (571) 272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SARVESH J NADKARNI/
Examiner, Art Unit 2619

/NITIN PATEL/
Supervisory Patent Examiner, Art Unit 2628
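
The closing paragraphs of the action encode a concrete deadline calculation: a three-month shortened statutory period from the Aug 28, 2025 mailing date, extendable month-by-month under 37 CFR 1.136(a), but never beyond the six-month statutory maximum. A minimal sketch of that arithmetic (assuming the third-party python-dateutil package is available; the two-month advisory-action nuance is omitted):

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # assumes python-dateutil is installed

mailed = date(2025, 8, 28)                         # mailing date of the final action
ssp_due = mailed + relativedelta(months=3)         # shortened statutory period
statutory_max = mailed + relativedelta(months=6)   # absolute six-month cap

for ext_months in range(4):                        # 0- to 3-month extensions
    due = min(ssp_due + relativedelta(months=ext_months), statutory_max)
    print(f"{ext_months}-month extension: reply due {due}")
```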

Prosecution Timeline

Jun 21, 2024
Application Filed
Nov 14, 2024
Response after Non-Final Action
Mar 12, 2025
Non-Final Rejection — §103, §DP
Jul 07, 2025
Examiner Interview Summary
Jul 07, 2025
Applicant Interview (Telephonic)
Jul 21, 2025
Response Filed
Aug 28, 2025
Final Rejection — §103, §DP
Nov 19, 2025
Interview Requested
Nov 25, 2025
Applicant Interview (Telephonic)
Nov 25, 2025
Examiner Interview Summary
Dec 01, 2025
Request for Continued Examination
Dec 16, 2025
Response after Non-Final Action
Dec 19, 2025
Non-Final Rejection — §103, §DP
Apr 14, 2026
Interview Requested
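
A quick way to sanity-check this cadence is to compute the gap between consecutive docket events. A minimal sketch using a few dates transcribed from the timeline above (event labels shortened):

```python
from datetime import date

# A few events transcribed from the timeline above (labels shortened).
events = [
    (date(2024, 6, 21), "Filed"),
    (date(2025, 3, 12), "Non-Final Rejection"),
    (date(2025, 7, 21), "Response Filed"),
    (date(2025, 8, 28), "Final Rejection"),
    (date(2025, 12, 1), "RCE"),
    (date(2025, 12, 19), "Non-Final Rejection"),
    (date(2026, 4, 14), "Interview Requested"),
]

# Print the number of days between each pair of consecutive events.
for (d1, a), (d2, b) in zip(events, events[1:]):
    print(f"{a} -> {b}: {(d2 - d1).days} days")
```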

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573325
SCAN SIGNAL DRIVER CIRCUIT, DISPLAY PANEL, DISPLAY DEVICE, AND DRIVING METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12560967
ANNULAR HOUSING FOR DETECTION DEVICE WITH FIRST AND SECOND FLEXIBLE SUBSTRATES
2y 5m to grant Granted Feb 24, 2026
Patent 12554334
PERSONALIZED CALIBRATION OF USER INTERFACES
2y 5m to grant Granted Feb 17, 2026
Patent 12548519
POWER SUPPLY SYSTEM, DISPLAY DEVICE INCLUDING THE SAME, AND METHOD OF DRIVING THE SAME
2y 5m to grant Granted Feb 10, 2026
Patent 12504831
TACTILE PRESENTATION APPARATUS AND TACTILE PRESENTATION KNOB
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.
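
The "Xy Ym to grant" figures above are whole-month deltas between filing and grant dates. A minimal formatter that rolls months into years correctly; the filing date below is hypothetical, chosen so the output matches the "2y 5m" shown for Patent 12573325:

```python
from datetime import date

def format_pendency(filed: date, granted: date) -> str:
    """Render a filing-to-grant delta as 'Xy Ym', rolling months into years."""
    total_months = (granted.year - filed.year) * 12 + (granted.month - filed.month)
    years, months = divmod(total_months, 12)
    return f"{years}y {months}m"

# Hypothetical filing date; the grant date comes from the card above.
print(format_pendency(date(2023, 10, 10), date(2026, 3, 10)))  # "2y 5m"
```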


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 85% (+13.7%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 494 resolved cases by this examiner. Grant probability derived from career allow rate.
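
The note above says the grant probability is derived from the career allow rate. A hedged sketch of how the headline probabilities likely relate (the actual model may weight more factors):

```python
# Sketch only: base probability = career allow rate; the interview figure
# adds the observed +13.7% lift, capped at 100%.
base = 354 / 494                        # career allow rate, ~0.72
with_interview = min(base + 0.137, 1.0)

print(f"Grant probability: {base:.0%}")           # 72%
print(f"With interview:   {with_interview:.0%}")  # 85%
```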
