Prosecution Insights
Last updated: April 19, 2026
Application No. 18/473,239

METHODS FOR IMPROVING USER ENVIRONMENTAL AWARENESS

Status: Final Rejection (§103)
Filed: Sep 23, 2023
Examiner: LIU, ZHENGXI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 2 (Final)

Grant Probability: 64% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 225 granted / 354 resolved; +1.6% vs TC avg)
Interview Lift: strong, +40.1% (allowance rate for resolved cases with an interview vs. without)
Typical Timeline: 3y 4m average prosecution; 31 applications currently pending
Career History: 385 total applications across all art units
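The +40.1% interview lift can be read as the difference between the examiner's allowance rate on resolved cases with an interview and the rate without one. A minimal sketch; the with/without split below is hypothetical (the dashboard only reports the totals), chosen to be consistent with the stated 225 granted / 354 resolved figures and the +40.1% lift:

```python
# Hypothetical illustration: "interview lift" computed as the difference in
# allowance rates between resolved cases with and without an interview.
# The 150/204 split of the 354 resolved cases is invented for illustration;
# only the totals (225 granted / 354 resolved) come from the dashboard.
def allowance_rate(granted: int, resolved: int) -> float:
    return granted / resolved

with_interview = allowance_rate(130, 150)     # hypothetical: 130 of 150 allowed
without_interview = allowance_rate(95, 204)   # hypothetical: 95 of 204 allowed

lift = with_interview - without_interview
career = allowance_rate(130 + 95, 150 + 204)  # matches the card's 225/354

print(f"interview lift = {lift:+.1%}, career allow rate = {career:.0%}")
# → interview lift = +40.1%, career allow rate = 64%
```

Any split with the same totals and the same rate gap reproduces the card's numbers; the point is only that "lift" is a rate difference, not a ratio.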

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§112: 15.7% (-24.3% vs TC avg)
TC averages are estimates. Based on career data from 354 resolved cases.
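The four "vs TC avg" deltas are mutually consistent: back-solving rate minus delta gives the same baseline for every statute, suggesting the card compares all four rates against a single Tech Center average estimate near 40%. A quick arithmetic check (the 40.0 figure is inferred from the deltas, not stated on the card):

```python
# Back-solve the implied TC-average baseline from each statute's allowance
# rate and its "vs TC avg" delta, all taken from the card above.
rates = {"§101": 13.2, "§102": 5.1, "§103": 61.3, "§112": 15.7}       # percent
deltas = {"§101": -26.8, "§102": -34.9, "§103": 21.3, "§112": -24.3}  # percent

implied_baseline = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_baseline)  # every statute back-solves to 40.0
```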

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 72-75 and 77-93 are pending. Claims 72, 77-78, and 93-94 have been amended. Claim 76 has been cancelled. No claim has been added. Claims 72-75 and 77-93 have been rejected.

Compact Prosecution

With respect to claim interpretation, the Examiner has provided notes regarding “[BRI on the record]” throughout the Office Action, so that the record is clear about the scope of the claimed invention and about the basis for the Examiner’s analyses. A clear record of the claim interpretation could expedite examination by allowing it to focus on Applicant’s inventive concept and its comparison with related prior art. If there are disagreements, Applicant may present an alternative interpretation based on MPEP 2111. The Examiner will adopt Applicant’s interpretation on the record if Applicant’s interpretation is reasonable and/or the arguments are persuasive. Applicant may amend claims relying on the Examiner’s claim interpretation provided on the record.

Claim Objections

The objections to Claims 72, 79, 86, and 93-94 have been withdrawn in view of Applicant’s amendments to the independent claims. Claims 82, 84, 87, and 92 are objected to because of the following informalities: the claims recite “while,” and the Examiner requests clarification from Applicant’s representative.
Claim 82 recites: while the first person satisfies the one or more criteria and while the first person has the increased visual prominence relative to the first virtual content: while the respective portion of the first virtual content is a first respective portion of the first virtual content that corresponds to a first location of the first person relative to the first virtual content,

Here, it is unclear whether “while” is similar to “if” and is a contingent limitation. If a reference art teaches never displaying first virtual content, is the “while” limitation satisfied?

Response to Arguments

Applicant’s arguments regarding the Examiner’s 35 USC § 102 rejections are moot in view of the Examiner’s new grounds of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 72-75, 77-82, 86-87, and 91-92 are rejected under 35 U.S.C. 103 as being unpatentable over Pekelny et al. (US 20200026922 A1) in view of Abdollahian (US 20190392830 A1).
Regarding Claim 72, Pekelny teaches A method comprising: at a computer system in communication with a display generation component and one or more input devices (

[BRI on the record] With respect to “at a computer system in communication with a display generation component and one or more input devices,” the Examiner is reading the claim to require that each step of the method is executed on the computer system as claimed.

[Mapping Analysis] [image: media_image1.png omitted]

Pekelny discloses a virtual reality display, stating “The VR device 106 in the example of FIG. 1 corresponds to a head-mounted display (HMD). In one implementation, the VR device 106 produces a completely immersive virtual environment. In such an environment, the user 104, while he wears the VR device 106, cannot see the physical environment 102.” Pekelny ¶ 36; see ¶ 80.

Pekelny discloses a computer system in communication with the virtual reality display, stating “Further, while FIG. 1 shows that the VR device 106 corresponds to an HMD, the principles described herein can be applied to other types of VR devices. For example, the VR device 106 can alternatively correspond to a computing device of any type which presents a virtual environment on one or more external display devices not affixed to the user's head, but where those external display device(s) at least partially block the user's view of the physical environment 102.” Pekelny ¶ 37; see ¶ 139; see figs. 7, 12-13, 16.

Pekelny discloses input devices, stating “The computing device 1602 also includes an input/output interface 1616 for receiving various inputs (via input devices 1618), and for providing various outputs (via output devices 1620). Illustrative input devices 1618 and output devices 1620 were described above in connection with FIGS. 12 and 13.” Pekelny ¶ 146; see ¶ 61.):

displaying, via the display generation component, first virtual content (see the Examiner’s analysis for the following limitation);

while displaying, via the display generation component, first virtual content, wherein the first virtual content is obscuring a first portion of a physical environment, detecting, via the one or more input devices, a first person located in the first portion of the physical environment (

Pekelny teaches displaying a virtual environment, mapped to the first virtual content, obscuring the physical environment, stating “The VR device 106 in the example of FIG. 1 corresponds to a head-mounted display (HMD). In one implementation, the VR device 106 produces a completely immersive virtual environment. In such an environment, the user 104, while he wears the VR device 106, cannot see the physical environment 102.” Pekelny ¶ 36. Note, any element within the virtual environment could be mapped to the first virtual content as well.

Pekelny further explains the virtual and physical environments, stating “FIG. 1 shows a physical environment 102 in which a user 104 uses a virtual reality (VR) device 106 to interact with a virtual reality environment (‘virtual environment’). The physical environment 102 corresponds to an indoor space that includes a plurality of objects. In this merely illustrative case, the objects include: another person 108, a plurality of walls (110, 112, 114), and two computing devices (116, 118).” Pekelny ¶ 35.

Pekelny teaches detecting a first person in the physical environment, as shown in fig. 1, based on a user setting, stating “For instance, the user 104 may identify all people as objects-of-interest, just members of his own family, or just a specific person, etc. . . . Second, the SPC uses automated analysis to determine whether any of the identified objects are present in the physical environment 102 while the user 104 interacts with a virtual world provided by the VR device 106.
Any object-of-interest that the SPC detects is referred to herein as a detected object. Third, the SPC provides alert information to the user 104 which alerts the user 104 to each detected object. For instance, the SPC may present the alert information as visual information that is overlaid on the virtual environment 202.” Pekelny ¶ 39.); and

in response to detecting the first person in the first portion of the physical environment (

[image: media_image2.png omitted]

Here, fig. 15 blocks 1504 and 1506 show that the later steps in fig. 15 are in response to detecting the first person in the physical environment. “If not (as determined in block 1506), then the SPC 702 terminates the process 1502 with respect to the class under consideration. If, however, the specified kind of object is present, then, in block 1508, the SPC 702 determines whether it is appropriate to display the object(s) regardless of the identities of their respective instance(s). For example, the user 104 may have instructed the SPC 702 to provide alert information upon the discovery of any people in the physical environment 102, without regard to whom these people may be. If this is so, then the SPC 702 will generate alert information for the detected object(s) (in a manner described below) without resolving the identity(ies) of those object(s).” Pekelny ¶ 134.):

in accordance with a determination that the first person satisfies one or more criteria, wherein the one or more criteria indicate that the computer system has detected that attention of the first person “Third, the SPC provides alert information to the user 104 which alerts the user 104 to each detected object. For instance, the SPC may present the alert information as visual information that is overlaid on the virtual environment 202.” Pekelny ¶ 39.

Pekelny teaches detecting an expression of attention of the first person directed to a user of the HMD through verbal communication, stating “Alternatively, or in addition, the SPC may display the alert information 204 when the other person 108 issues a command ‘Hello John!’ (presuming that the user's name is John), or ‘See me!’ or the like, as represented by the voice bubble 604. For instance, assume that the user's friend wishes to get the user's attention as the user 104 plays a game. The friend may provide a voice command that requests the SPC to provide alert information to the user 104, notifying the user 104 of the friend's location. In one implementation, the SPC can allow each user to configure the SPC to associate different commands by the user and/or another person with respective actions.” Pekelny ¶ 50.

[image: media_image3.png omitted]

Here, the alert information 204 comprises the image of the other person with increased visual prominence with respect to the virtual environment. “In response to this command, the SPC will show the alert information 204 that identifies the location of the other person 108, presuming the other person 108 has been detected by the SPC.” Pekelny ¶ 50.); and

in accordance with a determination that the first person does not satisfy the one or more criteria, forgoing increasing the visual prominence of the first person relative to the first virtual content (

Pekelny teaches detecting an expression of attention of the first person directed to a user of the HMD through verbal communication, stating “Alternatively, or in addition, the SPC may display the alert information 204 when the other person 108 issues a command ‘Hello John!’ (presuming that the user's name is John), or ‘See me!’ or the like, as represented by the voice bubble 604. For instance, assume that the user's friend wishes to get the user's attention as the user 104 plays a game. The friend may provide a voice command that requests the SPC to provide alert information to the user 104, notifying the user 104 of the friend's location. In one implementation, the SPC can allow each user to configure the SPC to associate different commands by the user and/or another person with respective actions.” Pekelny ¶ 50. When the other person does not say those words, the alert information 204 is not displayed in this embodiment.).

Pekelny does not explicitly disclose: detecting that attention of the first person based at least partially on an orientation of a respective portion of a body of the first person being directed toward the user.

Abdollahian teaches detecting that attention of the first person based at least partially on an orientation of a respective portion of a body of the first person being directed toward the user (“The process 900 includes detecting 940 a hail event based on the estimate of the orientation of the face. The fact that a person in the vicinity of the user wearing the head-mounted display is facing toward the user may warrant a hail event to alert the user to the presence of the person who may be addressing the user. In some implementations, a hail event is detected 940 when the estimate of the orientation of the face is within a threshold angle of a facing directly toward the user. In some implementations, the estimate of the orientation of the face is one of a plurality of factors considered to detect 940 the hail event.” Abdollahian ¶ 99. The disclosed face is mapped to the claimed “the respective portion of the body.” This mapping is consistent with the specification ¶¶ 53, 241, 373.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Abdollahian’s alert technique with primary reference Pekelny. One of ordinary skill in the art would be motivated to alert a user of virtual reality to a person who may be paying attention to the user.
“The process 900 includes detecting 940 a hail event based on the estimate of the orientation of the face. The fact that a person in the vicinity of the user wearing the head-mounted display is facing toward the user may warrant a hail event to alert the user to the presence of the person who may be addressing the user. In some implementations, a hail event is detected 940 when the estimate of the orientation of the face is within a threshold angle of a facing directly toward the user. In some implementations, the estimate of the orientation of the face is one of a plurality of factors considered to detect 940 the hail event.” Abdollahian ¶ 99.

Independent Claims 93-94 are substantially similar to Claim 72. The rejection analyses based on Pekelny in view of Abdollahian for Claim 72 are applied to Claims 93-94.

In addition, Claim 93 recites “A computer system that is in communication with a display generation component and one or more input devices, the computer system comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for . . .” and Claim 94 recites “A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, cause the computer system to perform a method . . .” (Pekelny ¶¶ 27, 31, 140-143).
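The combined teaching the rejection relies on (attention criteria based on a body-part orientation per Abdollahian ¶ 99, or a configured voice command per Pekelny ¶ 50, gating an increase in visual prominence) can be sketched as follows. This is an illustrative reading of the claim mapping, not code from either reference; the class, the threshold value, and all names are hypothetical:

```python
# Illustrative sketch (not from either reference) of the claim-72 decision
# flow as mapped by the rejection: Abdollahian's face-orientation threshold
# (¶ 99) and Pekelny's voice command (¶ 50) as attention criteria. All names
# and the 20-degree threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class DetectedPerson:
    face_angle_deg: float   # 0 = face oriented directly toward the HMD user
    spoke_command: bool     # e.g., "Hello John!" / "See me!" per Pekelny ¶ 50

FACING_THRESHOLD_DEG = 20.0  # assumed "threshold angle" per Abdollahian ¶ 99

def attention_criteria_met(p: DetectedPerson) -> bool:
    """One or more criteria: face orientation toward the user within a
    threshold angle, or a configured voice command was detected."""
    return p.face_angle_deg <= FACING_THRESHOLD_DEG or p.spoke_command

def render_decision(p: DetectedPerson) -> str:
    # Claim 72's two branches: increase the visual prominence of the person
    # relative to the first virtual content, or forgo doing so.
    if attention_criteria_met(p):
        return "increase prominence (overlay pass-through alert)"
    return "forgo increase (keep immersive virtual content)"
```

For example, `render_decision(DetectedPerson(face_angle_deg=5.0, spoke_command=False))` takes the "increase prominence" branch, while a person facing away who says nothing takes the "forgo" branch.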
Regarding Claim 73, Pekelny further teaches The method of claim 72, wherein increasing the visual prominence of the first person relative to the first virtual content (referencing limitation in parent claim) includes increasing the visual prominence of the first person to a first visual prominence relative to the first virtual content (

[image: media_image3.png omitted]

Here, the alert information 204 comprises the image of the other person with increased visual prominence with respect to the virtual environment. Pekelny teaches obscuring the other person in the physical environment before the increasing, stating “The VR device 106 in the example of FIG. 1 corresponds to a head-mounted display (HMD). In one implementation, the VR device 106 produces a completely immersive virtual environment. In such an environment, the user 104, while he wears the VR device 106, cannot see the physical environment 102.” Pekelny ¶ 36.),

the method further comprising: in response to detecting the first person in the first portion of the physical environment and before the first person satisfies the one or more criteria, increasing the visual prominence of the first person relative to the first virtual content to a second visual prominence relative to the first virtual content, wherein the second visual prominence is less than the first visual prominence (

[image: media_image4.png omitted]

Here, a user may select and add the following settings:

Object (fig. 8 806):       Sue Jones                  | Sue Jones
When (fig. 8 808):         When 3 meters              | When Sue Jones explicitly asks
Mode (fig. 8 810):         Pass-through Video/Outline | Pass-through Video/Outline
Transparency (fig. 8 812): 5                          | 3

Assuming transparency level 3 means more visual prominence when compared with transparency level 5. If that is not the case, the transparency values could easily be switched.
When Sue Jones is detected within 3 meters, a notification with transparency level 5 is displayed; and when Sue Jones subsequently speaks, the notification’s transparency level is changed to 3, gaining more visual prominence. It would have been “obvious to try”: choosing from a finite number of identified, predictable solutions in fig. 8 and the disclosure, with a reasonable expectation of success. The notification would be produced according to the configuration. (KSR)).

Regarding Claim 74, Pekelny teaches The method of claim 72. However, Pekelny does not explicitly disclose wherein the one or more criteria include a criterion that is satisfied when the computer system has detected that gaze of the first person is directed to the user of the computer system.

Abdollahian teaches wherein the one or more criteria include a criterion that is satisfied when the computer system has detected that gaze of the first person is directed to the user of the computer system (“For example, the processing apparatus 310 may be configured to detect, based at least in part on the image, a face of the person; determine a gaze direction of the person with respect to the head-mounted display 340; and detect the hail event based on the gaze direction.” Abdollahian ¶ 50. “In some implementations, eyes of the person can be analyzed more closely to determine a gaze direction, in order to assess whether the person is looking at the user.” Abdollahian ¶ 21.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Abdollahian’s alert technique with primary reference Pekelny. One of ordinary skill in the art would be motivated to alert a user of virtual reality to a person who may be paying attention to the user. “Head-mounted displays are used to provide computer-generated reality experiences for users.
Users of a head-mounted display may be subject to varying levels of immersion in a virtual or augmented environment. Head-mounted displays may present images and audio signals to a user, which, to varying degrees, may impair a user's ability to concurrently detect events in their physical surroundings.” Abdollahian ¶ 3.

Regarding Claim 75, Pekelny further teaches The method of claim 72, wherein the one or more criteria include a criterion that is satisfied when the computer system has detected speech of the first person that satisfies one or more second criteria (“Alternatively, or in addition, the SPC may display the alert information 204 when the other person 108 issues a command ‘Hello John!’ (presuming that the user's name is John), or ‘See me!’ or the like, as represented by the voice bubble 604. For instance, assume that the user's friend wishes to get the user's attention as the user 104 plays a game. The friend may provide a voice command that requests the SPC to provide alert information to the user 104, notifying the user 104 of the friend's location. In one implementation, the SPC can allow each user to configure the SPC to associate different commands by the user and/or another person with respective actions.” Pekelny ¶ 50.).

Regarding Claim 77, Pekelny further teaches The method of claim 72, wherein the one or more criteria include a criterion that is satisfied when the computer system detects a distance of the respective portion of the body of the first person from the user of the computer system that is less than a threshold distance (“According to a sixth aspect, the alert-condition information specifies that alert information is to be provided to the user when the user is within a prescribed distance to an object-of-interest in the physical environment.” Pekelny ¶ 155; see fig. 8 808. Fig. 8 806 shows that the object-of-interest could be a person.).

Regarding Claim 78, Pekelny teaches The method of claim 72.
Pekelny does not explicitly disclose wherein the one or more criteria includes a criterion that is satisfied when the computer system detects the orientation of the respective portion of the body of the first person relative to the user of the computer system that is within a threshold orientation.

Abdollahian teaches wherein the one or more criteria includes a criterion that is satisfied when the computer system detects the orientation of the respective portion of the body of the first person relative to the user of the computer system that is within a threshold orientation (“The process 900 includes detecting 940 a hail event based on the estimate of the orientation of the face. The fact that a person in the vicinity of the user wearing the head-mounted display is facing toward the user may warrant a hail event to alert the user to the presence of the person who may be addressing the user. In some implementations, a hail event is detected 940 when the estimate of the orientation of the face is within a threshold angle of a facing directly toward the user. In some implementations, the estimate of the orientation of the face is one of a plurality of factors considered to detect 940 the hail event.” Abdollahian ¶ 99. The disclosed face is mapped to the claimed “the respective portion of the body.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Abdollahian’s alert technique with primary reference Pekelny. One of ordinary skill in the art would be motivated to alert a user of virtual reality to a person who may be paying attention to the user. “The process 900 includes detecting 940 a hail event based on the estimate of the orientation of the face. The fact that a person in the vicinity of the user wearing the head-mounted display is facing toward the user may warrant a hail event to alert the user to the presence of the person who may be addressing the user. In some implementations, a hail event is detected 940 when the estimate of the orientation of the face is within a threshold angle of a facing directly toward the user. In some implementations, the estimate of the orientation of the face is one of a plurality of factors considered to detect 940 the hail event.” Abdollahian ¶ 99.

Regarding Claim 79, Pekelny further teaches The method of claim 72, further comprising: while displaying, via the display generation component, the first virtual content, detecting, via the one or more input devices, a respective person located in a respective portion of the physical environment that is obscured by the first virtual content (

Pekelny teaches displaying a virtual environment, mapped to the first virtual content, obscuring the physical environment, stating “The VR device 106 in the example of FIG. 1 corresponds to a head-mounted display (HMD). In one implementation, the VR device 106 produces a completely immersive virtual environment. In such an environment, the user 104, while he wears the VR device 106, cannot see the physical environment 102.” Pekelny ¶ 36. Note, any element within the virtual environment could be mapped to the first virtual content as well.

Pekelny teaches detecting a respective person in the physical environment, as shown in fig. 1, based on a user setting, stating “For instance, the user 104 may identify all people as objects-of-interest, just members of his own family, or just a specific person, etc. . . . Second, the SPC uses automated analysis to determine whether any of the identified objects are present in the physical environment 102 while the user 104 interacts with a virtual world provided by the VR device 106. Any object-of-interest that the SPC detects is referred to herein as a detected object. Third, the SPC provides alert information to the user 104 which alerts the user 104 to each detected object.
For instance, the SPC may present the alert information as visual information that is overlaid on the virtual environment 202.” Pekelny ¶ 39.); and

in response to detecting the respective person in the respective portion of the physical environment (fig. 15 blocks 1504 and 1506; [image: media_image2.png omitted] “If not (as determined in block 1506), then the SPC 702 terminates the process 1502 with respect to the class under consideration. If, however, the specified kind of object is present, then, in block 1508, the SPC 702 determines whether it is appropriate to display the object(s) regardless of the identities of their respective instance(s). For example, the user 104 may have instructed the SPC 702 to provide alert information upon the discovery of any people in the physical environment 102, without regard to whom these people may be. If this is so, then the SPC 702 will generate alert information for the detected object(s) (in a manner described below) without resolving the identity(ies) of those object(s).” Pekelny ¶ 134.), and

in accordance with a determination that the respective person satisfies the one or more criteria (fig. 15 blocks 1510 and 1512, showing the determination whether a detected person is a family member or “Sue Jones.” “If block 1508 is answered in the negative, then, in block 1510, the SPC 702 can invoke the appropriate object detection component(s) to determine whether the specific instance (or instances) that is (or are) being sought (such as a specific person) is (or are) present in the physical environment 102.” Pekelny ¶ 135; see fig. 8.):

in accordance with a determination that a first setting of the computer system has a first value (fig. 15 1512=Y), increasing a visual prominence of the respective person relative to the first virtual content (fig. 15 1512=Y, 1514, 1516, teaching that the alert information as shown in fig. 6 will be shown when a detected person’s identity is matched to the system setting. [image: media_image5.png omitted]); and

in accordance with a determination that the first setting of the computer system has a second value (fig. 15 1512=N), different from the first value, forgoing increasing the visual prominence of the respective person relative to the first virtual content (fig. 15 1512=N, showing that the process reaches “END” without showing the alert as shown in fig. 6).

Regarding Claim 80, Pekelny further teaches The method of claim 79, further comprising: displaying, via the display generation component, a control user interface for the computer system that includes a selectable option that is selectable to set the first value or the second value for the first setting ([image: media_image4.png omitted] Here, one could choose “Sue Jones,” which sets one of two values for the first setting: Y (“Sue Jones”) or N (“not Sue Jones”).).

Regarding Claim 81, Pekelny further teaches The method of claim 72, wherein increasing the visual prominence of the first person relative to the first virtual content includes modifying a visual appearance of a respective portion of the first virtual content, wherein a shape of the respective portion of the first virtual content is asymmetrical along at least one axis ([image: media_image3.png omitted] Fig. 15: [image: media_image6.png omitted] Here, the “Pass-Through Video,” “Image Outline,” or different Avatars for fig. 6 204 are provided over/within the virtual environment. All of these alert types are asymmetrical along at least one axis. Pekelny provides a technical explanation of “Pass-Through Video,” stating “A video pass-through construction component can use any combination of the object detection components to identify an object-of-interest in the physical environment 102.
The video pass-through construction component can then determine the location at which the object-of-interest occurs in the physical environment 102 with respect to the user's current position. The video pass-through construction component can make this determination based on depth information provided by a depth camera system. Or the video pass-through construction component can determine the location of the object-of-interest based on image information provided by the VR device's video cameras, e.g., using the principle of triangulation. The video pass-through construction component can then project the parts of the captured video information captured by the VR device's video camera(s) that pertain to the object-of-interest at an appropriate location in the virtual environment 202, representing the determined location of the object-of-interest. In a variant of this approach, the video pass-through construction component can rely on the ROI detection component(s) to identify the region-of-interest (ROI) associated with the object-of-interest. The video pass-through construction component can then selectively present the video information pertaining to the entire ROI.” Pekelny ¶ 70.). Regarding Claim 82, Pekelny further teaches The method of claim 72, wherein increasing the visual prominence of the first person relative to the first virtual content includes modifying a visual appearance of a respective portion of the first virtual content (Pekelny fig. 
6 204, 604), the method further comprising: while the first person satisfies the one or more first criteria and while the first person has the increased visual prominence relative to the first virtual content (See Claim 72’s analysis; ): while the respective portion of the first virtual content is a first respective portion of the first virtual content that corresponds to a first location of the first person relative to the first virtual content ( PNG media_image3.png 464 716 media_image3.png Greyscale ), detecting, via the one or more input devices, movement of the first person from the first location relative to the first virtual content to a second location, different from the first location, relative to the first virtual content ( Pekelny Fig. 8: PNG media_image7.png 138 214 media_image7.png Greyscale , which shows that Pekelny’s system detecting the first person moving from within 3 meters to within 1 meter or from within 1 meter to outside 3 meters.); and in response to detecting the movement of the first person from the first location relative to the first virtual content to the second location relative to the first virtual content, modifying a visual appearance of a second respective portion of the first virtual content that corresponds to the second location of the first person relative to the first virtual content ( Object (fig. 8 806) Sue Jones Sue Jones When (fig. 8 808) When within 3 meters When within 2 meters Mode (fig. 8 810) Pass-through Video/Outline Pass-through Video/Outline Transparency (fig. 8 812) 5 3 Sue Jones may move closer to the user, from being within 3 meters to being within 2 meters, and the visual alert as shown in the fig. 6 may change based on the abovementioned settings. Fig. 15: PNG media_image6.png 138 218 media_image6.png Greyscale Here, the “Pass-Through Video,” “Image Outline,” or different Avatars is provided over/within the virtual environment. These alert notifications correspond to the locations of Sue Jones. See Figs. 
2, 6. It would have been “Obvious to try” – choosing from a finite number of identified predictable solutions in fig. 8 and the disclosure, with a reasonable expectation of success. The notification would be produced according to the configuration. (KSR)).

Regarding Claim 86, Pekelny further teaches The method of claim 72, wherein increasing the visual prominence of the first person relative to the first virtual content includes increasing the visual prominence of the first person relative to the first virtual content to a first visual prominence relative to the first virtual content (Pekelny fig. 6 204, 604; fig. 8), the method further comprising: while displaying, via the display generation component, the first virtual content, detecting, via the one or more input devices, a respective person located in a respective portion of the physical environment that is obscured by the first virtual content ([BRI on the record] With respect to “a respective person,” the Examiner is reading it as a person different from the already claimed “first person.” [Mapping Analysis] A user may select and add the following settings according to FIG. 8:

Object (806):       Sue Jones                  | Sue Jones                  | Brad Smith                 | Brad Smith
When (808):         When 3 meters              | When explicitly asked      | When 3 meters              | When explicitly asked
Mode (810):         Pass-through Video/Outline | Pass-through Video/Outline | Pass-through Video/Outline | Pass-through Video/Outline
Transparency (812): 5                          | 3                          | 5                          | 3

Assuming transparency level 3 means more visual prominence than transparency level 5. If that is not the case, the transparency values could easily be switched.
When Sue Jones is detected within 3 meters, a visual alert with transparency level 5 is displayed; and when Sue Jones subsequently speaks, the notification’s transparency level is changed to 3, gaining more visual prominence.); and in response to detecting the respective person in the respective portion of the physical environment (At this moment, Brad Smith is detected within 3 meters.), and in accordance with a determination that the respective person does not satisfy the one or more criteria, increasing a visual prominence of the respective person relative to the first virtual content to a second visual prominence relative to the first virtual content, less than the first visual prominence relative to the first virtual content (A visual alert with transparency level 5 is displayed with respect to Brad Smith. However, because Brad Smith did not speak, the one or more criteria are not satisfied; therefore, the transparency level is only set to 5, a second visual prominence less than the first visual prominence (transparency level 3). It would have been “Obvious to try” – choosing from a finite number of identified predictable solutions in fig. 8 and the disclosure, with a reasonable expectation of success. The notification would be produced according to the configuration. (KSR)).

Regarding Claim 87, Pekelny further teaches The method of claim 72, wherein increasing the visual prominence of the first person relative to the first virtual content includes increasing the visual prominence of the first person relative to the first virtual content to a first visual prominence relative to the first virtual content (Pekelny fig. 6 204, 604; fig.
8), the method further comprising: while the first person has the first visual prominence relative to the first virtual content, detecting, via the one or more input devices, input from the user of the computer system (“In one implementation, the SPC can allow each user to configure the SPC to associate different commands by the user and/or another person with respective actions.” Pekelny ¶ 50. “Alternatively, or in addition, the user 104 may also issue commands to the SPC while he or she is interacting with the virtual environment 202, e.g., by instructing it to start looking for specific objects, stop looking for certain objects, change the conditions under which alert information is provided, change the way in which alert information is provided, and so on. For instance, the user 104 may issue a voice command, “Show floor now,” or “Switch alert mode to text only,” etc. The SPC can interpret the user's commands using virtual assistant technology and make appropriate changes to its operation.” Pekelny ¶ 55.); and while detecting the input from the user of the computer system, and in accordance with a determination that the input from the user of the computer system satisfies one or more second criteria, reducing the visual prominence of the first person relative to the first virtual content (“For instance, the user 104 may issue a voice command, “Show floor now,” or “Switch alert mode to text only,” etc. The SPC can interpret the user's commands using virtual assistant technology and make appropriate changes to its operation.” Pekelny ¶ 55. When the user issues the command “Switch alert mode to text only,” for example, the visual prominence of the first person is reduced as the alert is switched to text mode.).
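The command-to-action behavior the Examiner reads out of Pekelny ¶¶ 50 and 55 — a user command such as "Switch alert mode to text only" reducing an alert's visual prominence — can be sketched as follows. This is an illustrative sketch only; the class, function, and criterion set are hypothetical names, not Pekelny's implementation or the claimed method.

```python
# Hypothetical sketch of the Claim 87 mapping: a user command that satisfies the
# (assumed) "second criteria" reduces the visual prominence of the person's alert.
# Convention taken from the Office Action's assumption: a lower transparency
# value means MORE visual prominence.

from dataclasses import dataclass

@dataclass
class Alert:
    mode: str          # e.g. "pass_through_video", "outline", or "text_only"
    transparency: int  # lower value = more visually prominent (assumed)

# Assumed criterion: only this command triggers the prominence reduction.
SECOND_CRITERIA_COMMANDS = {"Switch alert mode to text only"}

def handle_user_command(alert: Alert, command: str) -> Alert:
    """If the input satisfies the assumed second criteria, reduce prominence."""
    if command in SECOND_CRITERIA_COMMANDS:
        # Switching to text-only both changes the mode and makes the alert
        # less prominent (higher transparency value).
        return Alert(mode="text_only", transparency=max(alert.transparency, 5))
    return alert  # other commands leave the alert unchanged

alert = Alert(mode="pass_through_video", transparency=3)
alert = handle_user_command(alert, "Switch alert mode to text only")
print(alert.mode, alert.transparency)  # text_only 5
```

The sketch treats the command set as the claimed "one or more second criteria": any input outside that set leaves the alert's prominence unchanged, mirroring the conditional structure of the claim.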
Regarding Claim 91, Pekelny further teaches The method of claim 87, wherein the one or more second criteria include a criterion that is satisfied when the input from the user includes a portion of a body of the user being in a respective pose (“For instance, the user 104 may issue a voice command, “Show floor now,” or “Switch alert mode to text only,” etc. The SPC can interpret the user's commands using virtual assistant technology and make appropriate changes to its operation.” Pekelny ¶ 55. When the user issues the command “Switch alert mode to text only,” for example, the visual prominence of the first person is reduced as the alert is switched to text mode. Pekelny does not explicitly teach that the verbal command “Switch alert mode to text only” is also implemented through a hand gesture. “The user 104 may interact with the graphical UI presentations using hand gestures, voice commands, handheld controller manipulations, etc.” Pekelny ¶ 58. “The user 104 may interact with the configuration component 704 using one or more input devices 706. The input devices 706 can include any of a mouse device, a key entry device, one or more controllers, voice recognition technology, gesture recognition technology, etc. The voice recognition technology and gesture recognition technology can use any techniques to identify voice commands and gestures, respectively, such as, without limitation, Hidden Markov Models (HMMs), deep neural networks, etc.” Pekelny ¶ 61. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Pekelny’s hand gesture command with Pekelny’s “Switch alert mode to text only” command. One of ordinary skill in the art would be motivated to allow a user to use a hand gesture to implement “Switch alert mode to text only,” because some users may find it more useful; for example, verbal commands may be ineffective for users who speak with a heavy accent.).
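The combination the Examiner proposes for Claim 91 — the same command being triggered either by voice or by a recognized hand gesture (Pekelny ¶¶ 58, 61 mention hand gestures and gesture recognition generally) — can be sketched as a single input-resolution step. The pose name and the pose-to-command mapping below are assumptions for illustration, not anything disclosed by Pekelny.

```python
# Illustrative sketch: resolve either a voice utterance or a recognized body
# pose to the same command string, so the Claim 91 criterion (input including
# a portion of the user's body in a respective pose) can be satisfied by the
# gesture path. The pose name "palm_forward" is a hypothetical example.

from typing import Optional

RECOGNIZED_POSES = {"palm_forward": "Switch alert mode to text only"}  # assumed mapping

def interpret_input(voice: Optional[str] = None, pose: Optional[str] = None) -> Optional[str]:
    """Resolve a user input (voice command or recognized pose) to a command."""
    if voice is not None:
        return voice                       # voice path (Pekelny ¶ 55)
    if pose is not None:
        return RECOGNIZED_POSES.get(pose)  # gesture path (Pekelny ¶¶ 58, 61)
    return None                            # no actionable input

print(interpret_input(voice="Switch alert mode to text only"))
print(interpret_input(pose="palm_forward"))  # same command via hand gesture
```

Both calls resolve to the same command, which is the crux of the proposed combination: downstream alert-mode handling is indifferent to whether the command arrived by voice or by pose.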
Regarding Claim 92, Pekelny further teaches The method of claim 72, wherein the first virtual content is concurrently visible with a respective portion of an environment via the display generation component (Pekelny figs. 2, 6; Pekelny provides a technical explanation of “Pass-Through Video,” an example of concurrent display, stating “A video pass-through construction component can use any combination of the object detection components to identify an object-of-interest in the physical environment 102. The video pass-through construction component can then determine the location at which the object-of-interest occurs in the physical environment 102 with respect to the user's current position. The video pass-through construction component can make this determination based on depth information provided by a depth camera system. Or the video pass-through construction component can determine the location of the object-of-interest based on image information provided by the VR device's video cameras, e.g., using the principle of triangulation. The video pass-through construction component can then project the parts of the captured video information captured by the VR device's video camera(s) that pertain to the object-of-interest at an appropriate location in the virtual environment 202, representing the determined location of the object-of-interest. In a variant of this approach, the video pass-through construction component can rely on the ROI detection component(s) to identify the region-of-interest (ROI) associated with the object-of-interest.
The video pass-through construction component can then selectively present the video information pertaining to the entire ROI.” Pekelny ¶ 70.), the method further comprising: while the respective portion of the environment is visible with a first visual prominence relative to the environment, detecting, via the one or more input devices, attention of the user of the computer system directed to the first person (“In response, the user 104 may issue the command, “Show people now,” as represented in FIG. 6 by the voice bubble 602. In response to this command, the SPC will show the alert information 204 that identifies the location of the other person 108, presuming the other person 108 has been detected by the SPC.” Pekelny ¶ 49. “In one implementation, the SPC can allow each user to configure the SPC to associate different commands by the user and/or another person with respective actions.” Pekelny ¶ 50.); and in response to detecting the attention of the user directed to the first person, increasing a visual prominence of the respective portion of the environment to a second visual prominence relative to the environment (Fig. 8 shows different options for the settings. Here, a user may select and add the following:

Object (fig. 8, 806):       Sue Jones                  | Sue Jones
When (fig. 8, 808):         When 3 meters              | User explicitly asked to see Sue Jones
Mode (fig. 8, 810):         Pass-through Video/Outline | Pass-through Video/Outline
Transparency (fig. 8, 812): 5                          | 3

Assuming transparency level 3 means more visual prominence than transparency level 5. If that is not the case, the transparency values could easily be switched. When Sue Jones is detected within 3 meters, an alert with transparency level 5 is displayed; and when the user asks to see Sue Jones, the notification’s transparency level is changed to 3, gaining more visual prominence.
It would have been “Obvious to try” – choosing from a finite number of identified predictable solutions in fig. 8 and the disclosure, with a reasonable expectation of success. The notification would be produced according to the configuration. (KSR)).

Claims 83-85 are rejected under 35 U.S.C. 103 as being unpatentable over Pekelny et al. (US 20200026922 A1) in view of Abdollahian (US 20190392830 A1) as applied to Claim 72, in further view of Oran (US 20130191160 A1).

Regarding Claim 83, Pekelny in view of Abdollahian teaches The method of claim 72, wherein increasing the visual prominence of the first person relative to the first virtual content includes: increasing the visual prominence of the first person relative to the first virtual content to a first visual prominence relative to the first virtual content (Pekelny fig. 6 204, 604; fig. 8). Pekelny in view of Abdollahian does not explicitly disclose after increasing the visual prominence of the first person relative to the first virtual content to the first visual prominence, gradually decreasing the visual prominence of the first person relative to the first virtual content from the first visual prominence to a second visual prominence relative to the first virtual content. Oran teaches after increasing the visual prominence of the first person relative to the first virtual content to the first visual prominence, gradually decreasing the visual prominence of the first person relative to the first virtual content from the first visual prominence to a second visual prominence relative to the first virtual content (“Accordingly, such visual depictions of the laboratory-test information may fade in and out automatically such that the user may be alerted to the existence of, and ultimately view, additional test results.” Oran ¶ 51. After Pekelny in view of Abdollahian is combined with Oran, the visual prominence of an alert as shown in fig. 6 may fade out according to Oran.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Oran’s fading-out visual element with primary reference Pekelny in view of Abdollahian. One of ordinary skill in the art would be motivated to smooth the visual transition so that it is more visually pleasing. “Accordingly, such visual depictions of the laboratory-test information may fade in and out automatically such that the user may be alerted to the existence of, and ultimately view, additional test results.” Oran ¶ 51.

Regarding Claim 84, Pekelny in view of Abdollahian and Oran teaches The method of claim 83, further comprising: while the visual prominence of the first person relative to the first virtual content is the second visual prominence (faded out according to Claim 83’s analysis) relative to the first virtual content, detecting, via the one or more input devices, attention of the user of the computer system directed to the first person (“In response, the user 104 may issue the command, “Show people now,” as represented in FIG. 6 by the voice bubble 602. In response to this command, the SPC will show the alert information 204 that identifies the location of the other person 108, presuming the other person 108 has been detected by the SPC.” Pekelny ¶ 49.
The user here shows attention directed to “the other person 108,” mapped to the “first person.” “In one implementation, the SPC can allow each user to configure the SPC to associate different commands by the user and/or another person with respective actions.” Pekelny ¶ 50.); and in response to detecting the attention of the user of the computer system directed to the first person, increasing the visual prominence of the first person relative to the first virtual content to a third visual prominence relative to the first virtual content, wherein the third visual prominence is greater than the second visual prominence (“In response, the user 104 may issue the command, “Show people now,” as represented in FIG. 6 by the voice bubble 602. In response to this command, the SPC will show the alert information 204 that identifies the location of the other person 108, presuming the other person 108 has been detected by the SPC.” Pekelny ¶ 49. Here, the alert information 204 is displayed again after it had faded out.).

Regarding Claim 85, Pekelny in view of Abdollahian and Oran teaches The method of claim 83, wherein the one or more criteria are satisfied based on a degree of attention detected by the computer system being greater than a threshold degree of attention (Pekelny Fig. 8 [media_image7.png] shows that, as a person approaches the user, the person is regarded as showing an increasing degree of attention. Therefore, 3 meters, 2 meters, and 1 meter correspond to degrees of attention.).

Claims 88-89 are rejected under 35 U.S.C. 103 as being unpatentable over Pekelny et al. (US 20200026922 A1) in view of Abdollahian (US 20190392830 A1) as applied to Claim 87, in further view of Liang (WO 2021203856 A1) and Bradski et al. (US 20160026253 A1).

Regarding Claim 88, Pekelny in view of Abdollahian teaches The method of claim 87.
However, Pekelny in view of Abdollahian does not explicitly disclose wherein the one or more second criteria include a criterion that is satisfied when the input from the user includes input for moving the first virtual content. Liang teaches wherein the one or more second criteria include a criterion that is satisfied when the input from the user includes input for interacting/selecting (“For example, as shown in Figure 6, when the user selects a virtual item, the terminal can display that the display item 501 of the virtual item is in a highlighted state.” Liang p. 12. Therefore, the visual prominence of the first person is reduced relative to the first virtual content, even if the visual representation of the first person remains the same, because the visual prominence of the first virtual content is increased through highlighting.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Liang’s highlighting with primary reference Pekelny in view of Abdollahian. One of ordinary skill in the art would be motivated to help the user easily interact with the virtual environment. The highlight serves as a form of visual feedback to the user. “For example, as shown in Figure 6, when the user selects a virtual item, the terminal can display that the display item 501 of the virtual item is in a highlighted state.” Liang p. 12. Pekelny in view of Abdollahian and Liang does not explicitly disclose that the interacting/selecting is moving the first virtual content. Bradski teaches that the interacting/selecting is moving the first virtual content (“For instance, the AR system may render a set of virtual email messages to be read and a set of virtual email messages which the user has already read. As the user scrolls through the virtual email messages, the AR system re-renders the virtual content such that the read virtual email messages are moved from the unread set to the read set.
The user may choose to scroll in either direction, for example via appropriate gestures.” Bradski ¶ 1479.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bradski’s scrolling and moving of virtual items with Pekelny in view of Abdollahian and Liang. One of ordinary skill in the art would be motivated to interact with virtual contents to be productive and to complete tasks. “For instance, the AR system may render a set of virtual email messages to be read and a set of virtual email messages which the user has already read. As the user scrolls through the virtual email messages, the AR system re-renders the virtual content such that the read virtual email messages are moved from the unread set to the read set. The user may choose to scroll in either direction, for example via appropriate gestures.” Bradski ¶ 1479.

Regarding Claim 89, Pekelny in view of Abdollahian and Liang teaches The method of claim 87, wherein the one or more second criteria include a criterion that is satisfied when the input from the user includes input for interacting/selecting (“For example, as shown in Figure 6, when the user selects a virtual item, the terminal can display that the display item 501 of the virtual item is in a highlighted state.” Liang p. 12. Therefore, the visual prominence of the first person is reduced relative to the first virtual content, even if the visual representation of the first person remains the same, because the visual prominence of the first virtual content is increased through highlighting.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Liang’s highlighting with primary reference Pekelny in view of Abdollahian. One of ordinary skill in the art would be motivated to help the user interact with the virtual environment. The highlight serves as a form of visual feedback to the user.
“For example, as shown in Figure 6, when the user selects a virtual item, the terminal can display that the display item 501 of the virtual item is in a highlighted state.” Liang p. 12. Pekelny in view of Abdollahian and Liang does not explicitly disclose that the interacting/selecting is scrolling through the first virtual content. Bradski teaches that the interacting/selecting is scrolling through the first virtual content (“For instance, the AR system may render a set of virtual email messages to be read and a set of virtual email messages which the user has already read. As the user scrolls through the virtual email messages, the AR system re-renders the virtual content such that the read virtual email messages are moved from the unread set to the read set. The user may choose to scroll in either direction, for example via appropriate gestures.” Bradski ¶ 1479.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bradski’s scrolling and moving of virtual items with Pekelny in view of Abdollahian and Liang. One of ordinary skill in the art would be motivated to interact with virtual contents to be productive and to complete tasks. “For instance, the AR system may render a set of virtual email messages to be read and a set of virtual email messages which the user has already read. As the user scrolls through the virtual email messages, the AR system re-renders the virtual content such that the read virtual email messages are moved from the unread set to the read set. The user may choose to scroll in either direction, for example via appropriate gestures.” Bradski ¶ 1479.

Claim 90 is rejected under 35 U.S.C. 103 as being unpatentable over Pekelny et al. (US 20200026922 A1) in view of Abdollahian (US 20190392830 A1) as applied to Claim 87, in further view of Brinda (US 20170214782 A1).

Regarding Claim 90, Pekelny in view of Abdollahian teaches The method of claim 87.
However, Pekelny in view of Abdollahian does not explicitly disclose wherein the one or more second criteria include a criterion that is satisfied when the input from the user includes input interacting with one or more controls associated with the first virtual content. Brinda teaches wherein the one or more second criteria include a criterion that is satisfied when the input from the user includes input interacting with one or more controls associated with the first virtual content (“Once the user is done with the conversation, the user may dismiss the notification MSG by, for example, pressing the same button on the controller 130, and a second trigger signal would then be generated and transmitted to the processing device 110. In response to the received second trigger signal, the notification MSG as well as the virtual keyboard KB would be no longer displayed, and the processing device 110 would resume the progress in the virtual reality environment VR and direct the controller 130 to interact with the virtual reality environment VR as it did before the notification appeared.” Brinda ¶ 39. “Once the user is done with reading the notification MSG, he may dismiss the notification MSG by lowering down the controller 130.” Brinda ¶ 36. The teaching needed is just a trigger signal, not necessarily the second one. “In response to the received motion data determined to be matching the second motion data, the processing device 110 would dismiss the notification MSG from the virtual reality environment VR. As such, the user does not need to abandon the virtual reality environment VR to access the content of the notification.” Brinda ¶ 36.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Brinda’s technique for removing a notification/alert with primary reference Pekelny in view of Abdollahian.
One of ordinary skill in the art would be motivated to conveniently switch back to the immersive virtual reality. The system “would dismiss the notification MSG from the virtual reality environment VR. As such, the user does not need to abandon the virtual reality environment VR to access the content of the notification.” Brinda ¶ 36.

Conclusion. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Medeiros et al. (“Promoting Reality Awareness in Virtual Reality through Proxemics”)
Ghosh et al. (“NotifiVR: Exploring Interruptions and Notifications in Virtual Reality”)
Taylor et al. (US 9779605 B1) [figure, media_image8.png]

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENGXI LIU whose telephone number is (571)270-7509. The examiner can normally be reached M-F 9 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ZHENGXI LIU/Primary Examiner, Art Unit 2611
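The rejection's Claim 82, 86, and 92 mappings all lean on Pekelny's Fig. 8 per-person settings (Object / When / Mode / Transparency). That configuration-driven lookup can be sketched as follows; the rule values mirror the Examiner's hypothetical tables, while the data model, trigger names, and matching logic are illustrative assumptions.

```python
# Sketch of the Fig. 8 configuration-driven alert selection: each rule names a
# person (Object), a trigger (When), a display Mode, and a Transparency level.
# Per the Office Action's assumption, a lower transparency value means more
# visual prominence. Rule contents follow the Examiner's hypothetical tables.

RULES = [
    {"object": "Sue Jones",  "when": "within_3_meters",  "mode": "pass_through_video/outline", "transparency": 5},
    {"object": "Sue Jones",  "when": "explicitly_asked", "mode": "pass_through_video/outline", "transparency": 3},
    {"object": "Brad Smith", "when": "within_3_meters",  "mode": "pass_through_video/outline", "transparency": 5},
    {"object": "Brad Smith", "when": "explicitly_asked", "mode": "pass_through_video/outline", "transparency": 3},
]

def select_alert(person: str, trigger: str):
    """Return the (mode, transparency) configured for this person and trigger."""
    for rule in RULES:
        if rule["object"] == person and rule["when"] == trigger:
            return rule["mode"], rule["transparency"]
    return None  # no alert configured for this person/trigger

# Sue Jones detected within 3 meters -> less prominent alert (transparency 5);
# the user then explicitly asks to see her -> more prominent alert (transparency 3).
print(select_alert("Sue Jones", "within_3_meters"))   # ('pass_through_video/outline', 5)
print(select_alert("Sue Jones", "explicitly_asked"))  # ('pass_through_video/outline', 3)
```

The point of the sketch is the one the Examiner repeats: once the table is configured, the alert produced for any detection event follows mechanically from the configuration, which underpins the "obvious to try" rationale of choosing among a finite set of predictable settings.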

Prosecution Timeline

Sep 23, 2023
Application Filed
Sep 06, 2025
Non-Final Rejection — §103
Nov 25, 2025
Applicant Interview (Telephonic)
Nov 25, 2025
Examiner Interview Summary
Dec 10, 2025
Response Filed
Jan 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602865
METHODS FOR DEPTH CONFLICT MITIGATION IN A THREE-DIMENSIONAL ENVIRONMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12599463
COLOR MANAGEMENT PROCESS FOR CUSTOMIZED DENTAL RESTORATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597402
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM FOR APPLICATION WINDOW HAVING FIRST DISPLAY MODE AND SECOND DISPLAY MODE
2y 5m to grant Granted Apr 07, 2026
Patent 12567193
PARTICLE RENDERING METHOD AND APPARATUS
2y 5m to grant Granted Mar 03, 2026
Patent 12561929
METHOD AND ELECTRONIC DEVICE FOR PROVIDING INFORMATION RELATED TO PLACING OBJECT IN SPACE
2y 5m to grant Granted Feb 24, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
64%
Grant Probability
99%
With Interview (+40.1%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 354 resolved cases by this examiner. Grant probability derived from career allow rate.
