DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on January 22, 2026 has been entered.
The amendments to claims 1, 5, 12, and 15 are acknowledged. Claims 7-8 and 18-33 have been canceled. New claims 34-44 have been added.
Response to Arguments
Applicant’s arguments, see pages 7-10 of Remarks, filed January 22, 2026, have been fully considered but are not persuasive.
Regarding claims 1 and 12, Applicant states on pages 7-9 of Remarks that “Applicant respectfully requests reconsideration of claim 1 in its new form. As discussed below, Applicant submits that Stafford fails to describe all features of amended claim 1.
Stafford describes "a head-mounted display (HMD) 200 is shown for presenting, in a display 202, augmented reality (AR) images that may be projected onto the display 202 combine with real world images passed through an occlusion layer 204." Stafford, para. [0045]. The HMD 200 includes an "occlusion layer 204 [that] may be, for example, variable-transparency glass, variable-transparency plastic, or other variable-transparency material and may be implemented by, for instance, electrochromic devices, suspended particle devices, or liquid crystal devices that vary their transparency based on control voltages applied." Id. at para. [0046]. The "occlusion layer 204 may be controlled by one or more processors 208 accessing instructions on one or more computer storages 210 to present demanded AR images on the display 202 and to control the transparency of the occlusion layer 204 on a region-by-region basis," where "some regions of the layer 204 may be controlled to be more transparent than other regions." Id. at para. [0047]. Stafford explains that "using the outward-looking camera 212 and/or outward-oriented microphone 218, the person 214 is detected" and, "upon identification of the region of the social target, the opacity of the region is decreased, with the opacity of other regions of the display 202 with occlusion layer 204 remaining the same, so that the wearer of the HMD can more clearly see the person 214." Id. at para. [0051].
However, Applicant submits that Stafford fails to describe "detect[ing] a person external to the extended reality device" and "adjust[ing] the virtual content displayed on the display to reduce a prominence of the virtual content based on detection of the person, wherein, to adjust the virtual content, the at least one processor is configured to at least one of increase a transparency of the virtual content, decrease a size of the virtual content, or adjust a position of the virtual content on the display," as recited in amended claim 1.
For example, as noted above, Stafford describes that, "upon identification of the region of the social target, the opacity of the region is decreased" using an occlusion layer 204 that includes "variable-transparency glass, variable-transparency plastic, or other variable-transparency material and may be implemented by, for instance, electrochromic devices, suspended particle devices, or liquid crystal devices that vary their transparency based on control voltages applied." Stafford, paras. [0046] and [0051]. Applicant submits that reducing opacity of an occlusion layer 204 (e.g., variable-transparency glass) of the display to reveal real-world content through the display is different from "adjust[ing] the virtual content displayed on the display to reduce a prominence of the virtual content," much less by "increas[ing] a transparency of the virtual content," "decreas[ing] a size of the virtual content," and/or "adjust[ing] a position of the virtual content on the display," as claimed.
For at least the reasons discussed above, Applicant respectfully submits that Stafford fails to disclose all features of claim 1. Therefore, it is respectfully submitted that claim 1 is in condition for allowance.
While differing in scope, independent claim 12 has been amended to recite features that are similar to the distinguishing features of claim 1 discussed above. Therefore, it is respectfully submitted that claim 12 is also in condition for allowance for at least the same reasons”.
Examiner replies:
The Examiner disagrees with Applicant’s premises and conclusion, and respectfully maintains that the prior art rejections in this case are proper for the following reasons. In response to Applicant’s arguments, the Examiner cites Stafford to address the issue. Stafford discloses a head-mounted display (HMD) (as shown in FIG. 2) including a display 202 and an occlusion layer 204. Stafford discloses that the HMD controls the display to present images and to change the opacity of a region of the display 202 (Paragraph [0051]). More specifically, paragraph [0059] of Stafford describes that the opacity of the region of “VR objects being presented on the display 202 as represented for illustration purposes by cross-hatch lines 802” is decreased after detecting the person. Thus, the HMD changes the transparency of the virtual object/image displayed on the display 202. Accordingly, Stafford discloses the features argued by Applicant.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 6, 9, 11-13 and 16-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Stafford et al. (U.S. Patent Application Publication 2021/0217211 A1).
Regarding claim 1, Stafford discloses an extended reality device (FIG. 2; paragraph [0045], a head-mounted display (HMD) 200) comprising:
a display (Paragraph [0045], a display 202);
at least one memory (Paragraph [0047], computer storages 210);
at least one camera (Paragraph [0048], forward-looking camera); and
at least one processor coupled to the at least one memory and the at least one camera (Paragraph [0045], augmented reality (AR) images that may be projected onto the display 202 combine with real world images ...; paragraph [0047], ... one or more processors 208 accessing instructions on one or more computer storages 210 to present demanded AR images on the display 202 and to control the transparency of the occlusion layer 204 on a region-by-region basis ...), the at least one processor configured to:
cause images of a real-world environment captured by the at least one camera to be displayed on the display (Paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 and a real world display 216 ... In one example, the camera 212 is registered with the HMD display coordinates, so that the location of the person 214 in the camera image can be mapped directly to the region of the display 202 the person can be seen through);
cause virtual content to be displayed on the display (Abstract, a user wearing an AR head-mounted display (HMD) ... the virtual images presented in the HMD ...; paragraph [0003], at least one display is provided through which a wearer of the HMD can see real world objects. Also, the system includes at least one processor configured with instructions executable to present at least one image on the display ...; paragraph [0045], augmented reality (AR) images that may be projected onto the display 202 combine with real world images passed through an occlusion layer 204; paragraph [0059], FIG. 8 shows that absent present principles, a real world person 800 cannot be clearly seen through the display 202 as indicated by the person 800 being depicted in dashed lines, owing to an opaque background and/or VR objects being presented on the display 202 as represented for illustration purposes by cross-hatch lines 802. Thus, the VR object of cross-hatch lines 802 is a virtual content displayed on the display 202);
detect a person external to the extended reality device (Paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... Image and/or audio recognition may be applied to the images by the processor 208 to detect that, for example, the person 214 ...); and
adjust the virtual content displayed on the display (Paragraph [0059], FIG. 9, however, illustrates that application of principles discussed above results in the opacity of the region 900 surrounding the person 800 being decreased such that the person 800 is clearly seen) to reduce a prominence of the virtual content based on detection of the person (FIG. 3; paragraph [0051], using the outward-looking camera 212 and/or outward-oriented microphone 218, the person 214 is detected. Proceeding to block 302, the logic identifies or determines the location of the display 202 with occlusion region 204 through which the person 214 can be seen using, for example, the techniques described above. Moving to block 304, upon identification of the region of the social target, the opacity of the region is decreased ...), wherein, to adjust the virtual content, the at least one processor is configured to at least one of increase a transparency of the virtual content (Paragraph [0059], results in the opacity of the region 900 surrounding the person 800 being decreased such that the person 800 is clearly seen ... Thus, the transparency of “VR objects being presented on the display 202 as represented for illustration purposes by cross-hatch lines 802” is increased), decrease a size of the virtual content, or adjust a position of the virtual content on the display.
Regarding claim 2, Stafford discloses everything claimed as applied above (see claim 1), and Stafford further discloses wherein, to detect the person, the at least one processor is configured to determine that the person is attempting to communicate with a user of the extended reality device (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... Image and/or audio recognition may be applied to the images by the processor 208 to detect that, for example, the person 214 is looking at and/or speaking to the wearer of the HMD 200).
Regarding claim 6, Stafford discloses everything claimed as applied above (see claim 1), and Stafford further discloses wherein the at least one processor is configured to detect the person external to the extended reality device based on analysis of image data or video data captured by one or more cameras of the extended reality device (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... Image and/or audio recognition may be applied to the images by the processor 208 to detect that, for example, the person 214 is looking at and/or speaking to the wearer of the HMD 200).
Regarding claim 9, Stafford discloses everything claimed as applied above (see claim 1), and Stafford further discloses wherein, to reduce the prominence of the virtual content, the at least one processor is configured to decrease a size of a field of view occupied by the virtual content (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ...; paragraph [0059], FIG. 8 shows that absent present principles, a real world person 800 cannot be clearly seen through the display 202 as indicated by the person 800 being depicted in dashed lines, owing to an opaque background and/or VR objects being presented on the display 202 as represented for illustration purposes by cross-hatch lines 802. FIG. 9, however, illustrates that application of principles discussed above results in the opacity of the region 900 surrounding the person 800 being decreased such that the person 800 is clearly seen, as indicated by the person being represented by solid lines. If desired, a visibly highlighted boundary 902 may be included in the virtual images to surround the region 900 in which the person 800 appears to further bring the attention of the wearer of the HMD onto the person 800. Thus, a size of a field of view occupied by the virtual content is decreased).
Regarding claim 11, Stafford discloses everything claimed as applied above (see claim 1), and Stafford further discloses wherein the at least one processor is configured to cause a representation of the person to be displayed on the display based on detection of the person (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... In one example, the camera 212 is registered with the HMD display coordinates, so that the location of the person 214 in the camera image can be mapped directly to the region of the display 202 the person can be seen through ...; paragraph [0059], ... FIG. 9, however, illustrates that application of principles discussed above results in the opacity of the region 900 surrounding the person 800 being decreased such that the person 800 is clearly seen, as indicated by the person being represented by solid lines. If desired, a visibly highlighted boundary 902 may be included in the virtual images to surround the region 900 in which the person 800 appears to further bring the attention of the wearer of the HMD onto the person 800).
Regarding claim 12, Stafford discloses a method comprising:
displaying images of a real-world environment captured by at least one camera of an extended reality device (FIG. 2; paragraph [0045], a head-mounted display (HMD) 200 ... augmented reality (AR) images that may be projected onto the display 202 combine with real world images ...; paragraph [0048], forward-looking camera) on a display of the extended reality device (Paragraph [0045], a display 202; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 and a real world display 216 ... In one example, the camera 212 is registered with the HMD display coordinates, so that the location of the person 214 in the camera image can be mapped directly to the region of the display 202 the person can be seen through);
displaying virtual content on the display (Paragraph [0045], a display 202 ...; abstract, a user wearing an AR head-mounted display (HMD) ... the virtual images presented in the HMD ...; paragraph [0003], at least one display is provided through which a wearer of the HMD can see real world objects. Also, the system includes at least one processor configured with instructions executable to present at least one image on the display ...; paragraph [0045], augmented reality (AR) images that may be projected onto the display 202 combine with real world images passed through an occlusion layer 204; paragraph [0059], FIG. 8 shows that absent present principles, a real world person 800 cannot be clearly seen through the display 202 as indicated by the person 800 being depicted in dashed lines, owing to an opaque background and/or VR objects being presented on the display 202 as represented for illustration purposes by cross-hatch lines 802. Thus, the VR object of cross-hatch lines 802 is a virtual content displayed on the display 202);
detecting a person external to the extended reality device (Paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... Image and/or audio recognition may be applied to the images by the processor 208 to detect that, for example, the person 214 ...); and
adjusting the virtual content displayed on the display (Paragraph [0059], FIG. 9, however, illustrates that application of principles discussed above results in the opacity of the region 900 surrounding the person 800 being decreased such that the person 800 is clearly seen) to reduce a prominence of the virtual content based on detection of the person (FIG. 3; paragraph [0051], using the outward-looking camera 212 and/or outward-oriented microphone 218, the person 214 is detected. Proceeding to block 302, the logic identifies or determines the location of the display 202 with occlusion region 204 through which the person 214 can be seen using, for example, the techniques described above. Moving to block 304, upon identification of the region of the social target, the opacity of the region is decreased ...) by at least one of increasing a transparency of the virtual content (Paragraph [0059], results in the opacity of the region 900 surrounding the person 800 being decreased such that the person 800 is clearly seen ... Thus, the transparency of “VR objects being presented on the display 202 as represented for illustration purposes by cross-hatch lines 802” is increased), decreasing a size of the virtual content, or adjusting a position of the virtual content on the display.
Regarding claim 13, Stafford discloses everything claimed as applied above (see claim 12), and Stafford further discloses wherein detecting the person comprises determining that the person is attempting to communicate with a user of the extended reality device (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... Image and/or audio recognition may be applied to the images by the processor 208 to detect that, for example, the person 214 is looking at and/or speaking to the wearer of the HMD 200).
Regarding claim 16, Stafford discloses everything claimed as applied above (see claim 12), and Stafford discloses further comprising detecting the person external to the extended reality device based on analysis of image data or video data captured by one or more cameras of the extended reality device (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... Image and/or audio recognition may be applied to the images by the processor 208 to detect that, for example, the person 214 is looking at and/or speaking to the wearer of the HMD 200).
Regarding claim 17, Stafford discloses everything claimed as applied above (see claim 12), and Stafford discloses further comprising displaying a representation of the person on the display based on detection of the person (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... In one example, the camera 212 is registered with the HMD display coordinates, so that the location of the person 214 in the camera image can be mapped directly to the region of the display 202 the person can be seen through ...; paragraph [0059], ... FIG. 9, however, illustrates that application of principles discussed above results in the opacity of the region 900 surrounding the person 800 being decreased such that the person 800 is clearly seen, as indicated by the person being represented by solid lines. If desired, a visibly highlighted boundary 902 may be included in the virtual images to surround the region 900 in which the person 800 appears to further bring the attention of the wearer of the HMD onto the person 800).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4, 14, 36, 38 and 44 are rejected under 35 U.S.C. 103 as being unpatentable over Stafford et al. (U.S. Patent Application Publication 2021/0217211 A1) in view of Mullins (U.S. Patent Application Publication 2019/0068529 A1).
Regarding claim 3, Stafford discloses everything claimed as applied above (see claim 2), and Stafford further discloses wherein, to determine that the person is attempting to communicate with the user of the extended reality device (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... Image and/or audio recognition may be applied to the images by the processor 208 to detect that, for example, the person 214 is looking at and/or speaking to the wearer of the HMD 200).
However, Stafford does not specifically disclose the at least one processor is configured to determine that the person is approaching the user.
In addition, Mullins discloses (FIGS. 1 and 2; paragraph [0046], The HMD 101 can be worn on the head of a user, e.g., the user 102; paragraph [0047], the HMD 101 may also display a virtual object based on a geographic location of the HMD 101. For example, a set of virtual objects may be accessible when the user 102 of the HMD 101 is located in a particular building ...) the at least one processor (Paragraph [0046], The HMD 101 includes a processor 212) is configured to determine that the person is approaching the user (Paragraph [0067], the processor 212 may include a directional content application 214 ...; paragraph [0071], FIG. 4 is a block diagram illustrating an example embodiment of the directional content application 214. The directional content application 214 is shown, by way of example, to include a direction module 402 ...; paragraph [0072], the direction module 402 uses other types of sensors (e.g., a time-of-flight sensor) to detect the presence of a person within a preset radius of the HMD 101, a relative location of the person with respect to the HMD 101, and a distance between the person and the HMD 101; FIG. 5; paragraph [0077], At operation 502, the HMD 101 determines a location of a second user relative to the HMD 101. The second user includes a person speaking to the user 102 of the HMD 101. The HMD 101 detects that the second user is located within a preset radius of the HMD 101. Thus, the second user is approaching the user of the HMD when the second user is located within a preset radius of the HMD).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) interacts with external objects, to incorporate the teachings of Mullins by applying the directional content application taught by Mullins to detect a relative location of a person with respect to the HMD and to determine that the person is approaching the user of the HMD when the person is detected within a preset radius of the HMD. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Mullins to obtain the invention as specified in claim 3.
Regarding claim 4, Stafford discloses everything claimed as applied above (see claim 2), and Stafford further discloses wherein, to determine that the person is attempting to communicate with the user of the extended reality device (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... Image and/or audio recognition may be applied to the images by the processor 208 to detect that, for example, the person 214 is looking at and/or speaking to the wearer of the HMD 200).
However, Stafford does not specifically disclose the at least one processor is configured to determine that the person is addressing the user.
In addition, Mullins discloses (FIGS. 1 and 2; paragraph [0046], The HMD 101 can be worn on the head of a user, e.g., the user 102; paragraph [0047], the HMD 101 may also display a virtual object based on a geographic location of the HMD 101. For example, a set of virtual objects may be accessible when the user 102 of the HMD 101 is located in a particular building ...) the at least one processor (Paragraph [0046], The HMD 101 includes a processor 212) is configured to determine that the person is addressing the user (Paragraph [0067], the processor 212 may include a directional content application 214 ... For example, the directional content application 214 detects audio content originating from another user located within a preset distance or radius of the HMD 101 (e.g., a person standing in front of the HMD 101 and speaking in a first language (e.g., Spanish) to the user 102 of the HMD 101) ...; paragraph [0071], FIG. 4 is a block diagram illustrating an example embodiment of the directional content application 214. The directional content application 214 is shown, by way of example, to include a direction module 402 ...; paragraph [0072], the direction module 402 detects audio content (e.g., speech from another person) and determines a location of the person relative to the HMD 101 using a beamforming technique ... Other computer-vision based techniques (e.g., facial recognition) can be used to determine whether the person is speaking, facing the user 102, and addressing the user 102; paragraphs [0090]-[0091], FIG. 8A is a block diagram illustrating an example of the HMD 101 detecting audio from users 812, 814 ... FIG. 8B is a block diagram illustrating an example of AR content being displayed in a transparent display 1300 in the HMD illustrating the detected audio of FIG. 8A ... The speech bubble 816 includes a text of a translation of speech content from the user 812 ... Thus, the user 812 says “HELLO” to the user 102 of the HMD 101).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) interacts with external objects, to incorporate the teachings of Mullins by applying the directional content application taught by Mullins to detect speech from another person and to use facial recognition to determine that the person is addressing the user of the HMD. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Mullins to obtain the invention as specified in claim 4.
Regarding claim 14, Stafford discloses everything claimed as applied above (see claim 13).
However, Stafford does not specifically disclose wherein determining that the person is attempting to communicate with the user of the extended reality device comprises determining that the person is approaching the user.
In addition, Mullins discloses (FIGS. 1 and 2; paragraph [0046], The HMD 101 can be worn on the head of a user, e.g., the user 102; paragraph [0047], the HMD 101 may also display a virtual object based on a geographic location of the HMD 101. For example, a set of virtual objects may be accessible when the user 102 of the HMD 101 is located in a particular building ...) wherein determining that the person is attempting to communicate with the user of the extended reality device (Paragraph [0067], the processor 212 may include a directional content application 214 ... For example, the directional content application 214 detects audio content originating from another user located within a preset distance or radius of the HMD 101 (e.g., a person standing in front of the HMD 101 and speaking in a first language (e.g., Spanish) to the user 102 of the HMD 101) ...; paragraph [0071], FIG. 4 is a block diagram illustrating an example embodiment of the directional content application 214. The directional content application 214 is shown, by way of example, to include a direction module 402 ...; paragraph [0072], the direction module 402 detects audio content (e.g., speech from another person) and determines a location of the person relative to the HMD 101 using a beamforming technique ... Other computer-vision based techniques (e.g., facial recognition) can be used to determine whether the person is speaking, facing the user 102, and addressing the user 102) comprises determining that the person is approaching the user (Paragraph [0067], the processor 212 may include a directional content application 214 ...; paragraph [0071], FIG. 4 is a block diagram illustrating an example embodiment of the directional content application 214.
The directional content application 214 is shown, by way of example, to include a direction module 402 ...; paragraph [0072], the direction module 402 uses other types of sensors (e.g., a time-of-flight sensor) to detect the presence of a person within a preset radius of the HMD 101, a relative location of the person with respect to the HMD 101, and a distance between the person and the HMD 101; FIG. 5; paragraph [0077], At operation 502, the HMD 101 determines a location of a second user relative to the HMD 101. The second user includes a person speaking to the user 102 of the HMD 101. The HMD 101 detects that the second user is located within a preset radius of the HMD 101. Thus, the second user is approaching the user of the HMD when the second user is located within a preset radius of the HMD).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) interacts with external objects, to incorporate the teachings of Mullins by applying the directional content application taught by Mullins to detect a relative location of a person with respect to the HMD and speech from another person, and to determine that the person is approaching the user of the HMD when the person is detected within a preset radius of the HMD. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Mullins to obtain the invention as specified in claim 14.
Regarding claim 36, Stafford discloses everything claimed as applied above (see claim 1).
However, Stafford does not specifically disclose wherein the at least one processor is configured to adjust the virtual content further based on detection of a gaze of a user of the extended reality device in a direction of the person.
In addition, Mullins discloses (FIGS. 1 and 2; paragraph [0046], The HMD 101 can be worn on the head of a user, e.g., the user 102; paragraph [0047], the HMD 101 may also display a virtual object based on a geographic location of the HMD 101. For example, a set of virtual objects may be accessible when the user 102 of the HMD 101 is located in a particular building ...) wherein the at least one processor (Paragraph [0046], The HMD 101 includes a processor 212) is configured to adjust the virtual content further based on detection of a gaze of a user of the extended reality device (Paragraph [0114], the user may be provoked by an application to pin a virtual object or content. An application may showcase or display one or more available widgets that the user can pin. When the user sees an appropriate real world location or object that is appropriate for pinning a selected widget, the user can do so when the real world object or location is in the focus or gaze of the user) in a direction of the person (Paragraph [0067], the processor 212 may include a directional content application 214 ...; paragraph [0071], FIG. 4 is a block diagram illustrating an example embodiment of the directional content application 214. The directional content application 214 is shown, by way of example, to include a direction module 402 ...; paragraph [0072], the direction module 402 uses other types of sensors (e.g., a time-of-flight sensor) to detect the presence of a person within a preset radius of the HMD 101, a relative location of the person with respect to the HMD 101, and a distance between the person and the HMD 101; FIG. 5; paragraph [0077], at operation 502, the HMD 101 determines a location of a second user relative to the HMD 101. The second user includes a person speaking to the user 102 of the HMD 101. The HMD 101 detects that the second user is located within a preset radius of the HMD 101.
Thus, the second user is approaching the user of the HMD when the second user is located within a preset radius of the HMD).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Mullins, applying the head-mounted display taught by Mullins to provide the directional content application for detecting a relative location of a person with respect to the HMD, and then to determine that the person is approaching the user of the HMD when the person is present within a preset radius of the HMD. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Mullins to obtain the invention as specified in the claim.
Regarding claim 38, Stafford discloses everything claimed as applied above (see claim 12), and Stafford further discloses wherein determining that the person is attempting to communicate with a user of the extended reality device (FIG. 2; paragraph [0048], at least one forward-looking camera, which may be a red-green-blue (RGB) camera, is mounted on the HMD 200 to generate images of objects in front of the HMD 200, such as a person 214 ... Image and/or audio recognition may be applied to the images by the processor 208 to detect that, for example, the person 214 is looking at and/or speaking to the wearer of the HMD 200).
However, Stafford does not specifically disclose that determining that the person is attempting to communicate with a user of the extended reality device comprises determining that the person is addressing the user.
In addition, Mullins discloses (FIGS. 1 and 2; paragraph [0046], The HMD 101 can be worn on the head of a user, e.g., the user 102; paragraph [0047], the HMD 101 may also display a virtual object based on a geographic location of the HMD 101. For example, a set of virtual objects may be accessible when the user 102 of the HMD 101 is located in a particular building ...) that determining that the person is attempting to communicate with a user of the extended reality device comprises determining that the person is addressing the user (Paragraph [0067], the processor 212 may include a directional content application 214 ... For example, the directional content application 214 detects audio content originating from another user located within a preset distance or radius of the HMD 101 (e.g., a person standing in front of the HMD 101 and speaking in a first language (e.g., Spanish) to the user 102 of the HMD 101) ...; paragraph [0071], FIG. 4 is a block diagram illustrating an example embodiment of the directional content application 214. The directional content application 214 is shown, by way of example, to include a direction module 402 ...; paragraph [0072], the direction module 402 detects audio content (e.g., speech from another person) and determines a location of the person relative to the HMD 101 using a beamforming technique ... Other computer-vision based techniques (e.g., facial recognition) can be used to determine whether the person is speaking, facing the user 102, and addressing the user 102; paragraphs [0090]-[0091], FIG. 8A is a block diagram illustrating an example of the HMD 101 detecting audio from users 812, 814 ... FIG. 8B is a block diagram illustrating an example of AR content being displayed in a transparent display 1300 in the HMD illustrating the detected audio of FIG. 8A ... The speech bubble 816 includes a text of a translation of speech content from the user 812 ... Thus, the user 812 says “HELLO” to the user 102 of the HMD 101).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Mullins, applying the head-mounted display taught by Mullins to provide the directional content application for detecting speech from another person and using facial recognition to determine that the person is addressing the user of the HMD. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Mullins to obtain the invention as specified in the claim.
Regarding claim 44, Stafford discloses everything claimed as applied above (see claim 12).
However, Stafford does not specifically disclose wherein the virtual content is adjusted further based on detection of a gaze of a user of the extended reality device in a direction of the person.
In addition, Mullins discloses (FIGS. 1 and 2; paragraph [0046], The HMD 101 can be worn on the head of a user, e.g., the user 102; paragraph [0047], the HMD 101 may also display a virtual object based on a geographic location of the HMD 101. For example, a set of virtual objects may be accessible when the user 102 of the HMD 101 is located in a particular building ...) wherein the virtual content is adjusted further based on detection of a gaze of a user of the extended reality device (Paragraph [0114], the user may be provoked by an application to pin a virtual object or content. An application may showcase or display one or more available widgets that the user can pin. When the user sees an appropriate real world location or object that is appropriate for pinning a selected widget, the user can do so when the real world object or location is in the focus or gaze of the user) in a direction of the person (Paragraph [0067], the processor 212 may include a directional content application 214 ...; paragraph [0071], FIG. 4 is a block diagram illustrating an example embodiment of the directional content application 214. The directional content application 214 is shown, by way of example, to include a direction module 402 ...; paragraph [0072], the direction module 402 uses other types of sensors (e.g., a time-of-flight sensor) to detect the presence of a person within a preset radius of the HMD 101, a relative location of the person with respect to the HMD 101, and a distance between the person and the HMD 101; FIG. 5; paragraph [0077], at operation 502, the HMD 101 determines a location of a second user relative to the HMD 101. The second user includes a person speaking to the user 102 of the HMD 101. The HMD 101 detects that the second user is located within a preset radius of the HMD 101. Thus, the second user is approaching the user of the HMD when the second user is located within a preset radius of the HMD).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Mullins, applying the head-mounted display taught by Mullins to provide the directional content application for detecting a relative location of a person with respect to the HMD, and then to determine that the person is approaching the user of the HMD when the person is present within a preset radius of the HMD. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Mullins to obtain the invention as specified in the claim.
Claims 5, 10, 15, 34-35, 37 and 39-43 are rejected under 35 U.S.C. 103 as being unpatentable over Stafford et al. (U.S. Patent Application Publication 2021/0217211 A1) in view of Powderly et al. (U.S. Patent Application Publication 2018/0189568 A1).
Regarding claim 5, Stafford discloses everything claimed as applied above (see claim 2).
However, Stafford does not specifically disclose wherein the at least one processor is configured to:
determine the person is no longer attempting to communicate with the user of the extended reality device; and
adjust the virtual content to increase the prominence of the virtual content based on determining the person is no longer attempting to communicate with the user of the extended reality device.
In addition, Powderly discloses (Abstract, embodiments of a wearable device can include a head-mounted display (HMD) which can be configured to display virtual content. While the user is interacting with visual or audible virtual content, the user of the wearable may encounter a triggering event such as, for example, an emergency condition or an unsafe condition, detecting one or more triggering objects in an environment, or determining characteristics of the user's environment (e.g., home or office) ...; paragraph [0044], FIG. 2 illustrates an example of wearable system 200 which can be configured to provide an AR/VR/MR scene ...; paragraphs [0131]-[0133], the worker shown in FIG. 11C wears an HMD, which renders virtual content 1130 in the user's field of view to enhance job performance ... the worker may encounter an incoming vehicle which may drive at a very fast speed or a pedestrian may walk in front of the machinery ... If the wearable system determines that the speed or the distance passes a threshold condition (e.g., the vehicle is approaching very fast or the vehicle or pedestrian is very close to the worker), the HMD may automatically mute the virtual content (e.g., by pausing the game, moving the virtual game to be outside of the FOV) to reduce distractions ...; paragraphs [0197]-[0200], FIGS. 13A and 13B illustrate example processes of muting the wearable system based on a triggering event ... At block 1312 of the process 1310, the wearable system can receive data from environmental sensors ... At block 1314, the wearable system analyzes the data to detect a triggering event ... At block 1316, the display system can automatically be muted in response to the triggering event. For example, the wearable system can automatically turn off the virtual content display or mute a portion of the virtual content presented by the display.
As a result, the user may see through the wearable system into the physical environment without distractions by the virtual content ...) wherein the at least one processor (Paragraph [0049], the local processing and data module 260 may comprise a hardware processor ...) is configured to:
determine the person is no longer attempting to communicate with the user of the extended reality device (Paragraph [0201], at optional block 1318a, the wearable system can determine the termination of a triggering event. For example, the wearable system can determine whether the situation which caused the triggering event is over (e.g., the fire is put out) or the user is no longer in the same environment (e.g., a user walks from home to a park)); and
adjust the virtual content to increase the prominence of the virtual content based on determining the person is no longer attempting to communicate with the user of the extended reality device (Paragraph [0201], if the triggering event is no longer present, the process 1310 may proceed to optional block 1318b to resume the display system or the muted virtual content ...).
It is noted that Powderly does not describe “determine the person is attempting to communicate with the user”. However, as discussed in claim 2, Stafford discloses “determine the person is attempting to communicate with the user”, and Powderly discloses that the wearable system can automatically mute the virtual content based on a triggering event (e.g., a pedestrian walking in front of the machinery) and resume the virtual content when the triggering event is over. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Powderly to provide the capability of detecting whether a triggering event is no longer present; as a result, the HMD can resume the muted virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 10, Stafford discloses everything claimed as applied above (see claim 1).
However, Stafford does not specifically disclose wherein, to reduce the prominence of the virtual content, the at least one processor is configured to decrease a volume of audio associated with the virtual content.
In addition, Powderly discloses (Abstract, embodiments of a wearable device can include a head-mounted display (HMD) which can be configured to display virtual content. While the user is interacting with visual or audible virtual content, the user of the wearable may encounter a triggering event such as, for example, an emergency condition or an unsafe condition, detecting one or more triggering objects in an environment, or determining characteristics of the user's environment (e.g., home or office) ...; paragraph [0044], FIG. 2 illustrates an example of wearable system 200 which can be configured to provide an AR/VR/MR scene ...) wherein, to reduce the prominence of the virtual content, the at least one processor (Paragraph [0049], the local processing and data module 260 may comprise a hardware processor ...) is configured to decrease a volume of audio associated with the virtual content (Paragraphs [0131]-[0133], the worker shown in FIG. 11C wears an HMD, which renders virtual content 1130 in the user's field of view to enhance job performance ... the worker may encounter an incoming vehicle which may drive at a very fast speed or a pedestrian may walk in front of the machinery ... If the wearable system determines that the speed or the distance passes a threshold condition (e.g., the vehicle is approaching very fast or the vehicle or pedestrian is very close to the worker), the HMD may automatically mute the virtual content (e.g., by pausing the game, moving the virtual game to be outside of the FOV) to reduce distractions ...; paragraphs [0197]-[0200], FIGS. 13A and 13B illustrate example processes of muting the wearable system based on a triggering event ... At block 1312 of the process 1310, the wearable system can receive data from environmental sensors ... At block 1314, the wearable system analyzes the data to detect a triggering event ... At block 1316, the display system can automatically be muted in response to the triggering event.
For example, the wearable system can automatically turn off the virtual content display or mute a portion of the virtual content presented by the display. As a result, the user may see through the wearable system into the physical environment without distractions by the virtual content ... As another example, the wearable system can turn off the sound or lower the volume of the sound associated with the virtual content to reduce perceptual confusions ...).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Powderly, applying the processes of muting an augmented reality display device based on a triggering event taught by Powderly to provide the capability of reducing the volume of the sound associated with the virtual content displayed on the HMD in response to the detected person; as a result, the user can see through the HMD into the physical environment without distraction by the virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 15, Stafford discloses everything claimed as applied above (see claim 13).
However, Stafford does not specifically disclose further comprising:
determining the person is no longer attempting to communicate with the user of the extended reality device; and
adjusting the virtual content to increase the prominence of the virtual content based on determining the person is no longer attempting to communicate with the user of the extended reality device.
In addition, Powderly discloses (Abstract, embodiments of a wearable device can include a head-mounted display (HMD) which can be configured to display virtual content. While the user is interacting with visual or audible virtual content, the user of the wearable may encounter a triggering event such as, for example, an emergency condition or an unsafe condition, detecting one or more triggering objects in an environment, or determining characteristics of the user's environment (e.g., home or office) ...; paragraph [0044], FIG. 2 illustrates an example of wearable system 200 which can be configured to provide an AR/VR/MR scene ...; paragraphs [0131]-[0133], the worker shown in FIG. 11C wears an HMD, which renders virtual content 1130 in the user's field of view to enhance job performance ... the worker may encounter an incoming vehicle which may drive at a very fast speed or a pedestrian may walk in front of the machinery ... If the wearable system determines that the speed or the distance passes a threshold condition (e.g., the vehicle is approaching very fast or the vehicle or pedestrian is very close to the worker), the HMD may automatically mute the virtual content (e.g., by pausing the game, moving the virtual game to be outside of the FOV) to reduce distractions ...; paragraphs [0197]-[0200], FIGS. 13A and 13B illustrate example processes of muting the wearable system based on a triggering event ... At block 1312 of the process 1310, the wearable system can receive data from environmental sensors ... At block 1314, the wearable system analyzes the data to detect a triggering event ... At block 1316, the display system can automatically be muted in response to the triggering event. For example, the wearable system can automatically turn off the virtual content display or mute a portion of the virtual content presented by the display.
As a result, the user may see through the wearable system into the physical environment without distractions by the virtual content ...) further comprising:
determining the person is no longer attempting to communicate with the user of the extended reality device (Paragraph [0201], at optional block 1318a, the wearable system can determine the termination of a triggering event. For example, the wearable system can determine whether the situation which caused the triggering event is over (e.g., the fire is put out) or the user is no longer in the same environment (e.g., a user walks from home to a park)); and
adjusting the virtual content to increase the prominence of the virtual content based on determining the person is no longer attempting to communicate with the user of the extended reality device (Paragraph [0201], if the triggering event is no longer present, the process 1310 may proceed to optional block 1318b to resume the display system or the muted virtual content ...).
It is noted that Powderly does not describe “determine the person is attempting to communicate with the user”. However, as discussed in claim 13, Stafford discloses “determine the person is attempting to communicate with the user”, and Powderly discloses that the wearable system can automatically mute the virtual content based on a triggering event (e.g., a pedestrian walking in front of the machinery) and resume the virtual content when the triggering event is over. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Powderly to provide the capability of detecting whether a triggering event is no longer present; as a result, the HMD can resume the muted virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 34, Stafford discloses everything claimed as applied above (see claim 1).
However, Stafford does not specifically disclose wherein the at least one processor is configured to adjust an amount of movement of the virtual content based on detection of the person.
In addition, Powderly discloses (Abstract, embodiments of a wearable device can include a head-mounted display (HMD) which can be configured to display virtual content. While the user is interacting with visual or audible virtual content, the user of the wearable may encounter a triggering event such as, for example, an emergency condition or an unsafe condition, detecting one or more triggering objects in an environment, or determining characteristics of the user's environment (e.g., home or office) ...; paragraph [0044], FIG. 2 illustrates an example of wearable system 200 which can be configured to provide an AR/VR/MR scene ...) wherein the at least one processor (Paragraph [0049], the local processing and data module 260 may comprise a hardware processor ...) is configured to adjust an amount of movement of the virtual content based on detection of the person (Paragraphs [0131]-[0133], the worker shown in FIG. 11C wears an HMD, which renders virtual content 1130 in the user's field of view to enhance job performance ... the worker may encounter an incoming vehicle which may drive at a very fast speed or a pedestrian may walk in front of the machinery ... If the wearable system determines that the speed or the distance passes a threshold condition (e.g., the vehicle is approaching very fast or the vehicle or pedestrian is very close to the worker), the HMD may automatically mute the virtual content (e.g., by pausing the game, moving the virtual game to be outside of the FOV) to reduce distractions ...).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Powderly, applying the processes of muting an augmented reality display device based on a triggering event taught by Powderly to provide the capability of adjusting the movement of the virtual content displayed on the HMD in response to the detected person; as a result, the user can see through the HMD into the physical environment without distraction by the virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 35, the combination of Stafford in view of Powderly discloses everything claimed as applied above (see claim 34).
However, Stafford does not specifically disclose wherein, to adjust the amount of movement of the virtual content, the at least one processor is configured to stop the movement of the virtual content.
In addition, Powderly discloses wherein, to adjust the amount of movement of the virtual content, the at least one processor is configured to stop the movement of the virtual content (Paragraph [0134], when the wearable system detects a termination condition, such as e.g., when the triggering event is over, the HMD may resume normal operations and restore presentation of virtual content to the worker).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Powderly, applying the processes of muting an augmented reality display device based on a triggering event taught by Powderly to provide the capability of adjusting the movement of the virtual content displayed on the HMD in response to the detected person; as a result, the user can see through the HMD into the physical environment without distraction by the virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 37, the combination of Stafford in view of Powderly discloses everything claimed as applied above (see claim 5).
However, Stafford does not specifically disclose wherein, to adjust the virtual content to increase the prominence of the virtual content, the at least one processor is configured to at least one of decrease the transparency of the virtual content or increase the size of the virtual content.
In addition, Powderly discloses wherein, to adjust the virtual content to increase the prominence of the virtual content, the at least one processor is configured to at least one of decrease the transparency of the virtual content (Paragraph [0201], at optional block 1318a, the wearable system can determine the termination of a triggering event. For example, the wearable system can determine whether the situation which caused the triggering event is over (e.g., the fire is put out) or the user is no longer in the same environment (e.g., a user walks from home to a park); paragraph [0201], if the triggering event is no longer present, the process 1310 may proceed to optional block 1318b to resume the display system or the muted virtual content ...) or increase the size of the virtual content.
It is noted that Powderly does not describe “determine the person is attempting to communicate with the user”. However, as discussed in claim 2, Stafford discloses “determine the person is attempting to communicate with the user”, and Powderly discloses that the wearable system can automatically mute the virtual content based on a triggering event (e.g., a pedestrian walking in front of the machinery) and resume the virtual content when the triggering event is over. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Powderly to provide the capability of detecting whether a triggering event is no longer present; as a result, the HMD can resume the muted virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 39, Stafford discloses everything claimed as applied above (see claim 12).
However, Stafford does not specifically disclose wherein reducing the prominence of the virtual content comprises decreasing a size of a field of view occupied by the virtual content.
In addition, Powderly discloses (Abstract, embodiments of a wearable device can include a head-mounted display (HMD) which can be configured to display virtual content. While the user is interacting with visual or audible virtual content, the user of the wearable may encounter a triggering event such as, for example, an emergency condition or an unsafe condition, detecting one or more triggering objects in an environment, or determining characteristics of the user's environment (e.g., home or office) ...; paragraph [0044], FIG. 2 illustrates an example of wearable system 200 which can be configured to provide an AR/VR/MR scene ...) wherein reducing the prominence of the virtual content comprises decreasing a size of a field of view occupied by the virtual content (Paragraphs [0131]-[0133], the worker shown in FIG. 11C wears an HMD, which renders virtual content 1130 in the user's field of view to enhance job performance ... the worker may encounter an incoming vehicle which may drive at a very fast speed or a pedestrian may walk in front of the machinery ... If the wearable system determines that the speed or the distance passes a threshold condition (e.g., the vehicle is approaching very fast or the vehicle or pedestrian is very close to the worker), the HMD may automatically mute the virtual content (e.g., by pausing the game, moving the virtual game to be outside of the FOV) to reduce distractions ...).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system taught by Stafford, in which a user wearing a head-mounted display (HMD) has improved interaction with external objects, to incorporate the teachings of Powderly, applying the processes of muting an augmented reality display device based on a triggering event taught by Powderly to provide the capability of decreasing the size of the field of view occupied by the virtual content displayed on the HMD in response to the detected person; as a result, the user can see through the HMD into the physical environment without distraction by the virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 40, Stafford discloses everything claimed as applied above (see claim 12).
However, Stafford does not specifically disclose wherein reducing the prominence of the virtual content comprises decreasing a volume of audio associated with the virtual content.
In addition, Powderly discloses (Abstract, embodiments of a wearable device can include a head-mounted display (HMD) which can be configured to display virtual content. While the user is interacting with visual or audible virtual content, the user of the wearable may encounter a triggering event such as, for example, an emergency condition or an unsafe condition, detecting one or more triggering objects in an environment, or determining characteristics of the user's environment (e.g., home or office) ...; paragraph [0044], FIG. 2 illustrates an example of wearable system 200 which can be configured to provide an AR/VR/MR scene ...) wherein reducing the prominence of the virtual content comprises decreasing a volume of audio associated with the virtual content (Paragraphs [0131]-[0133], the worker shown in FIG. 11C wears an HMD, which renders virtual content 1130 in the user's field of view to enhance job performance ... the worker may encounter an incoming vehicle which may drive at a very fast speed or a pedestrian may walk in front of the machinery ... If the wearable system determines that the speed or the distance passes a threshold condition (e.g., the vehicle is approaching very fast or the vehicle or pedestrian is very close to the worker), the HMD may automatically mute the virtual content (e.g., by pausing the game, moving the virtual game to be outside of the FOV) to reduce distractions ...; paragraphs [0197]-[0200], FIGS. 13A and 13B illustrate example processes of muting the wearable system based on a triggering event ... At block 1312 of the process 1310, the wearable system can receive data from environmental sensors ... At block 1314, the wearable system analyzes the data to detect a triggering event ... At block 1316, the display system can automatically be muted in response to the triggering event.
For example, the wearable system can automatically turn off the virtual content display or mute a portion of the virtual content presented by the display. As a result, the user may see through the wearable system into the physical environment without distractions by the virtual content ... As another example, the wearable system can turn off the sound or lower the volume of the sound associated with the virtual content to reduce perceptual confusions ...).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for a user wearing a head-mounted display (HMD) to improve the interaction with external objects, as taught by Stafford, to incorporate the teachings of Powderly, and to apply the processes of muting an augmented reality display device based on a triggering event taught by Powderly, to provide the capability of reducing the volume of the sound associated with the virtual content displayed on the HMD in response to the detected person; as a result, the user can see through the HMD into the physical environment without distraction by the virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 41, the combination of Stafford in view of Powderly discloses everything claimed as applied above (see claim 15).
However, Stafford does not specifically disclose wherein adjusting the virtual content to increase the prominence of the virtual content comprises at least one of decreasing the transparency of the virtual content or increasing the size of the virtual content.
In addition, Powderly discloses wherein adjusting the virtual content to increase the prominence of the virtual content comprises at least one of decreasing the transparency of the virtual content or increasing the size of the virtual content (Paragraph [0201], at optional block 1318a, the wearable system can determine the termination of a triggering event. For example, the wearable system can determine whether the situation which caused the triggering event is over (e.g., the fire is put out) or the user is no longer in the same environment (e.g., a user walks from home to a park); paragraph [0201], if the triggering event is no longer present, the process 1310 may proceed to optional block 1318b to resume the display system or the muted virtual content ...).
It is noted that Powderly does not describe "determine the person is attempting to communicate with the user". However, as discussed in claim 13, Stafford discloses "determine the person is attempting to communicate with the user", and Powderly discloses that the wearable system can automatically mute or resume the virtual content based on a triggering event (e.g., a "pedestrian may walk in front of the machinery") or the termination of the triggering event. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for a user wearing a head-mounted display (HMD) to improve the interaction with external objects, as taught by Stafford, to incorporate the teachings of Powderly, to provide the capability of detecting whether a triggering event is no longer present; as a result, the HMD can resume the muted virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 42, Stafford discloses everything claimed as applied above (see claim 12).
However, Stafford does not specifically disclose further comprising adjusting an amount of movement of the virtual content based on detection of the person.
In addition, Powderly discloses (Abstract, embodiments of a wearable device can include a head-mounted display (HMD) which can be configured to display virtual content. While the user is interacting with visual or audible virtual content, the user of the wearable may encounter a triggering event such as, for example, an emergency condition or an unsafe condition, detecting one or more triggering objects in an environment, or determining characteristics of the user's environment (e.g., home or office) ...; paragraph [0044], FIG. 2 illustrates an example of wearable system 200 which can be configured to provide an AR/VR/MR scene ...) further comprising adjusting an amount of movement of the virtual content based on detection of the person (Paragraphs [0131]-[0133], the worker shown in FIG. 11C wears an HMD, which renders virtual content 1130 in the user's field of view to enhance job performance ... the worker may encounter an incoming vehicle which may drive at a very fast speed or a pedestrian may walk in front of the machinery ... If the wearable system determines that the speed or the distance passes a threshold condition (e.g., the vehicle is approaching very fast or the vehicle or pedestrian is very close to the worker), the HMD may automatically mute the virtual content (e.g., by pausing the game, moving the virtual game to be outside of the FOV) to reduce distractions ...).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for a user wearing a head-mounted display (HMD) to improve the interaction with external objects, as taught by Stafford, to incorporate the teachings of Powderly, and to apply the processes of muting an augmented reality display device based on a triggering event taught by Powderly, to provide the capability of adjusting movement of the virtual content displayed on the HMD in response to the detected person; as a result, the user can see through the HMD into the physical environment without distraction by the virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Regarding claim 43, the combination of Stafford in view of Powderly discloses everything claimed as applied above (see claim 42).
However, Stafford does not specifically disclose wherein adjusting the amount of movement of the virtual content comprises stopping the movement of the virtual content.
In addition, Powderly discloses wherein adjusting the amount of movement of the virtual content comprises stopping the movement of the virtual content (Paragraph [0134], when the wearable system detects a termination condition, such as e.g., when the triggering event is over, the HMD may resume normal operations and restore presentation of virtual content to the worker).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system for a user wearing a head-mounted display (HMD) to improve the interaction with external objects, as taught by Stafford, to incorporate the teachings of Powderly, and to apply the processes of muting an augmented reality display device based on a triggering event taught by Powderly, to provide the capability of stopping the movement of the virtual content displayed on the HMD in response to the detected person; as a result, the user can see through the HMD into the physical environment without distraction by the virtual content. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Stafford according to the relied-upon teachings of Powderly to obtain the invention as specified in the claim.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Xilin Guo whose telephone number is (571)272-5786. The examiner can normally be reached Monday - Friday 9:00 AM-5:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XILIN GUO/Primary Examiner, Art Unit 2616