DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/03/2026 has been entered.
Response to Arguments
Applicant's arguments filed on 02/03/2026 with respect to claims 1-13 and 15-20 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Mukawa et al. (US 2015/0219897 hereinafter Mukawa) in view of Mirhosseini et al. (US 2020/0258278 hereinafter Mirhosseini), and Wei et al. (US 2021/0089122 hereinafter Wei).
Referring to claim 1, Mukawa discloses a head-mountable device (Fig. 1; head-mounted image display device 1) comprising:
a first sensor (Fig. 3; environmental information acquisition unit 304) configured to detect a feature ([0074]; information such as environmental light intensity, acoustic intensity, position or location, temperature, weather, time, an ambient image, and the number of people outside as the environmental information) of an environment of the head-mountable device ([0074]; The environmental information acquisition unit 304 acquires information such as environmental light intensity, acoustic intensity, position or location, temperature, weather, time, an ambient image, and the number of people outside as the environmental information, for example.);
a second sensor (Fig. 3; status information acquisition unit 305) configured to detect a condition of an eye of a user wearing the head-mountable device ([0076]; The status information acquisition unit 305 acquires information related to the status of the viewer wearing the head-mounted image display device 1, and outputs to the control unit 301. For the status information, the status information acquisition unit 305 acquires the user's work status (whether or not the user is wearing the device), the user's action status (the orientation of the wearing user's head, movement such as walking, and the open/closed state of the eyelids), mental status (indicating whether or not the user is immersed in or concentrating on viewing an internal image (or watching in the background as a distraction), such as excitement level, alertness level, or feelings and emotions), as well as the physiological status, for example. In addition, in order to acquire this status information from the user); and
an output device (Fig. 3; display of the external image) configured to, in response to detecting the feature ([0140]; Subsequently, when a change in the environment necessitating a change of the internal image or the external image occurs (step S2203, Yes), the control unit 301 controls the display of the external image according to the current environment (step S2204).):
in accordance with a determination that the condition of the eye does not satisfy the one or more criteria, forgo providing the output ([0133]; Furthermore, in response to an instruction from the user given via the input operating unit 302 (including blink operations and eyeball movement detected with a myoelectric sensor or oculo-electric sensor), the information to display as the external image may be changed (like a slideshow, for example).)… and [0147]; First, the status information acquisition unit 305 acquires output information from various status sensors (discussed earlier) as status information (step S2301). Subsequently, the control unit 301 analyzes the acquired status information (step S2302), identifies the user's current work status, action status, mental status, and physiological status, and checks whether or not a user status that should be reported to nearby people has occurred (step S2303). Thus, when the determination at step S2303 is NO, Mukawa meets the limitation “forgo providing the output” before step S2304.
However, Mukawa does not explicitly disclose in accordance with a determination that the feature is within a threshold distance from the head-mountable device and that the condition of the eye satisfies one or more criteria, provide an output to the user; and
in accordance with a determination that the feature is not within the threshold distance from the head-mountable device, forgo providing the output.
In an analogous art, Mirhosseini discloses in accordance with a determination that the feature is within a threshold distance from the head-mountable device (Mirhosseini- [0004]; The present disclosure describes techniques for indicating physical obstacles that are in a user's immediate physical environment while the user is immersed in a virtual reality environment. In one exemplary embodiment, a virtual reality environment is displayed. A distance between the electronic device and a physical object is determined. Further, whether the distance between the electronic device and a physical object is within a first threshold distance is determined. In accordance with a determination that the distance is within the first threshold distance, a visual effect in the virtual reality environment is displayed. Further, whether the distance between the electronic device and the physical object is within the second threshold distance is determined. In accordance with a determination that the distance is within a second threshold distance, a visual representation of at least part of a physical environment is displayed. The visual representation is provided by the one or more cameras.); and
in accordance with a determination that the feature is not within the threshold distance from the head-mountable device, forgo providing the output (Mirhosseini- [0055]; With reference to FIG. 3A, user device 206 can determine whether distance 304A between user device 206 and physical object 208A is less than or equal to a first threshold distance 306. As shown in FIG. 3A, in some embodiments, user device 206 determines that distance 304 is greater than first threshold distance 306, indicating that user 204 does not need to be made aware of physical object 208A because user device 206 is still far away from physical object 208A. As such, user device 206 continues to display virtual reality environment 260 without displaying any visual effect or image/video of the physical object 208A.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Mirhosseini to the system of Mukawa in order to reduce the impact of visual effects, enhance the user experience, and improve efficiency in generating virtual objects without significantly altering the virtual reality environment.
However, Mukawa in view of Mirhosseini does not explicitly disclose in accordance with a determination that the feature is within a threshold distance and that the condition of the eye satisfies one or more criteria, provide an output to the user.
In an analogous art, Wei discloses in accordance with a determination that the feature is within a threshold distance (i.e., (1) the user's eyes are less than a distance threshold from the screen) and that the condition of the eye satisfies one or more criteria (i.e., (2) no open eyes are detected), provide an output to the user (Wei- [0066]; FIG. 11 is a flow diagram of a process for screen control 150 in accordance with an example of the present disclosure. At operation 1110, the screen controller waits for N seconds (a selected delay period) before checking, at operation 1120, whether the screen is on or off. In some examples, N may be 15. If the screen is off, the process ends 1130. If the screen is on, then at operation 1140, a check for any trigger conditions is performed. Trigger conditions include (1) the user's eyes are less than a distance threshold from the screen, and (2) no open eyes are detected. If there are no trigger conditions, then the process loops back to operation 1110 and waits for another N seconds. In some examples, N may be 5. If any trigger conditions are detected, then at operation 1150, the screen is dimmed, and at operation 1160, the screen controller waits for M additional seconds. At operation 1170, another check is performed for any trigger conditions. If there are no trigger conditions, then at operation 1190, normal screen brightness is restored and the process loops back to operation 1110. However, if any trigger conditions do persist, then at operation 1180, the screen is locked and/or turned off.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Wei to the system of Mukawa in view of Mirhosseini in order to reduce the brightness of the screen of the computer system if the relative distance is less than a threshold ratio.
Referring to claim 7, Mukawa discloses a head-mountable device (Fig. 1; head-mounted image display device 1) comprising:
a first sensor (Fig. 3; status information acquisition unit 305) configured to detect movement of the head-mountable device ([0076]; The status information acquisition unit 305 acquires information related to the status of the viewer wearing the head-mounted image display device 1, and outputs to the control unit 301. For the status information, the status information acquisition unit 305 acquires the user's work status (whether or not the user is wearing the device), the user's action status (the orientation of the wearing user's head, movement such as walking, and the open/closed state of the eyelids), mental status (indicating whether or not the user is immersed in or concentrating on viewing an internal image (or watching in the background as a distraction), such as excitement level, alertness level, or feelings and emotions), as well as the physiological status, for example. In addition, in order to acquire this status information from the user, the status information acquisition unit 305 may also be equipped with various status sensors such as a wear sensor made up of a mechanical switch and the like, a gyro sensor, an acceleration sensor, a velocity sensor, a pressure sensor, a body temperature sensor, a sweat sensor, a myoelectric sensor, an oculo-electric sensor, and a brain wave sensor (none illustrated in FIG. 3). Status information acquired from these status sensors is temporarily stored in the RAM 301B, for example.);
a second sensor (Fig. 3; status information acquisition unit 305) configured to detect a condition of an eye of a user wearing the head-mountable device ([0076]; The status information acquisition unit 305 acquires information related to the status of the viewer wearing the head-mounted image display device 1, and outputs to the control unit 301. For the status information, the status information acquisition unit 305 acquires the user's work status (whether or not the user is wearing the device), the user's action status (the orientation of the wearing user's head, movement such as walking, and the open/closed state of the eyelids), mental status (indicating whether or not the user is immersed in or concentrating on viewing an internal image (or watching in the background as a distraction), such as excitement level, alertness level, or feelings and emotions), as well as the physiological status, for example. In addition, in order to acquire this status information from the user); and
an output device (Fig. 3; display of the external image) configured in response to detecting movement of the head-mountable device ([0076]; The status information acquisition unit 305 acquires information related to the status of the viewer wearing the head-mounted image display device 1, and outputs to the control unit 301. For the status information, the status information acquisition unit 305 acquires the user's work status (whether or not the user is wearing the device), the user's action status (the orientation of the wearing user's head, movement such as walking, and the open/closed state of the eyelids), mental status (indicating whether or not the user is immersed in or concentrating on viewing an internal image (or watching in the background as a distraction), such as excitement level, alertness level, or feelings and emotions), as well as the physiological status, for example…, [0147]; First, the status information acquisition unit 305 acquires output information from various status sensors (discussed earlier) as status information (step S2301). Subsequently, the control unit 301 analyzes the acquired status information (step S2302), identifies the user's current work status, action status, mental status, and physiological status, and checks whether or not a user status that should be reported to nearby people has occurred (step S2303)…, and [0148]; Subsequently, when a user status that should be reported to nearby people occurs (step S2303, Yes), the control unit 301 controls the display of the external image according to that user status (step S2304).);
in accordance with a determination that the condition of the eye does not satisfy the one or more criteria, forgo providing the output ([0133]; Furthermore, in response to an instruction from the user given via the input operating unit 302 (including blink operations and eyeball movement detected with a myoelectric sensor or oculo-electric sensor), the information to display as the external image may be changed (like a slideshow, for example).)… and [0147]; First, the status information acquisition unit 305 acquires output information from various status sensors (discussed earlier) as status information (step S2301). Subsequently, the control unit 301 analyzes the acquired status information (step S2302), identifies the user's current work status, action status, mental status, and physiological status, and checks whether or not a user status that should be reported to nearby people has occurred (step S2303). Thus, when the determination at step S2303 is NO, Mukawa meets the limitation “forgo providing the output” before step S2304.
However, Mukawa does not explicitly disclose a first sensor configured to detect movement of the head-mountable device with respect to an object different from the head-mountable device and in an environment of the head-mountable device;
an output device configured in response to detecting movement of the head-mountable device with respect to the object;
in accordance with a determination that the movement of the head-mountable device with respect to the object exceeds a threshold and that the condition of the eye satisfies one or more criteria, provide an output to the user;
in accordance with a determination that the movement of the head-mountable device with respect to the object does not exceed the threshold, forgo providing the output.
In an analogous art, Mirhosseini discloses a first sensor configured to detect movement of the head-mountable device with respect to an object different from the head-mountable device and in an environment of the head-mountable device (Mirhosseini- [0004]; The present disclosure describes techniques for indicating physical obstacles that are in a user's immediate physical environment while the user is immersed in a virtual reality environment. In one exemplary embodiment, a virtual reality environment is displayed. A distance between the electronic device and a physical object is determined. Further, whether the distance between the electronic device and a physical object is within a first threshold distance is determined. In accordance with a determination that the distance is within the first threshold distance, a visual effect in the virtual reality environment is displayed. Further, whether the distance between the electronic device and the physical object is within the second threshold distance is determined. In accordance with a determination that the distance is within a second threshold distance, a visual representation of at least part of a physical environment is displayed. The visual representation is provided by the one or more cameras.);
an output device configured in response to detecting movement of the head-mountable device with respect to the object (Mirhosseini- [0056]; FIG. 3B depicts a user device 206 positioned at a distance 304B within the first threshold distance 306 and an exemplary virtual indication 324 displayed in the corresponding virtual reality environment 260. With reference to FIG. 3B, user device 206 may be continuously moved toward physical object 208A as user 204 walks closer to physical object 208A. User device 206 determines whether distance 304B between user device 206 and physical object 208A of physical environment 200 is less than or equal to first threshold distance 306. If so, user device 206 determines that distance 304B is within the first threshold distance. While FIG. 3A illustrates an embodiment that uses first threshold distance 306 as a first threshold condition for determining whether a visual effect should be displayed to make the user aware of the physical objects, it is appreciated that the first threshold condition can include one or more of other conditions such as an angle of the user device 206, a direction of movement of user device 206, a type of a physical object 208A, or the like.);
in accordance with a determination that the movement of the head-mountable device with respect to the object does not exceed the threshold, forgo providing the output (Mirhosseini- [0055]; With reference to FIG. 3A, user device 206 can determine whether distance 304A between user device 206 and physical object 208A is less than or equal to a first threshold distance 306. As shown in FIG. 3A, in some embodiments, user device 206 determines that distance 304 is greater than first threshold distance 306, indicating that user 204 does not need to be made aware of physical object 208A because user device 206 is still far away from physical object 208A. As such, user device 206 continues to display virtual reality environment 260 without displaying any visual effect or image/video of the physical object 208A.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Mirhosseini to the system of Mukawa in order to reduce the impact of visual effects, enhance the user experience, and improve efficiency in generating virtual objects without significantly altering the virtual reality environment.
However, Mukawa in view of Mirhosseini does not explicitly disclose in accordance with a determination that the movement of the head-mountable device with respect to the object exceeds a threshold and that the condition of the eye satisfies one or more criteria, provide an output to the user.
In an analogous art, Wei discloses in accordance with a determination that the movement of the head-mountable device with respect to the object exceeds a threshold (i.e., (1) the user's eyes are less than a distance threshold from the screen) and that the condition of the eye satisfies one or more criteria (i.e., (2) no open eyes are detected), provide an output to the user (Wei- [0066]; FIG. 11 is a flow diagram of a process for screen control 150 in accordance with an example of the present disclosure. At operation 1110, the screen controller waits for N seconds (a selected delay period) before checking, at operation 1120, whether the screen is on or off. In some examples, N may be 15. If the screen is off, the process ends 1130. If the screen is on, then at operation 1140, a check for any trigger conditions is performed. Trigger conditions include (1) the user's eyes are less than a distance threshold from the screen, and (2) no open eyes are detected. If there are no trigger conditions, then the process loops back to operation 1110 and waits for another N seconds. In some examples, N may be 5. If any trigger conditions are detected, then at operation 1150, the screen is dimmed, and at operation 1160, the screen controller waits for M additional seconds. At operation 1170, another check is performed for any trigger conditions. If there are no trigger conditions, then at operation 1190, normal screen brightness is restored and the process loops back to operation 1110. However, if any trigger conditions do persist, then at operation 1180, the screen is locked and/or turned off.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Wei to the system of Mukawa in view of Mirhosseini in order to reduce the brightness of the screen of the computer system if the relative distance is less than a threshold ratio.
Claims 2, 4-5, 8 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Mukawa et al. (US 2015/0219897 hereinafter Mukawa) in view of Mirhosseini et al. (US 2020/0258278 hereinafter Mirhosseini), Wei et al. (US 2021/0089122 hereinafter Wei), and WANG (US 2021/0019036 hereinafter WANG).
Referring to claim 2, Mukawa in view of Mirhosseini, and Wei as applied above does not specifically disclose wherein the output device is a display, and the output comprises a visual element provided in a first region of the display and having a brightness that is greater than a brightness in a second region of the display.
In an analogous art, WANG discloses wherein the output device is a display (WANG- [0037]; FIG. 1 also illustrates a first user view 140 of a portion of the first virtual environment 120, including virtual objects within a field of view (FOV) of an associated virtual camera, rendered for display to the user 110 and presented to the user 110 by the MR/VR system 138… and Fig. 10; virtual environment 610), and the output comprises a visual element (WANG- Fig. 10; visual appearance 1044) provided in a first region of the display and having a brightness that is greater than a brightness in a second region of the display (WANG- [0071]; visual appearance 1044 in response to the user actuation (e.g., by a change in color, brightness, size, shape, or other brief visual transitional indication), to provide visual feedback to the user. Thus, the area of visual appearance 1044 has a brightness greater than that of area 670 in Fig. 10).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of WANG to the system of Mukawa in view of Mirhosseini, and Wei in order to allow the user to effectively and confidently reorient virtual objects according to the orientations of selected virtual objects through simple and intuitive hand-controlled user inputs.
Referring to claim 4, Mukawa in view of Mirhosseini, and Wei as applied above does not specifically disclose wherein the output device is a speaker, and the output comprises a sound that is louder than a sound from the speaker prior to providing the output.
In an analogous art, WANG discloses wherein the output device is a speaker, and the output comprises a sound that is louder than a sound from the speaker prior to providing the output (WANG- [0088]; It is understood that references to displaying or presenting an item (such as, but not limited to, presenting an image on a display device, presenting audio via one or more loudspeakers, and/or vibrating a device) include issuing instructions, commands, and/or signals causing, or reasonably expected to cause, a device or system to display or present the item…. and…[0104]; User output components 2752 may include, for example, display components for displaying information (for example, a liquid crystal display (LCD) or a projector), acoustic components (for example, speakers), haptic components (for example, a vibratory motor or force-feedback device), and/or other signal generators.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of WANG to the system of Mukawa in view of Mirhosseini, and Wei in order to allow the user to effectively and confidently reorient virtual objects according to the orientations of selected virtual objects through simple and intuitive hand-controlled user inputs.
Referring to claim 5, Mukawa in view of Mirhosseini, and Wei as applied above does not specifically disclose wherein the output device is a haptic feedback device, and the output comprises haptic feedback.
In an analogous art, WANG discloses wherein the output device is a haptic feedback device, and the output comprises haptic feedback (WANG- [0088]; It is understood that references to displaying or presenting an item (such as, but not limited to, presenting an image on a display device, presenting audio via one or more loudspeakers, and/or vibrating a device) include issuing instructions, commands, and/or signals causing, or reasonably expected to cause, a device or system to display or present the item…. and…[0104]; User output components 2752 may include, for example, display components for displaying information (for example, a liquid crystal display (LCD) or a projector), acoustic components (for example, speakers), haptic components (for example, a vibratory motor or force-feedback device), and/or other signal generators.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of WANG to the system of Mukawa in view of Mirhosseini, and Wei in order to allow the user to effectively and confidently reorient virtual objects according to the orientations of selected virtual objects through simple and intuitive hand-controlled user inputs.
Referring to claim 8, Mukawa in view of Mirhosseini, and Wei as applied above does not specifically disclose wherein the output device is a display, and the output comprises a visual element provided in a first region of the display and having a brightness that is greater than a brightness in a second region of the display.
In an analogous art, WANG discloses wherein the output device is a display (WANG- [0037]; FIG. 1 also illustrates a first user view 140 of a portion of the first virtual environment 120, including virtual objects within a field of view (FOV) of an associated virtual camera, rendered for display to the user 110 and presented to the user 110 by the MR/VR system 138… and Fig. 10; virtual environment 610), and the output comprises a visual element (WANG- Fig. 10; visual appearance 1044) provided in a first region of the display and having a brightness that is greater than a brightness in a second region of the display (WANG- [0071]; visual appearance 1044 in response to the user actuation (e.g., by a change in color, brightness, size, shape, or other brief visual transitional indication), to provide visual feedback to the user. Thus, the area of visual appearance 1044 has a brightness greater than that of area 670 in Fig. 10).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of WANG to the system of Mukawa in view of Mirhosseini, and Wei in order to allow the user to effectively and confidently reorient virtual objects according to the orientations of selected virtual objects through simple and intuitive hand-controlled user inputs.
Referring to claim 10, Mukawa in view of Mirhosseini, and Wei does not specifically disclose wherein the output device is a speaker, and the output comprises a sound that is louder than a sound from the speaker prior to providing the output.
In an analogous art, WANG discloses wherein the output device is a speaker, and the output comprises a sound that is louder than a sound from the speaker prior to providing the output (WANG- [0088]; It is understood that references to displaying or presenting an item (such as, but not limited to, presenting an image on a display device, presenting audio via one or more loudspeakers, and/or vibrating a device) include issuing instructions, commands, and/or signals causing, or reasonably expected to cause, a device or system to display or present the item…. and [0104]; User output components 2752 may include, for example, display components for displaying information (for example, a liquid crystal display (LCD) or a projector), acoustic components (for example, speakers), haptic components (for example, a vibratory motor or force-feedback device), and/or other signal generators.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of WANG to the system of Mukawa in view of Mirhosseini, and Wei in order to allow the user to effectively and confidently reorient virtual objects according to the orientations of selected virtual objects through simple and intuitive hand-controlled user inputs.
Referring to claim 11, Mukawa in view of Mirhosseini, and Wei as applied above does not specifically disclose wherein the output device is a haptic feedback device, and the output comprises haptic feedback.
In an analogous art, WANG discloses wherein the output device is a haptic feedback device, and the output comprises haptic feedback (WANG- [0088]; It is understood that references to displaying or presenting an item (such as, but not limited to, presenting an image on a display device, presenting audio via one or more loudspeakers, and/or vibrating a device) include issuing instructions, commands, and/or signals causing, or reasonably expected to cause, a device or system to display or present the item…. and [0104]; User output components 2752 may include, for example, display components for displaying information (for example, a liquid crystal display (LCD) or a projector), acoustic components (for example, speakers), haptic components (for example, a vibratory motor or force-feedback device), and/or other signal generators.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of WANG to the system of Mukawa in view of Mirhosseini, and Wei in order to allow the user to effectively and confidently reorient virtual objects according to the orientations of selected virtual objects through simple and intuitive hand-controlled user inputs.
Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Mukawa et al. (US 2015/0219897 hereinafter Mukawa) in view of Mirhosseini et al. (US 2020/0258278 hereinafter Mirhosseini), Wei et al. (US 2021/0089122 hereinafter Wei), and Ke et al. (US 2020/0341539 hereinafter Ke).
Referring to claim 3, Mukawa in view of Mirhosseini, and Wei as applied above does not specifically disclose wherein the output device is a display, and the output comprises a virtual feature provided on the display with a motion to simulate that the virtual feature is approaching the user.
In an analogous art, Ke discloses wherein the output device is a display, and the output comprises a virtual feature provided on the display with a motion to simulate that the virtual feature is approaching the user (Ke- [0041]; In one embodiment, a virtual object is selected by a user in the virtual environment. When the user performs a first motion on the virtual object, the processor 150 may moves the virtual object backward to be away from the operating object or moves the virtual object forward to approach the operating object in the virtual environment.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Ke to the system of Mukawa in view of Mirhosseini, and Wei in order to allow the user to simulate behavior of a virtual object in a virtual environment.
Referring to claim 9, Mukawa in view of Mirhosseini, and Wei as applied above does not specifically disclose wherein the output device is a display, and the output comprises a virtual feature provided on the display with a motion to simulate that the virtual feature is approaching the user.
In an analogous art, Ke discloses wherein the output device is a display, and the output comprises a virtual feature provided on the display with a motion to simulate that the virtual feature is approaching the user (Ke- [0041]; In one embodiment, a virtual object is selected by a user in the virtual environment. When the user performs a first motion on the virtual object, the processor 150 may moves the virtual object backward to be away from the operating object or moves the virtual object forward to approach the operating object in the virtual environment.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Ke to the system of Mukawa in view of Mirhosseini, and Wei in order to allow the user to simulate behavior of a virtual object in a virtual environment.
Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Mukawa et al. (US 2015/0219897 hereinafter Mukawa) in view of Mirhosseini et al. (US 2020/0258278 hereinafter Mirhosseini), Wei et al. (US 2021/0089122 hereinafter Wei), and Selvakumar et al. (US 10,492,346 hereinafter Selvakumar).
Referring to claim 6, Mukawa in view of Mirhosseini, and Wei as applied above does not specifically disclose wherein the output device comprises a blower, and the output comprises a flow of air from the blower toward the eye of the user.
In an analogous art, Selvakumar discloses wherein the output device comprises a blower, and the output comprises a flow of air from the blower toward the eye of the user (Selvakumar- Col. 9 lines 36-48, Fig. 10;The thermal regulation components of the head-mounted display 1000 include an intake vent 1064 and an exhaust fan 1066. The intake vent 1064 is configured and positioned the same as the second intake vent 964 to allow air to enter the eye chamber 1026. The exhaust fan 1066 has a configuration that is similar to that of the exhaust fan 340, but is positioned on the upper wall 1015 at an opening that extends through the upper wall 1015 to the eye chamber 1026 in order to draw air out of the eye chamber 326 directly to the exterior of the head-mounted display 1000.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Selvakumar to the system of Mukawa in view of Mirhosseini, and Wei in order to provide a head-mounted display that incorporates thermal regulation components to reduce heat levels and enhance user experience.
Referring to claim 12, Mukawa in view of Mirhosseini, and Wei does not specifically disclose wherein the output device comprises a blower, and the output comprises a flow of air from the blower toward the eye of the user.
In an analogous art, Selvakumar discloses wherein the output device comprises a blower, and the output comprises a flow of air from the blower toward the eye of the user (Selvakumar- Col. 9 lines 36-48, Fig. 10;The thermal regulation components of the head-mounted display 1000 include an intake vent 1064 and an exhaust fan 1066. The intake vent 1064 is configured and positioned the same as the second intake vent 964 to allow air to enter the eye chamber 1026. The exhaust fan 1066 has a configuration that is similar to that of the exhaust fan 340, but is positioned on the upper wall 1015 at an opening that extends through the upper wall 1015 to the eye chamber 1026 in order to draw air out of the eye chamber 326 directly to the exterior of the head-mounted display 1000.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Selvakumar to the system of Mukawa in view of Mirhosseini, and Wei in order to provide a head-mounted display that incorporates thermal regulation components to reduce heat levels and enhance user experience.
Claims 13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fullam (CN 106714663 B hereinafter Fullam) in view of Newman (US 2019/0285911 hereinafter Newman), and Otsuki Masaki (JP 2004236187A hereinafter Otsuki).
Referring to claim 13, Fullam discloses a head-mountable device (head-mounted display system 12A) comprising:
an optical sensor (see attachment highlighted section; Head-mounted display system 12A comprises also operatively coupled to the controller 20A of the sensing subsystem 26A. In the embodiment shown, sensor sub-system comprises an eye imaging camera 28, the on-axis illumination source 30 and shaft lower illumination source 30 '… and display panel 16, the beam steering optical device 44 allows the eye imaging camera and on-axis illumination source sharing a common optical axis A, regardless of whether they are arranged on the periphery of the display panel.) configured to detect a moisture condition of an eye of a user wearing the head-mountable device (see attachment highlighted section; Thus, turning to FIG. 5 again, the sensing subsystem 26 (such as 26A and 26B) can include a configuration for sensing the viewer 14 of the ocular condition of the eye sensor 60. In some embodiments, the ocular sensor can respond to ocular pressure. ocular sensor may be a contact eye temperature sensor, for example, that attributable to eye sensing of IR or NIR illumination of the eye change heat to the viewer and/or evaporation of liquid. In other example, the eye sensor may be configured for imaging the viewer eye imaging sensor. Such eye sensor may be configured to sense eye redness, expand capillary vessel, or part of the closed eyelid. in these and other embodiments, the eye imaging sensor can be configured to sense the pupil hole extends. form, e.g., web camera, web-enabled camera of IR, or response web camera equipped with the long wave IR sensor of surface temperature of the eye being imaged imaging sensor can be taken by a digital camera. In a more specific example, the imaging sensor may be capable of decomposing view of the eye in the eye stimulating gas particles. 
In some embodiments, the eye sensor 60 can be configured to sense various types of eye movement, the eye in the eye of the viewer corresponding to relatively slow rotation of the change in gaze direction of the user, and an eye faster panning rotation, and very fast associated with eye movement… and … eyes of additional possible discomfort caused by sensor subsystem component of the display system 12A. The above noted, illumination sources 30 and 30 ' can emit IR or NIR probing light to illuminate the eye for imaging purposes. light of such a wavelength capable of heating the eye, causing evaporation of surface moisture and dryness feeling..); and
a display configured to output, in response to a detection of the moisture condition of the eye (see attachment highlighted section; As described above, if the sensed condition indicates that the current or one or more operating parameters of eye discomfort, controller 20 can preemptively adjust imminent occurrence of display system 12 to reduce or prevent the discomfort. is selected for adjustment of operating parameters can be different in the different embodiments of the present disclosure, and may be different in different operating conditions of the same embodiment of the invention), a visual feature (i.e., display text) (see attachment highlighted section; For example, when the display 16 is used to display text, obviously reduce the display image brightness or the adjustment of hue may be subjected to eye discomfort the viewer may accept (or even )).
However, Fullam is silent on a display configured to output a visual feature until the moisture condition of the eye changes, wherein the display is configured to move the visual feature based on a region of the eye in which the moisture condition is detected.
In an analogous art, Fullam discloses (Fullam- see attachment highlighted section; For example, when the display 16 is used to display text, obviously reduce the display image brightness or the adjustment of hue may be subjected to eye discomfort the viewer may accept (or even ). Therefore, when the sensing of moisture or dryness of the eye(s) is at a normal level, the display text brightness is resumed, which reads on the limitation “a display configured to output, a visual feature until the moisture condition of the eye changes”, thus to allow a viewer experience at a comfort level based on the sensing of the eyes condition.).
In an analogous art, Newman discloses a display configured to output (Newman- [0122]; The user communication device 240 can include any system configured to communicate with the user or alter the vision of the user. The user communication device 240 can change the optical characteristics of the contact lens 205. For example, the user communication device 240 can flash a visual communication in the user's field of vision. This can be a series of information available to the user and can include a simple message or a more complex message. For example, it can include simply flashing light or a color, or creating another visual variance to warn of a health condition or another type of condition. If a health condition is not present, the user communication device 240 can flash a first color such as a green color. An unhealthy condition can be indicated using a second color such as red. In some embodiments, the user communication device 240 can change the vision of the user. For example, if the user requires a changed prescription for driving, the user communication device 240 can alter the contact lens 205 to meet those requirements by dynamically modifying the shape, geometry, pressure, position, or visual aspects of the lens. If the user requires a setting for reading, the user communication device 240 can alter the contact lens 205 for reading.), in response to a detection of the moisture condition of the eye (Newman- [0121]; The output device 235 can include a reservoir that includes a substance, such as a therapeutic substance, that can be administered into the eye. In response to the detection of an eye condition or from an external source, the output device 235 can release at least a portion of the substance. Types of substances that can be released from the output device 235 can include, but are not limited to, insulin, hydrating substance, pain relievers, anti-inflammatory, eye lash enhancers, other types of substances, or combinations thereof.
In some examples, the substance is a liquid.), a visual feature (i.e., flashing light or a color) until the moisture condition of the eye changes (Newman- [0122]; The user communication device 240 can include any system configured to communicate with the user or alter the vision of the user. The user communication device 240 can change the optical characteristics of the contact lens 205. For example, the user communication device 240 can flash a visual communication in the user's field of vision. This can be a series of information available to the user and can include a simple message or a more complex message. For example, it can include simply flashing light or a color, or creating another visual variance to warn of a health condition or another type of condition. If a health condition is not present, the user communication device 240 can flash a first color such as a green color. An unhealthy condition can be indicated using a second color such as red. Thus, a visual flashing light changes from red to green when an eye condition changes from unhealthy to healthy.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Newman to the system of Fullam in order to provide the user health condition via sensing the condition of the eyes.
However, Fullam in view of Newman is silent on wherein the display is configured to move the visual feature based on a region of the eye in which the moisture condition is detected.
In an analogous art, Otsuki discloses wherein the display is configured to move the visual feature based on a region of the eye in which the moisture condition is detected (Otsuki-see attachment highlighted section; And a half mirror 25 and a CCD 26 for observing interference images of reflection by the front and back surfaces of the lipid layer on the outermost layer of tear fluid. Then, the left eye fatigue measuring unit 12 obtains the amount of change from the difference between the images observed at intervals. The interference image moves fluidly when the eyes are not dry, but moves slowly when the eyes are dry. Therefore, when the amount of change is smaller than a predetermined amount (reference value), the control unit 9 causes the eyes to dry. You have eye fatigue.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Otsuki to the system of Fullam in view of Newman in order to provide a head-mounted display device that is capable of giving a warning or restricting use according to a use situation.
Referring to claim 15, Fullam discloses wherein the display is further configured to reduce visual detail of the visual feature until the moisture condition of the eye changes (Fullam- see attachment highlighted section; In some implementations, subjected to adjustment of operational parameters may be parameters of display 16 (16A, 16B, etc.). For example, is sensed or predicted time display image brightness can be adjusted (i.e., reduced) when the eye discomfort…. and… For example, when the display 16 is used to display text, obviously reduce the display image brightness or the adjustment of hue may be subjected to eye discomfort the viewer may accept (or even ). Therefore, when the sensing of moisture or dryness of the eye(s) is at a normal level, the display text brightness is resumed, which reads on the limitation “wherein the display is further configured to reduce visual detail of the visual feature until the moisture condition of the eye changes”, thus to allow a viewer experience at a comfort level based on the sensing of the eyes condition.).
Referring to claim 16, Fullam discloses wherein the display is further configured to alter a brightness of the visual feature until the moisture condition of the eye changes (Fullam- see attachment highlighted section; In some implementations, subjected to adjustment of operational parameters may be parameters of display 16 (16A, 16B, etc.). For example, is sensed or predicted time display image brightness can be adjusted (i.e., reduced) when the eye discomfort…. and… For example, when the display 16 is used to display text, obviously reduce the display image brightness or the adjustment of hue may be subjected to eye discomfort the viewer may accept (or even ). Therefore, when the sensing of moisture or dryness of the eye(s) is at a normal level, the display text brightness is resumed, which reads on the limitation “wherein the display is further configured to alter a brightness of the visual feature until the moisture condition of the eye changes”, thus to allow a viewer experience at a comfort level based on the sensing of the eyes condition.).
Referring to claim 17, Fullam discloses wherein the visual feature comprises an instruction for the user to perform an action with the eye (Fullam- see attachment highlighted section; in the case shown in FIG. 4, for example, the viewer 14D is based on a gaze direction or presenting navigation display 16D of hand gesture UI. In one example, a gaze-tracking engine controller 20D 54 based on plane image from camera 28D of image data to calculate the positive fixation of points corresponding to the user display screen coordinates (X, Y). each UI element by the moved to the display screen on the other point of his gaze, the viewer can be presented on the display 56 and navigation.).
Referring to claim 19, Fullam discloses wherein the optical sensor is configured to detect the moisture condition based on a temperature of the eye (Fullam- see attachment highlighted section; turning to FIG. 5 again, the sensing subsystem 26 (such as 26A and 26B) can include a configuration for sensing the viewer 14 of the ocular condition of the eye sensor 60. In some embodiments, the ocular sensor can respond to ocular pressure. ocular sensor may be a contact eye temperature sensor, for example, that attributable to eye sensing of IR or NIR illumination of the eye change heat to the viewer and/or evaporation of liquid.).
Referring to claim 20, Fullam discloses wherein the optical sensor comprises a light emitter and is configured to detect the moisture condition based on a reflection of light from the light emitter and reflected by the eye (Fullam- see attachment highlighted section; As shown in FIG. 2, shaft lower illumination can be created from the eyes of the wearer of the cornea 34 reflects specular reflection 32. shaft lower illumination may also be used for illuminating the eye for "in the way" effect, in which the pupil 36 appears darker than the surrounding iris 38. opposite, on-axis illumination from the IR or NIR source can be used to create "displayed" effect, in which the pupil is brighter than the surrounding iris. More specifically, from the illumination source 30 of IR or NIR illumination axis of illuminating the eye retina 40 retro-reflective tissue, the retroreflective structure to reflect light back through the pupil, forming pupil hole of the bright image 42. display panel 16, the beam steering optical device 44 allows the eye imaging camera and on-axis illumination source sharing a common optical axis A, regardless of whether they are arranged on the periphery of the display panel. In some embodiments, the eye imaging camera may include transport barrier illumination source emission band to improve the strong ambient light present bright pupil comparison of the wavelength filter. from the eye imaging camera 28 of the digital image data can be transferred in remote computer system can access through the network into the controller 20A or the controller of the associated logic. where, capable of processing the image data so as to analyzing according to pupil center, pupil contour, and/or one or more mirrors from corneal surface reflection characteristic such as 32. the input parameter of the model (e.g. polynomial model) image data can be used as the position of these features in the feature position and the viewing vector V in contact. 
Various embodiments for determining a gaze vector for the right eye and the left eye, the controller may also be configured to compute a wearer of focus as the right and cross point of left gaze vector. In some embodiments, the eye imaging camera may be used to perform iris or retinal scanning function to determine the identity of the wearer 14A.…and… turning to FIG. 5 again, the sensing subsystem 26 (such as 26A and 26B) can include a configuration for sensing the viewer 14 of the ocular condition of the eye sensor 60. In some embodiments, the ocular sensor can respond to ocular pressure. ocular sensor may be a contact eye temperature sensor, for example, that attributable to eye sensing of IR or NIR illumination of the eye change heat to the viewer and/or evaporation of liquid.).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Fullam (CN 106714663 B hereinafter Fullam) in view of Newman (US 2019/0285911 hereinafter Newman), Otsuki Masaki (JP 2004236187A hereinafter Otsuki), and Mukawa et al. (US 2015/0219897 hereinafter Mukawa).
Referring to claim 18, Fullam discloses wherein the optical sensor is configured to detect the moisture condition of the eye (Fullam- see attachment highlighted section; Thus, turning to FIG. 5 again, the sensing subsystem 26 (such as 26A and 26B) can include a configuration for sensing the viewer 14 of the ocular condition of the eye sensor 60. In some embodiments, the ocular sensor can respond to ocular pressure. ocular sensor may be a contact eye temperature sensor, for example, that attributable to eye sensing of IR or NIR illumination of the eye change heat to the viewer and/or evaporation of liquid. In other example, the eye sensor may be configured for imaging the viewer eye imaging sensor. Such eye sensor may be configured to sense eye redness, expand capillary vessel, or part of the closed eyelid. in these and other embodiments, the eye imaging sensor can be configured to sense the pupil hole extends. form, e.g., web camera, web-enabled camera of IR, or response web camera equipped with the long wave IR sensor of surface temperature of the eye being imaged imaging sensor can be taken by a digital camera. In a more specific example, the imaging sensor may be capable of decomposing view of the eye in the eye stimulating gas particles. In some embodiments, the eye sensor 60 can be configured to sense various types of eye movement, the eye in the eye of the viewer corresponding to relatively slow rotation of the change in gaze direction of the user, and an eye faster panning rotation, and very fast associated with eye movement… and … additional eye discomfort may be caused by sensor subsystem component of the display system 12A. The attention to illumination source 30 and 30 can emit IR or NIR detection light to illuminate the eye for imaging purposes. The light of this wavelength can be heated eye, to cause evaporation of surface moisture and dryness.).
However, Fullam in view of Newman, and Otsuki as applied above does not specifically disclose wherein the optical sensor is configured to detect the condition based on a number of times the eye blinks within a span of time.
In an analogous art, Mukawa discloses wherein the optical sensor is configured to detect the condition based on a number of times the eye blinks within a span of time (Mukawa- [0151]; For example, it is known that a blink operation may be detected on the basis of output information from a status sensor such as a myoelectric sensor or an oculo-electric sensor. The control unit 301 may determine the user's mental status according to a number of blinks per unit time and a blink time. FIG. 26 illustrates an example of a method of determining the mental status (alert/drowsy/concentrating) of the user according to a number of blinks per unit time and a blink time detected with an oculo-electric technique.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Mukawa to the system of Fullam in view of Newman, and Otsuki in order to allow surrounding persons to know the state of the user and the degree to which the user is currently concentrating on or immersed in what the user is doing, viewing, or listening to, through the outer side image.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT D AU whose telephone number is (571)272-5948. The examiner can normally be reached M-F, generally 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SCOTT D AU/Examiner, Art Unit 2624