DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/18/2025 has been entered. Claims 2, 4, 7, 18 and 20 have been canceled; claims 1, 3, 5-6, 8-17, 19 and 21-25 remain pending in the application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5, 8-10, 12, 17, 19, 21, 23 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Li U.S. Patent Application 20220139041 in view of Ano U.S. Patent Application 20180061372, and further in view of Karve U.S. Patent 9298413.
Regarding claim 17, Li discloses an electronic device comprising:
one or more processors (CPU 3010);
a non-transitory memory (memory 3050);
a display (display 3030); and
one or more programs, wherein the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors (paragraph [0128]: A computing system comprising one or more processors and one or more memories storing instructions that, when executed by the one or more processors, cause the computing system to perform a process), the one or more programs including instructions for:
while displaying, on the display, computer-generated content according to a first locked mode, determining that the electronic device changes from a first distance to a second distance from a physical surface (paragraph [0047]: A virtual object in the world-locked follow mode (first locked mode) can have defined world-locked anchor points. When a user is within a threshold distance or in the same room as such an anchor, the virtual object can be positioned at that anchor. However, when the user is not within the threshold distance or is not within the same room as the anchor, the virtual object can become body locked. A virtual object in the body-locked mode stays body locked to the user as the user moves around); and
in response to determining that the electronic device changes from the first distance to the second distance:
in accordance with a determination that the physical surface would occlude at least a portion of the computer-generated content while the electronic device is at the second distance, changing display of the computer-generated content from the first locked mode to a second locked mode (paragraph [0047]: when the user is not within the threshold distance or is not within the same room (change criterion) as the anchor, the virtual object can become body locked (second locked mode). A virtual object in the body-locked mode stays body locked to the user as the user moves around); and
in accordance with a determination that the physical surface would not occlude at least a portion of the computer-generated content while the electronic device is at the second distance, maintaining display of the computer-generated content according to the first locked mode (paragraph [0047]: A virtual object in the world-locked follow mode (first locked mode) can have defined world-locked anchor points. When a user is within a threshold distance or in the same room (not occlude) as such an anchor, the virtual object can be positioned at that anchor).
Li discloses all the features with respect to claim 17 as outlined above. However, Li fails to explicitly disclose determining that the electronic device changes from a first distance from a physical surface to a second distance from the physical surface; if the physical surface would occlude at least a portion of the computer-generated content while the electronic device is at the second distance from the physical surface, changing the display such that the physical surface is prevented from occluding the computer-generated content; and if the physical surface would not occlude at least a portion of the computer-generated content while the electronic device is at the second distance from the physical surface, maintaining the display.
Ano discloses determining that the electronic device changes from a first distance from a physical surface to a second distance from the physical surface (paragraph [0009]: in the case that the distance detected by the detecting unit changes from the second distance to the first distance, the control unit may change a display form of the image displayed on the display surface by the display unit to a display form corresponding to the first distance; paragraph [0150]: when the distance from the setting surface to the display surface 51 is the first distance, the processing for changing the display on the display surface 51 to the display corresponding to the first distance is stored; Ano’s teaching of changing the display form based on the distance from the physical surface can be combined with Li’s device so as to change the displayed locked mode based on the distance from the physical surface).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li to determine the change in distance from the physical surface, as taught by Ano, in order to change the display on the display surface according to the position of the display surface.
Li as modified by Ano discloses all the features with respect to claim 17 as outlined above. However, Li as modified by Ano fails to explicitly disclose: if the physical surface would occlude at least a portion of the computer-generated content, changing the display such that the physical surface is prevented from occluding the computer-generated content; and if the physical surface would not occlude at least a portion of the computer-generated content, maintaining the display.
Karve discloses: if the physical surface would occlude at least a portion of the computer-generated content, changing the display such that the physical surface is prevented from occluding the computer-generated content; and if the physical surface would not occlude at least a portion of the computer-generated content, maintaining the display (col. 4 line 11-21: the first state in which the display system is operating includes displaying the portion of the image in the non-display surface associated with the display system such that the portion of the image is occluded by the non-display surface, then the second state in which the display system is commanded to operate may include adjusting (e.g. via output scaling) the display content including the portion of the image such that the portion of the image is displayed within an area surrounded by the non-display surface to prevent occlusion of the portion of the image by the non-display surface; Karve’s teaching of changing the display state based on whether the physical surface occludes a portion of the image can be combined with Li and Ano’s device so as to change the display mode based on whether the physical surface occludes a portion of the image at a certain distance from the physical surface).
Therefore, it would have been obvious before the effective filing date of the claimed invention to further modify Li and Ano to change the display state, as taught by Karve, in order to change the display state without interruption.
Claim 1 recites the functions of the apparatus recited in claim 17 as method steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 17 applies to the method steps of claim 1.
Regarding claim 3, Li as modified by Ano and Karve discloses the method of claim 1, wherein determining that the physical surface would occlude at least a portion of the computer-generated content while the electronic device is at the second distance from the physical surface includes determining that the second distance is less than a first threshold (Li’s paragraph [0047]: When a user is within a threshold distance (first threshold) or in the same room as such an anchor, the virtual object can be positioned at that anchor. However, when the user is not within the threshold distance or is not within the same room (occlusion criterion) as the anchor, the virtual object can become body locked. A virtual object in the body-locked mode stays body locked to the user as the user moves around; Ano’s paragraph [0111]: FIG. 4 is a diagram showing the processing association table 257; for example, section 4 range 2.6 ≤ X < 2.8 (occlusion criterion), increase luminance, hide menu image; Karve’s col. 4 line 11-21: the first state in which the display system is operating includes displaying the portion of the image in the non-display surface associated with the display system such that the portion of the image is occluded by the non-display surface, then the second state in which the display system is commanded to operate may include adjusting (e.g. via output scaling) the display content including the portion of the image such that the portion of the image is displayed within an area surrounded by the non-display surface to prevent occlusion of the portion of the image by the non-display surface).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li to determine the change in distance from the physical surface, as taught by Ano, in order to change the display on the display surface according to the position of the display surface; and to further modify Li and Ano to change the display state, as taught by Karve, in order to change the display state without interruption.
Regarding claim 5, Li as modified by Ano and Karve discloses the method of claim 1, wherein the first locked mode corresponds to an object-locked mode in which the computer-generated content is locked to an object, wherein the second locked mode corresponds to a world-locked mode in which the computer-generated content is world-locked to the physical surface, and wherein changing from the first locked mode to the second locked mode includes changing from the object-locked mode to the world-locked mode (Li’s paragraph [0047]: A virtual object in the world-locked follow mode can have defined world-locked anchor points. When a user is within a threshold distance or in the same room as such an anchor, the virtual object can be positioned at that anchor. However, when the user is not within the threshold distance or is not within the same room as the anchor, the virtual object can become body locked (object-locked mode). A virtual object in the body-locked mode stays body locked to the user as the user moves around; paragraph [0050]: When the user enters office room 600A, the world-locked follow mode can lock or anchor virtual object 602 to a defined anchor point in that room... it must be within a threshold distance of the user, e.g., two or three meters. In this case, the selected anchor point is on a desk where the user previously placed the avatar, causing the avatar 602 to appear on the desk).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li to determine the change in distance from the physical surface, as taught by Ano, in order to change the display on the display surface according to the position of the display surface; and to further modify Li and Ano to change the display state, as taught by Karve, in order to change the display state without interruption.
Regarding claim 21, Li as modified by Ano and Karve discloses the method of claim 3, further comprising:
while displaying, on the display, the computer-generated content according to the second locked mode, determining that the electronic device changes from the second distance to a third distance from the physical surface; and in accordance with a determination that the third distance is greater than a second threshold, changing display of the computer-generated content from the second locked mode to a third locked mode corresponding to an object-locked mode in which the computer-generated content is locked to an object (Li’s paragraph [0055]: the world-locked no-follow mode can identify an established anchor point for avatar 802, which is on the desk (desk-locked mode), causing avatar 802 to be displayed at that anchor point. When the user leaves office room 800A, the world-locked no-follow mode can keep avatar 802 locked in its current position in office room 800A (which can include no longer displaying avatar 802 when the user is a threshold distance from the anchor point or when the anchor point is no longer in view). Accordingly, when the user enters hallway 800B, the world-locked no-follow mode can cause avatar 802 to not follow the user as he/she moves to another room. Hence, avatar 802 is not displayed when the user is in hallway 800B).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li to determine the change in distance from the physical surface, as taught by Ano, in order to change the display on the display surface according to the position of the display surface; and to further modify Li and Ano to change the display state, as taught by Karve, in order to change the display state without interruption.
Regarding claim 8, Li as modified by Ano and Karve discloses the method of claim 21, wherein the second threshold is greater than or equal to the first threshold (Li’s paragraph [0047]: A virtual object in the world-locked follow mode can have defined world-locked anchor points. When a user is within a threshold distance (first threshold) or in the same room as such an anchor, the virtual object can be positioned at that anchor. However, when the user is not within the threshold distance or is not within the same room (second threshold) as the anchor, the virtual object can become body locked. A virtual object in the body-locked mode stays body locked to the user as the user moves around; paragraph [0050]: When the user enters office room 600A, the world-locked follow mode can lock or anchor virtual object 602 to a defined anchor point in that room... it must be within a threshold distance of the user, e.g., two or three meters. In this case, the selected anchor point is on a desk where the user previously placed the avatar, causing the avatar 602 to appear on the desk).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li to determine the change in distance from the physical surface, as taught by Ano, in order to change the display on the display surface according to the position of the display surface; and to further modify Li and Ano to change the display state, as taught by Karve, in order to change the display state without interruption.
Regarding claim 9, Li as modified by Ano and Karve discloses the method of claim 21, wherein the first locked mode corresponds to a world-locked mode in which the computer-generated content is world-locked to the physical surface, and wherein the second locked mode corresponds to an object-locked mode in which the computer-generated content is locked to an object (Li’s paragraph [0047]: A virtual object in the world-locked follow mode can have defined world-locked anchor points. When a user is within a threshold distance or in the same room as such an anchor, the virtual object can be positioned at that anchor. However, when the user is not within the threshold distance or is not within the same room as the anchor, the virtual object can become body locked (object-locked mode). A virtual object in the body-locked mode stays body locked to the user as the user moves around; paragraph [0050]: When the user enters office room 600A, the world-locked follow mode can lock or anchor virtual object 602 to a defined anchor point in that room... it must be within a threshold distance of the user, e.g., two or three meters. In this case, the selected anchor point is on a desk where the user previously placed the avatar, causing the avatar 602 to appear on the desk).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li to determine the change in distance from the physical surface, as taught by Ano, in order to change the display on the display surface according to the position of the display surface; and to further modify Li and Ano to change the display state, as taught by Karve, in order to change the display state without interruption.
Regarding claim 10, Li as modified by Ano and Karve discloses the method of claim 9, wherein the object-locked mode corresponds to a display-locked mode or a body-locked mode (Li’s paragraph [0047]: A virtual object in the world-locked follow mode can have defined world-locked anchor points. When a user is within a threshold distance or in the same room as such an anchor, the virtual object can be positioned at that anchor. However, when the user is not within the threshold distance or is not within the same room as the anchor, the virtual object can become body locked (object-locked mode). A virtual object in the body-locked mode stays body locked to the user as the user moves around).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li to determine the change in distance from the physical surface, as taught by Ano, in order to change the display on the display surface according to the position of the display surface; and to further modify Li and Ano to change the display state, as taught by Karve, in order to change the display state without interruption.
Regarding claim 12, Li as modified by Ano and Karve discloses the method of claim 1, wherein the electronic device includes a positional sensor that generates positional sensor data, and wherein determining that the electronic device changes from the first distance to the second distance is based on the positional sensor data (Ano’s paragraph [0009]: in the case that the distance detected by the detecting unit changes from the second distance to the first distance, the control unit may change a display form of the image displayed on the display surface by the display unit to a display form corresponding to the first distance; paragraph [0044]: A distance sensor 235 is disposed on the upper surface of an exterior housing of the projector 200. The distance sensor 235 measures the distance from the floor surface (a reference position), which is a setting surface of the screen board 50, to the distance sensor 235).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li to determine the change in distance from the physical surface, as taught by Ano, in order to change the display on the display surface according to the position of the display surface; and to further modify Li and Ano to change the display state, as taught by Karve, in order to change the display state without interruption.
Claim 19 recites the functions of the apparatus recited in claim 17 as medium steps. Accordingly, the mapping of the prior art to the corresponding functions of the apparatus in claim 17 applies to the medium steps of claim 19.
Claim 23 recites the functions of the method recited in claim 21 as apparatus steps. Accordingly, the mapping of the prior art to the corresponding functions of the method in claim 21 applies to the apparatus steps of claim 23.
Claim 25 recites the functions of the method recited in claim 21 as medium steps. Accordingly, the mapping of the prior art to the corresponding functions of the method in claim 21 applies to the medium steps of claim 25.
Claims 13-14, 22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Li U.S. Patent Application 20220139041 in view of Ano U.S. Patent Application 20180061372, in view of Karve U.S. Patent 9298413, and further in view of Shahrokni U.S. Patent Application 20210264674.
Regarding claim 13, Li as modified by Ano and Karve discloses that the positional sensor generates the positional sensor data, and that the sensor data includes a first distance value indicative of the first distance and a second distance value indicative of the second distance (Ano’s paragraph [0009]: in the case that the distance detected by the detecting unit changes from the second distance to the first distance, the control unit may change a display form of the image displayed on the display surface by the display unit to a display form corresponding to the first distance; paragraph [0044]: A distance sensor 235 is disposed on the upper surface of an exterior housing of the projector 200. The distance sensor 235 measures the distance from the floor surface (a reference position), which is a setting surface of the screen board 50, to the distance sensor 235). However, Li as modified by Ano and Karve fails to disclose a depth sensor that generates depth sensor data.
Shahrokni discloses a depth sensor that generates depth sensor data (paragraph [0231]: A depth sensor (not shown) may determine distances to the surfaces. The surfaces are thus represented by data in three dimensions including their sizes, shapes, and distances from the real object detection camera).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li, Ano and Karve to use a depth sensor, as taught by Shahrokni, in order to realistically and readily present information about how the physical world might be altered.
Regarding claim 14, Li as modified by Ano, Karve and Shahrokni discloses the method of claim 12, wherein the positional sensor corresponds to an inertial measurement unit (IMU) that generates IMU data included in the positional sensor data, and wherein determining the change from the first distance to the second distance is based on the IMU data (Shahrokni’s paragraph [0173]: Inertial measurement units 557 may determine movement and orientation of the viewing optics assembly 548... the depth sensor 551 is operatively coupled to the eye tracking cameras 550 as a confirmation of measured accommodation against actual distance the user eyes 549 are looking at; Ano’s paragraph [0009]: in the case that the distance detected by the detecting unit changes from the second distance to the first distance, the control unit may change a display form of the image displayed on the display surface by the display unit to a display form corresponding to the first distance).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li, Ano and Karve to use a depth sensor, as taught by Shahrokni, in order to realistically and readily present information about how the physical world might be altered.
Regarding claim 22, Li as modified by Ano, Karve and Shahrokni discloses the method of claim 1, wherein determining whether the physical surface would occlude at least a portion of the computer-generated content comprises:
receiving, from a depth sensor of the electronic device, depth sensor data indicative of a distance from the electronic device to the physical surface (Shahrokni’s paragraph [0300]: depth data from depth sensors 135 to determine the locations of surfaces and their relative distance from the depth sensors 135); and
determining, based on the depth sensor data and a known depth position of the computer generated content, that the physical surface would occlude at least a portion of the computer generated content while the electronic device is at the second distance from the physical surface (Shahrokni’s paragraph [0173]: Inertial measurement units 557 may determine movement and orientation of the viewing optics assembly 548... the depth sensor 551 is operatively coupled to the eye tracking cameras 550 as a confirmation of measured accommodation against actual distance the user eyes 549 are looking at; paragraph [0103]: render virtual content so as to appear fully or partially occluded by physical objects between the user and the rendered location of the virtual content; Ano’s paragraph [0009]: in the case that the distance detected by the detecting unit changes from the second distance to the first distance, the control unit may change a display form of the image displayed on the display surface by the display unit to a display form corresponding to the first distance; paragraph [0111]: FIG. 4 is a diagram showing the processing association table 257; for example, section 4 range 2.6 ≤ X < 2.8 (occlusion criterion), increase luminance, hide menu image).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li, Ano and Karve to use a depth sensor, as taught by Shahrokni, in order to realistically and readily present information about how the physical world might be altered.
Claim 24 recites the functions of the method recited in claim 22 as apparatus steps. Accordingly, the mapping of the prior art to the corresponding functions of the method in claim 22 applies to the apparatus steps of claim 24.
Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Li U.S. Patent Application 20220139041 in view of Ano U.S. Patent Application 20180061372, in view of Karve U.S. Patent 9298413, and further in view of Pestov U.S. Patent Application 20210097760.
Regarding claim 15, Li as modified by Ano and Karve discloses that the electronic device includes an image sensor that captures image data of the physical surface, wherein the image data includes a first image that represents the physical surface at the first distance from the electronic device, and wherein the image data includes a second image that represents the physical surface at the second distance from the electronic device (Ano’s paragraph [0049]: The distance between the reference position and the projector 200 may be measured by an image pickup unit 241 provided in a pointer detecting unit 240 rather than separately providing the distance sensor 235... an image of the marker is picked up by the image pickup unit 241, and a distance is calculated on the basis of a size of the marker shown in the picked-up image; paragraph [0009]: in the case that the distance detected by the detecting unit changes from the second distance to the first distance, the control unit may change a display form of the image displayed on the display surface by the display unit to a display form corresponding to the first distance). However, Li as modified by Ano and Karve fails to disclose that determining that the electronic device changes from the first distance to the second distance includes comparing the first image against the second image.
Pestov discloses determining that the electronic device changes from the first distance to the second distance includes comparing the first image against the second image (paragraph [0050]: distance can be determined, for example, by using a built-in or external range finder spatial sensor directed to the object. In some cases, distance can be determined by a spatial sensor by capturing several images of the scene and comparing pixel shift).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li, Ano and Karve to determine the distance by comparing images, as taught by Pestov, in order to facilitate collecting distance data.
Regarding claim 16, Li as modified by Ano, Karve and Pestov discloses the method of claim 15, wherein comparing the first image against the second image includes:
identifying a respective subset of pixels of the first image corresponding to the physical surface; identifying a respective subset of pixels of the second image corresponding to the physical surface; and comparing the respective subset of pixels of the first image against the respective subset of pixels of the second image (Pestov’s paragraph [0050]: distance can be determined, for example, by using a built-in or external range finder spatial sensor directed to the object. In some cases, distance can be determined by a spatial sensor by capturing several images of the scene and comparing pixel shift).
Therefore, it would have been obvious before the effective filing date of the claimed invention to modify Li, Ano and Karve to determine the distance by comparing images, as taught by Pestov, in order to facilitate collecting distance data.
Allowable Subject Matter
Claims 6 and 11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claim 6 recites that, in the world-locked mode, when the electronic device is the second distance from the physical surface, the computer-generated content is displayed at a first depth from the electronic device, the method further comprising:
determining that the electronic device changes from the second distance to a third distance from the physical surface that is less than the second distance; and in response to determining that the electronic device changes from the second distance to the third distance, reducing the depth of the computer-generated content from the first depth to a second depth from the electronic device while maintaining the computer-generated content world-locked to the physical surface.
Li 20220139041, Ano 20180061372 and Karve 9298413, alone or in combination, fail to teach or suggest these features. These limitations, when read in light of the rest of the limitations in the claim and the claims from which it depends, render the claim allowable subject matter.
Claim 11 recites that tracking user engagement with respect to the computer-generated content is characterized by an error level, and that changing the display of the computer-generated content from the first locked mode to the second locked mode prevents the error level from exceeding the error threshold, or reduces the error level below the error threshold.
Li 20220139041, Ano 20180061372, Karve 9298413 and Stachniak 20200226823, alone or in combination, fail to teach or suggest these features. These limitations, when read in light of the rest of the limitations in the claim and the claims from which it depends, render the claim allowable subject matter.
Response to Arguments
Applicant's arguments filed 11/18/2025, pages 11-13, with respect to the rejection(s) of claim(s) 1, 17 and 19 under 35 U.S.C. 103 have been fully considered but are moot in view of the new ground(s) of rejection made under 35 U.S.C. 103 as being unpatentable over Li U.S. Patent Application 20220139041 in view of Ano U.S. Patent Application 20180061372, and further in view of Karve U.S. Patent 9298413, as outlined above.
Applicant argues on pages 11-12 that Shahrokni fails to disclose "in accordance with a determination that the physical surface would occlude at least a portion of the computer-generated content while the electronic device is at the second distance from the physical surface, changing display of the computer-generated content from the first locked mode to a second locked mode such that the physical surface is prevented from occluding the computer-generated content".
In reply, the rejection is based on Li, Ano and Karve in combination. Karve discloses: if the physical surface would occlude at least a portion of the computer-generated content, changing the display such that the physical surface is prevented from occluding the computer-generated content; and if the physical surface would not occlude at least a portion of the computer-generated content, maintaining the display (col. 4 line 11-21: the first state in which the display system is operating includes displaying the portion of the image in the non-display surface associated with the display system such that the portion of the image is occluded by the non-display surface, then the second state in which the display system is commanded to operate may include adjusting (e.g. via output scaling) the display content including the portion of the image such that the portion of the image is displayed within an area surrounded by the non-display surface to prevent occlusion of the portion of the image by the non-display surface). Karve’s teaching of changing the display state based on whether the physical surface occludes a portion of the image can be combined with Li and Ano’s device so as to change the display mode based on whether the physical surface occludes a portion of the image at a certain distance from the physical surface.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang whose telephone number is (571)272-9589. The examiner can normally be reached on Monday-Friday 9:00 AM-6:00 PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached on 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/YI YANG/
Primary Examiner, Art Unit 2616