DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/17/2026 has been entered.
Response to Arguments
Applicant’s arguments filed 2/26/2026 with respect to claims 1-15 have been fully considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-5, 11 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (PGPUB Document No. US 2021/0287440) in view of Samec et al. (PGPUB Document No. US 2016/0270656) in view of Needham et al. (PGPUB Document No. US 2018/0096528) in view of Butler et al. (PGPUB Document No. US 2021/0369102).
Regarding claim 1, Wang teaches a method performed by an augmented reality (AR) device for measuring a vision of a user, the method comprising:
Obtaining, by using a camera of the AR device, a background image comprising an image of at least one physical object (physical scene 200 is illustrated, which is captured by camera (Wang: 0043));
Identifying an edge of the image of the at least one physical object in the background image (determine candidate physical location based on geographical boundaries (Wang: 0018, 0045));
Determining a first region for placing virtual objects on the background image based on the edge of the image (the resulting placing of the virtual object at the candidate location (Wang: 0046));
Determining a second region corresponding to the first region on a display of the AR device (the required and inherent position of the digital object within the AR space that corresponds to the candidate location determined by Wang);
Outputting a virtual object for measuring the vision of the user to the second region (the resulting digital object that is rendered on the user’s computing device 300 as shown in FIG.3);
However, Wang does not expressly teach, but Samec teaches, the digital object being for measuring the vision of the user (virtual eye test chart 1420 (Samec: 1686));
Obtaining a user input signal for vision measurement after the outputting the virtual object (the user performing the vision test (Samec: 1686));
And determining a vision prescription value of the user based on the user input signal (the result of the eye exam as administered to the user as implied in para 1686 of Samec).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the teachings of Wang to further include the vision-measurement teachings of Samec, because this enables the application of AR devices to an added variety of contexts.
Further, the combined teachings above do not expressly teach, but Needham teaches, wherein the first region is determined as an area where no edge is included (“To produce a convincing AR experience, the chosen surface should contain a place to put the AR model with a space of appropriate size, relatively clear of clutter” (Needham: 0022), wherein the AR model should not cross a seam or edge (Needham: 0050)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to place AR models in the manner taught by Needham, because this enables a convincing AR experience (Needham: 0022).
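For illustration only, a minimal sketch of the edge-free region selection described above, assuming a binary edge map as input; the function name, the sliding-window scan, and the integral-image counting shortcut are illustrative assumptions, not disclosures of Wang or Needham:
```python
# Illustrative sketch: select a placement region containing no edge pixels,
# per the Needham-style "clear of clutter" criterion cited above.
import numpy as np

def find_edge_free_region(edge_map: np.ndarray, win_h: int, win_w: int):
    """Return (row, col) of a win_h x win_w window with the fewest edge
    pixels in a binary edge map, preferring a fully edge-free window."""
    # Integral image lets us count edge pixels in any window in O(1).
    integral = np.pad(edge_map.astype(np.int64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    best, best_pos = None, None
    for r in range(edge_map.shape[0] - win_h + 1):
        for c in range(edge_map.shape[1] - win_w + 1):
            count = (integral[r + win_h, c + win_w] - integral[r, c + win_w]
                     - integral[r + win_h, c] + integral[r, c])
            if best is None or count < best:
                best, best_pos = count, (r, c)
                if best == 0:  # fully edge-free window found
                    return best_pos
    return best_pos
```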
And further, the combined teachings above do not expressly teach but Butler teaches identifying, by using at least one sensor (This invention advantageously uses the distant sensing device 130 to measure the distance from an object (patient P) at some distance in front of the display monitor 104 (Butler: 0033)), a distance to a physical object corresponding to the second region where the virtual object is displayed; and compensating the vision prescription value based on the identified distance (Butler teaches the concept of dynamically resizing the optotypes on a monitor based on the distance of the patient to the monitor (Butler: 0032)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to utilize the dynamic optotypes of Butler, because this enables accurate testing regardless of the distance between the user and the displayed optotypes.
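For illustration only, a minimal sketch of distance-compensated optotype sizing of the kind Butler describes; the 5-arcminute visual angle for a 20/20 optotype is the standard Snellen convention, and the function itself is an illustrative assumption rather than Butler's implementation:
```python
# Sketch: resize an optotype so it subtends the standard visual angle
# at the measured viewing distance (dynamic resizing per Butler: 0032).
import math

def optotype_height_m(distance_m: float, snellen_denominator: int = 20) -> float:
    """Physical optotype height (meters) subtending the standard Snellen
    angle at the measured distance: 5 arcmin for 20/20, scaled per line."""
    arcmin = 5.0 * (snellen_denominator / 20.0)   # e.g., 20/40 line -> 10 arcmin
    angle_rad = math.radians(arcmin / 60.0)
    return 2.0 * distance_m * math.tan(angle_rad / 2.0)

# Example: at 2 m, a 20/20 optotype is about 2.91 mm tall.
print(round(optotype_height_m(2.0) * 1000, 2))
```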
Regarding claim 3, the combined teachings teach the method of claim 1, wherein the obtaining the background image comprises:
Obtaining a depth map of the background image by using a depth sensor of the AR device (candidate physical location values based on geographical boundaries (Wang: 0045), for which a list or an array of values, or a map representing the spatially-dependent attribute, covering the surroundings of user 110 may be retrieved from the database (Wang: 0046). Note, the spatially-dependent attribute is a function of physical location in the surroundings of the user (Wang: 0046));
And identifying, based on the depth map, at least one of a depth value of the at least one physical object or a shape of the at least one physical object (the physical location for placing current virtual object 104/204 is selected from the set of candidate physical locations based on the values of the spatially-dependent attribute evaluated at the at least one candidate physical location (Wang: 0046)).
Regarding claim 4, the combined teachings teach the method of claim 3, wherein the determining the first region comprises determining the first region on the background image based on the edge and at least one of the depth value of the at least one physical object or the shape of the at least one physical object (the resulting determination of the candidate location based on the teachings of Wang as stated in the rejection above (claim 3)).
Regarding claim 5, the combined teachings above teach the method of claim 1, further comprising:
Identifying a focal distance from an eye of the user to the virtual object (the measured distance (Butler: 0031));
Determining a test vision compensation value based on the focal distance (the value for adjusting to compensate for the measured and calculated distance (Butler: 0031));
And compensating the vision prescription value based on the test vision compensation value (dynamically adjusting the sizes of the optotypes 106 to compensate for the distances (Butler: 0031)).
Regarding claim 11, the combined teachings teach the method of claim 1, further comprising determining, based on an area of the first region, at least one of sizes or a number of virtual objects for measuring the vision of the user (the different size and characters/numbers within the virtual eye test chart 1420 (Samec: FIG.14)).
Claim(s) 15 is a corresponding device claim of claim(s) 1. The limitations of claim(s) 15 are substantially similar to the limitations of claim(s) 1. Therefore, it has been analyzed and rejected in a manner substantially similar to claim(s) 1. Note, the combined teachings teach an AR device (Wang: 0036).
Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Samec in view of Needham in view of Butler as applied to the claim(s) above, and further in view of Takasu et al. (PGPUB Document No. US 2022/0180115).
Regarding claim 2, the combined teachings do not expressly teach, but Takasu teaches, the method of claim 1, wherein the identifying the edge of the image of the at least one physical object in the background image comprises:
Determining, as the edge, at least one pixel having a first intensity higher, by a preset threshold value, than second intensities of other pixels adjacent to the at least one pixel (Laplacian filter or a Sobel filter may be used to calculate edge intensity, and pixels, of which edge intensity is greater than a threshold TH1, may be detected as edge pixels (Takasu: 0044)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to implement the edge-detection teaching of Takasu, because this enables an effective method of detecting objects.
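For illustration only, a minimal sketch of the Takasu-style edge detection cited above (Sobel edge intensity thresholded against TH1), assuming a grayscale image as input; the use of scipy.ndimage is an illustrative choice, not Takasu's disclosure:
```python
# Sketch: compute Sobel edge intensity and mark pixels whose intensity
# exceeds a threshold TH1 as edge pixels (per Takasu: 0044).
import numpy as np
from scipy import ndimage

def detect_edge_pixels(gray: np.ndarray, th1: float) -> np.ndarray:
    """Return a boolean mask of edge pixels in a grayscale image."""
    gx = ndimage.sobel(gray.astype(float), axis=1)  # horizontal gradient
    gy = ndimage.sobel(gray.astype(float), axis=0)  # vertical gradient
    intensity = np.hypot(gx, gy)                    # edge intensity per pixel
    return intensity > th1                          # edge pixels above TH1
```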
Claim(s) 6-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Samec in view of Needham in view of Butler as applied to the claim(s) above, and further in view of Spaas et al. (PGPUB Document No. US 2022/0354582).
Regarding claim 6, the combined teachings teach the method of claim 5, wherein the identifying the focal distance from the eye of the user to the virtual object comprises:
Identifying a physical object corresponding to the first region corresponding to the second region where the virtual object is displayed (the determined candidate physical location according to the teachings of Wang as stated in the rejection to claim 1 above);
And identifying the focal distance from the eye of the user to the physical object, by using a sensor (various distant sensing devices (Butler: 0033)).
However, the combined teachings do not expressly teach but Spaas teaches the sensor being at least one of a light detection and ranging (LIDAR), a depth sensor, or an eye tracking sensor of the AR device (Spaas teaches the concept of utilizing a depth sensor for measuring distances (Spaas: 0013, 0046)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to utilize the depth sensor of Spaas, because this enables an effective method for measuring distances.
Regarding claim 7, the combined teachings teach the method of claim 5, wherein the determining the test vision compensation value comprises determining, based on a reciprocal (1/D) of the focal distance (D), the test vision compensation value (the vision test results (Samec: 1334) use diopter values (Samec: 1713), which by definition are the reciprocal of the focal distance).
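For illustration only, the diopter relationship relied on above, expressed as a one-line sketch (a diopter value is, by definition, the reciprocal of the focal distance in meters):
```python
# Sketch: test vision compensation value in diopters from focal distance.
def compensation_diopters(focal_distance_m: float) -> float:
    """D = 1 / d, with d in meters and the result in diopters."""
    return 1.0 / focal_distance_m

# Example: a virtual chart focused at 0.5 m corresponds to 2.0 diopters.
print(compensation_diopters(0.5))  # 2.0
```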
Claim(s) 8-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Samec in view of Needham in view of Butler as applied to the claim(s) above, and further in view of Glynn et al. (PGPUB Document No. US 2018/0190019).
Regarding claim 8, the combined teachings do not expressly teach, but Glynn teaches, the method of claim 1, further comprising: identifying a color of the first region (Glynn determines a color and brightness of a predetermined background region wherein a virtual object is to be displayed (Glynn: 0053));
And determining a color of the virtual object for measuring the vision of the user based on the color of the first region (applying the teaching of Glynn results in the virtual eye test chart of Samec being rendered at an adjusted color and brightness (Glynn: 0053)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to apply the teachings of Glynn to the combined teachings above, because this enhances visibility of rendered objects.
Regarding claim 9, the combined teachings teach the method of claim 8, wherein the color of the virtual object for measuring the vision of the user is determined to have a maximum contrast with the color of the first region (Glynn adjusts the color and brightness of the virtual user interface to maximize contrast (Glynn: 0053)).
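For illustration only, a minimal sketch of a maximum-contrast color choice of the kind Glynn describes; the Rec. 709 luminance weighting and the black-or-white selection policy are illustrative assumptions, not Glynn's disclosure:
```python
# Sketch: pick the optotype color (black or white) that contrasts most
# with the color identified for the first region.
def max_contrast_color(region_rgb: tuple[int, int, int]) -> tuple[int, int, int]:
    """Return black or white, whichever contrasts more with the region color."""
    r, g, b = region_rgb
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luma, 0-255
    return (0, 0, 0) if luminance > 127.5 else (255, 255, 255)

# Example: optotypes on a light wall render black; on a dark wall, white.
print(max_contrast_color((230, 225, 210)))  # (0, 0, 0)
```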
Regarding claim 10, the combined teachings teach the method of claim 8, further comprising lowering brightness of a plurality of pixels included in the second region, wherein the plurality of pixels do not output the virtual object for measuring the vision of the user (lowering the brightness of pixels so that they do not output the virtual object corresponds to the system no longer displaying the virtual object; at least upon terminating the treatment (Samec: 1605-1606), the virtual eye test chart 1420 of Samec is no longer displayed).
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Samec in view of Needham in view of Butler as applied to the claim(s) above, and further in view of Gibby et al. (PGPUB Document No. US 2019/0365498).
Regarding claim 12, the combined teachings do not expressly teach but Gibby teaches the method of claim 1, wherein the determining the second region corresponding to the first region comprises: determining the second region, and overlaying, by using an object locking mechanism, the virtual object for measuring the vision of the user on the first region (Gibby teaches the concept of locking a virtual object in place within an AR environment (Gibby: 0047)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to display the virtual eye test chart of Samec in a locked position as taught by Gibby, because this enables the user to view the virtual eye test chart from any location and orientation without the virtual eye test chart moving.
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Samec in view of Needham in view of Butler as applied to the claim(s) above, and further in view of De Salvo et al. (PGPUB Document No. US 2022/0246060).
Regarding claim 13, the combined teachings do not expressly teach, but Salvo teaches, the method of claim 1, further comprising: recognizing a gaze direction of the user (performing gaze tracking (Salvo: 0055)); and based on identifying that the gaze direction of the user is not toward the virtual object, outputting a guide indicator to the display (utilizing an arrow to instruct the user to view the desired gaze area (Salvo: 0113, 0048)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to guide the user to view the desired area, because this aids the user in focusing on the area of interest.
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Samec in view of Needham in view of Butler as applied to the claim(s) above, and further in view of Edwin et al. (PGPUB Document No. US 2019/0107719).
Regarding claim 14, the combined teachings do not expressly teach but Edwin teaches the method of claim 1, further comprising controlling, based on the vision prescription value of the user, a variable focus lens of the AR device (variable focus lens assembly is adjusted in accordance with a prescription (Edwin: 0127)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to utilize the variable focus lens teaching of Edwin, because this aids the user by enhancing visibility.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David H Chu whose telephone number is (571) 272-8079. The examiner can normally be reached M-F, 9:30 am-1:30 pm and 3:30-8:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H CHU/Primary Examiner, Art Unit 2616