DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim Comparison Table
Claims of Application No. 19/208,518
Claims of U.S. Patent No. 12,333,069
1. A system comprising: a wearable head device; one or more sensors configured to detect visible light of a first wavelength; and one or more processors configured to perform a method comprising: detecting, via the one or more sensors, light reflected from an eye of a user of the wearable head device, the light comprising light of the first wavelength, concurrently with an illumination of the eye by a second light; determining, based on the detected light reflected from the eye, a focus of content presented to the eye via a display of the wearable head device; determining whether the focus of the content is below an image quality threshold; in accordance with a determination that the focus of the content is below the image quality threshold, adjusting the focus of the content; and in accordance with a determination that the focus of the content is not below the image quality threshold, forgoing adjusting the focus of the content.
1. A wearable head device comprising: a see-through display configured to present content to an eye of a user of the wearable head device, such that presenting the content to the eye produces an eye glint; a combiner; one or more sensors configured to capture an image of the eye glint; and one or more processors, wherein: the eye glint comprises light of a first wavelength, the combiner is configured to absorb light of the first wavelength, the one or more sensors are configured to detect light of the first wavelength, the content comprises visible light, and the one or more processors are configured to perform a method comprising: capturing, via the one or more sensors, an image of the eye glint, wherein the image of the eye glint comprises a portion of the content reflected from the eye; determining, based on the image of the eye glint, a focus of the content; determining whether the focus of the content is below an image quality threshold; in accordance with a determination that the focus of the content is below the image quality threshold, adjusting the focus of the content; and in accordance with a determination that the focus of the content is not below the image quality threshold, forgoing adjusting the focus of the content.
2. The system of claim 1, the method further comprising: determining a gaze direction of the user based on the light reflected from the eye.
2. The wearable device of claim 1, the method further comprising: capturing, via the one or more sensors, an image of the iris of the eye; and determining a gaze location of the user based on the image of the eye glint and further based on the image of the iris.
3. The system of claim 2, the method further comprising: capturing, via the one or more sensors, an image of the iris of the eye, wherein the gaze location is determined further based on the image of the iris.
2. The wearable device of claim 1, the method further comprising: capturing, via the one or more sensors, an image of the iris of the eye; and determining a gaze location of the user based on the image of the eye glint and further based on the image of the iris.
4. The system of claim 1, further comprising a combiner configured to transmit the visible light.
4. The wearable head device of claim 1, wherein the combiner is further configured to transmit visible light.
5. The system of claim 1, further comprising a combiner configured to absorb at least a portion of the visible light.
1. A wearable head device comprising…; a combiner; …, the combiner is configured to absorb light of the first wavelength, ….
6. The system of claim 1, wherein the one or more sensors are further configured to receive the light reflected from the eye via a filter configured to transmit light of the first wavelength.
5. The wearable head device of claim 1, wherein the one or more sensors are configured to receive the eye glint via a filter configured to transmit light of the first wavelength.
7. The system of claim 1, further comprising a filter configured to limit transmission of environmental light of the first wavelength.
6. The wearable head device of claim 1, wherein the combiner comprises a filter configured to limit transmission of environmental light of the first wavelength.
8. The system of claim 1, wherein the one or more sensors comprise an infrared camera.
7. The wearable head device of claim 1, wherein the one or more sensors comprise an infrared camera.
9. A method comprising: detecting, via one or more sensors, light reflected from an eye of a user of a wearable head device, wherein: the eye is illuminated by a light, the light reflected from the eye comprises visible light of a first wavelength, and the one or more sensors are configured to detect visible light of the first wavelength; determining, based on the detected light reflected from the eye, a focus of content presented to the eye via a display of the wearable head device; determining whether the focus of the content is below an image quality threshold; in accordance with a determination that the focus of the content is below the image quality threshold, adjusting the focus of the content; and in accordance with a determination that the focus of the content is not below the image quality threshold, forgoing adjusting the focus of the content.
8. A method comprising: presenting, via a see-through display of a wearable head device, content to an eye of a user of the wearable head device, such that presenting the content to the eye produces an eye glint, wherein the content comprises visible light; capturing, via one or more sensors of the wearable head device, an image of the eye glint, wherein the image of the eye glint comprises a portion of the content reflected from the eye; determining, based on the image of the eye glint, a focus of the content; determining whether the focus of the content is below an image quality threshold; in accordance with a determination that the focus of the content is below the image quality threshold, adjusting the focus of the content; and in accordance with a determination that the focus of the content is not below the image quality threshold, forgoing adjusting the focus of the content, wherein: the eye glint comprises light of a first wavelength, the wearable head device comprises a combiner, the combiner is configured to absorb light of the first wavelength, and the one or more sensors are configured to detect light of the first wavelength.
10. The method of claim 9, further comprising: determining a gaze direction of the user based on the light reflected from the eye.
9. The method of claim 8, further comprising: capturing, via the one or more sensors, an image of the iris of the eye; and determining a gaze location of the user based on the image of the eye glint and further based on the image of the iris.
11. The method of claim 10, further comprising: capturing, via the one or more sensors, an image of the iris of the eye, wherein the gaze location is determined further based on the image of the iris.
9. The method of claim 8, further comprising: capturing, via the one or more sensors, an image of the iris of the eye; and determining a gaze location of the user based on the image of the eye glint and further based on the image of the iris.
12. The method of claim 9, further comprising transmitting, via a combiner, the visible light.
8. A method comprising: …, the wearable head device comprises a combiner, the combiner is configured to absorb light of the first wavelength, and the one or more sensors are configured to detect light of the first wavelength.
13. The method of claim 9, further comprising absorbing, via a combiner, at least a portion of the visible light.
8. A method comprising: …comprises a combiner, the combiner is configured to absorb light of the first wavelength,…
14. The method of claim 9, further comprising receiving, via the one or more sensors, the light reflected from the eye via a filter configured to transmit light of the first wavelength.
12. The method of claim 8, wherein the one or more sensors are configured to receive the eye glint via a filter configured to transmit light of the first wavelength.
15. The method of claim 9, further comprising limiting, via a filter, transmission of environmental light of the first wavelength.
13. The method of claim 8, wherein the combiner comprises a filter configured to limit transmission of environmental light of the first wavelength.
16. The method of claim 9, wherein the one or more sensors comprise an infrared camera.
14. The method of claim 8, wherein the one or more sensors comprise an infrared camera.
17. A non-transitory computer-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform a method comprising: detecting, via one or more sensors, light reflected from an eye of a user of a wearable head device, wherein: the eye is illuminated by a light, the light reflected from the eye comprises visible light of a first wavelength, and the one or more sensors are configured to detect visible light of the first wavelength; determining, based on the detected light reflected from the eye, a focus of content presented to the eye via a display of the wearable head device; determining whether the focus of the content is below an image quality threshold; in accordance with a determination that the focus of the content is below the image quality threshold, adjusting the focus of the content; and in accordance with a determination that the focus of the content is not below the image quality threshold, forgoing adjusting the focus of the content.
15. A non-transitory computer-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform a method, comprising: presenting, via a see-through display of a wearable head device, content to an eye of a user of the wearable head device, such that presenting the content to the eye produces an eye glint, wherein the content comprises visible light; capturing, via one or more sensors of the wearable head device, an image of the eye glint, wherein the image of the eye glint comprises a portion of the content reflected from the eye; determining, based on the image of the eye glint, a focus of the content; determining whether the focus of the content is below an image quality threshold; in accordance with a determination that the focus of the content is below the image quality threshold, adjusting the focus of the content; and in accordance with a determination that the focus of the content is not below the image quality threshold, forgoing adjusting the focus of the content, wherein: the eye glint comprises light of a first wavelength, the wearable head device comprises a combiner, the combiner is configured to absorb light of the first wavelength, and the one or more sensors are configured to detect light of the first wavelength.
18. The non-transitory computer-readable medium of claim 17, the method further comprising: determining a gaze direction of the user based on the light reflected from the eye.
16. The non-transitory computer-readable medium of claim 15, wherein the method further comprises: capturing, via the one or more sensors, an image of the iris of the eye; and determining a gaze location of the user based on the image of the eye glint and further based on the image of the iris.
19. The non-transitory computer-readable medium of claim 17, the method further comprising receiving, via the one or more sensors, the light reflected from the eye via a filter configured to transmit light of the first wavelength.
19. The non-transitory computer-readable medium of claim 15, wherein the one or more sensors are configured to receive the eye glint via a filter configured to transmit light of the first wavelength.
20. The non-transitory computer-readable medium of claim 17, wherein the one or more sensors comprise an infrared camera.
20. The non-transitory computer-readable medium of claim 15, wherein the one or more sensors comprise an infrared camera.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 4-9, 12-16, 19, and 20 of U.S. Patent No. 12,333,069. Although the claims at issue are not identical, they are not patentably distinct from each other because, as shown in the claim comparison table above, each claim of the instant application is similar in scope to a corresponding claim of U.S. Patent No. 12,333,069: claim 1 corresponds to patent claim 1; claim 2 to patent claim 2; claim 3 to patent claim 2; claim 4 to patent claim 4; claim 5 to patent claim 1; claim 6 to patent claim 5; claim 7 to patent claim 6; claim 8 to patent claim 7; claim 9 to patent claim 8; claims 10 and 11 to patent claim 9; claims 12 and 13 to patent claim 8; claim 14 to patent claim 12; claim 15 to patent claim 13; claim 16 to patent claim 14; claim 17 to patent claim 15; claim 18 to patent claim 16; claim 19 to patent claim 19; and claim 20 to patent claim 20.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Clavin et al. (US 2013/0147686) teaches a system comprising: a wearable head device (Fig. 2); one or more sensors configured to detect visible light of a first wavelength (para. [0054]: one camera is used to obtain images using visible light); and one or more processors configured to perform a method comprising: detecting, via the one or more sensors, light reflected from an eye of a user of the wearable head device, the light comprising light of the first wavelength (para. [0140]).
Wheeler et al. (US 2013/0128364) teaches a head-mounted display (HMD) that may include an eye-tracking system, an HMD-tracking system, and a display configured to display virtual images. The virtual images may present an augmented reality to a wearer of the HMD, and the virtual images may adjust dynamically based on HMD-tracking data. However, position and orientation sensor errors may introduce drift into the displayed virtual images. By incorporating eye-tracking data, the drift of virtual images may be reduced. In one embodiment, the eye-tracking data could be used to determine a gaze axis and a target object in the displayed virtual images. The HMD may then move the target object towards a central axis. The HMD may also record data based on the gaze axis, central axis, and target object to determine a user interface preference. The user interface preference could be used to adjust similar interactions with the HMD.
Hillis et al. (US 2012/0236257) teaches methods and systems for improving and enhancing vision. Adjustable lenses or optical systems may be used to provide adaptive vision modification. In some embodiments, vision modification may be responsive to the current state of the user's visual system. Certain embodiments provide correction of the subject's near and far vision. Other embodiments provide enhancement of vision beyond the physiological ranges of focal length or magnification.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PREMAL PATEL whose telephone number is (571)270-5892. The examiner can normally be reached Mon-Fri 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MATTHEW EASON, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PREMAL R PATEL/Primary Examiner, Art Unit 2624