DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 09/03/2025 have been fully considered but they are not persuasive.
Applicant argues that Yamada does not actually teach the claimed limitation of Claim 1, “…uses light from the plurality of lights to convert at least one of the at least one image and the at least one video viewed on the plurality of lenses from two-dimensional to three-dimensional.”
The Examiner respectfully disagrees.
Yamada specifically teaches, “the HMD 1 detects an identifying object such as a two-dimensional code (for example, a QR code) and performs a display control in which content information associated with the identifying object is displayed.” [Par 44], and further teaches “the HMD 1 of this embodiment includes a CCD (Charge Coupled Device) sensor 2 which constitutes an imaging unit for imaging at least a partial area within a viewing field of the user P, selects content information associated with the identifying object out of plural kinds of content information under the condition that the identifying object is present within an imaging area of the CCD sensor 2, and displays the selected content information.”, wherein the LED 3 illuminates the imaging area of the CCD sensor 2 [Par 45-47]. Therefore, the LED illuminates the imaging area of the CCD sensor, and the HMD selects information that is displayed on the image [Par 44-45]. This display information can be seen in Fig. 4b, wherein the information is displayed overlapping the image area A [Par 83-86]. The displaying of information over the image area adds a third visual dimension to the image (see the combined depth of displayed information and image in Fig. 4a), thus satisfying the claimed limitation.
Thus, the rejection of Claims 1-4 is sustained and made FINAL.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The rejection of Claim 1 under 112(b) is withdrawn.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Franklin (US 20200096775 A1) in view of Costello (US 20210326594 A1), and further in view of Yamada (US 20110158478 A1).
Re Claim 1, Franklin discloses on Fig. 1-2, a view enhancing eyewear (head-mounted display unit 110), comprising: a frame (housing 112) to be worn on at least a portion of a face of a user; a plurality of lenses (removable lenses 120) removably connected to at least a portion of the frame (See Fig. 2); at least one sensor disposed within at least a portion of the frame to detect light received thereon and send data therefrom (“meaning a system uses one or more image sensor(s) to capture images of the physical environment”) [Par 114].
But Franklin does not explicitly disclose, a control unit disposed within at least a portion of the frame to convert at least one of at least one image and at least one video viewed on the display from two-dimensional to three-dimensional in response to determining the data from the at least one sensor is from at least one of a television, a movie screen, a movie theater, a play, a concert, and a video, and a plurality of lights disposed on at least a portion of the frame to emit a beam of light to illuminate a surrounding area in response to being turned on, wherein the control unit turns on the plurality of lights in response to the at least one sensor detecting at least one of dark and low light conditions and uses light from the plurality of lights to convert at least one of the at least one image and the at least one video viewed on the plurality of lenses from two-dimensional to three-dimensional.
However, within the same field of endeavor, Costello teaches, on Fig. 1-3, that it is desirable in augmented reality for a control unit (electronic device 105) disposed within at least a portion of the frame to convert at least one of at least one image and at least one video (scene information and video content) [Par 34] viewed on the display from two-dimensional to three-dimensional (Fig. 3: display virtual content mimicking that displayed in the video) [Par 37] in response to determining the data from the at least one sensor (sensor 152) is from at least one of a television, a movie screen, a movie theater, a play, a concert, and a video (once scene information is identified electronic device 105 displays virtual content to supplement video content, thus one of ordinary skill could have device 105, toggle the displayed content based on the scene) [Par 34-35].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Franklin with Costello in order to generate virtual content that is supplemental to the video content, as taught by Costello [Par 35].
But Franklin in view of Costello does not explicitly disclose, a plurality of lights disposed on at least a portion of the frame to emit a beam of light to illuminate a surrounding area in response to being turned on, wherein the control unit turns on the plurality of lights in response to the at least one sensor detecting at least one of dark and low light conditions and uses light from the plurality of lights to convert at least one of the at least one image and the at least one video viewed on the plurality of lenses from two-dimensional to three-dimensional.
However, within the same field of endeavor, Yamada teaches, on Fig. 1-2 (See Fig. 1-2 for the invention and Fig. 4a-4b for the resulting image effects), that it is desirable in augmented reality to include a plurality of lights (LED 3 and image light generating part 20, BGR laser drivers) disposed on at least a portion of the frame to emit a beam of light to illuminate a surrounding area in response to being turned on (when sensor 8 detects brightness of surroundings, LED 3 turns on to illuminate imaging area of CCD sensor 2) [Par 47], wherein the control unit turns on the plurality of lights in response to the at least one sensor detecting at least one of dark and low light conditions and uses light from the plurality of lights (when sensor 8 detects brightness of surroundings, LED turns on to illuminate imaging area of CCD sensor 2) [Par 47] to convert at least one of the at least one image and the at least one video viewed on the plurality of lenses from two-dimensional to three-dimensional (Yamada teaches: “…displaying necessary and sufficient number of display information in an easily viewable manner even when a large number of identifying objects are detected.” and “performs a display control in which content information associated with the identifying object is displayed.”; see Fig. 4a for examples where image area A contains identifying objects, and content information is displayed overlapping the image area A, as shown in Fig. 4b, thus creating at least a slightly 3D image) [Par 12, 44, and Par 81-86].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Franklin in view of Costello with Yamada in order to display information in an easily viewable manner, as taught by Yamada [Par 12].
Re Claim 2, Franklin in view of Costello and Yamada discloses, the view enhancing eyewear of claim 1, and Franklin further discloses on Fig. 3A-3B, wherein each of the plurality of lenses corrects vision of the user (Removable lens element can include corrective lens 330) [Par 7 and 45].
Re Claim 3, Franklin in view of Costello and Yamada discloses, the view enhancing eyewear of claim 1, and Costello further discloses, wherein the control unit abstains from converting at least one of the at least one image and the at least one video (Costello teaches, “electronic device 105 may generate virtual content that is supplemental to the video content for display in an XR environment”, thus electronic device 105 can abstain from generating virtual content) [Par 35] viewed on the plurality of lenses from two-dimensional to three-dimensional (Fig. 3: display virtual content mimicking that displayed in the video) [Par 37] in response to determining the data from the at least one sensor is an object other than at least one of the television, the movie screen, the movie theater, the play, the concert, and the video (once scene information is identified electronic device 105 displays virtual content to supplement video content, thus one of ordinary skill could have device 105, toggle the displayed content based on the scene) [Par 34-35].
Re Claim 4, Franklin in view of Costello and Yamada discloses, the view enhancing eyewear of claim 1, and Costello further discloses, wherein the control unit (electronic device 105) determines the data from the at least one sensor (sensor 152 or camera 150) is from at least one of the television, the movie screen, the movie theater, the play, the concert, and the video based on light received therefrom (electronic device 105 may identify scene information and attributes, using titles, IDs, etc., directly from images of the video content) [Par 33-34].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Edwin (US 20200018968 A1) teaches an augmented reality heads-up display.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAY ALEXANDER DEAN whose telephone number is (571)272-4027. The examiner can normally be reached Monday-Friday 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bumsuk Won, can be reached at (571) 272-2713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAY ALEXANDER DEAN/ Examiner, Art Unit 2872
/BUMSUK WON/ Supervisory Patent Examiner, Art Unit 2872