DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/5/2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 1 and 11 recite “adjusting at least one parameter related to a quality of a projected image based on the information regarding the first distance, the information regarding the third distance and the information corresponding to the visual acuity of the user; and projecting, using the projector, the projected image based on the adjusted at least one parameter; projecting, using the projector, light corresponding to a user interface (UI); and based on an input corresponding to one of a plurality of UI items included in the user interface being received, adjusting the at least one parameter based on the input.” Referring to the specification of the Applicant’s publication, the description of Fig. 3, in conjunction with Fig. 2, discloses that the system can adjust at least one parameter based on the distance information and the visual acuity of the user, and the description of Figs. 5-6, in conjunction with Fig. 2, discloses that the system will adjust a parameter based on input from the user interface. However, the claim language recites “based on an input corresponding to one of a plurality of UI items included in the user interface being received, adjusting the at least one parameter based on the input.” Although Fig. 2 shows both the distance acquisition module with respect to elements 11-14 and the user input module with respect to elements 18-19, the specification describes the modules as functioning independently of each other to perform the task of adjusting the parameter. Since the modules appear to function independently of each other, it is unclear when or how the system would adjust “the at least one parameter” using the user interface when the “parameter” is being adjusted by determining the distance of the user.
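For illustration only, the following minimal sketch (hypothetical names and logic, not drawn from the Applicant’s disclosure) shows the two independent adjustment paths described above and why the interaction between them is unresolved:

```python
# Hypothetical sketch; names and logic are illustrative only.
class ProjectionController:
    def __init__(self) -> None:
        self.brightness = 0.5  # "the at least one parameter" (normalized 0..1)

    def adjust_from_distances(self, first_m: float, third_m: float,
                              acuity: float) -> None:
        # Path 1 (Fig. 3): distance- and acuity-driven adjustment.
        self.brightness = min(1.0, 0.1 * (first_m + third_m) / max(acuity, 0.1))

    def adjust_from_ui(self, slider_value: float) -> None:
        # Path 2 (Figs. 5-6): UI-driven adjustment of the same parameter.
        self.brightness = slider_value

ctrl = ProjectionController()
ctrl.adjust_from_distances(first_m=2.0, third_m=3.0, acuity=1.0)
ctrl.adjust_from_ui(0.8)  # silently overwrites the distance-based value;
                          # the claims do not resolve which path controls
```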
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 8, 11-12, 14, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Shin (US 2015/0189248) in view of Santiago (US 2023/0119707) and Zeng (US 2023/0043455).
Regarding claims 1 and 11, Shin teaches an electronic device comprising: a projection unit including a projector (Fig. 4); at least one sensor (Fig. 4, image sensors #1-#n); a memory configured to store at least one instruction (memory 123); and at least one processor (processor 121), comprising processing circuitry, individually and/or collectively, configured to execute the at least one instruction, wherein the at least one processor, individually and/or collectively, is configured to: obtain information regarding a first distance between the electronic device and a screen through the at least one sensor ([0035-0036][0038] teach the system may obtain the projection surface position or location through sensors, as can be seen in Figs. 6 and 8); obtain information regarding a second distance between the electronic device and a user through the at least one sensor ([0035-0036][0038] teach the system may obtain the user position or location through sensors, as can be seen in Figs. 6 and 8); obtain information regarding a third distance between the user and the screen based on the information regarding the first distance and the information regarding the second distance ([0040-0041], as can be seen in Figs. 6 and 8); adjust at least one parameter related to a quality of a projected image based on the information regarding the first distance, the information regarding the third distance and information corresponding to the visual acuity of the user; and control the projection unit to emit light corresponding to the image based on the adjusted at least one parameter ([0012] teaches the system will perform color correction and brightness correction on the input image via the image correction unit 1070 [0051][0061]). Although Shin teaches the limitations as discussed above, Shin does not explicitly teach obtaining information corresponding to visual acuity of a user stored in the memory.
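For illustration only, a minimal sketch of the distance relationship summarized above, assuming a collinear device/user/screen geometry (the function name and the layout assumption are illustrative, not Shin’s actual computation):

```python
# Deriving the user-to-screen ("third") distance from the device-to-screen
# ("first") and device-to-user ("second") distances, assuming all three
# points lie on the projection axis; hypothetical sketch only.

def third_distance_m(first_m: float, second_m: float,
                     user_behind_device: bool = True) -> float:
    """User-to-screen distance for a collinear device/user/screen layout."""
    if user_behind_device:
        return first_m + second_m      # user sits behind the projector
    return abs(first_m - second_m)     # user sits between device and screen

print(third_distance_m(3.0, 1.0))                            # 4.0 m
print(third_distance_m(3.0, 1.0, user_behind_device=False))  # 2.0 m
```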
However, in the field of displaying an image to users, Santiago teaches a method for displaying an image to a user on an HMD/mobile device 108, where the system obtains information corresponding to visual acuity of a user stored in the memory ([0091-0092]; in [0092] Santiago teaches that if a user is color blind, the system may adjust the color or contrast information that forms the presentation based on multiple vision impairments of a user 102, and that the adjustments may be provided based on preferences set and/or stored as user information in the memory of the mobile device 108).
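For illustration only, a minimal sketch of persisting and retrieving vision-related preferences (hypothetical file name and schema; Santiago [0092] describes stored user vision information only generally):

```python
# Hypothetical schema and file name; illustrative only.
import json

def save_vision_profile(path: str, profile: dict) -> None:
    """Persist vision-related user preferences (cf. Santiago [0092])."""
    with open(path, "w") as f:
        json.dump(profile, f)

def load_vision_profile(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

save_vision_profile("user_vision.json",
                    {"color_blind": True, "contrast_boost": 1.3})
prefs = load_vision_profile("user_vision.json")
if prefs.get("color_blind"):
    contrast_gain = prefs.get("contrast_boost", 1.0)  # applied downstream
```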
Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Shin with the method of storing user preferences as taught by Santiago. This combination would provide a user with an improved viewing experience by increasing the ability to adjust to user preferences. Although the combination teaches the limitations as discussed above, including how the projector is controlled to emit light of an image, the combination fails to teach that the image corresponds to a user interface (UI); and based on an input corresponding to one of a plurality of UI items included in the user interface being received, adjusting the at least one parameter based on the input.
However, in the field of controlling parameters according to user preference, Zeng teaches a device that displays an image corresponding to a user interface (UI) and, based on an input corresponding to one of a plurality of UI items included in the user interface being received, adjusts the at least one parameter based on the input ([0039] teaches that Fig. 3 represents an example of a GUI 300 with controls through which a user may adjust a global color preference; [0040-0042] discuss different types of user controls for global adjustment as seen in Fig. 3; [0030] teaches the interface controls may be a touch screen with soft buttons, sliders (bars as seen in Fig. 3), etc.).
Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Shin with the method of storing user preferences as taught by Santiago, and the method of adjusting user preferences as taught by Zeng. This combination would provide a user with an improved viewing experience by increasing the ability to adjust to user preferences as taught by Zeng.
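For illustration only, a minimal sketch of a UI item driving a parameter update in the spirit of Zeng’s GUI controls (Fig. 3); the parameter names and slider range are hypothetical:

```python
# Hypothetical parameter names and slider range; illustrative only.
PARAMS = {"contrast": 1.0, "sharpness": 1.0, "color_saturation": 1.0}

def on_ui_item_changed(item_name: str, value: float) -> None:
    """Adjust 'the at least one parameter' based on the received UI input."""
    if item_name not in PARAMS:
        raise KeyError(f"unknown UI item: {item_name}")
    PARAMS[item_name] = max(0.0, min(2.0, value))  # clamp to slider range

on_ui_item_changed("contrast", 1.4)  # e.g., the user drags a contrast slider
```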
Regarding claims 2 and 12, Shin teaches wherein the at least one parameter includes a parameter regarding at least one of contrast, sharpness, color, or contrast enhancement of the projected image ([0012][0051]).
Regarding claims 4, 14, and 16, Santiago teaches wherein the at least one processor, individually and/or collectively, is configured to: obtain information regarding contrast sensitivity perceivable by the user corresponding to the information regarding the visual acuity of the user stored in the memory, and adjust the at least one parameter based on the information regarding the contrast sensitivity ([0091-0092]; in [0092] Santiago teaches that if a user is color blind, the system may adjust the color or contrast information that forms the presentation based on multiple vision impairments of a user 102, with the adjustments provided based on preferences set and/or stored as user information in the memory of the mobile device 108), and Shin teaches adjusting at least one parameter based on the information regarding the third distance ([0012] teaches the system will perform color correction and brightness correction on the input image via the image correction unit 1070 [0051][0061]).
Regarding claim 8, Shin teaches adjusting the at least one parameter based on the information regarding the first distance and the information regarding the third distance ([0012] teaches the system will perform color correction and brightness correction on the input image via the image correction unit 1070 [0051][0061]), which includes adjusting contrast and color of the projected image ([0012][0051]); and Santiago teaches that the information corresponding to the visual acuity of the user includes adjusting contrast and color, and that the visual acuity of the user includes contrast sensitivity perceivable by the user ([0091-0092]; in [0092] Santiago teaches that if a user is color blind, the system may adjust the color or contrast information that forms the presentation based on multiple vision impairments of a user 102, with the adjustments provided based on preferences set and/or stored as user information in the memory of the mobile device 108). Although it is well known that adjusting color and/or contrast could impact the sharpness of an image, Shin and Santiago do not explicitly teach adjusting the sharpness of an image. However, Zeng teaches a method where contrast, sharpness, and color of an image can be adjusted (Fig. 3, [0039-0042]).
Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Shin with the method of storing user preferences as taught by Santiago, and the method of adjusting user preferences as taught by Zeng. This combination would provide a user with an improved viewing experience by increasing the ability to adjust to user preferences as taught by Zeng.
Regarding claim 17, Santiago teaches receiving a user input from the user relating to identification of an object in the image; determining the information regarding the visual acuity of the user based on the received user input; and storing the information regarding the visual acuity of the user in the memory ([0091-0092]; in [0092] Santiago teaches that if a user is color blind, the system may adjust the color or contrast information that forms the presentation based on multiple vision impairments of a user 102, with the adjustments provided based on preferences set and/or stored as user information in the memory of the mobile device 108; it is apparent the system provides a means for a user to input vision impairments), and Shin teaches projecting an image to the user ([0008]).
Regarding claim 18, Santiago teaches receiving a user input from the user relating to identification of an object in the image; determining the information regarding the visual acuity of the user based on the received user input; storing the information regarding the visual acuity of the user in the memory ([0091-0092]; in [0092] Santiago teaches that if a user is color blind, the system may adjust the color or contrast information that forms the presentation based on multiple vision impairments of a user 102, with the adjustments provided based on preferences set and/or stored as user information in the memory of the mobile device 108; it is apparent the system provides a means for a user to input vision impairments); obtaining information regarding contrast sensitivity perceivable by the user based on the information regarding the visual acuity of the user; and adjusting the at least one parameter based on the information regarding the contrast sensitivity ([0091-0092], as discussed above). Shin teaches adjusting at least one parameter based on the information regarding the third distance ([0012] teaches the system will perform color correction and brightness correction on the input image via the image correction unit 1070 [0051][0061]) and projecting an image to the user ([0008]).
Claims 3, 5-6, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Shin (US 2015/0189248) in view of Santiago (US 2023/0119707), Zeng (US 2023/0043455), and Solomon (US 2007/0099700).
Regarding claims 3 and 13, Shin teaches the limitations as discussed above, but fails to teach wherein at least one processor, individually and/or collectively, is configured to: obtain information regarding a size of the projected image based on the information regarding the first distance; obtain information regarding brightness of the projected image based on the information regarding the size of the projected image; and adjust the at least one parameter based on the information regarding the brightness of the projected image.
However, in the field of projecting an image, Solomon teaches wherein at least one processor, individually and/or collectively, is configured to: obtain information regarding a size of the projected image based on the information regarding the first distance; obtain information regarding brightness of the projected image based on the information regarding the size of the projected image; and adjust the at least one parameter based on the information regarding the brightness of the projected image ([0030][0031] teach the system will project a test shape or figure on the test screen; the size and shape of the test shape will vary depending upon the projection distance and angle, and the system will adjust the focus or other parameters (e.g., brightness) accordingly).
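For illustration only, a back-of-envelope sketch of the size/brightness relationship underlying this adjustment: projected area grows with the square of the throw distance, so screen luminance falls accordingly (the throw ratio, lumen output, and aspect ratio are hypothetical values, not taken from Solomon):

```python
# Hypothetical throw ratio, lumen output, and aspect ratio; ideal optics.
def image_width_m(throw_distance_m: float, throw_ratio: float = 1.5) -> float:
    """Projected image width grows linearly with throw distance."""
    return throw_distance_m / throw_ratio

def screen_luminance(throw_distance_m: float, lumens: float = 1000.0,
                     aspect: float = 16 / 9) -> float:
    """Illuminance on the screen (lux), assuming lossless projection."""
    w = image_width_m(throw_distance_m)
    area_m2 = w * (w / aspect)
    return lumens / area_m2

# Doubling the distance quarters the luminance, motivating a brightness
# parameter adjustment based on projected image size:
print(screen_luminance(2.0), screen_luminance(4.0))
```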
Therefore, it would have been obvious to one of ordinary skill in the art to combine the system as taught by Shin with the method of storing user preferences as taught by Santiago, the method of adjusting user preferences as taught by Zeng, and the method of projection as taught by Solomon. This combination would provide an improved viewing experience for the viewers/users.
Regarding claims 5 and 15, Shin teaches the limitations as discussed above, but fails to teach wherein at least one processor, individually and/or collectively, is configured to: based on a request for providing an image being received, control the projection unit to emit light corresponding to the image; based on the light emitted through the projection unit being projected onto a screen, identify a size and shape of the projected image projected onto the screen through the at least one sensor; perform a keystone correction on the projected image based on the identified size and shape; obtain information regarding a size of the projected image corrected according to the keystone correction; and further adjust the at least one parameter based on the information regarding the size of the corrected projected image.
However, in the field of projecting an image, Solomon teaches wherein at least one processor, individually and/or collectively, is configured to: based on a request for providing an image being received, control the projection unit to emit light corresponding to the image; based on the light emitted through the projection unit being projected onto a screen, identify a size and shape of the projected image projected onto the screen through the at least one sensor; perform a keystone correction on the projected image based on the identified size and shape; obtain information regarding a size of the projected image corrected according to the keystone correction; and further adjust the at least one parameter based on the information regarding the size of the corrected projected image ([0030][0031] teach the system will project a test shape or figure on the test screen; the size and shape of the test shape will vary depending upon the projection distance and angle, and the system will adjust the focus or other parameters (e.g., brightness) accordingly; the device adjusts the keystone correction of the projected image).
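For illustration only, a minimal sketch of a standard keystone (homography) correction step using OpenCV; the corner coordinates are hypothetical and the approach is a generic one, not Solomon’s specific implementation:

```python
# Generic homography-based keystone correction; corner values hypothetical.
import cv2
import numpy as np

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in projected image

# Corners of the trapezoidal projection detected on the screen (pixels):
detected = np.float32([[80, 40], [1210, 10], [1250, 700], [40, 690]])
# Target rectangle after correction:
target = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])

H = cv2.getPerspectiveTransform(detected, target)
corrected = cv2.warpPerspective(frame, H, (1280, 720))

# The size of the corrected image can then feed the further parameter
# adjustment recited in the claims (e.g., raising brightness if the
# correction shrank the usable area).
```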
Therefore, it would have been obvious to one of ordinary skill in the art to combine the system as taught by Shin with the method of projection as taught by Solomon. This combination would provide an improved viewing experience for the viewers/users.
Regarding claim 6, Solomon teaches wherein at least one processor, individually and/or collectively, is configured to: further adjust the at least one parameter based on the information regarding the size of the corrected projected image to compensate for degradation in resolution of the projected image based on the size of the projected image being reduced by the keystone correction ([0030][0031] teach the system will project a test shape or figure on the test screen; the size and shape of the test shape will vary depending upon the projection distance and angle, and the system will adjust the focus or other parameters (e.g., brightness) accordingly; the device adjusts the keystone correction of the projected image).
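For illustration only, a minimal sketch of compensating for resolution loss when keystone correction shrinks the image, using generic unsharp masking (an assumed compensation technique, not Solomon’s specific method):

```python
# Assumed compensation via unsharp masking; not Solomon's specific method.
import cv2
import numpy as np

def compensate_sharpness(img: np.ndarray, area_ratio: float) -> np.ndarray:
    """area_ratio = corrected area / original area (< 1 means shrunk)."""
    if area_ratio >= 1.0:
        return img
    amount = min(1.5, 2.0 * (1.0 - area_ratio))   # stronger when smaller
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

img = np.full((720, 1280, 3), 128, dtype=np.uint8)
sharpened = compensate_sharpness(img, area_ratio=0.8)
```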
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Shin (US 2015/0189248) in view of Santiago (US 2023/0119707), Zeng (US 2023/0043455), and Walsh (US 2021/0386285).
Regarding claim 9, Shin in view of Santiago and Zeng teaches the limitations as discussed above, but fails to teach wherein the user interface includes a graph representing contrast sensitivity corresponding to the visual acuity of the user and a plurality of UI items representing each of a plurality of locations of the graph.
However, in the field of determining the visual acuity of a user, Walsh teaches a method where a user interface includes a graph representing contrast sensitivity corresponding to the visual acuity of the user and a plurality of UI items representing each of a plurality of locations of the graph (Fig. 43 and the respective description describe a contrast sensitivity test where the system displays a stimulus and/or image having bars 4301 [0445]; the system instructs the subject to either press a button or respond to the letter shown in the image when perceived by the subject [0446]; from this, it is reasonable that the letters combined with the bars displayed in 4301 would be the UI, since they require the user to interact once the letter is recognized).
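For illustration only, a minimal sketch of a descending contrast sweep in the spirit of Walsh’s test (Fig. 43), where stimuli are presented at decreasing contrast and the user’s responses set a threshold; stimulus rendering is omitted and all names are hypothetical:

```python
# Toy descending-contrast sweep; all names hypothetical, rendering omitted.
def contrast_threshold(user_sees) -> float:
    """user_sees: callback(contrast) -> bool, e.g. a button press [0446]."""
    contrast = 1.0
    while contrast > 0.01:
        if not user_sees(contrast):
            return min(1.0, contrast * 2)  # last level still perceived
        contrast /= 2
    return contrast

# Simulated subject who stops perceiving stimuli below 6% contrast:
threshold = contrast_threshold(lambda c: c >= 0.06)
print(f"estimated contrast threshold: {threshold:.3f}")
```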
Therefore, it would have been obvious to one of ordinary skill in the art to combine the device as taught by Shin with the method of storing user preferences as taught by Santiago, the method of adjusting user preferences as taught by Zeng, and the method of determining contrast sensitivity of a user as taught by Walsh. This combination would provide a user with an improved viewing experience by detecting the ability of a user to view an image as taught by Walsh.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRE L MATTHEWS whose telephone number is (571)270-5806. The examiner can normally be reached Mon-Fri 9:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDRE L MATTHEWS/ Primary Examiner, Art Unit 2621