DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. See MPEP § 606.
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/14/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 05/27/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 8-9, 11-13 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (US 20150091943 A1, hereinafter “LEE”) in view of Lin et al. (US 20200186775 A1, hereinafter “Lin”).
Regarding claim 1. LEE discloses an electronic apparatus (0042; Figure 2; ‘wearable display device 200’) comprising:
a display (Figure 2; ‘a display unit 217’);
at least one camera (Figure 2; ‘a camera unit 213’);
a memory storing instructions (Figure 2; ‘a storage unit 214’); and
at least one processor (Figure 2; ‘an image processing unit 216’), comprising processing circuitry;
wherein at least one processor, individually and/or collectively, is configured to execute the instructions, and to cause the electronic apparatus to:
display a three-dimensional (3D) image including an object positioned in a 3D virtual space through the display (0058; Figure 2; “[0058] Additionally, the display unit 217 outputs a video signal of a content that is being executed in the wearable display device 200. The content may be received from any one of the external digital device 250, the camera unit 213, and the storage unit 214. The display unit 217 may correspond to a liquid crystal display, a thin film transistor liquid crystal display, a light emitting diode, an organic light emitting diode, a flexible display, a three-dimensional display (3D display), and so on. Additionally, the display unit 217 may also correspond to an empty space (or the air) or a transparent glass that can display a virtual display screen. More specifically, any object that can visually deliver video signals to a human being may be used as the display unit 217.”),
identify movement information corresponding to at least one of a user head or eyes in a 3D space where a user is positioned based on a captured image acquired by the camera (0077, 0082 and 0058; Claim 1; Figures 2-7; “[0077] In order to do so, the camera unit 213 captures an image (or takes a picture) of the user's face and outputs the captured image to the controller 211. The controller 211 extracts an image of the user's eye (i.e., eye image) from the image of the face (i.e., face image), which is captured by the camera unit 213, and then calculates a center point of the pupil from the extracted eye image.”), and
identify position movement information corresponding to the 3D virtual space based on the movement information (0077, 0082 and 0058; Claim 1; Figures 2-7).
LEE, however, does not disclose controlling the display to display the object by changing the display position and depth of the object within the 3D virtual space included in the 3D image based on the position movement information.
Lin, however, in the same field of endeavor, shows an electronic apparatus (Figure 3) configured to:
control the display to display the object by changing the display position and depth of the object within the 3D virtual space included in the 3D image based on the position movement information (0045; Figure 3-4 and 8; Claim 8; “[0045] In process block 164, the processor 44 determines an interpupillary distance of a user. For example, the processor 44 may receive pupil position information from the pupil tracking sensor 26 shown in FIG. 3, and instruct the interpupillary distance determination engine 48 to determine the interpupillary distance (e.g., the IPD as shown in FIG. 4) based on the pupil position information. … The interpupillary distance determination engine 48 may dynamically determine or estimate the interpupillary distance of the user 10 as the interpupillary distance may change as the user 10 views different objects (virtual or real). ...”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate Lin's pupil tracking sensor, configured to detect and provide an indication of the user's pupil position, and Lin's convergence adjustment system into LEE's wearable display device, which can be worn on the user's face, in order to yield the predictable result of providing adjustments for displaying the virtual object of the virtual image and an indication that the virtual object is changing its virtual depth based on the interpupillary distance (see Lin: claim 1).
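For illustration only, and not as a characterization of the cited disclosures: the technique mapped above (locating a pupil center in a captured eye image per LEE at 0077, and deriving an interpupillary distance from two tracked pupil centers per Lin at 0045) can be sketched in Python. The threshold value, image dimensions, and function names below are illustrative assumptions, not elements of either reference.

    import numpy as np

    def pupil_center(eye_image: np.ndarray, threshold: int = 50) -> tuple:
        """Estimate a pupil center as the centroid of the darkest pixels
        in a grayscale eye image (cf. LEE's center-point calculation).
        The threshold is an illustrative assumption."""
        ys, xs = np.nonzero(eye_image < threshold)  # pupil pixels are darkest
        if xs.size == 0:
            raise ValueError("no pupil pixels found below threshold")
        return float(xs.mean()), float(ys.mean())

    def interpupillary_distance(left_center, right_center) -> float:
        """Euclidean distance between two pupil centers (cf. Lin's IPD)."""
        return float(np.hypot(right_center[0] - left_center[0],
                              right_center[1] - left_center[1]))

    # Synthetic 8-bit eye image: dark radius-9 "pupil" disc on a bright field
    img = np.full((60, 80), 200, dtype=np.uint8)
    yy, xx = np.mgrid[0:60, 0:80]
    img[(yy - 30) ** 2 + (xx - 40) ** 2 < 81] = 20
    print(pupil_center(img))  # approximately (40.0, 30.0)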
Regarding claim 2. LEE discloses the apparatus as claimed in claim 1, wherein at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to:
identify first movement distance information and first movement direction information corresponding to the position movement information (0064, 0068, 0074 and 0089; Figures 3-7; Claims 4-5), and
control the display to display the object by changing the display position and depth of the object in the 3D virtual space based on the first movement distance information and the first movement direction information (0064, 0068, 0074 and 0089; Figures 3-7; Claims 4-5).
Regarding claim 3. LEE discloses the apparatus as claimed in claim 2, wherein at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to:
identify second movement distance information corresponding to a difference between a first position and a second position and second movement direction information from the first position to the second position based on a position of the at least one of the user head or eyes in the captured image acquired by the camera being changed from the first position to the second position (0064, 0068, 0074 and 0089; Figures 3-7; Claims 4-5), and
identify the first movement distance information and the first movement direction information based on the second movement distance information and the second movement direction information (0064, 0068, 0074 and 0089; Figures 3-7; Claims 4-5).
Regarding claim 8. Claim 8 recites limitations similar to those treated in the rejection of claim 1 above, is met by the references as discussed above, and is rejected for the same reasons of obviousness. In particular, it could be readily derived that, when the user's gaze is directed at one of the virtual objects in the first layer or one of the virtual objects in the second layer, the system includes a controller that moves the virtual object in the layer at which the user's gaze is directed.
Regarding claim 9. Lin further shows the apparatus as claimed in claim 1, wherein the camera includes a plurality of cameras spaced apart from each other by a specified distance, and
at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to:
identify disparity information based on first and second captured images acquired by the plurality of cameras (0045; Figures 3, 4 and 8), and
identify the movement information corresponding to the at least one of the user head or eyes in the 3D space where the user is positioned based on the disparity information (0045; Figures 3, 4 and 8; wherein the processor (44) determines the user's interpupillary distance, receives pupil position information from the pupil tracking sensor (26), and instructs the interpupillary distance determination engine (48) to determine the interpupillary distance based on the pupil position information).
The motivation to combine LEE with Lin set forth in the rejection of claim 1 applies equally to the combination of the references in the rejection of claim 9.
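For illustration only: the disparity information recited in claim 9 follows the standard rectified-stereo relation Z = f x B / d, where f is the focal length in pixels, B is the camera baseline (the "specified distance" between the cameras), and d is the disparity between the first and second captured images. The parameter values in the sketch below are illustrative assumptions, not values taken from the references.

    def depth_from_disparity(disparity_px: float,
                             focal_length_px: float,
                             baseline_m: float) -> float:
        """Standard pinhole-stereo relation Z = f * B / d: two cameras
        spaced baseline_m apart observe the same feature (e.g., a pupil)
        at horizontal positions differing by disparity_px."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for finite depth")
        return focal_length_px * baseline_m / disparity_px

    # Example: f = 800 px, B = 0.06 m, d = 12 px  ->  Z = 4.0 m
    print(depth_from_disparity(12.0, 800.0, 0.06))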
Regarding claims 11-13 and 18-19. Method claims 11-13 and 18-19 are drawn to the method of using the corresponding apparatus claimed in claims 1-3 and 8-9. Therefore, method claims 11-13 and 18-19, which correspond to apparatus claims 1-3 and 8-9, are rejected for the same reasons of obviousness as used above.
Regarding claim 20. Non-transitory computer-readable storage medium claim 20 is drawn to a non-transitory computer-readable storage medium used in performing the method claimed in claim 11. Therefore, non-transitory computer-readable storage medium claim 20 corresponds to method claim 11 and is rejected for the same reasons of obviousness as used above.
Claim Rejections - 35 USC § 103
Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over LEE in view of Lin as applied to claims 3 and 13 above, and further in view of TERAHATA (US 20180025531 A1, hereinafter “TERAHATA”).
Regarding claim 5. LEE in view of Lin shows the apparatus as claimed in claim 3, but fails to show wherein at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to:
identify the second movement distance information based on three-axis coordinate values corresponding to the first position and three-axis coordinate values corresponding to the second position, and
identify the second movement direction information based on three-axis angular velocity values from the first position to the second position.
TERAHATA, however, in the same field of endeavor, shows wherein at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to:
identify the second movement distance information based on three-axis coordinate values corresponding to the first position and three-axis coordinate values corresponding to the second position (0112-0113, 0028 and 0253; Figures 1, 7A and 7B), and
identify the second movement direction information based on three-axis angular velocity values from the first position to the second position (0112-0113, 0228 and 0253; Figures 1, 7A and 7B; “[0112] The right controller 320 and the left controller 330 may detect the positions and inclinations of themselves using the sensor 306 instead of the controller sensor 140. In this case, for example, a three-axis angular velocity sensor (sensor 306) of the right controller 320 detects rotation of the right controller 320 about three orthogonal axes. The right controller 320 detects how much and in which direction the right controller 320 has rotated based on the detection values, and calculates the inclination of the right controller 320 by integrating the sequentially detected rotation direction and rotation amount. ...”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine TERAHATA's method of providing a virtual experience to a user, which includes identifying a plurality of virtual objects, with the wearable display device of LEE in view of Lin's pupil tracking sensor configured to detect and provide an indication of the user's pupil position, in order to yield a predictable result.
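For illustration only: the mechanism quoted from TERAHATA at 0112 (a movement distance computed from two sets of three-axis coordinate values, and a movement direction accumulated by integrating sequentially detected three-axis angular velocities) can be sketched as follows. The positions, sample rate, and the simple per-axis rectangle-rule integral (exact only for rotation about a fixed axis) are illustrative assumptions.

    import numpy as np

    def movement_distance(p1: np.ndarray, p2: np.ndarray) -> float:
        """Euclidean distance between three-axis coordinate values for a
        first position and a second position."""
        return float(np.linalg.norm(p2 - p1))

    def integrated_rotation(omega_samples: np.ndarray, dt: float) -> np.ndarray:
        """Accumulate three-axis angular-velocity samples (rad/s) over time
        into a total rotation about each axis, mirroring TERAHATA's
        integration of sequentially detected rotation direction and amount."""
        return omega_samples.sum(axis=0) * dt  # rectangle-rule integral

    # Illustrative values: positions in meters, gyro sampled at 100 Hz
    p1 = np.array([0.0, 1.6, 0.0])
    p2 = np.array([0.1, 1.6, -0.2])
    omega = np.tile([0.0, 0.5, 0.0], (100, 1))  # 1 s of yaw at 0.5 rad/s
    print(movement_distance(p1, p2))         # ~0.224 m
    print(integrated_rotation(omega, 0.01))  # [0.0, 0.5, 0.0] rad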
Regarding claim 6. Claim 6 recites limitations similar to those treated in the rejection of claim 5 above, is met by the references as discussed above, and is rejected for the same reasons of obviousness as used in the rejection of claim 5.
Regarding claims 15-16. Method claims 15-16 are drawn to the method of using the corresponding apparatus claimed in claims 5-6. Therefore, method claims 15-16, which correspond to apparatus claims 5-6, are rejected for the same reasons of obviousness as used above.
Claim Rejections - 35 USC § 103
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over LEE in view of Lin as applied to claim 1 above, and further in view of Kaku (US 20230244311 A1, hereinafter “Kaku”).
Regarding claim 10. LEE in view of Lin shows the apparatus as claimed in claim 1, but fails to show wherein the display includes a light field display (LFD).
Kaku, however, in the same field of endeavor, shows the display includes a light field display (LFD) (0030-0036; Figure 1; “[0030] The system 1 includes an information processing apparatus 10, a first light field display (LFD) 20, a first camera 30, a second LFD 40, and a second camera 50. The information processing apparatus 10, the first LFD 20, and the second LFD 40 are communicably connected to a network 60.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the LFD of Kaku into the display of LEE in view of Lin, in order to yield the predictable result of providing a glasses-free three-dimensional view of AR or VR content.
Allowable Subject Matter
Claims 4, 7, 14, and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASMAMAW TARKO whose telephone number is (571) 272-9205. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASMAMAW G TARKO/ Patent Examiner, Art Unit 2482