DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 7-12, and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Irina Kezele et al., KR 2016-0007423 ("Kezele").
Regarding independent claim 1, Kezele discloses a method comprising:
determining, using at least one processing device, that an inter-pupillary distance (IPD) between left and right display lenses of a video see-through (VST) extended reality (XR) device has been adjusted with respect to a default IPD of the VST XR device (i.e. During customization, individual users have the opportunity to adjust the calibration parameters to the desired accuracy. If a standard level of accuracy is applied, the customization process may include the user's inter-pupillary distance – Fig. 12; The customization process can have adjustable accuracy and is user-friendly. The user interaction is minimized and preferably reduced by adjusting only five (of the total twelve) parameters (two intrinsic parameters and three extrinsic parameters). The two intrinsic parameters are the focal length and the inter-pupillary distance (IPD). – Fig. 13);
obtaining, using the at least one processing device, an image captured using a see-through camera of the VST XR device, the see-through camera configured to capture images of a three-dimensional (3D) scene (i.e. Feature-based matching algorithms are widely used in computer vision because of their use in obtaining perspective (i.e., 3D) information… Examples of feature-based matching algorithms include… Scale-Invariant Feature Transform (SIFT)… As is known in the art, the SIFT algorithm scans the image and identifies the points of interest or feature points – Fig. 5);
transforming, using the at least one processing device, the image to match a viewpoint of a corresponding one of the display lenses according to a change in IPD with respect to the default IPD in order to generate a transformed image (i.e. Epipolarity also forms the basis for homography, or projection transformation. Homography describes what happens to the perceived location of the observed object when the viewpoint of the observer changes. An example of this is shown in FIG. 4, where the shape of the square 12 is shown distorted in the two image projection planes 14, 16 when viewed from two different viewpoints V1, V2, respectively. As before, the image planes 14 and 16 can be thought of as windows viewing the square 12. The homography identifies a common point between the image projection planes 14, 16 and the square 12 (i.e., point registration). – Fig. 4);
correcting, using the at least one processing device, distortions in the transformed image based on one or more lens distortion coefficients corresponding to the change in IPD in order to generate a corrected image (i.e. A method for stereoscopic correction of left and right views of an optical see-through (OST) head-mounted display (HMD) comprises the steps of:… modeling left and right correction matrices that define a 3D-2D point correspondence between the 2D positions of virtual objects in left and right OST HMD projected images. – Fig. 9, 11, 12); and
initiating, using the at least one processing device, presentation of the corrected image on a display panel of the VST XR device (i.e. The result of the customization process is used to update the associated intrinsic default correction matrix and the extrinsic default correction matrix. As a result, these matrices are used to associate a 3D virtual object with a 3D actual reference object. The calibration procedure is quick, simple, and user-friendly. – Fig. 12).
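For context only, the feature-based matching on which the cited SIFT passage relies (claim 1, second limitation above) reduces to nearest-neighbor matching over descriptor vectors. The following is an editor's illustrative sketch, not part of the Kezele reference or the claimed invention; the function name and sample values are assumptions, and the ratio test shown is Lowe's standard criterion rather than anything specific to the reference.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match descriptors from image A to image B using Lowe's ratio test.

    desc_a: (M, D) array of feature descriptors from the first image.
    desc_b: (N, D) array of feature descriptors from the second image.
    Returns a list of (index_in_a, index_in_b) pairs that pass the test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from this descriptor to every descriptor in B.
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Two well-separated descriptors in B; A's descriptors are noisy copies.
desc_b = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
desc_a = np.array([[0.1, 0.0], [9.9, 0.1]])
print(ratio_test_matches(desc_a, desc_b))
```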
Regarding claim 2, Kezele discloses the method of claim 1, wherein: the corrected image is presented on a left display panel associated with a left eye of a user when the image is captured using a left see-through camera; and the corrected image is presented on a right display panel associated with a right eye of the user when the image is captured using a right see-through camera (i.e. A method for stereoscopic correction of left and right views of an optical see-through (OST) head-mounted display (HMD) comprises the steps of:… modeling left and right correction matrices that define a 3D-2D point correspondence between the 2D positions of virtual objects in left and right OST HMD projected images. – Fig. 9, 11, 12; The geometric information, including the attitude (e.g., translational movement and rotation) of the tracking device relative to the left and right eyes of the average head model (i.e., the average human user's head), is directly included in the extrinsic correction matrix for each eye. The intrinsic parameters are based on the average head model eye position for the two HMD displays, the distance of the virtual image from the two eyes, and the size and resolution of the projected image (assuming image distortion is negligible). In this way, a default correction matrix for the right and left eyes is provided.; The correction is based on specifying the intrinsic and extrinsic parameters of a complex eye-OST HMD system (hereinafter referred to as "virtual camera") for two left and right virtual cameras associated with an average head model – Fig. 12).
Regarding claim 3, Kezele discloses the method of claim 1, wherein: the see-through camera represents a left see-through camera (i.e. OST HMD – Fig. 14); the viewpoint corresponds to the left display lens (Fig. 2); the display panel represents a left display panel associated with a left eye of a user (Fig. 14 “103, 107”; Fig. 16, 25); and the method further comprises: obtaining, using the at least one processing device, a second image captured using a right see-through camera of the VST XR device (i.e. The HMD left optical axis 112 and the HMD right optical axis 114 of the left and right virtual cameras (as indicated by the left and right eyes 107 and 105) are shown, respectively – Fig. 14 “105”); transforming, using the at least one processing device, the second image to match a viewpoint of the right display lens in order to generate a second transformed image (i.e. Align the hypothetical object along the x, y, and z directions in the reference coordinate system for the left eye and right eye simultaneously. – Figs. 13-15); correcting, using the at least one processing device, distortions in the second transformed image in order to generate a second corrected image (i.e. The stereoscopic correction method of a virtual camera consists of defining a default correction matrix for left and right virtual cameras using the direct geometric characteristics of the system and customizing the matrices simultaneously to reach the desired accuracy for the individual user. While some of the following description is directed to a single virtual camera, it should be appreciated that similar descriptions will apply to each of the two virtual cameras of a stereoscopic imaging system (i.e., stereoscopic correction) – Fig. 13); and initiating, using the at least one processing device, presentation of the second corrected image on a right display panel of the VST XR device, the right display panel associated with a right eye of the user (Fig. 15 “101b, 117b”).
Regarding claim 7, Kezele discloses the method of claim 1, wherein: the display panel and the corresponding one of the display lenses are associated with one eye of a user; and transforming the image to match the viewpoint of the corresponding one of the display lenses comprises dynamically matching a principal point of the see-through camera with a principal point of the display panel (i.e. Epipolarity also forms the basis for homography, or projection transformation. Homography describes what happens to the perceived location of the observed object when the viewpoint of the observer changes. An example of this is shown in FIG. 4, where the shape of the square 12 is shown distorted in the two image projection planes 14, 16 when viewed from two different viewpoints V1, V2, respectively. As before, the image planes 14 and 16 can be thought of as windows viewing the square 12. The homography identifies a common point between the image projection planes 14, 16 and the square 12 (i.e., point registration). – Fig. 4).
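For context only, the homography (projective transformation) cited above maps pixel coordinates between two viewpoints by a 3x3 matrix acting on homogeneous coordinates. The following is an editor's illustrative sketch, not part of the Kezele reference or the claims; the function name, the example matrix, and the sample points are assumptions chosen for demonstration.

```python
import numpy as np

def apply_homography(H, points):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates.

    Points are lifted to homogeneous coordinates, transformed, and
    projected back by dividing out the third component.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # (N, 3)
    mapped = pts_h @ H.T                                    # (N, 3)
    return mapped[:, :2] / mapped[:, 2:3]

# A pure 5 px horizontal shift (e.g., re-centering an image toward an
# adjusted lens center) is itself a valid homography:
H_shift = np.array([[1.0, 0.0, 5.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0]])
print(apply_homography(H_shift, corners))  # each x-coordinate shifted by 5
```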
Regarding claim 8, Kezele discloses the method of claim 1, wherein transforming the image to match the viewpoint of the corresponding one of the display lenses comprises mapping a see-through camera frame to a virtual camera frame in order to dynamically correct for parallax errors (i.e. Such 3D-2D transformations, as is known in the art, can be termed perspective projection or image projection and are described in the pinhole camera model. In general, this projection operation is modeled by a ray emitted from the camera and passing through the focus of the camera. Each modeled outgoing ray will correspond to a single point in the captured image – Figs. 2, 3).
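For context only, the pinhole camera model cited above projects 3D camera-frame points onto the image plane through an intrinsic matrix followed by a perspective divide. The following is an editor's illustrative sketch, not part of the Kezele reference or the claims; the function name and the intrinsic values (focal length, principal point) are assumed example numbers.

```python
import numpy as np

def project_pinhole(K, points_3d):
    """Project (N, 3) camera-frame points to pixel coordinates with
    intrinsic matrix K (pinhole model: each point lies on a ray
    through the camera center)."""
    proj = points_3d @ K.T              # (N, 3) homogeneous pixels
    return proj[:, :2] / proj[:, 2:3]   # perspective divide by depth

K = np.array([[800.0,   0.0, 320.0],   # fx,  0, cx  (assumed values)
              [  0.0, 800.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
P = np.array([[0.1, -0.05, 2.0]])      # a point 2 m in front of the camera
print(project_pinhole(K, P))           # → [[360. 220.]]
```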
Regarding claim 9, Kezele discloses the method of claim 1, wherein: the display panel and the corresponding one of the display lenses are associated with one eye of a user (Figs. 12-14); and correcting the distortions in the transformed image comprises: dynamically adapting one or more display lens geometric distortion and chromatic aberration models based on the change in IPD (i.e. If known, the user enters the correct value and fine-tunes the IPD. Otherwise, the user adjusts the IPD… After aligning the view of the virtual object plane-wise with the marker, the user adjusts the size of the test object by unilaterally adjusting the focal length while still keeping the marker in the vertical position. If the IPD is set correctly and the focal length is adjusted appropriately, after performing steps 3 and 5 above, the object depth from the user should be optimally adjusted – Figs. 25-26, steps 3 and 5); and using the one or more adapted display lens geometric distortion and chromatic aberration models to correct for display lens geometric distortions and chromatic aberrations (i.e. The necessary adjustments are made in the selected coordinate system (basically the marker coordinate system) and the search for optimal IPD and planar translation is included in the object alignment procedure. – Figs. 25-26, step 7).
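For context only, lens geometric distortion of the kind discussed above is commonly modeled with polynomial radial terms (the Brown model), and correction amounts to inverting that model. The following is an editor's illustrative sketch, not part of the Kezele reference or the claims; the function names and the coefficient values k1, k2 are assumptions, and the fixed-point inversion shown is one standard technique, not necessarily the one used by the reference.

```python
import numpy as np

def distort(pts, k1, k2):
    """Forward radial (Brown) model: x_d = x_u * (1 + k1*r^2 + k2*r^4),
    for (N, 2) points in normalized camera coordinates."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2 + k2 * r2**2)

def undistort(pts, k1, k2, iterations=20):
    """Invert the forward model by fixed-point iteration: repeatedly
    divide the distorted points by the factor evaluated at the current
    undistorted estimate."""
    und = pts.copy()
    for _ in range(iterations):
        r2 = np.sum(und**2, axis=1, keepdims=True)
        und = pts / (1.0 + k1 * r2 + k2 * r2**2)
    return und

# Round-trip check with assumed coefficients.
pts = np.array([[0.3, 0.2]])
k1, k2 = -0.1, 0.01
recovered = undistort(distort(pts, k1, k2), k1, k2)
print(recovered)  # ≈ the original points
```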
Regarding independent claim 10, the claim is similar in scope to claim 1. Therefore, the rationale applied in the rejection of claim 1 applies herein.
Regarding claims 11-12 and 16-17, the rationale applied in the rejection of claims 2-3 and 7-8, respectively, applies herein.
Regarding independent claim 18, the claim is similar in scope to claim 1. Therefore, the rationale applied in the rejection of claim 1 applies herein.
Allowable Subject Matter
Claims 4-6, 13-15 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANTE HARRISON whose telephone number is (571)272-7659. The examiner can normally be reached Monday - Friday 8:00 am to 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHANTE E HARRISON/Primary Examiner, Art Unit 2615