Prosecution Insights
Last updated: April 19, 2026
Application No. 18/719,218

VALIDATION SYSTEM FOR VIRTUAL REALITY (VR) HEAD MOUNTED DISPLAY (HMD)

Non-Final OA (§103, §112)
Filed: Jun 12, 2024
Examiner: LI, RAYMOND CHUN LAM
Art Unit: 2614
Tech Center: 2600 (Communications)
Assignee: Loft Dynamics AG
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs. Tech Center average)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 9m average prosecution
Career History: 10 total applications across all art units; 10 currently pending

Statute-Specific Performance

§103: 55.6% (+15.6% vs. Tech Center average)
§102: 17.8% (-22.2% vs. Tech Center average)
§112: 26.7% (-13.3% vs. Tech Center average)
Note: Tech Center average is an estimate; based on career data from 0 resolved cases.

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1-7 are objected to because of the following informalities: inconsistent application of parentheses, such as “the validation system for a movement of the [[the]] left camera [[(11)]] and the right camera [[(12)]][[,]]” in Claim 1, and “an hysteresis validation” in Claim 18, which should read “a hysteresis validation”. Claims 1 and 14 are objected to because of the following informalities: “VR HMD” should be spelled out as “virtual reality head mounted display” when introduced, to avoid confusion. Similarly, “IPD” should be spelled out as “interpupillary distance” when introduced. Appropriate correction is required.

Applicant is advised that should Claims 14-20 be found allowable, Claims 1-7 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates, or are so close in content that they both cover the same thing despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations that do not use the word “means” (or “step”) are not being so interpreted, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are a “measurement device” in Claims 1, 8, and 14, and a “control unit” in Claims 1, 3, 8, 10, 14, and 16.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. Claims 1, 8, and 14 contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.

Claim 1 recites “determine the distance from the left camera [[(11)]] as well as from the right camera [[(12)]] via a triangulation, e.) comparing the determined distance with the chosen distance, and f.) making a distance validation of the chosen distance, when the difference or relative error between the determined distance and the chosen distance is inside a validation interval, or denying the distance validation, if said difference or relative error is outside said interval”, and Claims 8 and 14 recite “determining the distance from the left camera as well as from the right camera via a triangulation; comparing the determined distance with the chosen distance; and making a distance validation of the chosen distance, when the difference or relative error between the determined distance and the chosen distance is inside a validation interval, or denying the distance validation, if said difference or relative error is outside said interval”.

The claim language is inconsistent with the specification, which states “in each image as taken by the camera 11 and 12 and determine distance with triangulation which is shown with reference numeral 405, reflecting the stereoscopic approach of the two images. The distance 140 as determined in the VR simulation is compared to the intended data point. This provides in a comparison step 330 a comparison value for the distance error D between the simulated and intended object distance in the image and the observed image distance”. Distance 140 is defined as a “target distance”, illustrated in Fig. 3B, which depicts a distance measured perpendicularly with regard to the measurement device and the virtual target measurement device 150. Reference numeral 405 is referred to as “triangulation” in the specification, with Fig. 6A referring to the distance of the left and right cameras 11 and 12 from a target 401.
The claim language refers to the “distance from the left camera as well as from the right camera via a triangulation”, and it is unclear whether the applicant intends to specify a distance akin to target distance 140, or whether the distances from the left camera and from the right camera to the target point, which may or may not be equal to one another, are each being compared in some manner to the “chosen distance”. In the latter case, there is no support in the specification for comparing two separate distance values from the left and right cameras to the target. It is therefore unclear how triangulation is performed in the invention: the language of Claims 1, 8, and 14 potentially states that triangulation is performed to determine the distance from each camera to the target, with each subsequently compared to the chosen value, while the specification implies that triangulation is performed to obtain a single distance from both cameras to the target. Due to the unclear claim language and the lack of support in the specification regarding the use of triangulation and the subsequent distance comparison, one of ordinary skill in the art would not be able to make or use the invention without undue experimentation. All dependent claims of Claims 1, 8, and 14 are rejected under 35 U.S.C. 112(a) per the above analysis.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 8, and 14 recite the limitation “the distance from the left camera as well as from the right camera via a triangulation” in step e of Claim 1 and within the last 8 lines of Claims 8 and 14. There is insufficient antecedent basis for this limitation in the claims. All dependent claims of Claims 1, 8, and 14 are rejected under 35 U.S.C. 112(b) with regard to the above indefinite elements.

Claims 2, 9, and 15 recite the limitation “the IPD validation” in the last two lines of each claim, respectively. There is insufficient antecedent basis for this limitation in the claims.
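To make the ambiguity concrete, the following minimal Python sketch (all values, names, and geometry are hypothetical illustrations, not taken from the application) contrasts the two readings the rejection describes: a single triangulated distance for the rig versus two per-camera distances that generally differ.

```python
import numpy as np

# Illustrative stereo rig (hypothetical values): camera centers on the
# x-axis, separated by the camera IPD (the stereo baseline).
ipd = 0.063                                  # 63 mm baseline
c_left = np.array([-ipd / 2, 0.0, 0.0])      # left camera center
c_right = np.array([+ipd / 2, 0.0, 0.0])     # right camera center

# Suppose triangulating the two camera views has produced a single 3D
# target point X (the reading the specification appears to support).
X = np.array([0.10, 0.05, 2.00])

# Reading 1: one distance for the rig, e.g. measured from the midpoint
# between the cameras (akin to a single "determined distance").
d_single = np.linalg.norm(X - (c_left + c_right) / 2.0)

# Reading 2: two distances, one per camera, which are generally unequal.
d_left = np.linalg.norm(X - c_left)
d_right = np.linalg.norm(X - c_right)

print(d_single, d_left, d_right)  # d_left != d_right unless X is centered
```

Under the second reading, it is unclear which of the two values (or what combination of them) would be compared against the single “chosen distance”, which is the gap the rejection identifies.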
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7-11, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Segura (“Improved virtual reality perception with calibrated stereo and variable focus for industrial use”, 2017), in view of Ye (US 9734419 B1) and Yoon (US 10277893 B1).

Regarding Claim 14, Segura teaches a calibration system for VR HMDs having a left eye display and a right eye display (Fig. 3 demonstrates left and right HMD displays), with an adjustable HMD IPD between them (Section 2.1, pg. 97: “This calibration step is performed only once for our CCs and is used later on for the calibration of any HMD”), comprising: a measurement device, wherein the measurement device comprises a left camera and a right camera (Fig. 3 demonstrates an HMD measurement device with a left and right camera), wherein the left camera as well as the right camera are intrinsically and extrinsically calibrated (Section 2.1: “The camera pair is previously calibrated, intrinsically (each camera) and extrinsically (the right camera with respect to the left)”), and wherein the left camera as well as the right camera are mounted within the system (Fig. 3 demonstrates a left and right camera mounted on an apparatus); creating a virtual target object at a specific chosen position in a space in the field of view of the measurement device when viewing an image, wherein the virtual target object is created based on the virtual target object through the VR HMD (Fig. 3: “HMD stereo projections calibration. A pair of rigidly attached calibration cameras (CC) substitute the eyes. The HMD displays’ misalignment is exaggerated in the illustration”, where Fig. 3 also demonstrates a target point X as perceived on the left and right displays of the HMD in relation to the measurement device), wherein the specific chosen position has a chosen distance from the measurement device (Fig. 3 clearly demonstrates that the x, y, z axes are centered with respect to the camera; the distance to a point X in 3D space is therefore implicit); calculating and transmitting the images of said virtual target object to the displays of the VR HMD (Figs. 3 and 4 clearly demonstrate virtual targets which are implicitly calculated and transmitted, since they have been rendered and displayed on the HMD; Section 2.1, pg. 98: “Our calibration process is based on the projection of an arbitrary point in 3D space onto two different image planes, each with its own coordinate system: the HMD displays coordinate systems and the calibration cameras coordinate systems. In the equations in this section xcam will denote points projected on a CC image, and xdisp will denote points projected on an HMD display. Both are in homogeneous coordinates, as they are used in projective geometry equations”); representing the virtual target object on the displays of the VR HMD (Figs. 3 and 4 clearly demonstrate a virtual target object on the displays of the VR HMD); taking the images of the displays with the left camera as well as the right camera (Fig. 4 clearly demonstrates an image of the displays; Section 2, pg. 97: “The camera captures a pattern presented in the HMD display. From analysis of the captured pattern, they compute a mapping between the camera image and the HMD display coordinates”); and detecting the virtual target object within the images of the displays (Section 2.1, d, pg. 99: “We know the display coordinates of the points, as we have rendered them, and the user marks their corresponding camera image coordinates”).
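As background for Segura’s projection framework, a minimal pinhole-projection sketch in homogeneous coordinates follows; the intrinsic matrix, pose, and point are illustrative assumptions, not Segura’s actual calibration values.

```python
import numpy as np

# Minimal pinhole projection in homogeneous coordinates (illustrative
# parameters only). A 3D point X is projected to pixel coordinates.
K = np.array([[800.0,   0.0, 640.0],   # fx,  0, cx
              [  0.0, 800.0, 360.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                    # camera rotation (identity for simplicity)
t = np.zeros(3)                  # camera translation

X = np.array([0.10, 0.05, 2.00])         # 3D point in front of the camera
x_h = K @ (R @ X + t)                    # homogeneous image coordinates
u, v = x_h[:2] / x_h[2]                  # perspective divide -> pixels
print(u, v)
```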
Segura does not teach a validation system per se, nor a VR HMD that is attachable to it in the same way as the VR HMD would be removably attached to the head of a user. It does not teach mounting the left and right cameras so as to adjust the camera IPD between them. It does not explicitly teach a control unit that connects to the left as well as the right camera, wherein the control unit is configured to deliver control signals to the VR HMD when the VR HMD is removably attached to the measurement device and when the HMD IPD of the VR HMD is manually or automatically adjusted to the prechosen camera IPD, wherein the control unit is also configured to perform a validation measurement method. It does not explicitly teach determining the distance from the left camera as well as from the right camera via a triangulation, nor comparing the determined distance with the chosen distance and making a subsequent validation of the chosen distance when the difference or relative error between the determined distance and the chosen distance is inside a validation interval, or denying the distance validation if said difference or relative error is outside said interval.

However, while Segura does not explicitly teach that its system is attachable to the HMD, the broadest reasonable interpretation of “removably attached to the head of a user” covers anything that fits inside the HMD and can be secured via a fixation element such as a strap; this is implicit in Segura, considering the device is placed close to or within the HMD for viewing the left and right displays.

Furthermore, Ye teaches a validation system with a control unit configured to perform a validation method (the drawing on pg. 1 demonstrates a system with a computer, which is obvious in the art as a controller), where the validation system determines the distance from the left camera as well as from the right camera via a triangulation, compares it to a chosen distance, and makes a subsequent validation of the chosen distance, where the difference or relative error between the determined distance and the chosen distance is inside a validation interval, or denies the distance validation if said difference or relative error is outside said interval (Column 5, Lines 58-67: “By way of further illustration, and as provided in the Tsai publication, FIG. 1A illustrates the basic geometry of the camera model. (X.sub.w,Y.sub.w,Z.sub.w), defining the 3D coordinates of the object point P in the 3D world coordinate system. (X,Y,Z) define the 3D coordinates 185 of the object point P in the 3D camera coordinate system 188, which is centered at point O, the optical center, with the z axis the same as the optical axis (see also FIG. 1). (x,y) is the image coordinate system centered at O.sub.i (intersection of the optical axis z and the front image plane 186) and parallel to x and y axes”; Fig. 1A also illustrates the point in relation to the camera in terms of its optical center and orientation with regard to the xyz axes. Column 9, Lines 22-51: “The accuracy of the calibration of an embodiment with two or more cameras with a common viewing area can be validated by acquiring a single image of the calibration object that is substantially similar to the one used for calibration (e.g. calibration object 170). The features of the calibration object are extracted from each image, using in an embodiment the above-described checkerboard feature extractor vision system software tool. Once the features are extracted, the correspondence between the features is established. In the case of the checkerboard calibration plate, a fiducial on the plate helps identify correspondence between features in the two images. Using the triangulation procedure described above (referencing FIG. 1B), the positions of these features in the 3D world coordinate system are computed. Given n corresponding points all the images, after triangulation, there will be n points computed in the world coordinate system, denoted by X.sub.i.sup.extracted, 1≦i≦n. Ideally, if the cameras are perfectly calibrated, in the absence of noise the rays obtained during triangulation would intersect at one point (see FIG. 1B). However, in practice the rays are not guaranteed to intersect. The triangulation discrepancy or residual is computed as the sum of shortest distance between the triangulated point P and the ray (R.sub.1 and R.sub.2) from each camera that was used to compute the triangulated point. In this embodiment, the root mean square (RMS) value of this parameter for all the feature points is computed using the images acquired during validation. This value is compared to the value obtained for the calibration images. If it is above a certain acceptance threshold, then the user is asked to repeat the calibration”).
Notes: Ye teaches a calibration system similar in structure to that of Segura, with two cameras focused on a calibration target that is represented in 3D space in relation to the cameras. Considering that the xyz system is oriented with respect to the camera, triangulating the position of a target point P within the 3D space is in relation to the cameras; therefore, distance is also inherent in the triangulation of point P, since the origin of the coordinate system is at the camera(s). With regard to the validation process, the triangulated point P is compared to the intersection of the two rays, such that if the distance between the two points is greater than a threshold, the calibration is repeated. The threshold can instead be represented as a threshold range, where the direction of the distance is relevant. A validation system is interpreted to be a system that performs validation.
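The residual check Ye describes can be sketched as follows; the ray geometry, single feature point, and acceptance threshold here are all assumed for illustration.

```python
import numpy as np

def point_ray_distance(p, origin, direction):
    # Shortest distance from point p to the line through origin along direction.
    d = direction / np.linalg.norm(direction)
    v = p - origin
    return np.linalg.norm(v - np.dot(v, d) * d)

def triangulation_residual(p, origins, directions):
    # Sum of shortest distances from the triangulated point to each camera
    # ray, mirroring the discrepancy measure Ye describes.
    return sum(point_ray_distance(p, o, d) for o, d in zip(origins, directions))

# Hypothetical example: two cameras whose rays pass near, but not exactly
# through, the triangulated point P.
origins = [np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])]
directions = [np.array([0.066, 0.025, 1.0]), np.array([0.035, 0.0249, 1.0])]
P = np.array([0.10, 0.05, 2.00])

residuals = [triangulation_residual(P, origins, directions)]  # one per feature
rms = np.sqrt(np.mean(np.square(residuals)))
threshold = 0.005   # acceptance threshold (assumed figure)
print("repeat calibration" if rms > threshold else "validation passed")
```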
Segura and Ye are considered analogous art with regard to the calibration of twin-camera systems using target calibration points. Validation of results is a well-known motivation within the art, as ensuring proper calibration via validation of a test value (in which a point and the distance to that point are inherent to each other in triangulation, as demonstrated above) enables the efficient function of the system via verification of known variables. Furthermore, control units for systems that perform triangulation are obvious in the art; a motivation for using a control unit for triangulation is to automate the task and make it more efficient. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the calibration unit of Segura with the validation system, control unit, and process of Ye; doing so would yield the predictable result of verifying proper calibration of the system, double-checking known variables, and automating the triangulation process.

Furthermore, Yoon teaches a measurement device comprising a left camera and a right camera, wherein the left camera as well as the right camera can move to adjust the camera IPD between the left camera and the right camera (Col. 9, Lines 61-67, and Col. 10, Lines 1-16: “In one embodiment, the controller 450 generate first imaging instructions for the characterization camera 430 and second instructions for the characterization camera 440. The first imaging instruction include positions for the characterization camera 430, and the second imaging instructions include corresponding positions of the second camera characterization 440. The characterization cameras 430 and 440 move to each position in the first and second imaging instructions, resulting in a different IPD value. The first and second imaging instructions further provide a number of exposures for each position of the characterization cameras 430 and 440. Sometimes, one of the two characterization cameras 430 and 440 moves and the other one does not move. For example, a position of the characterization camera 430 corresponds to multiple positions of the characterization camera 440 (or the other way), resulting in multiple IPD values. The characterization camera 430 may capture multiple images at the position through multiple exposures so that the characterization camera 430 captures at least one image for each IPD value. The characterization cameras 430 and 440 can each be the characterization camera described in conjunction with FIG. 3”; Fig. 4 illustrates the movement capacity of the cameras), with a controller that is attached to both cameras as well as the VR HMD (the drawing on pg. 1 clearly illustrates a controller attached to the HMD and camera assembly), wherein the control unit is configured to deliver control signals to the VR HMD when the VR HMD is attached to the measurement device and when the HMD IPD of the VR HMD is manually or automatically adjusted to the prechosen camera IPD (Col. 9, Lines 33-41: “The controller 450 provides presenting instructions that cause the HMD under test 410 to present test patterns and imaging instructions that cause the camera assembly 420 to captures images of the test patterns. In some embodiments, the presenting instructions and imaging instructions are received from a user of the HMD under test 410. Alternatively, the presenting instructions and imaging instructions are generated by the controller 450, e.g., based on input parameters received by the controller 450”. Notes: “attached”, under its broadest reasonable interpretation, is any connection, which is implicit in the controller being connected to the VR HMD for sending instructions. Additionally, HMDs are well established in the art as being IPD-adjustable).

Segura and Yoon are considered analogous art with regard to the use of a left and right camera representing the eyes of a user in conjunction with HMDs for measurement purposes. A motivation for implementing adjustable left and right cameras would be to better represent the position of the eyes of a specific person. Additionally, control units for systems that are attached to both an HMD with adjustable IPD and cameras are obvious in the art with regard to being able to control the HMD with adjustable IPD; a motivation for doing so would be to consolidate control of the system in a single control unit, as well as to automate the tasks being performed. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the calibration system of Segura with the validation system, control unit, and validation method of Yoon; doing so would yield the predictable result of allowing the system of Segura to better model the IPD of a specific person, as well as consolidating the operation and control of the system in a single control unit.
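For illustration only, a toy sketch of Yoon-style paired imaging instructions, where each pair of camera positions yields a different IPD value; the positions are hypothetical.

```python
# Paired left/right camera x-positions (meters); each pair yields an IPD.
left_positions = [-0.034, -0.032, -0.030]
right_positions = [0.034, 0.032, 0.030]

for x_left, x_right in zip(left_positions, right_positions):
    ipd = x_right - x_left                 # resulting IPD for this pair
    print(f"move cameras to ({x_left}, {x_right}); IPD = {ipd * 1000:.0f} mm")
    # ...capture one or more exposures at this IPD setting...
```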
Claims 1 and 8, which are similar in scope to Claim 14, are rejected under the same rationale.

Regarding Claim 15, the system of Claim 14 is rejected over Segura as modified. Segura as modified teaches a validation measurement (Ye, Column 5, Lines 58-67, and Column 9, Lines 22-51, quoted above regarding Claim 14, with the accompanying notes) conducted for a plurality of positions of a virtual target object having different predetermined virtual distances from the validation system and optionally different positions in the field of view of the VR HMD (Segura, Section 2.1, c, pgs. 98-99: “For each of the left and right sides, any virtual 3D point X is projected onto a point xdisp in the display coordinate system as expressed in Eq. 3. The same 3D point is projected onto a point xcam in the camera image coordinate system, as expressed in Eq. 1. These two projected points have to be equivalent but they are in different coordinate systems so their equations cannot be combined. A mapping between these two coordinate systems is needed to solve the problem. In a low distortion environment this mapping can be approximated by a 3 × 3 transform matrix M in homogenous coordinates (i.e. a homography) as shown in (Eq. 4)”; Segura, Section 2.1, e, pg. 99: where xi for i = 1...N are the projections of a set of arbitrary virtual 3D points Xi… [the] set of N random points in the space in front of the viewer and visible by both eyes and projects them with the current value of the projection parameters), wherein the IPD validation is only passed if all, or a predetermined percentage, of the distance validations are passed (Ye, Column 9, Lines 22-51, quoted above regarding Claim 14. Notes: given that a method of validation is available for one point, it would be obvious to a person having ordinary skill in the art to base validation of a plurality of points on some threshold for the successful validation of said plurality of points). Claims 2 and 9, which are similar in scope to Claim 15, are rejected under the same rationale.
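A minimal sketch of the pass rule at issue for Claim 15, assuming a relative-error validation interval and a configurable pass fraction (both assumed figures, not from the application):

```python
def distance_validation(d_measured, d_chosen, rel_tol=0.02):
    # Pass when the relative error lies inside the validation interval.
    # rel_tol is an assumed 2% interval.
    return abs(d_measured - d_chosen) / d_chosen <= rel_tol

def ipd_validation(pairs, required_fraction=1.0):
    # Pass only if all (or a predetermined percentage of) per-target
    # distance validations pass.
    results = [distance_validation(m, c) for m, c in pairs]
    return sum(results) / len(results) >= required_fraction

# Hypothetical (measured, chosen) distance pairs for several virtual targets.
pairs = [(1.98, 2.00), (1.51, 1.50), (3.07, 3.00)]
print(ipd_validation(pairs, required_fraction=0.8))   # False: 2 of 3 pass
```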
Regarding Claim 16, the system of Claim 15 is rejected over Segura as modified. Segura as modified teaches a control unit configured to conduct the validation measurement (Ye, drawing on pg. 1, demonstrating a system with a computer, which is obvious in the art as a controller; Ye, Column 5, Lines 58-67, and Column 9, Lines 22-51, quoted above regarding Claim 14, with the accompanying notes) for at least two of the plurality of virtual target objects in one pass (Segura, Section 2.1, c, pgs. 98-99, and Section 2.1, e, pg. 99, quoted above regarding Claim 15; Segura, Section 2.1, e, pg. 99: “In each iteration the algorithm uses a set of N random points in the space”. Notes: given that a method of validation is available for one virtual point, it would be obvious to a person having ordinary skill in the art to validate additional virtual points). Claims 3 and 10, which are similar in scope to Claim 16, are rejected under the same rationale.
Regarding Claim 17, the system of Claim 14 is rejected over Segura as modified. Segura as modified teaches a validation measurement (Ye, Column 5, Lines 58-67, and Column 9, Lines 22-51, quoted above regarding Claim 14, with the accompanying notes) comprising the distance validation of distances for a plurality of different HMD IPDs (Yoon, Col. 9, Lines 61-67, and Col. 10, Lines 1-16, quoted above regarding Claim 14. Notes: considering that Segura as modified teaches a left and right camera that can adjust the IPD of the measurement device, the measurement device is clearly capable of performing the distance validation for any of the different IPDs as adjusted by the measurement device),
wherein the distance validation is performed with a reduction of the target HMD IPD, starting from the highest possible IPD of the VR HMD to the lowest possible IPD of the VR HMD, providing an IPD range validation for each IPD value of the VR HMD, or vice versa (Yoon, Col. 4, Lines 49-67, and Col. 5, Lines 1-9: “The camera assembly 350 includes one or more characterization cameras that capture images of test patterns presented by the HMD under test 310 (i.e., images presented by an electronic display through one or more lenses) in accordance with imaging instructions. A characterization camera is a camera configured to mimic a human eye that is used to characterize lenses of a HMD under test. A characterization camera is configured to mimic movement of the human eye, optical qualities of a human eye, physical dimensions of a human eye, or some combination thereof. For example, the characterization camera may have multiple degrees of freedom of movement in order to, e.g., change orientation about a center of rotation in the same manner as a human eye changes orientation. And different positions (e.g., orientations) of the characterization camera could correspond to different gaze angles of a human eye. Additionally, in some embodiments where there are two characterization cameras to mimic the left and right eyes of a user, the two characterization cameras are able to translate relative to each other to, e.g., measure effects of inter-pupillary distance (IPD) on the device under test. For example, an IPD between the two characterization cameras may be adjusted over some range of values. In alternate embodiments, the IPD may be fixed at a particular distance (e.g., 63.5 mm). In some embodiments, a characterization camera may translate away from or closer to the device under test. This would, e.g., measure effects of different eye relief on the images presented by the device under test”. Notes: Segura as modified teaches adjusting the IPD over a range of values; it would be obvious to a person having ordinary skill in the art that adjusting the IPD over a range between the minimum and maximum IPD values is included in the idea of adjusting the IPD over a range of values),
wherein the device validation is only passed if all, or a predetermined percentage, of the IPD range validations are passed (Ye, Column 9, Lines 22-51, quoted above regarding Claim 14. Notes: given that a method of validation is available for one IPD, it would be obvious to a person having ordinary skill in the art to base validation of multiple IPDs, in a device capable of adjusting the IPD across a range (which Segura as modified is), on a threshold of acceptance). Claims 4 and 11, which are similar in scope to Claim 17, are rejected under the same rationale.
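A sketch of the IPD-sweep logic at issue in Claim 17, assuming a hypothetical IPD range and step and a stubbed per-IPD validation:

```python
import numpy as np

def validate_at_ipd(ipd):
    # Stub standing in for the per-IPD distance validation (assumed to
    # exist elsewhere); returns True when validation at this IPD passes.
    return True

# Sweep from the highest possible IPD down to the lowest (or vice versa),
# recording an IPD range validation result for each setting.
ipd_values = np.arange(0.072, 0.054 - 1e-9, -0.002)   # 72 mm down to 54 mm
results = {round(float(i), 3): validate_at_ipd(i) for i in ipd_values}

required_fraction = 1.0   # pass only if all (or a chosen percentage) pass
passed = sum(results.values()) / len(results) >= required_fraction
print(results, passed)
```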
Regarding Claim 18, the system of Claim 17 is rejected over Segura as modified. Segura as modified teaches a validation measurement (Ye, Column 5, Lines 58-67, and Column 9, Lines 22-51, quoted above regarding Claim 14, with the accompanying notes) that further comprises the validation of distances for the same plurality of different HMD IPDs a second time (Yoon, Col. 9, Lines 61-67, and Col. 10, Lines 1-16, quoted above regarding Claim 14. Notes: considering that Segura as modified teaches a left and right camera that can adjust the IPD of the measurement device, the measurement device is clearly capable of performing the distance validation for any of the different IPDs as adjusted. Furthermore, it would be obvious to a person having ordinary skill in the art that the validations over a range of values can be performed any number of times; doing so would be akin to “double checking” or “triple checking” results, which is obvious in the art),
wherein the validation is performed with an increase of the target HMD IPD, starting from the lowest possible IPD of the VR HMD to the highest possible IPD of the VR HMD, or vice versa (Yoon, Col. 4, Lines 49-67, and Col. 5, Lines 1-9, quoted above regarding Claim 17. Notes: Segura as modified teaches adjusting the IPD over a range of values; it would be obvious to a person having ordinary skill in the art that adjusting the IPD over a range between the minimum and maximum IPD values is included in the idea of adjusting the IPD over a range of values),
and the hysteresis validation is only passed if the difference or relative error between the IPD range validation value of the upward-adjusted HMD IPD and the IPD range validation value of the downward-adjusted HMD IPD is inside a hysteresis validation interval (Ye, Column 9, Lines 22-51, quoted above regarding Claim 14. Notes: given that a method of validation is available for one IPD, it would be obvious to a person having ordinary skill in the art to base validation of multiple IPDs, in a device capable of adjusting the IPD across a range (which Segura as modified is), on a threshold of acceptance. The broadest reasonable interpretation of a hysteresis validation is checking whether the cumulative errors of the distance validations of the IPDs from high to low, and vice versa, across two validation passes are within some interval of one another. Segura as modified teaches validation with regard to a threshold interval, which can similarly be applied as a hysteresis validation interval on the cumulative errors from the two validation passes across a range of IPD values). Claim 5, which is similar in scope to Claim 18, is rejected under the same rationale.
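A minimal sketch of the hysteresis comparison described for Claim 18, assuming per-IPD validation values (e.g., mean distance errors) recorded on a downward and then an upward sweep, and an assumed hysteresis interval:

```python
def hysteresis_validation(down_values, up_values, interval=0.005):
    # Pass only if, at every IPD setting, the difference between the
    # downward-sweep and upward-sweep validation values lies inside the
    # hysteresis validation interval (an assumed figure).
    return all(abs(d - u) <= interval for d, u in zip(down_values, up_values))

# Hypothetical per-IPD validation values from the two sweeps.
down = [0.011, 0.012, 0.010, 0.013]
up   = [0.012, 0.011, 0.011, 0.012]
print(hysteresis_validation(down, up))   # True: all differences <= 0.005
```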
Regarding Claim 20, the system of Claim 14 is rejected over Segura as modified. Segura as modified teaches a left camera as well as a right camera mounted within the validation system (Segura, Fig. 3 demonstrates a left and right camera mounted on an apparatus) for a movement of the left camera as well as of the right camera to adjust the relief distance of the left camera as well as of the right camera from the VR HMD when removably attached, creating camera-eye HMD display distance deviating validation results (Yoon, Col. 5, Lines 5-9: “In some embodiments, a characterization camera may translate away from or closer to the device under test. This would, e.g., measure effects of different eye relief on the images presented by the device under test”. Notes: the broadest reasonable interpretation of “removably attached to the head of a user” covers anything that fits inside the HMD and can be secured via a fixation element such as a strap; this is implicit in Segura, considering the device is placed close to or within the HMD for viewing the left and right displays). Claims 7 and 13, which are similar in scope to Claim 20, are rejected under the same rationale.
and the hysteresis validation is only passed if the difference or relative error between the IPD range validation value of the upward adjusted HMD IPD and the IPD range validation value of the downward adjusted HMD IPD is inside a hysteresis validation interval (Ye, Column 9, Lines 22-51: “The accuracy of the calibration of an embodiment with two or more cameras with a common viewing area can be validated by acquiring a single image of the calibration object that is substantially similar to the one used for calibration (e.g. calibration object 170). The features of the calibration object are extracted from each image, using in an embodiment the above-described checkerboard feature extractor vision system software tool. Once the features are extracted, the correspondence between the features is established. In the case of the checkerboard calibration plate, a fiducial on the plate helps identify correspondence between features in the two images. Using the triangulation procedure described above (referencing FIG. 1B), the positions of these features in the 3D world coordinate system are computed. Given n corresponding points all the images, after triangulation, there will be n points computed in the world coordinate system, denoted by X.sub.i.sup.extracted, 1≦i≦n. Ideally, if the cameras are perfectly calibrated, in the absence of noise the rays obtained during triangulation would intersect at one point (see FIG. 1B). However, in practice the rays are not guaranteed to intersect. The triangulation discrepancy or residual is computed as the sum of shortest distance between the triangulated point P and the ray (R.sub.1 and R.sub.2) from each camera that was used to compute the triangulated point. In this embodiment, the root mean square (RMS) value of this parameter for all the feature points is computed using the images acquired during validation. This value is compared to the value obtained for the calibration images. If it is above a certain acceptance threshold, then the user is asked to repeat the calibration”; Notes: given that a method of validation is available for one IPD, it would be obvious to a person having ordinary skill in the art to base the validation of multiple IPDs, in a device capable of adjusting the IPD across a range (which Segura as modified is), on a threshold of acceptance. The broadest reasonable interpretation of a hysteresis validation is a check of whether the cumulative errors of the distance validations across the IPD range, performed once from low to high and once from high to low, are within some interval of one another. Segura as modified teaches validation against a threshold interval, which can similarly be applied as a hysteresis validation interval on the cumulative errors from the two validations across a range of IPD values).
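A minimal sketch of the hysteresis check as construed above: pass only if the upward-sweep and downward-sweep validation values agree within the interval. The tolerances are hypothetical, since neither the claims nor the references give a numeric interval.

    def hysteresis_passed(up_value, down_value, abs_tol=0.05, rel_tol=0.02):
        """Pass if the difference, or the relative error, is inside the interval."""
        diff = abs(up_value - down_value)
        if diff <= abs_tol:
            return True
        denom = max(abs(up_value), abs(down_value))
        return denom > 0 and diff / denom <= rel_tol

    def hysteresis_passed_cumulative(up_sweep, down_sweep, abs_tol=0.05):
        """Variant on the cumulative errors of two {ipd: value} sweeps."""
        return abs(sum(up_sweep.values()) - sum(down_sweep.values())) <= abs_tol

Under this reading, hysteresis failure signals direction-dependent mechanical error: the same IPD stop validates differently depending on whether it was approached from below or from above.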
Claim 5, which is similar in scope to Claim 18, is rejected under the same rationale. Regarding Claim 20, the system of Claim 14 is rejected over Segura as modified. Segura as modified teaches a left camera as well as a right camera mounted within the validation system (Segura, Fig. 3 demonstrates a left and right camera mounted on an apparatus) for a movement of the left camera as well as of the right camera to adjust the relief distance of the left camera as well as of the right camera from the VR HMD when removably attached, creating camera-eye HMD display distance deviating validation results (Yoon, Col 5, Lines 5-9: “In some embodiments, a characterization camera may translate away from or closer to the device under test. This would, e.g., measure effects of different eye relief on the images presented by the device under test”. Notes: the broadest reasonable interpretation of removably attached to the head of a user would cover anything that fits inside the HMD and can be secured via a fixation element such as a strap; this is implicit in Segura, considering the device is placed close to/within the HMD for viewing the left and right displays). Claims 7 and 13, which are similar in scope to Claim 20, are rejected under the same rationale. Claims 6, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Segura as modified, in further view of Fang (US 10735710 B2). Regarding Claim 19, the system of Claim 14 is rejected over Segura as modified. Segura as modified teaches a left camera as well as a right camera mounted within the validation system for a movement of the left camera as well as the right camera, when the validation system is removably attached (Segura, Fig. 3 demonstrates a left and right camera mounted on an apparatus; Yoon, Col 9, Lines 61-67, Col 10, Lines 1-16: “In one embodiment, the controller 450 generate first imaging instructions for the characterization camera 430 and second instructions for the characterization camera 440. The first imaging instruction include positions for the characterization camera 430, and the second imaging instructions include corresponding positions of the second camera characterization 440. The characterization cameras 430 and 440 move to each position in the first and second imaging instructions, resulting in a different IPD value. The first and second imaging instructions further provide a number of exposures for each position of the characterization cameras 430 and 440. Sometimes, one of the two characterization cameras 430 and 440 moves and the other one does not move. For example, a position of the characterization camera 430 corresponds to multiple positions of the characterization camera 440 (or the other way), resulting in multiple IPD values. The characterization camera 430 may capture multiple images at the position through multiple exposures so that the characterization camera 430 captures at least one image for each IPD value. The characterization cameras 430 and 440 can each be the characterization camera described in conjunction with FIG. 3”; Yoon, Fig. 4 illustrates the movement capacity of the cameras; Notes: the broadest reasonable interpretation of removably attached to the head of a user would cover anything that fits inside the HMD and can be secured via a fixation element such as a strap; this is implicit in Segura, considering the device is placed close to/within the HMD for viewing the left and right displays). Segura as modified does not teach adjusting the height position of the left camera as well as the right camera vis-à-vis the height of the horizontal centreline of the VR HMD, creating height deviating from the Design Eye Point validation results. However, Fang teaches adjusting the height position of the left camera as well as the right camera (Col 3, Lines 39-46: “As the first camera 12 and the second camera 14 are arranged along the second direction D2, which is not shown in figures, and arrangement of the first camera 12 and the second camera 14 along the first direction D1 is slightly deviated relative to the original production design, the operational processor 16 can execute the stereo vision image calibration procedure of the present invention to repair and acquire the correct stereo vision image computation result”; Figure 2 also illustrates the vertical adjustment of the cameras). Segura as modified and Fang are considered analogous art with regard to the calibration of cameras. A common motivation within the art is to adjust the positioning of cameras during calibration to account for how the cameras perceive the calibration target; this is evident in Segura as modified as well, where adjustments to the camera positioning have been made horizontally. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the validation system of Segura as modified with the ability of Fang to adjust the height position of the left camera as well as the right camera; doing so would yield the predictable result of a validation system that is more effectively calibrated due to increased camera position variance. While Segura as modified with Fang does not explicitly teach adjusting the position of the left and right cameras vis-à-vis the horizontal centreline of the VR HMD, creating height deviating from the Design Eye Point validation results, it should be noted that a validation system with left and right cameras capable of vertical adjustment would inherently have the ability to adjust the left and right cameras vis-à-vis the horizontal centreline of the VR HMD, and doing so would inherently create height deviating from the Design Eye Point validation results; the Design Eye Point, under its broadest reasonable interpretation, is the ideal position from which to view the VR HMD. Claims 6 and 12, which are similar in scope to Claim 19, are rejected under the same rationale.
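The inherency argument for Claim 19 can likewise be pictured as a validation grid that adds vertical offsets to the IPD sweep. Everything here is an assumed illustration: set_camera_pose and validate are placeholder callables for the hardware, and the plus/minus 4 mm offsets are arbitrary, with offset 0 taken as the Design Eye Point on the horizontal centreline.

    def pose_grid(ipds_mm, height_offsets_mm=(-4.0, 0.0, 4.0)):
        """Enumerate (ipd, height_offset) poses; offset 0 = Design Eye Point."""
        return [(ipd, h) for h in height_offsets_mm for ipd in ipds_mm]

    def height_deviation_validation(set_camera_pose, validate, ipds_mm):
        """Tag each validation result with its height deviation from the
        horizontal centreline, yielding height-deviating results."""
        results = {}
        for ipd, h in pose_grid(ipds_mm):
            set_camera_pose(ipd_mm=ipd, height_mm=h)  # move left/right cameras
            results[(ipd, h)] = validate()
        return results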
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAYMOND CHUN LAM LI whose telephone number is (571)272-5124. The examiner can normally be reached M-F 8:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /R.C.L./Examiner, Art Unit 2614 /KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

Jun 12, 2024
Application Filed
Feb 18, 2026
Non-Final Rejection — §103, §112 (current)


Prosecution Projections

1-2
Expected OA Rounds
Favorable
Grant Probability
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
