DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1-20 are pending. Applicant amends claims 1, 7, 16, and 20.
Response to Arguments
Applicant's arguments filed 8/4/2025 have been fully considered but they are not persuasive. Applicant argues that the amendments made overcome the Zhu reference. Examiner disagrees. In response, the Examiner respectfully points out that because applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation (BRI) will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified (see MPEP § 2111; In re Yamamoto, 740 F.2d 1569, 1571 (Fed. Cir. 1984); In re Zletz, 893 F.2d 319, 321 (Fed. Cir. 1989) (“During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow.”); In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-51 (CCPA 1969)).
Under the interpretation that the 3D coordinate system has its origin at the point on the image sensor where the optical axis intersects the image sensor, with the vertical, horizontal, and optical axes of the 3D coordinate system coinciding with the corresponding dimensions of the image sensor, the limitation, “wherein the second perspective is a first distance in the vertical dimension from a location corresponding to a first eye of the user,” is understood to be reasonably met under the provisions of BRI, since the first distance in the vertical dimension is understood as the distance between the eye and the second perspective point, as shown in Illustration A below. Under BRI, ‘in the vertical dimension’ is understood as in the direction of the vertical dimension, since a dimension is generally a directional vector and can be equivalently represented when translated in parallel.
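For clarity, this reading may be expressed compactly (an illustrative formalization; the symbols E, P, and ŷ are labels introduced here by the Examiner, not taken from Zhu):

```latex
% Displacement "in the vertical dimension" between the eye E and the
% second perspective P, with \hat{y} the unit vector of the vertical axis:
d_{\mathrm{vert}} = (P - E)\cdot\hat{y}
% A dimension, read as a direction, is invariant under parallel translation t:
\bigl((P + t) - (E + t)\bigr)\cdot\hat{y} = (P - E)\cdot\hat{y}
```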
Thus, based on the arguments provided above, Examiner contends that the amended claims 1 and 16 are still anticipated by Zhu. For details, see the rejection below.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-11 and 16-17 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Zhu et al. [US 20190101758 A1, hereinafter Zhu].
Regarding claim 1, Zhu discloses a method (¶0006, figs. 16-19, claim 16 and dependents) comprising:
at a device having a three-dimensional device coordinate system (¶0089-0093, 0096-0097, 0099, fig. 11 coordinates 1100, claim 20, etc.) and including a first image sensor (Claim 16, "the head-mounted device including a stereo camera pair"), a first display (¶0047, "During use, a user of the computer system 100 is able to perceive information (e.g., a virtual-reality scene) through a display screen that is included among the I/O interface(s) 110 and that is visible to the user"), one or more processors (Claim 16, "the method being implemented by one or more processors of the head-mounted device"), and non-transitory memory (¶0044, "The disclosed embodiments may comprise or utilize a special-purpose or general-purpose computer including computer hardware, such as, for example, one or more processors (such as processor 105) and system memory (such as storage 125)"), wherein a horizontal dimension of the three-dimensional coordinate system is aligned with a horizontal axis of the first image sensor, a vertical dimension of the three-dimensional coordinate system is aligned with a vertical axis of the first image sensor, and an optical dimension of the three-dimensional coordinate system is aligned with an optical axis of the first image sensor (the 3D coordinate system is understood as having its origin at the point on the image sensor where the optical axis intersects the image sensor, with the vertical, horizontal, and optical axes of the 3D coordinate system coinciding with the corresponding dimensions of the image sensor; fig. 13, Illustration A below. Also, in a limiting case where the horizontal axis and dimension are not collinear, the horizontal and vertical axes and dimensions are defined or understood as being parallel to the horizontal and vertical axes of the 3D coordinate system):
capturing, using the first image sensor, a first image of a physical environment (Claim 16, "the stereo camera pair being used to capture images of a surrounding environment”);
transforming the first image from a first perspective of the first image sensor to a second perspective based on a difference between the first perspective and the second perspective (Claim 16, "generating a left passthrough visualization and a right passthrough visualization by reprojecting the transformed left image and the transformed right image, the reprojecting being performed using a result obtained from the depth map”. By performing a reprojection operation 1300, the images captured by the left camera 1305A are altered. In particular, the images are altered in such a manner so that it appears as though the camera was actually located at a different position. This different position corresponds to the user's pupil locations 1315A and 1315B. To clarify, pupil location 1315A corresponds to a location of the user's left pupil while pupil location 1315B corresponds to a location of the user's right pupil. The distance 1320 corresponds to the user's interpupil distance (i.e. the distance between the pupil location 1315A and the pupil location 1315B). In essence, the images captured by the left camera 1305A are altered so that they appear as though they were actually captured by a camera (i.e. the “simulated,” or rather “reprojected,” left camera 1310) that was located near (i.e. a predetermined distance from, or rather in front of) pupil location 1315A, ¶0108. Also see ¶0102-0117, fig. 13), wherein the second perspective is a first distance in the vertical dimension from a location corresponding to a first eye of a user (Claim 16, "the reprojecting causing the transformed left image's center-line perspective to be in alignment with a left pupil of the user and the transformed right image's center-line perspective to be in alignment with a right pupil of the user". Also see Illustration A corresponding to figure 13, where the locations of the first perspective, second perspective, first distance, and second distance are labelled.
Here, the first distance is less than the second distance because in a right triangle the hypotenuse is always longer than either leg. In essence, the images captured by the left camera 1305A are altered so that they appear as though they were actually captured by a camera (i.e. the “simulated,” or rather “reprojected,” left camera 1310) that was located near (i.e. a predetermined distance from, or rather in front of) pupil location 1315A, ¶0108. Also see ¶0102-0117, fig. 13, and the equations in ¶0102-0114, especially those pertaining to λ. Reprojection 1300 determines the positions of the first and second perspectives based on λ, ¶0110-0114); and
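The right-triangle point above can be stated explicitly (illustrative notation introduced here; d₁ and d₂ correspond to the first and second distances labelled in Illustration A, and h is a label for the remaining leg):

```latex
% With d_1 a leg and d_2 the hypotenuse of the right triangle in
% Illustration A (h the remaining, non-zero leg):
d_2 = \sqrt{d_1^{2} + h^{2}} > d_1 \qquad (h \neq 0).
```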
Illustration A: Reproduction of figure 13, showing first and second perspectives and first and second distances.
displaying, on the first display, the transformed first image of the physical environment (steps 1660, 1840 etc. in figs. 16, 18, ¶0141).
Regarding claim 2, Zhu discloses the method of claim 1, wherein the second perspective and the location corresponding to the first eye of the user have a same coordinate value for at least one dimension of the device coordinate system (the x-dimension, as shown in Illustration A above, has the same value for the second perspective and the location corresponding to the first eye of the user; fig. 13, ¶0110-0114, claim 16).
Regarding claim 3, Zhu discloses the method of claim 1, wherein the second perspective and the location corresponding to the first eye of the user have a same coordinate value for two dimensions of the device coordinate system (the x-dimension and y-dimension, as shown in Illustration A above, have the same value for the second perspective and the location corresponding to the first eye of the user; fig. 13, ¶0110-0114, claim 16).
Regarding claim 4, Zhu discloses the method of claim 1, wherein the second perspective and the location corresponding to the first eye of the user have a same coordinate value for less than three dimensions of the device coordinate system (the x-dimension and y-dimension, as shown in Illustration A above, have the same value for the second perspective and the location corresponding to the first eye of the user; fig. 13, ¶0110-0114, claim 16).
Regarding claim 5, Zhu discloses the method of claim 1, wherein the second perspective and the location corresponding to the first eye of the user have a same coordinate value for less than two dimensions of the device coordinate system (the x-dimension and/or y-dimension, as shown in Illustration A above, has the same value for the second perspective and the location corresponding to the first eye of the user; fig. 13, ¶0110-0114, claim 16).
Regarding claim 6, Zhu discloses the method of claim 1, wherein the second perspective and the location corresponding to the first eye of the user have different coordinate values for each dimension of the device coordinate system (evident from fig. 13 and Illustration A; in particular, in a given use case the y-coordinate value differs between the eye and the camera perspective because the camera and the eye are at different heights from the ground).
Regarding claim 7, Zhu discloses the method of claim 1,
wherein a first ratio between (1) a displacement in the vertical dimension between the first perspective and the second perspective (zero length, see Illustration A) and (2) a displacement in the vertical dimension between the first perspective and the location corresponding to the first eye of the user (a non-zero distance, which makes the first ratio 0, see Illustration A) is different than a second ratio between (1) a displacement in the horizontal dimension between the first perspective and the second perspective (a non-zero length, see Illustration A) and (2) a displacement in the horizontal dimension between the first perspective and the location corresponding to the first eye of the user (another non-zero distance, making the second ratio a non-zero number and thus different from the first ratio, which is 0; see fig. 13 and Illustration A.
In a limiting case where the horizontal axis and dimension are not collinear, they could reasonably be defined as parallel to one another. In such a case λ, which establishes a relationship between the left camera 1305A, the reprojected left camera 1310, the right camera 1305B, and the left pupil location 1315A, serves as the defining ratio claimed in claim 7).
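The two ratios discussed above can be written out (illustrative notation introduced here; P₁, P₂, and E denote the first perspective, the second perspective, and the eye location, with Δx and Δy the horizontal and vertical components of a displacement):

```latex
R_1 = \frac{\Delta y(P_1 \to P_2)}{\Delta y(P_1 \to E)}
    = \frac{0}{\Delta y(P_1 \to E)} = 0,
\qquad
R_2 = \frac{\Delta x(P_1 \to P_2)}{\Delta x(P_1 \to E)} \neq 0,
```

so R₁ ≠ R₂ whenever the horizontal displacements are non-zero, consistent with Illustration A.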
Regarding claim 8, Zhu discloses the method of claim 7, wherein the first ratio is approximately zero (λ is a value between 0 and 1, ¶0112-0116, fig. 13, illustration A).
Regarding claim 9, Zhu discloses the method of claim 7, wherein the first ratio is approximately one (λ is a value between 0 and 1, ¶0112-0116, fig. 13, illustration A).
Regarding claim 10, Zhu discloses the method of claim 7, wherein the first ratio is between zero and one (λ is a value between 0 and 1, ¶0112-0116, fig. 13, illustration A).
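The role of λ in ¶0112-0116 may be sketched as a convex combination (an assumption by the Examiner about the form of Zhu's relationship, introduced here only for illustration; C and E are labels for the camera and pupil positions, not Zhu's notation):

```latex
% Reprojected camera position P(\lambda) between the physical camera C
% and the pupil location E, for \lambda \in [0, 1]:
P(\lambda) = (1 - \lambda)\,C + \lambda\,E
% \lambda = 0 leaves the camera in place; \lambda = 1 moves it to the pupil.
```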
Regarding claim 11, Zhu discloses the method of claim 1, further comprising:
capturing, using a second image sensor, a second image of a physical environment (step 1620, fig. 16; Claim 1, "the stereo camera pair including a left camera and a right camera, the stereo camera pair being used to capture images of a surrounding environment");
transforming the second image from a third perspective of the second image sensor to a fourth perspective based on a difference between the third perspective and the fourth perspective (Claim 1, "generate a left passthrough visualization and a right passthrough visualization by reprojecting the transformed left image and the transformed right image, the reprojecting being performed using a result obtained from the depth map, the reprojecting causing the transformed left image's center-line perspective to be in alignment with a left pupil of the user and the transformed right image's center-line perspective to be in alignment with a right pupil of the user"
By performing a reprojection operation 1300, the images captured by the left camera 1305A are altered. In particular, the images are altered in such a manner so that it appears as though the camera was actually located at a different position. This different position corresponds to the user's pupil locations 1315A and 1315B. To clarify, pupil location 1315A corresponds to a location of the user's left pupil while pupil location 1315B corresponds to a location of the user's right pupil. The distance 1320 corresponds to the user's interpupil distance (i.e. the distance between the pupil location 1315A and the pupil location 1315B). In essence, the images captured by the left camera 1305A are altered so that they appear as though they were actually captured by a camera (i.e. the “simulated,” or rather “reprojected,” left camera 1310) that was located near (i.e. a predetermined distance from, or rather in front of) pupil location 1315A, ¶0108. Also see ¶0102-0117, fig. 13); and
displaying, on a second display, the transformed second image of the physical environment (steps 1660, 1840 etc. in figs. 16, 18, ¶0141, Claim 1, "left passthrough visualization" and "right passthrough visualization"; Claim 16, "A method for reconstructing a perspective captured in an image so that the captured perspective matches a perspective of a user who is wearing a head-mounted device"; Implicit).
Regarding claim 16, Zhu discloses a device (Claim 1, "A computer system comprising: a head-mounted device") having a three-dimensional device coordinate system (¶0089-0093, 0096-0097, 0099, fig. 11 coordinates 1100, claim 20… etc.) and comprising:
a first image sensor (Claim 1, "a head-mounted device that includes a stereo camera pair"), wherein a horizontal dimension of the three-dimensional coordinate system is aligned with a horizontal axis of the first image sensor, a vertical dimension of the three-dimensional coordinate system is aligned with a vertical axis of the first image sensor, and an optical dimension of the three-dimensional coordinate system is aligned with an optical axis of the first image sensor (the 3D coordinate system is understood as having its origin at the point on the image sensor where the optical axis intersects the image sensor, with the vertical, horizontal, and optical axes of the 3D coordinate system coinciding with the corresponding dimensions of the image sensor; fig. 13, Illustration A above. Also, in a limiting case where the horizontal axis and dimension are not collinear, the horizontal and vertical axes and dimensions are defined or understood as being parallel to the horizontal and vertical axes of the 3D coordinate system);
a first display (¶0047, "During use, a user of the computer system 100 is able to perceive information (e.g., a virtual reality scene) through a display screen that is included among the I/O interface(s) 110 and that is visible to the user”);
a non-transitory memory (¶0044, "The disclosed embodiments may comprise or utilize a special-purpose or general-purpose computer including computer hardware, such as, for example, one or more processors (such as processor 105) and system memory (such as storage 125)”); and
one or more processors (Claim 16, "the method being implemented by one or more processors of the head-mounted device") to:
capture, using the first image sensor, a first image of a physical environment (Claim 1, "use the left camera to capture a raw left image of the surrounding environment and use the right camera to capture a raw right image of the surrounding environment");
transform the first image from a first perspective of the first image sensor to a second perspective based on a difference between the first perspective and the second perspective (Claim 1, "generate a left passthrough visualization and a right passthrough visualization by reprojecting the transformed left image and the transformed right image, the reprojecting being performed using a result obtained from the depth map, the reprojecting causing the transformed left image's center-line perspective to be in alignment with a left pupil of the user and the transformed right image's center-line perspective to be in alignment with a right pupil of the user"),
wherein a first ratio between (1) a displacement in the vertical dimension between the first perspective and the second perspective (zero length, see Illustration A) and (2) a displacement in the vertical dimension between the first perspective and the location corresponding to the first eye of the user (a non-zero distance, which makes the first ratio 0, see Illustration A) is different than a second ratio between (1) a displacement in the horizontal dimension between the first perspective and the second perspective (a non-zero length, see Illustration A) and (2) a displacement in the horizontal dimension between the first perspective and the location corresponding to the first eye of the user (another non-zero distance, making the second ratio a non-zero number and thus different from the first ratio, which is 0; see fig. 13 and Illustration A.
In a limiting case where the horizontal axis and dimension are not collinear, they could reasonably be defined as parallel to one another. In such a case λ, which establishes a relationship between the left camera 1305A, the reprojected left camera 1310, the right camera 1305B, and the left pupil location 1315A, serves as the defining ratio above); and
display, on the first display, the transformed first image of the physical environment (steps 1660, 1840 etc. in figs. 16, 18, ¶0141).
Regarding claim 17, Zhu discloses the device of claim 16, wherein the one or more processors are further to:
capture, using a second image sensor, a second image of a physical environment (step 1620, fig. 16; Claim 1, "the stereo camera pair including a left camera and a right camera, the stereo camera pair being used to capture images of a surrounding environment");
transform the second image from a third perspective of the second image sensor to a fourth perspective based on a difference between the third perspective and the fourth perspective (Claim 1, "generate a left passthrough visualization and a right passthrough visualization by reprojecting the transformed left image and the transformed right image, the reprojecting being performed using a result obtained from the depth map, the reprojecting causing the transformed left image's center-line perspective to be in alignment with a left pupil of the user and the transformed right image's center-line perspective to be in alignment with a right pupil of the user"
By performing a reprojection operation 1300, the images captured by the left camera 1305A are altered. In particular, the images are altered in such a manner so that it appears as though the camera was actually located at a different position. This different position corresponds to the user's pupil locations 1315A and 1315B. To clarify, pupil location 1315A corresponds to a location of the user's left pupil while pupil location 1315B corresponds to a location of the user's right pupil. The distance 1320 corresponds to the user's interpupil distance (i.e. the distance between the pupil location 1315A and the pupil location 1315B). In essence, the images captured by the left camera 1305A are altered so that they appear as though they were actually captured by a camera (i.e. the “simulated,” or rather “reprojected,” left camera 1310) that was located near (i.e. a predetermined distance from, or rather in front of) pupil location 1315A, ¶0108. Also see ¶0102-0117, fig. 13); and
display, on a second display, the transformed second image of the physical environment (steps 1660, 1840 etc. in figs. 16, 18, ¶0141, Claim 1, "left passthrough visualization" and "right passthrough visualization"; Claim 16, "A method for reconstructing a perspective captured in an image so that the captured perspective matches a perspective of a user who is wearing a head-mounted device"; Implicit).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 12, 15, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu.
Regarding claim 12, Zhu discloses the method of claim 11, wherein a vector line between the second perspective and the location corresponding to the first eye of the user is parallel to a vector line between the fourth perspective and a location corresponding to a second eye of the user (such parallel vectors can be configured, defined, and envisioned using fig. 13. Also, in figure 13, see the dotted parallel lines representing the axes of the reprojected images (1310) and the corresponding pupils (1315A and 1315B). To clarify, to be an accurate passthrough visualization (i.e. one without distortions), the center-line perspective of a left passthrough visualization should parallel a center-line perspective of the user's left eye. Similarly, to be an accurate passthrough visualization, the center-line perspective of a right passthrough visualization should parallel a center-line perspective of the user's right eye, ¶0084).
Zhu is not found to disclose that the parallel lines are indeed vectors.
However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to define the center-line perspective of the right and/or left passthrough visualization, which is parallel to the center-line perspective of the user's right and/or left eye, as vectors, because simple substitution of one known element (e.g., parallel lines) for another (e.g., parallel vectors) to obtain predictable results is obvious [see KSR Int'l Co. v. Teleflex Inc.].
Regarding claim 15, Zhu discloses the method of claim 11, wherein a vector line between the first perspective and the second perspective is parallel to a vector line between the third perspective and the fourth perspective (such parallel vectors can be configured, defined, and envisioned using fig. 13. Also, in figure 13, see the dotted parallel lines representing the axes of the reprojected images (1310) and the corresponding pupils (1315A and 1315B). To clarify, to be an accurate passthrough visualization (i.e. one without distortions), the center-line perspective of a left passthrough visualization should parallel a center-line perspective of the user's left eye. Similarly, to be an accurate passthrough visualization, the center-line perspective of a right passthrough visualization should parallel a center-line perspective of the user's right eye, ¶0084).
Zhu is not found to disclose that the parallel lines are indeed vectors.
However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to define the center-line perspective of the right and/or left passthrough visualization, which is parallel to the center-line perspective of the user's right and/or left eye, as vectors, because simple substitution of one known element (e.g., parallel lines) for another (e.g., parallel vectors) to obtain predictable results is obvious [see KSR Int'l Co. v. Teleflex Inc.].
Regarding claim 18, Zhu discloses the device of claim 17, wherein a vector line between the second perspective and the location corresponding to the first eye of the user is parallel to a vector line between the fourth perspective and a location corresponding to a second eye of the user (such parallel vectors can be configured, defined, and envisioned using fig. 13. Also, in figure 13, see the dotted parallel lines representing the axes of the reprojected images (1310) and the corresponding pupils (1315A and 1315B). To clarify, to be an accurate passthrough visualization (i.e. one without distortions), the center-line perspective of a left passthrough visualization should parallel a center-line perspective of the user's left eye. Similarly, to be an accurate passthrough visualization, the center-line perspective of a right passthrough visualization should parallel a center-line perspective of the user's right eye, ¶0084).
Zhu is not found to disclose that the parallel lines are indeed vectors.
However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to define the center-line perspective of the right and/or left passthrough visualization, which is parallel to the center-line perspective of the user's right and/or left eye, as vectors, because simple substitution of one known element (e.g., parallel lines) for another (e.g., parallel vectors) to obtain predictable results is obvious [see KSR Int'l Co. v. Teleflex Inc.].
Regarding claim 20, Zhu discloses a non-transitory computer-readable memory (¶0044, "The disclosed embodiments may comprise or utilize a special-purpose or general-purpose computer including computer hardware, such as, for example, one or more processors (such as processor 105) and system memory (such as storage 125)") having instructions encoded thereon which (¶0044), when executed by one or more processors of a device (¶0044) including a first image sensor, a second image sensor (Claim 1, "a head-mounted device that includes a stereo camera pair, the stereo camera pair including a left camera and a right camera”), a first display (Claim 1, "left passthrough visualization"; Implicit), and a second display (Claim 1, "right passthrough visualization"; Implicit), cause the device to:
capture, using the first image sensor, a first image of a physical environment (Claim 1, "the stereo camera pair including a left camera and a right camera, the stereo camera pair being used to capture images of a surrounding environment");
capture, using the second image sensor, a second image of the physical environment (ibid; Claim 1, "the stereo camera pair including a left camera and a right camera, the stereo camera pair being used to capture images of a surrounding environment");
transform the first image from a first perspective of the first image sensor to a second perspective based on a difference between the first perspective and the second perspective (Claim 1, "generate a left passthrough visualization and a right passthrough visualization by reprojecting the transformed left image and the transformed right image, the reprojecting being performed using a result obtained from the depth map, the reprojecting causing the transformed left image's center-line perspective to be in alignment with a left pupil of the user and the transformed right image's center-line perspective to be in alignment with a right pupil of the user");
transform the second image from a third perspective of the second image sensor to a fourth perspective based on a difference between the third perspective and the fourth perspective (ibid; Claim 1, "generate a left passthrough visualization and a right passthrough visualization by reprojecting the transformed left image and the transformed right image, the reprojecting being performed using a result obtained from the depth map, the reprojecting causing the transformed left image's center-line perspective to be in alignment with a left pupil of the user and the transformed right image's center-line perspective to be in alignment with a right pupil of the user"),
wherein a vector line between the second perspective and the first perspective is parallel to and noncollinear with a vector line between the fourth perspective and the third perspective (such parallel vectors can be configured, defined, and envisioned using fig. 13. Also, in figure 13, see the dotted parallel lines representing the axes of the reprojected images (1310) and the corresponding pupils (1315A and 1315B). To clarify, to be an accurate passthrough visualization (i.e. one without distortions), the center-line perspective of a left passthrough visualization should parallel a center-line perspective of the user's left eye. Similarly, to be an accurate passthrough visualization, the center-line perspective of a right passthrough visualization should parallel a center-line perspective of the user's right eye, ¶0084. Distinct parallel lines are non-collinear by definition);
display, on the first display, the transformed first image of the physical environment; and display, on the second display, the transformed second image of the physical environment (Claim 1,"left passthrough visualization" and "right passthrough visualization"; Claim 16, "A method for reconstructing a perspective captured in an image so that the captured perspective matches a perspective of a user who is wearing a head-mounted device"; Implicit).
Zhu is not found to disclose that the parallel lines are indeed vectors.
However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to define the center-line perspective of the right and/or left passthrough visualization, which is parallel to the center-line perspective of the user's right and/or left eye, as vectors, because simple substitution of one known element (e.g., parallel lines) for another (e.g., parallel vectors) to obtain predictable results is obvious [see KSR Int'l Co. v. Teleflex Inc.].
Allowable Subject Matter
Claims 13-14 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Claim 19 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 13, the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest,
wherein the vector between the second perspective and the location corresponding to the first eye of the user is parallel to a midpoint vector between (1) the midpoint between the location corresponding to the first eye of the user and a location corresponding to a second eye of the user and (2) the midpoint between the first image sensor and the second image sensor.
Claim 14 is objected to based on its dependence upon objected claim 13.
Regarding claim 19, the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest,
wherein the vector between the second perspective and the location corresponding to the first eye of the user is parallel to a midpoint vector between (1) the midpoint between the location corresponding to the first eye of the user and a location corresponding to a second eye of the user and (2) the midpoint between the first image sensor and the second image sensor.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NURUN FLORA whose telephone number is (571)272-5742. The examiner can normally be reached M-F 9:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NURUN FLORA/Primary Examiner, Art Unit 2614