Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/05/2026 has been entered.
Response to Arguments
3. Applicant's arguments filed 02/05/2026 have been fully considered but they are not persuasive.
On page 14 of the amendment, Applicant argued that Forster fails to provide any disclosure of transforming an image using processors.
However, the Examiner respectfully disagrees. Forster clearly teaches transforming (i.e., warping, as taught in paragraph 0147) an image using processors (paragraph 0147, FIGS. 20a and 20b schematically illustrate an example of an image displayed by a display unit of an HMD that is generated by applying a warping to the image in accordance with a configuration of a first optical element. In embodiments of the disclosure, the control unit can be configured to control the processor to generate the image by applying a warping to the image, in which the control unit can be configured to control the warping applied to the image and the configuration of the first optical element in accordance with each other so that the viewpoint of the image observed by the user remains substantially the same irrespective of the user's gaze direction.)
On page 15 of the amendment, Applicant argued that Forster fails to disclose displaying a blurred portion at a fixed position relative to a physical environment.
However, the Examiner respectfully disagrees. Forster clearly teaches displaying a blurred portion at a fixed position relative to a physical environment (FIGs. 20a and 20b; paragraph 0147, …the viewpoint of the image observed by the user remains substantially the same irrespective of the user's gaze direction).
On pages 16-17 of the amendment, Applicant argued that Forster fails to disclose displaying a blurred portion at a fixed position relative to the head-mounted device.
However, the Examiner respectfully disagrees. Forster clearly teaches displaying a blurred portion at a fixed position relative to the head-mounted device (FIGs. 20a and 20b; paragraph 0147, …the viewpoint of the image observed by the user remains substantially the same irrespective of the user's gaze direction).
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). On pages 18-19 of the amendment, Applicant argued that Leyton fails to crop any image of a physical environment captured by a camera. Leyton is used as a second reference to show that an image is cropped in order to align with a second field of view (paragraphs 0080 and 0091, only the region of the virtual environment that is within the field of view of the virtual camera is displayed on the display screen). On the other hand, Forster teaches that the image of a physical environment is captured by a camera (paragraphs 0050-0051, This could be by providing some degree of transparency or partial transparency in the display arrangements, and/or by projecting a view of the outside (captured using a camera, for example a camera mounted on the HMD) via the HMD's displays…A front-facing camera 122 may capture images to the front of the HMD, in use).
The arguments, related to “control circuitry configured to transform…”, on pages 19-20 of the amendment, are addressed in the first response above. The rest of the arguments, related to amended parts of claims 12 and 17, are addressed in the rejection below.
Allowable Subject Matter
4. Claim 16 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 102
5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
6. Claims 1-4, 6-10, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by FORSTER (US 2021/0088790).
As per claim 1, FORSTER discloses a method of operating a head-mounted device with a camera, a display (FIGs. 1-2; paragraphs 0046 and 0050-0051), and one or more processors (see FIG. 13b and paragraph 0033), the method comprising:
capturing an image of a physical environment using the camera (paragraphs 0050-0051, This could be by providing some degree of transparency or partial transparency in the display arrangements, and/or by projecting a view of the outside (captured using a camera, for example a camera mounted on the HMD) via the HMD's displays…A front-facing camera 122 may capture images to the front of the HMD, in use);
emitting light from a transparent structure towards an eye box using the display, wherein the physical environment is viewable through the transparent structure from the eye box (FIG. 4; paragraphs 0064-0065, In the arrangement of FIG. 4, the display unit 150 and optical elements 200 cooperate to provide an image which is projected onto a mirror 210, which deflects the image towards the user's eye position 220…if the HMD is designed not to completely obscure the user's view of the external environment, the mirror 210 can be made partially reflective so that the user sees the external environment, through the mirror 210; see also paragraph 0110 and claim 1);
transforming (i.e., warping, as taught in paragraph 0147) the image of the physical environment from a perspective of the camera (e.g., images captured by the forward-facing camera 322; paragraph 0088) to a perspective of the eye box using the one or more processors (paragraph 0147, FIGS. 20a and 20b schematically illustrate an example of an image displayed by a display unit of an HMD that is generated by applying a warping to the image in accordance with a configuration of a first optical element. In embodiments of the disclosure, the control unit can be configured to control the processor to generate the image by applying a warping to the image, in which the control unit can be configured to control the warping applied to the image and the configuration of the first optical element in accordance with each other so that the viewpoint of the image observed by the user remains substantially the same irrespective of the user's gaze direction);
blurring at least a portion of the image of the physical environment (paragraph 0142, the control unit 1310 can be configured to control the processor 1320 to generate the image by applying a blurring function to the image depending upon a predetermined distance from the point of attention in the image. The processor 1320 may apply a blurring function 660 to the image so that a portion of the image not proximate to the point of attention in the image is blurred whilst no blurring or a smaller degree of blurring is applied for a portion of the image within the predetermined distance from the point of attention. The blurring function 660 can be applied so that the magnitude of the degree of blurring varies according to a distance from the point of attention in the image. This means that portions of the image that are peripheral to the user's line of sight can be generated by the processor 1320 with a greater apparent blurriness than portions of the image that are closer to the point of attention in the image); and
displaying, using the display, the blurred portion of the transformed image of the physical environment (paragraph 0142, the control unit 1310 can be configured to control the processor 1320 to generate the image by applying a blurring function to the image depending upon a predetermined distance from the point of attention in the image. The processor 1320 may apply a blurring function 660 to the image so that a portion of the image not proximate to the point of attention in the image is blurred whilst no blurring or a smaller degree of blurring is applied for a portion of the image within the predetermined distance from the point of attention. The blurring function 660 can be applied so that the magnitude of the degree of blurring varies according to a distance from the point of attention in the image. This means that portions of the image that are peripheral to the user's line of sight can be generated by the processor 1320 with a greater apparent blurriness than portions of the image that are closer to the point of attention in the image; see also paragraph 0143 and claims 1, 3, and 6) at a position that overlaps a corresponding portion of the physical environment (paragraph 0061, the displayed images could be arranged so as to be superposed (from the user's point of view) over the external environment).
As per claim 2, FORSTER discloses before transforming the image of the physical environment from the perspective of the camera to the perspective of the eye box, down-sampling the image of the physical environment from a first resolution to a second resolution that is lower than the first resolution (paragraphs 0137 and 0140-0141, an image displayed by the display unit 150 where a first portion of the image has a higher pixel density than a second portion of the image. The first portion 1820 of the image 1800 may be generated by the processor 1320 with a pixel density corresponding to the native pixel density of the display unit 150, such that the first portion 1820 can be rendered with a number of pixels per centimetre (or pixels per degree) that is equivalent to the greatest pixel density that is supported by the display unit 150. The second portion 1830 of the image 1800 may be generated with a lower pixel density by subsampling the pixels of the image 1800. It is clear that the processor 1320 generates the image with a portion having a lower resolution before it is displayed on the display unit 150).
As per claim 3, FORSTER discloses wherein displaying the blurred portion of the transformed image comprises displaying the blurred portion of the transformed image at a fixed position relative to the physical environment (FIGs. 20a and 20b; paragraph 0147, …the viewpoint of the image observed by the user remains substantially the same irrespective of the user's gaze direction).
As per claim 4, FORSTER discloses wherein transforming the image of the physical environment from the perspective of the camera to the perspective of the eye box using the one or more processors comprises transforming (i.e., warping, as taught in paragraph 0147) the image of the physical environment from the perspective of the camera (e.g., images captured by the forward-facing camera 322; paragraph 0088) to the perspective of the eye box (paragraph 0150, FIG. 20b schematically illustrates an example in which the gaze of the user's eye is directed away from the centre of the initial field of view 2050 observed by the user's eye via the first optical element 160, and thus away from the centre of the display unit 150. In this example, the user's gaze is directed so that the user's point of attention in the image corresponds to the image feature 2005… In the examples shown, the field of view observed via the first optical element 160 has moved to the left when comparing FIG. 20a with FIG. 20b, due to the change in the configuration of the first optical element 160. Therefore, a warping can be applied to the image 2000 displayed by the display unit 150, such that the image features in the initial field of view 2050 are warped to a position on the display unit 150 corresponding to the second field of view 2051 so that the image observed by the user's eye comprises the same image features even when the configuration of the first optical element 160 is adjusted. Therefore, the viewpoint of the observed second field of view 2051 can remain substantially the same as the viewpoint of the observed initial field of view 2050 when the first optical element 160 is adjusted from a first configuration (FIG. 20a) to a second configuration (FIG. 20b)) based on depth information (paragraph 0111, detect a physical direction in which at least one of the user's eyes is pointing, or in other words, the direction of the user's gaze…by comparing information about the orientation of each eye 810/811, the so-called vergence of the eyes can be detected. The vergence can then be used to detect where on a display unit 820 (or with respect to a virtual image of a display, as in an HMD) the viewer is looking, and at which apparent depth the viewer's attention is directed in the case of a 3D image).
As per claim 6, FORSTER discloses wherein the head-mounted device has a depth sensor, the method further comprising: obtaining the depth information with the depth sensor (paragraph 0111, the HMD apparatus may comprise one or more infrared or near-infrared light sources and one or more respective detectors 322, such as eye tracking cameras, which are used to detect the orientation of at least one of the user's eyes...FIG. 14 shows two eye tracking cameras 800 and 801 that are used to detect the orientation of the eyes 810 and 811 in an HMD apparatus and detect a point of attention in a displayed image. By comparing information about the orientation of each eye 810/811, the so-called vergence of the eyes can be detected. The vergence can then be used to detect where on a display unit 820 (or with respect to a virtual image of a display, as in an HMD) the viewer is looking, and at which apparent depth the viewer's attention is directed in the case of a 3D image).
As per claim 7, arguments analogous to those applied for claim 3 are applicable for claim 7.
As per claim 8, FORSTER discloses before capturing the image of the physical environment using the camera, placing the camera in a low-resolution mode (the camera 322 captures images at 25 images per second, as taught in paragraph 0091, which is considered a lower temporal resolution than the alternative 200 Hz capture rate taught in paragraph 0093).
As per claim 9, FORSTER discloses wherein the head-mounted device has an additional camera (paragraphs 0088 and 0067), the method further comprising:
capturing an additional image of the physical environment using the additional camera (paragraph 0050, This could be by providing some degree of transparency or partial transparency in the display arrangements, and/or by projecting a view of the outside (captured using a camera, for example a camera mounted on the HMD) via the HMD's displays; see also paragraph 0067), wherein transforming (i.e., warping, as taught in paragraph 0147) the image of the physical environment from the perspective of the camera to the perspective of the eye box using the one or more processors comprises transforming the image of the physical environment from the perspective of the camera and the additional image of the physical environment from the perspective of the additional camera (paragraph 0147, FIGS. 20a and 20b schematically illustrate an example of an image displayed by a display unit of an HMD that is generated by applying a warping to the image in accordance with a configuration of a first optical element. In embodiments of the disclosure, the control unit can be configured to control the processor to generate the image by applying a warping to the image, in which the control unit can be configured to control the warping applied to the image and the configuration of the first optical element in accordance with each other so that the viewpoint of the image observed by the user remains substantially the same irrespective of the user's gaze direction) into a single image from the perspective of the eye box (paragraph 0109, The first optical element 160 can be positioned in an optical path between the user's eye and the display unit 150 so that the eye of the user observes the image displayed by the display unit 150 via the first optical element 160. In some examples, the first optical element 160 can direct light from a first portion of the display unit 150 for viewing by the user so that the first eye of the user can observe the image in the first portion via the first optical element 160, and the second optical element 160 can direct light from a second portion of the display unit 150 for viewing by the user so that the second eye of the user can observe the image in the second portion via the second optical element 160. In this case a left image may be displayed to the left eye and a right image may be displayed to the right eye such that the user's eyes observe a stereoscopic image pair), wherein blurring at least the portion of the image of the physical environment comprises blurring at least a portion of the single image (paragraph 0142, the control unit 1310 can be configured to control the processor 1320 to generate the image by applying a blurring function to the image depending upon a predetermined distance from the point of attention in the image. The processor 1320 may apply a blurring function 660 to the image so that a portion of the image not proximate to the point of attention in the image is blurred whilst no blurring or a smaller degree of blurring is applied for a portion of the image within the predetermined distance from the point of attention. The blurring function 660 can be applied so that the magnitude of the degree of blurring varies according to a distance from the point of attention in the image. This means that portions of the image that are peripheral to the user's line of sight can be generated by the processor 1320 with a greater apparent blurriness than portions of the image that are closer to the point of attention in the image), and wherein displaying the blurred portion of the transformed image of the physical environment at the position that overlaps the corresponding portion of the physical environment comprises displaying the blurred portion of the single image (paragraph 0142, the control unit 1310 can be configured to control the processor 1320 to generate the image by applying a blurring function to the image depending upon a predetermined distance from the point of attention in the image. The processor 1320 may apply a blurring function 660 to the image so that a portion of the image not proximate to the point of attention in the image is blurred whilst no blurring or a smaller degree of blurring is applied for a portion of the image within the predetermined distance from the point of attention. The blurring function 660 can be applied so that the magnitude of the degree of blurring varies according to a distance from the point of attention in the image. This means that portions of the image that are peripheral to the user's line of sight can be generated by the processor 1320 with a greater apparent blurriness than portions of the image that are closer to the point of attention in the image; see also paragraph 0143 and claims 1, 3, and 6) at the position that overlaps the corresponding portion of the physical environment (paragraph 0061, the displayed images could be arranged so as to be superposed (from the user's point of view) over the external environment).
As per claim 10, FORSTER discloses wherein displaying, using the display, the blurred portion of the transformed image of the physical environment at the position that overlaps the corresponding portion of the physical environment (paragraph 0142, the control unit 1310 can be configured to control the processor 1320 to generate the image by applying a blurring function to the image depending upon a predetermined distance from the point of attention in the image. The processor 1320 may apply a blurring function 660 to the image so that a portion of the image not proximate to the point of attention in the image is blurred whilst no blurring or a smaller degree of blurring is applied for a portion of the image within the predetermined distance from the point of attention. The blurring function 660 can be applied so that the magnitude of the degree of blurring varies according to a distance from the point of attention in the image. This means that portions of the image that are peripheral to the user's line of sight can be generated by the processor 1320 with a greater apparent blurriness than portions of the image that are closer to the point of attention in the image; see also paragraph 0143 and claims 1, 3, and 6) comprises displaying the blurred portion of the transformed image of the physical environment at a frame rate and wherein blurring at least the portion of the image of the physical environment comprises blurring at least the portion of the image of the physical environment at a frequency that is within 10% of the frame rate (paragraph 0141, This means that the second portion 1830 of the image 1800 may be generated by the processor 1320 with a lower frame rate than the first portion 1820 of the image 1800. For example, the second portion 1830 of the image 1800 may be generated by reprojecting every frame at least once or reprojecting every other frame).
As per claim 15, arguments analogous to those applied for claims 2 and 9 are applicable for claim 15.
Claim Rejections - 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
9. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over FORSTER (US 2021/0088790) in view of LEYTON et al. (US 2024/0005613), hereinafter “LEYTON”.
As per claim 5, FORSTER discloses the method defined in claim 1, wherein the camera has a first field of view and wherein the physical environment is viewable through the transparent structure from the eye box with a second field of view smaller than the first field of view (FIGs. 4, 17a, 20a), the method further comprising: cropping the image of the physical environment (paragraph 0140, In some examples, every other row and every other column of pixels in the second portion 1830 of the image 1800 may be removed). However, FORSTER does not explicitly disclose cropping the image of the physical environment to align with the second field of view.
In the same field of endeavor, LEYTON discloses cropping the image of the physical environment to align with the second field of view (paragraphs 0080 and 0091, only the region of the virtual environment that is within the field of view of the virtual camera is displayed on the display screen).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the teachings of FORSTER with those of LEYTON, because both references are drawn to the same field of endeavor (both are directed to head-mounted displays for virtual or augmented reality), and because such a combination represents a mere combination of prior art elements, according to known methods, to yield a predictable result.
10. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over FORSTER (US 2021/0088790) in view of Smith et al. (US 2024/0000295), hereinafter “Smith”.
As per claim 11, arguments analogous to those applied for the last three limitations of claim 1 are applicable for claim 11; however, FORSTER does not explicitly disclose wherein the image of the physical environment is part of a real time video feed.
In the same field of endeavor, Smith discloses wherein the image of the physical environment is part of a real time video feed (paragraphs 0010 and 0033, The display output provided to the display assembly 200 can include a real-time or near-real-time feed of video captured by the imaging assembly 100).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to use the augmented reality apparatus of FORSTER in a real-time environment, as taught by Smith, because both references are drawn to the same field of endeavor (both are directed to head-mounted displays for virtual or augmented reality), and because such a combination represents a mere combination of prior art elements, according to known methods, to yield a predictable result.
11. Claims 12-14 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over FORSTER (US 2021/0088790) in view of ZARICHNYI (US 2023/0298222), previously cited by the Examiner.
As per claim 12, arguments analogous to those applied for claim 1 are applicable for claim 12; however, FORSTER does not explicitly disclose the control circuit configured to: generate virtual content; using the display, display the virtual content over the blurred portion of the transformed image of the physical environment…, wherein the virtual content is viewable on the blurred portion of the transformed image of the physical environment.
In an analogous art, ZARICHNYI discloses wherein the control circuit is further configured to: generate virtual content; using the display, display the virtual content over the blurred portion of the transformed image of the physical environment…, wherein the virtual content is viewable on the blurred portion of the transformed image of the physical environment (FIG. 11; paragraphs 0135-0136, the processor 120 (see FIG. 2) of the augmented reality device 100 may perform rendering for synthesizing the virtual image 1120 with the peripheral area 1110 so that the virtual image 1120 is overlaid on the peripheral area 1110 from among entire area of the blur image 1100).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of FORSTER in view of ZARICHNYI by displaying virtual content over a blurred portion of an image, thus improving the user's work efficiency and concentration (ZARICHNYI, paragraphs 0127 and 0137).
As per claims 13-14, arguments analogous to those applied for claim 12 are applicable for claims 13-14.
As per claim 17, arguments analogous to those applied for claims 1 and 13-14 are applicable for claim 17.
As per claims 18-19, arguments analogous to those applied for the last two limitations of claim 1 are applicable for claims 18-19.
As per claim 20, arguments analogous to those applied for claim 6 are applicable for claim 20.
12. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: EP 2611172 A1, US 2015/0138395, US 2017/0296421, and KR 101733694.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED JEBARI whose telephone number is (571)270-7945. The examiner can normally be reached M-F, 9:00 am-6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED JEBARI/Primary Examiner, Art Unit 2482