Prosecution Insights
Last updated: April 18, 2026
Application No. 18/700,002

HEAD MOUNT DISPLAY, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD

Non-Final OA: §102, §103
Filed
Apr 10, 2024
Examiner
VAZ, JANICE EZVI
Art Unit
2667
Tech Center
2600 — Communications
Assignee
Sony Group Corporation
OA Round
1 (Non-Final)
77%
Grant Probability
Favorable
1-2
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 77% — above average
77%
Career Allow Rate
48 granted / 62 resolved
+15.4% vs TC avg
Strong +28% interview lift
+27.5%
Interview Lift
based on resolved cases with an interview
Typical timeline
3y 1m
Avg Prosecution
21 currently pending
Career history
83
Total Applications
across all art units
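The headline figures are simple ratios over the examiner's resolved cases, and the "vs TC avg" delta lets you back out the baseline. A quick sketch of the arithmetic, using only the numbers shown above (48 granted of 62 resolved, +15.4 points vs. the Tech Center average):

```python
granted, resolved = 48, 62

# Career allow rate: granted / resolved, shown rounded on the dashboard.
allow_rate = 100 * granted / resolved
print(f"career allow rate: {allow_rate:.1f}%")   # career allow rate: 77.4%

# The "+15.4% vs TC avg" delta implies the Tech Center average estimate.
tc_avg = allow_rate - 15.4
print(f"implied TC average: {tc_avg:.1f}%")      # implied TC average: 62.0%
```

The rounded 77% shown in the header is this 77.4% career rate; the +27.5-point interview lift is reported separately against cases resolved with an interview.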

Statute-Specific Performance

§101
9.2%
-30.8% vs TC avg
§103
45.8%
+5.8% vs TC avg
§102
36.5%
-3.5% vs TC avg
§112
8.5%
-31.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 62 resolved cases
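The statute-specific deltas can be checked the same way: subtracting each delta from the examiner's rate recovers the Tech Center average that the black line represents. Notably, all four statutes imply the same ~40% baseline:

```python
# Examiner rate per statute and delta vs the Tech Center average, as shown above.
stats = {
    "101": (9.2, -30.8),
    "103": (45.8, +5.8),
    "102": (36.5, -3.5),
    "112": (8.5, -31.5),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")
# Every statute implies the same 40.0% Tech Center baseline.
```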

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-3 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Fujimaki (US 20130234914 A1).

Regarding Claim 1, Fujimaki teaches a head mount display comprising: a left display that displays a left-eye display image ([0038]: a left optical-image display unit 28); a right display that displays a right-eye display image ([0038]: a right optical-image display unit 26); a housing that supports the left display and the right display so as to be located in front of eyes of a user ([0038]: The right optical-image display unit 26 and the left optical-image display unit 28 are arranged in positions corresponding to positions before the right and left eyes of the user; see Fig. 2); and a left camera that captures a left camera image, and a right camera that captures a right camera image ([0018]: a second image pickup unit arranged in a position corresponding to the right eye or temple of the user or the periphery of the eye or the temple during wearing of the head-mounted display device and configured to pick up an image of the outside scene and acquire a second outside scene image; and a third image pickup unit arranged in a position corresponding to the left eye or temple of the user or the periphery of the eye or the temple during wearing of the head-mounted display device and configured to pick up an image of the outside scene and acquire a third outside scene image), the left camera and the right camera being provided outside the housing ([0091]: further includes a camera 62 and a camera 63; see Fig. 11, cameras 62 and 63 positioned on the outer area of the glasses), wherein an interval between the left camera and the right camera is wider than an interocular distance of the user ([0018]: a second image pickup unit arranged in a position corresponding to the right eye or temple of the user…a third image pickup unit arranged in a position corresponding to the left eye or temple of the user. Examiner notes cameras placed at the temples will have a wider distance between them than an interocular distance).

Regarding Claim 2, Fujimaki teaches the head mount display according to claim 1. In addition, Fujimaki teaches wherein the left camera and the right camera are provided in the housing toward a direction of a line-of-sight of the user, and capture an outside world in the direction of the line-of-sight of the user ([0091]: further includes a camera 62 and a camera 63; see Fig. 11, cameras positioned on the outer area of the glasses, the glasses capturing images in the direction of a line of sight of a user when donned by the user; [0092]: The camera 62 is arranged in a position corresponding to the right eye or temple of the user; [0093]: camera 63 is arranged in a position corresponding to the left eye or temple of the user).

Regarding Claim 3, Fujimaki teaches the head mount display according to claim 1. In addition, Fujimaki teaches wherein the left camera and the right camera are provided in front of the left display and the right display in a direction of a line-of-sight of the user ([0092]: The camera 62 is arranged in a position corresponding to the right eye or temple of the user; [0093]: camera 63 is arranged in a position corresponding to the left eye or temple of the user; see Fig. 11, cameras positioned on the outer part of the glasses; when the glasses are put on by a user, the eyes of the user will be positioned behind the cameras).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 4-5, 17-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Fujimaki (US 20130234914 A1) in view of Kurabayashi (US 20190311471 A1) and Kato (US 20180184077 A1).

Regarding Claim 4, representative of Claims 17 and 20, Fujimaki teaches the head mount display according to claim 1.
However, Fujimaki does not explicitly teach the remaining limitations of Claim 4.

Kurabayashi teaches wherein the left-eye display image is generated by projecting the left camera image onto a viewpoint of the left display ([0099]: The left-eye transmissive display 202b and the left-eye camera 203b work similarly to the right-eye transmissive display 202a and the right-eye camera 203a; the right-eye camera 203a is a camera for acquiring a real-space image to be visually recognized by the right eye of the user through the right-eye transmissive display 202a, by way of projective transformation by applying a projection matrix calculated from the internal parameters to the image obtained from the right-eye camera 203a. The internal parameters are camera parameters, such as the angle of view and the visual field range, calculated from the positional relationship between the right-eye camera 203a and the right-eye transmissive display 202a).

Neither Fujimaki nor Kurabayashi explicitly teaches generating the left eye display image and right eye display image by sampling a pixel value.
Kato teaches generating the left eye display image and right eye display image by sampling a pixel value ([0061]: the display images include the display image J.sub.L to be presented to the left eye and the display image J.sub.R to be presented to the right eye; [0074]: In S15, the display image generating unit 206 acquires (samples) the pixel value of the light ray in the direction (θ,φ) from the corresponding image and stores the acquired pixel value into the corresponding pixel position in the display image).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have modified the teachings of Fujimaki to include the teachings of Kurabayashi. Doing so would ensure the images captured for viewing the environment in the head mounted display are corrected by applying a projection matrix calculated from internal parameters before being displayed. Further, it would have been obvious to one of ordinary skill in the art to have modified the Fujimaki and Kurabayashi combination by substituting the generic mention of an image being transmitted to the display with Kato's sampling of pixel values from an image to display. Doing so would provide the predictable result of providing image data to a display viewpoint in the head mounted display.

Regarding Claim 5, representative of Claim 18, the Fujimaki, Kurabayashi, and Kato combination teaches the head mount display according to claim 4. In addition, Kato teaches wherein the left-eye display image is compensated by using the right camera image, and the right-eye display image is compensated using the left camera image ([0075]: Thus, the right and left display images may be switched. In other words, the pixel value for the left-eye display image may be sampled from the right-eye image, and the pixel value for the right-eye display image may be sampled from the left-eye image).

Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Fujimaki (US 20130234914 A1) in view of Kurabayashi (US 20190311471 A1), Kato (US 20180184077 A1), and Izumi (US 20190132575 A1).

Regarding Claim 6, the Fujimaki, Kurabayashi, and Kato combination teaches the head mount display according to claim 4. Fujimaki, Kurabayashi, and Kato all individually teach a left and right display image of a head mounted device. However, none explicitly teaches the remaining limitations of Claim 6. Izumi teaches wherein the left-eye display image is compensated by using the left-eye display image in a past, and the right-eye display image is compensated using the right-eye display image in a past ([0168]: It is to be noted that, for the inpainting, an image picked up in the past may be used). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have modified the teachings of the Fujimaki, Kurabayashi, and Kato combination with the teachings of Izumi by including image inpainting using a previous image. Doing so would improve the accuracy of the display image by inpainting any missing values.

Claim(s) 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Fujimaki (US 20130234914 A1) in view of Kurabayashi (US 20190311471 A1).

Regarding Claim 7, Fujimaki teaches the head mount display according to claim 1. Although Fujimaki teaches depth measurements in at least [0090], Fujimaki does not explicitly teach using a distance measurement sensor in the housing. Kurabayashi teaches wherein a distance measurement sensor is provided in the housing toward a direction of a line-of-sight of the user ([0069]: in one example, the sensors 206 include an acceleration sensor, a gyro sensor, an infrared depth sensor, and a camera. The infrared depth sensor is a depth sensor based on infrared projection; however, the same function may be realized by using an RGB-D camera).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have modified Fujimaki to include the teachings of Kurabayashi by substituting generic depth measurements with depth measurements from a sensor on the HMD device in a line of sight of the user. Doing so would provide the predictable result of obtaining depth measurements.

Regarding Claim 8, the Fujimaki and Kurabayashi combination teaches the head mount display according to claim 7. In addition, Kurabayashi teaches wherein the left camera image is projected onto a viewpoint of the left display by using a depth image obtained by the distance measurement sensor, and the right camera image is projected onto a viewpoint of the right display by using the depth image ([0090]: user-environment determining part 12 acquires image data by means of a camera, acquires shape data by means of a depth sensor, and determines the shapes of real objects visually recognizable by the user in the user-visual-field region; [0094]: rendering part 13 renders a virtual object corresponding to the user environment determined by the user-environment determining part 12 on the display unit 202 of the HMD 200. The rendering part 13 generates and renders a right-eye image (virtual object) to be visually recognized by the right eye of the user and a left-eye image (virtual object) to be visually recognized by the left eye of the user).

Claim(s) 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Fujimaki (US 20130234914 A1) in view of Takahashi (US 20190377177 A1).

Regarding Claim 13, Fujimaki teaches the head mount display according to claim 1. In addition, Fujimaki teaches left and right cameras positioned at the left and right temples of a user respectively, which is wider than an interocular distance. Fujimaki does not explicitly teach the remaining limitations of Claim 13.
Takahashi teaches wherein the interocular distance is a distance from a center of a pupil of a left eye to a center of a pupil of a right eye ([0182]: E is the interocular distance, i.e. the distance between the eyes of the user; see Fig. 6, E is depicted as measuring from center to center). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have modified the teachings of Fujimaki by substituting the teachings of Takahashi, particularly Takahashi's definition of the interocular distance. Doing so would provide the predictable result of a camera positioning in an HMD being wider than an interocular distance.

Regarding Claim 14, Fujimaki teaches the head mount display according to claim 1. In addition, Fujimaki teaches left and right cameras positioned at the left and right temples of a user respectively, which is wider than an interocular distance. Fujimaki does not explicitly teach the remaining limitations of Claim 14. Takahashi teaches wherein the interocular distance of the user is a value obtained by statistics ([0142]: the average interocular distance E of the user).

Claim(s) 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Fujimaki (US 20130234914 A1) in view of Stafford (US 20160260251 A1).

Regarding Claim 15, Fujimaki teaches the head mount display according to claim 1. However, Fujimaki does not explicitly teach the remaining limitations of Claim 15. Stafford teaches wherein two of the left cameras and two of the right cameras are provided ([0046]: though in the illustrated embodiment, two cameras are shown on the front surface of the HMD 102, it will be appreciated that there may be any number of externally facing cameras, … and oriented in any direction). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have modified Fujimaki to include a plurality of cameras as taught by Stafford.
Doing so would provide more image data to improve the accuracy of a user's perception of the outside world while wearing the head mounted device.

Regarding Claim 16, Fujimaki teaches the head mount display according to claim 3. However, Fujimaki does not explicitly teach the remaining limitations of Claim 16. Stafford teaches wherein one of two of the left cameras and one of two of the right cameras are disposed to be located above a height of an eye of the user, and another of the two left cameras and another of the two right cameras are disposed to be located below the height of the eye of the user ([0046]: though in the illustrated embodiment, two cameras are shown on the front surface of the HMD 102, it will be appreciated that there may be any number of externally facing cameras, … and oriented in any direction).

Allowable Subject Matter

Claims 9-12 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANICE VAZ, whose telephone number is (703) 756-4685. The examiner can normally be reached Monday-Friday, 9:00 am-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JANICE E. VAZ/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667
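The §103 rejections of Claims 4 and 8 turn on a concrete image-processing idea: reprojecting an outward camera image to the display (eye) viewpoint via a projection matrix built from internal parameters (Kurabayashi [0099]), then sampling pixel values into the display image (Kato [0074]). A minimal sketch of that idea, under stated assumptions: a simple pinhole model, made-up intrinsics, a distant scene (so the mapping collapses to a homography), and nearest-neighbor sampling. None of these values come from the cited references.

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal length, principal point) for the
# outward-facing camera and for the display viewpoint; real values would come
# from calibration (Kurabayashi's "internal parameters").
K_cam = np.array([[100.0, 0.0, 80.0],
                  [0.0, 100.0, 60.0],
                  [0.0,   0.0,  1.0]])
K_disp = np.array([[90.0, 0.0, 80.0],
                   [0.0, 90.0, 60.0],
                   [0.0,  0.0,  1.0]])

# For a distant scene (or a pure rotation between camera and eye), the
# camera-to-display mapping reduces to the homography H = K_disp @ K_cam^-1.
H = K_disp @ np.linalg.inv(K_cam)

def reproject(camera_image, H):
    """Generate a display image by sampling, for each display pixel, the pixel
    value at the corresponding camera-image location (nearest neighbor)."""
    h, w = camera_image.shape
    out = np.zeros_like(camera_image)
    Hinv = np.linalg.inv(H)  # display pixel -> camera pixel
    for v in range(h):
        for u in range(w):
            x, y, s = Hinv @ (u, v, 1.0)
            cu, cv = int(round(x / s)), int(round(y / s))
            if 0 <= cv < h and 0 <= cu < w:
                out[v, u] = camera_image[cv, cu]  # sample the pixel value
    return out

cam = np.random.default_rng(0).integers(0, 256, (120, 160), dtype=np.uint8)
disp = reproject(cam, H)
# The shared principal point maps to itself, so that pixel is unchanged.
print(disp[60, 80] == cam[60, 80])  # True
```

With a depth image (Claim 8), each pixel would instead be back-projected to a 3D point using its measured distance and reprojected per-pixel, rather than through a single global homography.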

Prosecution Timeline

Apr 10, 2024
Application Filed
Mar 31, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602831
METHOD AND SYSTEM FOR ENHANCING IMAGES USING MACHINE LEARNING
2y 5m to grant • Granted Apr 14, 2026
Patent 12602811
IMAGE PROCESSING SYSTEM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602935
DRIVING ASSISTANCE DEVICE AND DRIVING ASSISTANCE METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12591847
SYSTEMS AND METHODS OF TRANSFORMING IMAGE DATA TO PRODUCT STORAGE FACILITY LOCATION INFORMATION
2y 5m to grant • Granted Mar 31, 2026
Patent 12591977
AUTOMATICALLY AUTHENTICATING AND INPUTTING OBJECT INFORMATION
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+27.5%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 62 resolved cases by this examiner. Grant probability derived from career allow rate.
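The 99% "with interview" figure is consistent with simply adding the +27.5-point lift to the 77.4% base rate and capping the result. The cap is a guess at the dashboard's formula, not something the page documents:

```python
# Base grant probability, interview lift, and an assumed display cap.
base_rate = 100 * 48 / 62   # 77.4% career allow rate
lift = 27.5                 # percentage-point interview lift
cap = 99.0                  # assumed ceiling on the displayed probability

with_interview = min(base_rate + lift, cap)
print(f"{with_interview:.0f}%")  # 99%
```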
