Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-10 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yamakawa et al., Japanese Patent JP 2020106655 A, hereinafter "Yamakawa".
Regarding claim 1, Yamakawa teaches a display device ("Head Mounted Display, hereinafter abbreviated as HMD" [Yamakawa para 0003]) comprising: a reception circuit configured to receive a piece of first image data and a piece of second image data ("In the video see-through HMD described above, a CG to be drawn is rendered for a captured image captured by a video camera, and a display image in which the generated CG is superimposed on the captured image is displayed on the HMD internal panel." [Yamakawa para 0003]), wherein the piece of first image data represents an entire image ("FIG. 1A shows an example of an image obtained by capturing the real world." [Yamakawa para 0010]) having a first resolution ("In the case of no head movement, it is displayed within the range of resolution H×V." [Yamakawa para 0015; see Fig. 3]), the piece of second image data represents a peripheral image ("When the head is yawed as in (b), the horizontal shift amount Hd of the image is obtained. The horizontal shift amount Hd can be calculated from the relationship between the distance L and the Yaw angle Yθ." [Yamakawa para 0016]) having a second resolution less than or equal to the first resolution ("Therefore, the pixel 200 in the area of the width S′ pixel has the same size as that in FIG. 1A in FIG. 1B." [Yamakawa para 0012]), and
the peripheral image comprising an image outside the entire image ("When the head is yawed as in (b), the horizontal shift amount Hd of the image is obtained." [Yamakawa para 0016]);
a display section including a plurality of pixels ("The screen is displayed on the pixel display panel" [Yamakawa para 0015]), wherein the display section is configured to display a first image having an image range same as an image range of the entire image ("the MR image is being experienced. It looks like the observer is seen from the front. In the case of no head movement, it is displayed within the range of resolution H×V." [Yamakawa para 0015]);
a sensor configured to detect a change in an orientation of the display device ("In addition to the optical sensor, the position and orientation of the HMD may be measured using a marker or an image feature that appears in the image captured by the image capturing unit 508 of the HMD." [Yamakawa para 0021]);
an image processing circuit configured to generate a piece of display image data based on the piece of first image data, the piece of second image data, and a result of the detection by the sensor ("The picked-up image is subjected to image processing such as communication error detection between the HMD 100 and the information processing apparatus 110, pixel defect correction, and image pickup system distortion correction, and outputs the image of the processing result to the position/orientation estimation unit 1103" [Yamakawa para 0021]); and
a display drive circuit ("the information processing device 110 and the HMD 100 may transmit/receive data via a network such as a wireless LAN. The HMD 100 has an imaging unit 508 and a display unit 507." [Yamakawa para 0018]) configured to drive the display section based on the piece of display image data ("The imaging unit 508 captures an image. The display unit 507 displays various information. The captured image obtained by … the position or orientation of the image capturing unit 508 is superimposed on the captured image. Then, the display unit 507 displays this composite image." [Yamakawa para 0018]).
Regarding claim 2, Yamakawa teaches claim 1; in addition, Yamakawa teaches wherein the sensor is further configured to determine that the orientation of the display device has changed, the image processing circuit is further configured to generate, based on the determination that the orientation of the display device has changed ("The position and orientation measured here are used to generate a CG image in which a predetermined CG model is depicted in a predetermined position and orientation. It should be noted that this is a configuration necessary for creating a mixed reality image. The position/orientation estimation unit 1103 is not limited to the captured image, and may use a measurement value from an external sensor such as a magnetic sensor or an orientation sensor." [Yamakawa para 0024]), the piece of display image data to include a piece of first data and a piece of second data, the piece of first data is based on the piece of first image data ("The image acquisition unit 1100 acquires the image captured by the imaging unit 507 of the HMD 100 and the imaging time." [Yamakawa para 0022]), and the piece of second data is based on the piece of second image data ("The imaging system image correction unit 1102 corrects the captured image based on the imaging system correction information. The imaging system correction information is a parameter for performing lens distortion correction processing corresponding to the imaging optical system 5080" [Yamakawa para 0023]).
Regarding claim 3, Yamakawa teaches claim 2; in addition, Yamakawa teaches wherein the piece of display image data further includes a piece of third data that represents an outside image outside a second image ("Therefore, the pixel 200 in the area of the width S′ pixel has the same size as that in FIG. 1A in FIG. 1B, but is larger than that in FIG. 1A in FIG. 1C." [Yamakawa para 0012]), and the piece of second data represents the second image ("the pixel 201 outside the area of the width S' has the same size in (a), (b) and (c) of FIG. 1. By showing the image of FIG. 1(c) to the observer, the sense of discomfort due to delay can be reduced. FIG. 1D is an image after the display system distortion correction is performed in order to show FIG. 1C to the observer." [Yamakawa para 0013]).
Regarding claim 4, Yamakawa teaches claim 3; in addition, Yamakawa teaches wherein the image processing circuit is further configured to generate the piece of third data ("Therefore, the pixel 200 in the area of the width S′ pixel has the same size as that in FIG. 1A in FIG. 1B, but is larger than that in FIG. 1A in FIG. 1C." [Yamakawa para 0012]) having a specific pixel value ("The display control unit 1107 corrects and displays the partial area of the display image to be shifted based on at least the control information. The display control unit 1107 corrects and displays the display image generated by the display image generating unit 1105 using the correction table of the correction table updating unit 1106." [Yamakawa para 0028]).
Regarding claim 5, Yamakawa teaches claim 3; in addition, Yamakawa teaches wherein the image processing circuit is further configured to generate the piece of third data based on the piece of second data ("FIG. 1D is an image after the display system distortion correction is performed in order to show FIG. 1C to the observer. That is, the display control unit 1107 performs control to correct the input image of FIG. 1A to the display image of FIG. 1D. This makes it possible to present the user with a good-looking display in which the black band area is suppressed." [Yamakawa para 0014]).
Regarding claim 6, Yamakawa teaches claim 1; in addition, Yamakawa teaches wherein the sensor is further configured to determine that the orientation of the display device has not changed, and the image processing circuit is further configured to generate, based on the determination that the orientation of the display device has not changed, the piece of display image data to include the piece of first data ("the correction table for the entire screen is updated in order to convert from (a) of FIG. 1 to (d) of FIG. 1. In the present modification, the entire image is not corrected by the correction, but the display shift process for forming the image corresponding to FIG. 1B is performed, and then the correction process is performed. By performing the display shift process before the distortion correction, the correction table is changed only in the area corresponding to s' in FIG. 1. Since the correction table for the area corresponding to s″ can be used as it is as the correction table for correcting the normal optical distortion, it is not necessary to update the correction table for the s″ area." [Yamakawa para 0049]).
Regarding claim 7, Yamakawa teaches claim 1; in addition, Yamakawa teaches wherein the reception circuit is further configured to receive the piece of first image data and the piece of second image data in each of a plurality of reception periods repeatedly set, the image processing circuit is further configured to generate the piece of display image data corresponding to each of the plurality of reception periods, and the display drive circuit is further configured to drive the display section based on the display image data corresponding to each of the plurality of reception periods ("the display image generation unit 1105 measures the posture of the HMD at a timing close to the time when the image synthesis ends, and shifts the display image obtained by synthesizing the captured image and the CG image based on the posture information. Head movement information is received from a movement information acquisition unit 1101 that detects the posture of the HMD 100 such as a posture sensor, and a shift amount described below is calculated from the head movement information and output to the display control unit 1107." [Yamakawa para 0025]).
Regarding claim 8, Yamakawa teaches a display system ("Head Mounted Display, hereinafter abbreviated as HMD" [Yamakawa para 0003]) comprising: an image generation device configured to transmit a piece of first image data and a piece of second image data ("In the video see-through HMD described above, a CG to be drawn is rendered for a captured image captured by a video camera, and a display image in which the generated CG is superimposed on the captured image is displayed on the HMD internal panel." [Yamakawa para 0003]), wherein the piece of first image data ("FIG. 1A shows an example of an image obtained by capturing the real world." [Yamakawa para 0010]) represents an entire image having a first resolution ("In the case of no head movement, it is displayed within the range of resolution H×V." [Yamakawa para 0015; see Fig. 3]), the piece of second image data represents a peripheral image having a second resolution less than or equal to the first resolution ("Therefore, the pixel 200 in the area of the width S′ pixel has the same size as that in FIG. 1A in FIG. 1B." [Yamakawa para 0012]), and the peripheral image comprising a first image outside the entire image ("When the head is yawed as in (b), the horizontal shift amount Hd of the image is obtained." [Yamakawa para 0016]); and a display device that includes: a reception circuit configured to receive the piece of first image data and the piece of second image data ("the MR image is being experienced. It looks like the observer is seen from the front. In the case of no head movement, it is displayed within the range of resolution H×V." [Yamakawa para 0015]); a display section including a plurality of pixels ("The screen is displayed on the pixel display panel" [Yamakawa para 0015]), wherein the display section is configured to display a second image having an image range same as an image range of the entire image ("In the video see-through HMD described above, a CG to be drawn is rendered for a captured image captured by a video camera, and a display image in which the generated CG is superimposed on the captured image is displayed on the HMD internal panel." [Yamakawa para 0003]); a sensor configured to detect a change in an orientation of the display device ("In addition to the optical sensor, the position and orientation of the HMD may be measured using a marker or an image feature that appears in the image captured by the image capturing unit 508 of the HMD." [Yamakawa para 0021]); an image processing circuit configured to generate a piece of display image data based on the piece of first image data, the piece of second image data, and a result of the detection by the sensor ("The picked-up image is subjected to image processing such as communication error detection between the HMD 100 and the information processing apparatus 110, pixel defect correction, and image pickup system distortion correction, and outputs the image of the processing result to the position/orientation estimation unit 1103" [Yamakawa para 0021]); and a display drive circuit ("the information processing device 110 and the HMD 100 may transmit/receive data via a network such as a wireless LAN. The HMD 100 has an imaging unit 508 and a display unit 507." [Yamakawa para 0018]) configured to drive the display section based on the piece of display image data ("The imaging unit 508 captures an image. The display unit 507 displays various information. The captured image obtained by … the position or orientation of the image capturing unit 508 is superimposed on the captured image. Then, the display unit 507 displays this composite image." [Yamakawa para 0018]).
Regarding claim 9, Yamakawa teaches claim 8; in addition, Yamakawa teaches wherein the display device further includes a transmission circuit configured to transmit the result of the detection by the sensor to the image generation device, and the image generation device is further configured to generate the piece of first image data based on the received result of the detection by the sensor ("the information processing device 110 and the HMD 100 transmit and receive data via an analog video port or the like, but the communication system of both devices … may transmit/receive data via a network such as a wireless LAN. The HMD 100 … captured image obtained by the imaging unit 508 is transmitted to the information processing device 110. Then, the information processing apparatus 110 generates a composite image in which the image (virtual model or the like) generated according to the position or orientation of the image capturing unit 508 is superimposed on the captured image." [Yamakawa para 0018]).
Regarding claim 10, Yamakawa teaches a display method ("Head Mounted Display, hereinafter abbreviated as HMD" [Yamakawa para 0003]) comprising: receiving a piece of first image data and a piece of second image data ("In the video see-through HMD described above, a CG to be drawn is rendered for a captured image captured by a video camera, and a display image in which the generated CG is superimposed on the captured image is displayed on the HMD internal panel." [Yamakawa para 0003]), wherein the piece of first image data ("FIG. 1A shows an example of an image obtained by capturing the real world." [Yamakawa para 0010]) represents an entire image having a first resolution ("In the case of no head movement, it is displayed within the range of resolution H×V." [Yamakawa para 0015; see Fig. 3]), the piece of second image data represents a peripheral image ("When the head is yawed as in (b), the horizontal shift amount Hd of the image is obtained. The horizontal shift amount Hd can be calculated from the relationship between the distance L and the Yaw angle Yθ." [Yamakawa para 0016]) having a second resolution less than or equal to the first resolution ("Therefore, the pixel 200 in the area of the width S′ pixel has the same size as that in FIG. 1A in FIG. 1B." [Yamakawa para 0012]), and the peripheral image comprising a first image outside the entire image ("When the head is yawed as in (b), the horizontal shift amount Hd of the image is obtained." [Yamakawa para 0016]); detecting, by a sensor, a change in an orientation of a display device ("In addition to the optical sensor, the position and orientation of the HMD may be measured using a marker or an image feature that appears in the image captured by the image capturing unit 508 of the HMD." [Yamakawa para 0021]); generating a piece of display image data based on the piece of first image data, the piece of second image data, and a result of the detection by the sensor ("The picked-up image is subjected to image processing such as communication error detection between the HMD 100 and the information processing apparatus 110, pixel defect correction, and image pickup system distortion correction, and outputs the image of the processing result to the position/orientation estimation unit 1103" [Yamakawa para 0021]); and driving a display section based on the piece of display image data ("the information processing device 110 and the HMD 100 may transmit/receive data via a network such as a wireless LAN. The HMD 100 has an imaging unit 508 and a display unit 507." [Yamakawa para 0018]), wherein the display section is configured to display a second image having an image range same as an image range of the entire image ("The imaging unit 508 captures an image. The display unit 507 displays various information. The captured image obtained by … the position or orientation of the image capturing unit 508 is superimposed on the captured image. Then, the display unit 507 displays this composite image." [Yamakawa para 0018]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT J MICHAUD whose telephone number is (571)270-3981. The examiner can normally be reached 8:30 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patrick Edouard can be reached on 571-272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT J MICHAUD/Examiner, Art Unit 2622