DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
In Applicant’s specification, paragraphs 25 and 39, “alpha-cannel” should read “alpha-channel”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 7-12 are rejected under 35 U.S.C. 103 as being unpatentable over Ohashi (US 20200125312 A1, hereinafter “Ohashi”) in view of Grau (US 20060066614 A1, hereinafter “Grau”).
Regarding claim 1,
Ohashi teaches:
An image processing apparatus (Ohashi: ¶23, “. . . an image generating apparatus 200 . . .”) comprising:
at least one processor (Ohashi: ¶27, “. . . control unit 10 is a main processor that processes and outputs signals such as image signal and sensor signal and instructions and data. . .”)
that functions as:
an acquisition unit configured to acquire a real image captured by an image capturing device and output the real image (Ohashi: ¶40-41, “. . . An HDMI transmitting-receiving unit 280 receives video of a real space photographed by the camera unit 80 from the head-mounted display 100 and supplies the video to an image signal processing unit 250. . . The image signal processing unit 250 supplies an RGB image . . . to an image generating unit 230. . .”; NOTE: camera unit 80 photographs a real space which is acquiring the real image >> unit 280 >> outputs the video to be supplied to unit 250 >> image generating unit 230. Therefore, the control unit 10 functions as an acquisition unit as claimed.);
a generation unit configured to receive the real image output from the acquisition unit, generate a virtual image, output the virtual image, and output the received real image or a converted image of the real image (Ohashi: ¶7, “. . . a rendering unit that carries out rendering of objects of a virtual space and generates a computer graphics image. . .”; ¶42-43, “. . . The image generating unit 230 generates an augmented reality image by reading out data for generation of computer graphics from an image storing unit 260 and rendering objects of a virtual space to generate a CG image and superimposing the CG image on a camera image of a real space provided from the image signal processing unit 250, and outputs the augmented reality image to the image storing unit 260. . .”; ¶55-56, “A chroma key generating unit 244 generates a chroma key image from a CG image based on the depth information of a camera image. . . The chroma key image is used in order to superimpose the CG image on the camera image to generate an augmented reality image. . .”; NOTE: Ohashi’s image generating unit 230 generates a chroma key image, which is the converted image, via the chroma key generating unit 244. The image generating unit also generates a CG image, which is the virtual image >> the chroma key generating unit outputs the converted image, and the rendering unit outputs the CG image >> to be superimposed on the real image. Fig. 4 illustrates the structure of the image generating unit, which includes a chroma key generating unit for generating a chroma key image and a rendering unit for generating the CG image. Therefore, the control unit 10 functions as a generation unit as claimed.);
a synthesis unit configured to combine the real image output from the acquisition unit, the virtual image output from the generation unit, and the real image or the converted image output from the generation unit; and (Ohashi: ¶42-43, “. . . The image generating unit 230 generates an augmented reality image . . . and rendering objects of a virtual space to generate a CG image and superimposing the CG image on a camera image of a real space provided from the image signal processing unit 250. . .”; ¶55-56, “A chroma key generating unit 244 generates a chroma key image from a CG image based on the depth information of a camera image. . . The chroma key image is used in order to superimpose the CG image on the camera image to generate an augmented reality image. . .”; ¶48, “. . . AR superimposing unit 234 generates an augmented reality image by superimposing the CG image. . .”; NOTE: The AR superimposing unit 234, which is a sub-unit of the image generating unit, combines the chroma key image and the CG image by superimposing them on the image of a real space to generate an augmented reality image. Therefore, the control unit 10 functions as a synthesis unit as claimed.);
a display control unit configured to display an image synthesized by the synthesis unit on a display device (Ohashi: ¶33-34, “The HDMI transmitting-receiving unit 90 receives an image generated by the image generating apparatus 200 . . . and supplies the image to the control unit 10 . . . The control unit 10 can supply an image or text data to the output interface 30 to cause the display panel 32 to display it. . .”; ¶56, “The HDMI transmitting-receiving unit 280 reads out frame data of the augmented reality image generated by the image generating unit 230 from the image storing unit 260 and transmits the frame data to the head-mounted display 100 in accordance with the HDMI. . .”; NOTE: The HDMI unit receives the synthesized image data and sends it to the control unit 10 >> the control unit 10 then supplies the synthesized image to the output interface 30 for display on the display panel 32. Therefore, the control unit 10 functions as a display control unit as claimed.),
wherein the generation unit
changes a delay time of the real image in the image displayed on the display device (Ohashi: ¶55, “. . . The chroma key image is used in order to superimpose the CG image on the camera image to generate an augmented reality image. The chroma key image is generated by using the camera image that has a low resolution and involves delay . . . and the CG image is superimposed on the camera image with low delay and high resolution based on the chroma key image . . . Thereby, the augmented reality image without unnaturalness can be generated.”; NOTE: The real image is the camera image, and the converted image is the chroma key image. Ohashi discloses a high-delay path, which involves a delay due to the additional chroma key image processing for displaying the real image, and a low-delay path for displaying the real image without chroma key processing.
LOW-DELAY PATH: no chroma key is generated from the camera image; the CG is superimposed directly onto the camera feed. Camera image >> superimpose CG >> synthesize and display the AR image including the real image.
HIGH-DELAY PATH: this path “involves delay” caused by generating a chroma key image from the camera image. Camera image >> chroma key generation >> superimpose CG using the chroma key >> synthesize and display the AR image including the real image. Additional delay is inherent in the chroma key processing. Therefore, the delay time of the real image is changed depending on whether the real image is output without chroma key generation or with chroma key generation.)
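The examiner’s reading of the two paths can be illustrated with a brief, hypothetical Python sketch. This is not code from either reference; the array names, the red key color, and the compositing logic are assumptions used only to make the low-delay/high-delay distinction concrete:

```python
import numpy as np

# "specific one color (for example red)" per Ohashi ¶55 -- an assumption here
KEY = np.array([255, 0, 0], dtype=np.uint8)

def low_delay_path(camera, cg, cg_mask):
    """Low-delay path: superimpose the CG directly onto the camera image,
    with no intermediate chroma key generation step."""
    out = camera.copy()
    out[cg_mask] = cg[cg_mask]
    return out

def make_chroma_key(cg, real_in_front):
    """Extra step of the high-delay path: paint the CG image with the key
    color where real-space objects should show through."""
    key_img = cg.copy()
    key_img[real_in_front] = KEY
    return key_img

def high_delay_path(camera, cg, real_in_front):
    """High-delay path: first generate the chroma key image, then composite.
    Wherever the key color appears, the camera image shows through;
    elsewhere the CG image is displayed."""
    key_img = make_chroma_key(cg, real_in_front)
    keyed = np.all(key_img == KEY, axis=-1)
    return np.where(keyed[..., None], camera, key_img).astype(np.uint8)
```

The extra `make_chroma_key` call in the high-delay path is where the additional processing time (and hence the changed delay of the real image) would arise.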
Although Ohashi’s change in delay time depends on whether the real image is output (a low delay time, without generating a chroma key) or the converted image, i.e., the chroma key image, is generated (which involves a high delay time), Ohashi fails to disclose that the generated chroma key image is output on the display device.
The analogous art Grau teaches:
output the converted image (Grau: Fig. 2, ¶17, “FIG. 2 is the chroma-key image resulting from the process involved in FIG. 1”; ¶24, “. . . FIG. 2 illustrates a chroma-key representation of one view of the object 10 . . .”; NOTE: Fig. 2 shows the chroma key image displayed as a representation based on the camera image of Fig. 1; the camera image is the real image.).
It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine Ohashi and Grau such that Ohashi’s generated chroma key image is optionally presented to a user. It would also have been an obvious design choice to a PHOSITA to output the converted image, which is the chroma key image generated by Ohashi (a choice between outputting or not outputting).
The reason for doing so is to allow users to preview and verify the accuracy of the mask before synthesis, as errors in volumetric reconstruction of moving objects result in a “visibly quite disturbing view to the viewer,” as described in Grau, paragraph 9.
(NOTE: When a chroma key image is generated and displayed, it inherently increases, and thus changes, the delay time of displaying the real image because of the additional processing.)
Regarding claim 2, method claim 2 is drawn to the method corresponding to the configuration of the processor as claimed in apparatus claim 1. Therefore, method claim 2 is rejected for the same reasons of obviousness as set forth above.
Regarding claim 12, CRM claim 12 is drawn to the CRM corresponding to the configuration of the processor as claimed in apparatus claim 1. Therefore, CRM claim 12 is rejected for the same reasons of obviousness as set forth above.
Regarding claim 3, depending on claim 2,
The combination of Ohashi and Grau teaches:
The method of image processing according to Claim 2, wherein the combining includes combining the virtual image output in the generating, the real image or the converted image output in the generating, and the real image output in the acquiring in this order. (Ohashi: ¶48, “. . . The AR superimposing unit 234 generates an augmented reality image by superimposing the CG image . . . on the camera image. . .”; ¶55, “. . . The chroma key image is used in order to superimpose the CG image on the camera image to generate an augmented reality image. . .”; NOTE: The virtual image (CG image) is on top of the real image. Since the converted image (chroma key image) is used to superimpose the CG image, the CG image and the chroma key image are on top of the real image. The top layer is the CG image, the middle is the chroma key image, and the bottom is the real image (camera image).)
Regarding claim 4, depending on claim 2,
The combination of Ohashi and Grau teaches:
The method of image processing according to Claim 2, wherein the generating includes changing a delay time of the real image in the image displayed on the display device based on whether the real image is output or not. (NOTE: As discussed for claim 1, Ohashi’s low-delay path displays the real camera image with low delay. In the high-delay path, the real image is processed for chroma key generation and is not output until the chroma key is generated and synthesized into the AR image, so it is displayed with a higher delay. Therefore, Ohashi’s low-delay path outputs and displays the real image with minimal delay, while the high-delay path does not output the real image directly and the delay increases due to the chroma key generation processing time. Accordingly, the delay time is changed based on whether the real image is output (low-delay path) or not (high-delay path).)
Regarding claim 7, depending on claim 2,
The combination of Ohashi and Grau teaches:
The method of image processing according to Claim 2, wherein the generating includes outputting, as the converted image, an image in which color of part or whole of the received real image is converted (Ohashi: ¶55, “. . . generates a chroma key image obtained by painting out, with specific one color (for example red), the background of the virtual objects and part of the objects of the real space existing on the front side relative to the virtual objects in the CG image. . .”; NOTE: The generated chroma key image has a specific color for the parts where the CG is to be mapped.).
Regarding claim 8, depending on claim 7,
Ohashi teaches:
The method of image processing according to Claim 7, wherein the generating includes outputting, as the converted image, an image in which the color of part or the whole of the received real image is converted to a color representing chromakey information (NOTE: As discussed in the claim 7 rejection, Ohashi uses a specific color for its chroma key, for example, the color red, as disclosed in paragraph 55).
Regarding claim 9, depending on claim 2,
The combination of Ohashi and Grau teaches:
The method of image processing according to Claim 2,
Although Ohashi teaches automatic switching between outputting the real image (low-delay path) and outputting the converted image (high-delay path) (paragraph 88 discloses that chroma key processing is conditional, being executed in the case of using chroma key synthesis; the system can therefore switch between chroma key processing, which is the high-delay path, and the low-delay path when there is no chroma key processing), Ohashi does not expressly disclose switching based on user instructions. Ohashi further teaches an input interface that accepts a setting signal from a user.
It would have been an obvious design choice among a finite number of solutions (automatic switching by the system or manual switching by a user) to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to include wherein the generating includes switching between outputting the real image or outputting the converted image based on user instructions.
The reason for doing so is to provide options allowing users to select a preferred display path between a high-delay display and a low-delay display.
Regarding claim 10, depending on claim 2,
The combination of Ohashi and Grau teaches:
The method of image processing according to Claim 2, further comprising controlling whether to output the real image or the converted image in the generating based on a state of the image output in the generating (Ohashi: ¶55, “A chroma key generating unit 244 generates a chroma key image from a CG image based on the depth information of a camera image. Specifically, the chroma key generating unit 244 determines the positional relationship between objects of a real space and objects of a virtual space and generates a chroma key image . . . by painting out, with specific one color (for example red), the background of the virtual objects and part of the objects of the real space existing on the front side relative to the virtual objects in the CG image.”; ¶88, “. . . chroma key processing may be executed after the step S86 in the case of using chroma key synthesis in order to superimpose the CG image on the camera image. . .”; NOTE: The chroma key processing is executed based on system logic as described in paragraph 88. The chroma key image is the converted image. The state in which the chroma key generation is executed is based on positional information, such as whether a CG object is supposed to be in front of a real object in the real world. A chroma key is then generated for that portion so the CG can be superimposed properly; this is the high-delay path. If occlusion is not required, e.g., when a CG object is determined to be behind a real object in the real space, chroma key processing is not executed and the real camera image is displayed via the low-delay path. Therefore, the output is controlled such that the real image is output via the low-delay path if occlusion is not required, and the chroma key image is output if occlusion is required.).
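The occlusion-based path selection described in the NOTE above can be sketched as follows. This is a hypothetical illustration of the examiner’s reading of Ohashi ¶55/¶88, not Ohashi’s implementation; the depth-list representation and function name are assumptions:

```python
def choose_output(cg_depths, real_depths):
    """Sketch of the state-based control: if any real-space object lies in
    front of (i.e., at smaller depth than) its corresponding virtual object,
    occlusion handling is needed, so the converted image (chroma key image)
    is generated via the high-delay path; otherwise the real image is
    output directly via the low-delay path."""
    occlusion = any(r < c for r, c in zip(real_depths, cg_depths))
    return "converted_image" if occlusion else "real_image"
```

For example, a real object at depth 1.0 in front of a virtual object at depth 2.0 would select the converted-image (high-delay) path.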
Regarding claim 11, depending on claim 2,
The combination of Ohashi and Grau teaches:
The method of image processing according to Claim 2, wherein the acquiring includes adjusting and outputting the real image acquired from the image capturing device (Ohashi: ¶41, “The image signal processing unit 250 executes image signal processing (ISP) such as RGB conversion (demosaic processing), white balance, color correction, and noise reduction for a Raw image photographed by the camera unit 80 of the head-mounted display 100, and executes distortion correction processing of removing distortion and so forth due to the optical system of the camera unit 80. The image signal processing unit 250 supplies an RGB image for which the image signal processing and the distortion correction processing have been executed to an image generating unit 230”; NOTE: The real image, which is the raw image from the camera, is adjusted by processing the image with RGB conversion, white balance, and the other image processing methods disclosed in paragraph 41. After adjustment, the raw camera image converted to RGB is output and supplied to the image generating unit. Camera takes the real image >> adjusted by image signal processing, e.g., color correction, noise reduction >> output and supplied to the image generating unit.).
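The kind of adjustment ¶41 describes (white balance, noise reduction) can be illustrated with a minimal sketch. This is not Ohashi’s ISP pipeline; the gain values, function name, and box-blur noise reduction are assumptions chosen only to show "adjust, then output" in the simplest form:

```python
import numpy as np

def adjust_real_image(raw_rgb, wb_gains=(1.25, 1.0, 0.75)):
    """Hypothetical sketch of image adjustment: apply per-channel
    white-balance gains, clip to valid range, then apply a crude 3x3
    box-blur noise reduction before outputting the adjusted image."""
    img = raw_rgb.astype(np.float32)
    img *= np.asarray(wb_gains, dtype=np.float32)   # white balance
    img = np.clip(img, 0, 255)
    # simple noise reduction: average each pixel with its 3x3 neighborhood
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    img = sum(
        padded[y:y + img.shape[0], x:x + img.shape[1]]
        for y in range(3) for x in range(3)
    ) / 9.0
    return img.astype(np.uint8)
```

A real ISP would use demosaicing, calibrated color matrices, and lens-distortion correction; the point here is only the adjust-then-output data flow.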
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Ohashi (US 20200125312 A1, hereinafter “Ohashi”) in view of Grau (US 20060066614 A1, hereinafter “Grau”), and further in view of Proksch et al. (US 2022/0215539, hereinafter “Proksch”).
Regarding claim 5, depending on claim 2,
The combination of Ohashi and Grau teaches:
The method of image processing according to Claim 2, wherein the generating includes outputting, as the converted image, an image in which part or whole of the received real image is converted (Ohashi: ¶55, “. . . chroma key generating unit 244 generates a chroma key image from a CG image . . . generates a chroma key image obtained by painting out, . . . part of the objects of the real space existing on the front side relative to the virtual objects in the CG image. . .”; NOTE: The parts of the real image that are painted out are the part of the received real image that is converted. The painted-out portion of the chroma key maps the CG image relative to the camera image so that it can be superimposed in the correct position. Therefore, by painting out parts of the real space to generate a chroma key image, part or whole of the received real image is converted.)
Ohashi does not teach: wherein the generating includes outputting, as the converted image, an image in which transparency of part or whole of the received real image is converted.
Proksch teaches a converted image in which part of the real image (Fig. 5) is made more transparent (Proksch: ¶66, augmentation region 502 is more transparent).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified the combination of Ohashi and Grau to include: wherein the generating includes outputting, as the converted image, an image in which transparency of part or whole of the received real image is converted.
The reason for doing so is to allow the user to see through a layer of a combined image in order to understand the image combining process.
Regarding claim 6, depending on claim 5,
The combination of Ohashi and Grau teaches:
The method of image processing according to Claim 5,
Although Ohashi teaches painting out parts of the real image during the chroma key generation process, in which part or whole of the received real image is converted (see the rejection of claim 5), Ohashi fails to disclose wherein the generating includes outputting, as the converted image, an image in which part or the whole of the received real image is made transparent.
Proksch teaches a converted image in which part or the whole of the received real image (Fig. 5) is made transparent (Proksch: ¶66, augmentation region 502 is more transparent).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified the combination of Ohashi and Grau to include: wherein the generating includes outputting, as the converted image, an image in which part or the whole of the received real image is made transparent.
The reason for doing so is to allow the user to see through a layer of a combined image in order to understand the image combining process.
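The transparency conversion attributed to Proksch amounts to an alpha-channel adjustment over a region of the real image. The sketch below is a hypothetical illustration (the region mask, alpha value, and function name are assumptions, not Proksch’s implementation):

```python
import numpy as np

def make_region_transparent(rgb, region_mask, alpha=0.5):
    """Return an RGBA image in which the masked region of the real image
    is made partially transparent, so that layers beneath it would show
    through when the layers are blended."""
    h, w, _ = rgb.shape
    # start fully opaque, then reduce alpha only inside the region
    rgba = np.dstack([rgb, np.full((h, w), 255, dtype=np.uint8)])
    rgba[region_mask, 3] = int(alpha * 255)
    return rgba
```

With alpha = 0.5, the masked region (analogous to Proksch’s augmentation region 502) becomes half-transparent while the rest of the real image stays opaque.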
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK GALERA whose telephone number is (571)272-5070. The examiner can normally be reached Mon-Fri 0800-1700 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PATRICK P GALERA/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617