DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10 December 2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 15, and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3, 7-8, 19-20, 22, and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1).
Regarding claim 1, Iwabuchi teaches,
A method (¶32 and Fig. 1, “image capturing operation” by imaging device depicted in fig. 1) comprising:
obtaining a plurality of texture images of a scene, (¶34,38, Fig. 2 and 3, “images are captured” at a plurality of focused object distances D1 to D3 “of an object portion which is focused (hereafter focused object portion) Ob1 to Ob3 from the captured image” when at step S2 repeatedly “captures an image” at set “focused object distance” as disclosed in Fig. 3) each texture image having a different respective focal distance; (¶34,38, Fig. 2 and 3, “images are captured” at a “plurality of focused object distances D1 to D3” as depicted in fig. 2) and
obtaining a corresponding depth map (¶34-35,39, and Fig. 2, extracted “object image data OD1 to OD3” corresponds to “detected focused object portion Ob1 to Ob3” corresponds to object distances “D1 to D3”, disclosed in fig. 2) for each texture image; (¶34-35 and Fig. 2, “captured image data P1 to P3” which correspond to “focused object portion Ob1 to Ob3”) and
for each texture image, (¶40 and Fig. 3, “image processing” to the “focused object image data” as disclosed in Fig. 3) generating a focal plane image (¶40,60-93, and Fig. 3, steps S6-S9 image processing performs “transform” at step S6, “brightness” correct at step S7, “color saturation” correct at step S8, and “gradation processing” at step S9 on the captured “focused object image data”) by (i) determining a corresponding focal weight (¶87-90, Fig. 3,10, and 13, “respective color saturation” corresponding to step S8 disclosed in Fig. 3 “according to focused object distances D91 to D95” disclosed in Fig. 10) for the texture image, (¶87-90, Fig. 3,10, and 13, “respective color saturation” for each “focused object image data OD91 to OD95” according to focused object distances as disclosed in Fig. 10) wherein the focal weight (¶87-90, Fig. 3,10, and 13, “respective color saturation” corresponding to step S8 disclosed in Fig. 3 and 10) represents an amount by which the pixel is in focus, (¶87-90, Fig. 10 and 13, “focused object distance and color saturation are corresponded” of the pixel values of captured “focused object image data”) (ii) processing the texture image by the corresponding focal weight. (¶87-89, Fig. 3,10, and 13, color saturation correction processing “corrects the respective color saturation of the focused object image data OD91 to OD95, so that the color saturation corresponds to the focused object distance” which corresponds to step S8 disclosed in Fig. 3)
But does not explicitly teach,
(i) determining a corresponding focal weight for each of a plurality of pixels of the texture image, and (ii) multiplying a pixel value of each of the plurality of pixels by the corresponding focal weight,
wherein each of the corresponding depth maps comprises an indication of depth for each pixel in the corresponding texture image.
However, Lakshman teaches additionally,
generating a focal plane image (¶96-98, “blending operations” performed to composite a “composited image C”) by (i) determining a corresponding focal weight (¶96, “different weights for the different warped texture images may be set”) for each of a plurality of pixels (¶96, different weights may be set based on individual “individual pre-warped depth values of pre-warped pixels, individual warped depth values of the warped pixel after the pre-warped pixels are warped to the warped pixel”) of the texture image, (¶96, “different weights for the different warped texture images” based on the depth values) and (ii) multiplying a pixel value of each of the plurality of pixels (¶96-99, “different weights” assigned to different “image portions with different depths” in compositing operations that include performing “weighted averaging of warped texture pixel values at a given warped pixel (position) of an overall warped image”) by the corresponding focal weight, (¶96-97, “Closer neighboring sampled views may be assigned higher weights in blending operations, whereas more distant neighboring sampled views may be assigned lower weights in the blending operations”)
wherein each of the corresponding depth maps (¶53-54 and fig. 1A, “single-view depth image 108” corresponding to single-view texture image 106 depicted in fig. 1A) comprises an indication of depth for each pixel (¶53-54 and fig. 1A, single-view texture image 106 and the single-view depth image 108 with “large numbers of pixels (e.g., texture image pixels, depth image pixels, etc.)” that correspondingly cover the field of view) in the corresponding texture image. (¶53-54 and fig. 1A, “single-view texture image 106”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman which performs a weighted averaging of texture image pixels by assigning different weights according to depth. This allows for accounting of occlusion/disocclusion of pixels in the image.
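For illustration only, the per-pixel weighting operation recited in claim 1 may be sketched as follows; the function and array names, and the weighting itself, are hypothetical and are not taken from Iwabuchi or Lakshman.

    import numpy as np

    def generate_focal_plane_image(texture, focal_weight):
        # texture: H x W x 3 array of pixel values for one texture image.
        # focal_weight: H x W array in [0, 1]; each entry represents the amount
        # by which that pixel is in focus.
        # Multiplying each pixel value by its corresponding focal weight yields
        # the focal plane image for this texture image.
        return texture * focal_weight[..., np.newaxis]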
Regarding claim 2, Iwabuchi with Lakshman teaches the limitations of claim 1,
Iwabuchi teaches additionally,
displaying the focal plane images (¶30,40, and Fig. 3, “transferred to a display device 16” that displays a three-dimensional image based on the “parallax image data” generated by procedure which uses the “focused object image data” with “color saturation correction processing” of step S8, disclosed in Fig. 3) at the respective focal distance (¶30,40,87-90, Fig. 3,10, and 13, “focused object image data”, used in generating “parallax image data”, with “color saturation correction processing” of step S8 performed to adjust the color saturation according to “the focused object distances”, disclosed in Fig. 3,10, and 13) thereof in a multi-focal-plane display. (¶30 and 40, “display device 16” displays a “three-dimensional image based on the parallax image data”)
Regarding claim 3, Iwabuchi with Lakshman teaches the limitations of claim 2,
Iwabuchi teaches additionally,
focal plane images are displayed (¶30,40, and Fig. 3, “transferred to a display device 16” that displays a three-dimensional image based on the “parallax image data” generated by procedure which uses the “focused object image data” with “color saturation correction processing” of step S8, disclosed in Fig. 3) substantially simultaneously. (¶30 and 37, displayed three-dimensional image based on “parallax image data” includes right eye image data of combined “object image data rOD1 to rOD3” and left eye image data synthesized from “focused object image data OD1 to OD3”)
Regarding claim 7, Iwabuchi with Lakshman teaches the limitations of claim 1,
Iwabuchi teaches additionally,
corresponding depth map (¶34-35,39, and Fig. 2, extracted “object data OD1 to OD3” corresponding to “detected focused object portion Ob1 to Ob3” corresponds to object distances “D1 to D3”, disclosed in fig. 2) for each texture image (¶34-35 and Fig. 2, “captured image data P1 to P3” which correspond to “focused object portion Ob1 to Ob3 from the captured image data”) is captured at the focal distance (¶34-35 and Fig. 2, corresponding to “focused object distance D1 to D3”) of the corresponding texture image. (¶34,38, Fig. 2 and 3, “images are captured” at a plurality of focused object distances D1 to D3 “of an object portion which is focused (hereafter focused object portion) Ob1 to Ob3 from the captured image” when at step S2 repeatedly “captures an image” at set “focused object distance” as disclosed in Fig. 3)
Regarding claim 8, Iwabuchi with Lakshman teaches the limitations of claim 1,
Iwabuchi teaches additionally,
obtaining the plurality of texture images (¶34 and Fig. 2, “images are captured”) comprises capturing each of the plurality of texture images (¶34 and Fig. 2, images are captured at a “plurality of focused object distances D1 to D3”) at the respective focal distance; (¶34 and Fig. 2, “focused object distances D1 to D3”) and
obtaining the corresponding depth map (¶35 and Fig. 2, focused “object image data OD1 to OD3” corresponds to “focused object image data”) comprises capturing each depth map of the scene (¶35 and Fig. 2, “object image data OD1 to OD3 corresponding to each focused object portion Ob1 to Ob3” are extracted) focused at the respective focal distance. (¶35 and Fig. 2, “focused object image data OD1 to OD3 and the focused object distance D1 to D3, as the distance information, are corresponded”)
Regarding claim 19, Iwabuchi teaches,
A method (Title, “3D imaging system”) comprising:
obtaining a plurality of texture images (¶34,38, Fig. 2 and 3, “images are captured” at a plurality of focused object distances D1 to D3 “of an object portion which is focused (hereafter focused object portion) Ob1 to Ob3 from the captured image” when at step S2 repeatedly “captures an image” at set “focused object distance” as disclosed in Fig. 3) and respective corresponding depth maps (¶34-35,39, and Fig. 2, extracted “object image data OD1 to OD3” corresponds to “detected focused object portion Ob1 to Ob3” corresponds to object distances “D1 to D3”, disclosed in fig. 2) of a scene, (¶34-35,29, and Fig. 2, “captured image data P1 to P3” which correspond to “focused object portion Ob1 to Ob3” of light from object Ob that “forms an object image”) each texture image having a different respective focal distance; (¶34,38, Fig. 2 and 3, “images are captured” at a “plurality of focused object distances D1 to D3” as depicted in fig. 2) and
for each texture image, (¶40 and Fig. 3, “image processing” to the “focused object image data” as disclosed in Fig. 3) generating a focal plane image (¶40,60-93, and Fig. 3, steps S6-S9 image processing performs “transform” at step S6, “brightness” correct at step S7, “color saturation” correct at step S8, and “gradation processing” at step S9 on the captured “focused object image data”) by processing the texture image by a respective weight value, (¶87-89, Fig. 3,10, and 13, color saturation correction processing “corrects the respective color saturation of the focused object image data OD91 to OD95, so that the color saturation corresponds to the focused object distance” which corresponds to step S8 disclosed in Fig. 3) the respective weight value (¶87-90, Fig. 3,10, and 13, “respective color saturation” corresponding to step S8 disclosed in Fig. 3 and 10) being determined based at least in part on a depth value (¶87-90, Fig. 3,10, and 13, “respective color saturation” corresponding to step S8 disclosed in Fig. 3 “according to focused object distances D91 to D95” disclosed in Fig. 10) corresponding to the respective depth map (¶87-90, Fig. 3,10, and 13, “respective color saturation” for each “focused object image data OD91 to OD95” according to focused object distances as disclosed in Fig. 10) corresponding to the texture image. (¶87-90, Fig. 10 and 13, “focused object distance and color saturation are corresponded” of the pixel values of captured “focused object image data”)
But does not explicitly teach,
generating a focal plane image by multiplying a pixel value of each of the plurality of pixels by a respective weight value, the respective weight value being determined based at least in part on a depth value corresponding to the pixel in the respective depth map corresponding to the texture image,
wherein each of the corresponding depth maps comprises an indication of depth for each pixel in the corresponding texture image.
However, Lakshman teaches additionally,
generating a focal plane image (¶96-98, “blending operations” performed to composite a “composited image C”) by multiplying a pixel value of each of the plurality of pixels (¶96-99, “different weights” assigned to different “image portions with different depths” in compositing operations that include performing “weighted averaging of warped texture pixel values at a given warped pixel (position) of an overall warped image”) by a respective weight value, (¶96-97, “Closer neighboring sampled views may be assigned higher weights in blending operations, whereas more distant neighboring sampled views may be assigned lower weights in the blending operations”) the respective weight value (¶96, “different weights for the different warped texture images may be set”) being determined based at least in part on a depth value corresponding to the pixel (¶96, “different weights for the different warped texture images” set based on individual “individual pre-warped depth values of pre-warped pixels, individual warped depth values of the warped pixel after the pre-warped pixels are warped to the warped pixel”) in the respective depth map corresponding to the texture image, (¶96, “different weights for the different warped texture images” based on the depth values)
wherein each of the corresponding depth maps (¶53-54 and fig. 1A, “single-view depth image 108” corresponding to single-view texture image 106 depicted in fig. 1A) comprises an indication of depth for each pixel (¶53-54 and fig. 1A, single-view texture image 106 and the single-view depth image 108 with “large numbers of pixels (e.g., texture image pixels, depth image pixels, etc.)” that correspondingly cover the field of view) in the corresponding texture image. (¶53-54 and fig. 1A, “single-view texture image 106”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman which performs a weighted averaging of texture image pixels by assigning different weights according to depth. This allows for accounting of occlusion/disocclusion of pixels in the image.
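For illustration only, the weighted averaging of texture pixel values described by Lakshman (¶96-99) may be sketched as follows; the function and array names are hypothetical and do not appear in the reference.

    import numpy as np

    def blend_weighted(textures, weights, eps=1e-8):
        # textures: list of H x W x 3 arrays; weights: list of matching H x W weight maps.
        # Pixels with higher weights dominate the composited image, consistent with
        # assigning higher weights to closer views and lower weights to more distant ones.
        numerator = sum(t * w[..., np.newaxis] for t, w in zip(textures, weights))
        denominator = sum(weights)[..., np.newaxis] + eps
        return numerator / denominator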
Regarding claim 20, which depends from claim 19, it recites method limitations similar to those of claim 2, which depends from claim 1. Refer to the rejection of claim 2 for the rejection of claim 20.
Regarding claim 22, Iwabuchi in view of Lakshman teaches the limitation of claim 19,
Iwabuchi teaches additionally,
the respective weight value (¶87-90, Fig. 3,10, and 13, “respective color saturation” corresponding to step S8 disclosed in Fig. 3 and 10) represents an amount by which the pixel is in focus. (¶87-90, Fig. 10 and 13, “focused object distance and color saturation are corresponded” of the pixel values of captured “focused object image data”)
Regarding claim 25, Iwabuchi in view of Lakshman teaches the limitation of claim 1,
Lakshman teaches additionally,
plurality of texture images (¶96-100, “L1 texture image “L1_t”, L2 texture image “L2_t””) each represent a capture of the scene from a same viewpoint, (¶86,60-63, and fig. 1B, “L1 texture image “L1_t” of the target view is different from “L2 texture image “L2_t” as different image layers of the “sampled view” in the view direction (104) as depicted in fig. 1B) and
wherein the plurality of texture images (¶96-100, “L1 texture image “L1_t”, L2 texture image “L2_t””) each use a different respective focal distance. (¶96-100,76, and fig. 1B, “texture image” transformed into visual object depicted in the second view associated with different layers at “different distances” as depicted in fig. 1B)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman which performs a weighted averaging of texture image pixels by assigning different weights according to depth. This allows for accounting of occlusion/disocclusion of pixels in the image.
Regarding claim 26, Iwabuchi in view of Lakshman teaches the limitation of claim 1,
Lakshman teaches additionally,
corresponding depth maps (¶60-63 and fig. 1B, “depth image 108-1 (denoted as “L1 depth”)” and “depth image 108-2 (denoted as “L2 depth”)” depicted in fig. 1B) is different depending on the respective focal distance. (¶60-63,76, and fig. 1B, “depth image 108-1” and “depth image 108-2” associated with respective texture image L1 and L2 associated with different layers at “different distances” as depicted in fig. 1B)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman which performs a weighted averaging of texture image pixels by assigning different weights according to depth. This allows for accounting of occlusion/disocclusion of pixels in the image.
Claim(s) 6 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1), and further in view of ALREGIB; Ghassan et al. (US 20120120192 A1).
Regarding claim 6, Iwabuchi with Lakshman teaches the limitations of claim 1,
But does not explicitly teach the additional limitations of claim 6,
However, Alregib teaches additionally,
wherein for each texture image, (¶88-90 and fig. 10, preprocessing of the “wrapped color image”) the focal weight (¶88-90 and Fig. 10, “w[i, j] is the assigned weight at pixel location [i, j]”) of each pixel (¶88-90 and fig. 10, “pixel location [i, j]”) in the texture image (¶88-90 and fig. 10, pixel location [i, j] of the depth adaptive preprocessed “wrapped color image”) is determined based at least in part on a difference (¶88-90 and fig. 10, assigned weight “w[i, j]” according to “mapping function” as a function of disparity “D[i, j]” expressed in terms of a proportional function of “focal length F, camera base line B, and depth Z”) between the focal distance (¶88-90 and Fig. 10, “focal length F”) of the texture image that includes the pixel (¶88-90 and Fig. 10, disparity D[i, j] “at pixel location [i, j]” as a function of focal length F adaptively preprocessing “wrapped color image”) and a depth value (¶88-90 and Fig. 10, “depth Z” expressed in disparity function as “Z[i, j]”) of the pixel in the corresponding depth map. (¶88-90 and Fig. 10, disparity D[i, j] “at pixel location [i, j]” as a function of depth Z at location “Z[i, j]” when adaptively preprocessing “wrapped color image”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the depth-based view synthesis of Alregib which assigns depth adaptive weights to particular pixel locations in a color image. This can help reduce blur by assigning higher weights to edges of an image feature that helps create a seamless and natural looking synthesized view.
Regarding claim 9, Iwabuchi with Lakshman teaches the limitations of claim 1,
But does not explicitly teach the additional limitations of claim 9,
However, Alregib teaches additionally,
focal weight wi(x,y) of a pixel (¶88-90 and fig. 10, assigned weight “w[i, j]” at “pixel location [i, j]”) in texture image i (¶88-90 and fig. 10, “wrapped color image” used to generate “depth-weighted color image” through depth adaptive preprocessing) is determined as a function of a depth zi(x,y) of the pixel, (¶88-90 and fig. 10, assigned weight w[i, j] at pixel location [i, j] determined based on disparity “D[i, j]” which further expresses depth Z “Z[i, j]” at pixel location “[i, j]”) such that wi(x,y) = wi[zi(x,y)]. (¶88-90 and fig. 10, assigned weight “w[i, j]” at “pixel location [i, j]” determined based on disparity “D[i, j]” which further expresses depth Z “Z[i, j]” at pixel location “[i, j]”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the depth-based view synthesis of Alregib which assigns depth adaptive weights to particular pixel locations in a color image. This can help reduce blur by assigning higher weights to edges of an image feature that helps create a seamless and natural looking synthesized view.
Claim(s) 10 is rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1), further in view of ALREGIB; Ghassan et al. (US 20120120192 A1), and further in view of YOKOKAWA; Masatoshi et al. (US 20200007760 A1).
Regarding claim 10, Iwabuchi in view of Lakshman with Alregib teaches the limitation of claim 9,
But does not teach the additional limitations of claim 10,
However, Yokokawa teaches additionally,
w[zi(x,y)] has a maximum value when zi(x,y) is substantially equal to the focal distance of the texture image i. (¶165 and Fig. 11, “weight of the pixel value of the divided pixel image is raised at the focal point located in a region identical to a depth of the AF position or a region at a depth close to this depth” of the image “Pic1” as depicted in Fig. 11)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the depth-based view synthesis of Alregib with the weighting of Yokokawa which is based on the focal point being located at a depth of the auto focus position. This provides for an appropriate blending position point determination onto the auto focusing position.
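For illustration only, a weight function consistent with the mappings of claims 6, 9, and 10 may be sketched as follows; the Gaussian form and the sigma parameter are assumptions for the sketch, not formulas taken from Alregib or Yokokawa.

    import numpy as np

    def focal_weight(depth_map, focal_distance, sigma=0.1):
        # wi(x, y) = wi[zi(x, y)]: the weight at each pixel is a function of that
        # pixel's depth alone (claim 9), decreases with the difference between the
        # depth and the texture image's focal distance (claim 6), and reaches its
        # maximum of 1.0 where the depth substantially equals the focal distance
        # (claim 10).
        return np.exp(-((depth_map - focal_distance) ** 2) / (2.0 * sigma ** 2))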
Claim(s) 15-16 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1), and further in view of YOKOKAWA; Masatoshi et al. (US 20200007760 A1).
Regarding claim 15, it is the system claim of method claim 1.
Iwabuchi teaches additionally,
A system (¶28 and Fig. 1, “imaging device” depicted in Fig. 1) comprising:
a processor; (¶28,31 and Fig. 1, imaging device to which “processor of this embodiment is applied” such as “processor 4 includes a microcomputer or ASIC” as depicted in Fig. 1)
cause the processor (¶31 and Fig. 1, processor 4 “systematically controls the operation of the imaging device 2” depicted in fig. 1) to:
But does not explicitly teach the non-transitory computer-readable medium of claim 15,
However, Yokokawa teaches additionally,
a non-transitory computer-readable medium (¶351, “recording medium”) storing instructions operative, (¶351, “a program constituting the software is installed from a recording medium”) when executed by the processor, (¶351, “general-purpose personal computer” capable of executing various functions under various programs installed into the computer “from a recording medium”) to cause the processor (¶351, general-purpose personal computer where “processes are executed by software”) to:
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the executable software of Yokokawa, in which a series of processes is executed by software installed from a recording medium rather than by dedicated hardware. This image processing can thereby take advantage of the hardware configuration of a general-purpose personal computer.
Refer to the rejection of claim 1 for the additional limitations of claim 15.
Regarding claim 16, Iwabuchi with Lakshman with Yokokawa teaches the limitations of claim 15,
Iwabuchi teaches additionally,
for each texture image, (¶34-35 and Fig. 2, “captured image data P1 to P3” which correspond to “focused object portion Ob1 to Ob3 from the captured image data”) amount by which the texture image is in focus (¶34-35 and Fig. 2, “detects an object portion which is focused (hereafter focused object portion) Ob1 to Ob3 from the captured image data P1 to P3”) is determined based at least in part on a depth value (¶34-35 and Fig. 2, “object portion which is focused (hereafter focused object portion) Ob1 to Ob3” for each “focused object distance D1 to D3, as the distance information” corresponds with focused object portion Ob1 to Ob3)
Lakshman teaches additionally,
for each of the plurality of pixels of the texture image, (¶96, “warped texture pixel values at a given warped pixel (position)” from individual warped texture images”) amount by which the pixel in the texture image is in focus (¶96-100, different “weights for the different warped texture images” based on the “depth values” of the “pixels in their respective single-view images”) is determined based at least in part on a depth value corresponding to the pixel. (¶97, Different weights “assigned to different images with different linear and/or angular distances”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the executable software of Yokokawa which performs a weighted averaging of texture image pixels by assigning different weights according to depth. This allows for accounting of occlusion/disocclusion of pixels in the image.
Regarding claim 23, Iwabuchi in view of Lakshman with Yokokawa teaches the limitation of claim 15,
Iwabuchi teaches additionally,
displaying the focal plane images (¶30,40, and Fig. 3, “transferred to a display device 16” that displays a three-dimensional image based on the “parallax image data” generated by procedure which uses the “focused object image data” with “color saturation correction processing” of step S8, disclosed in Fig. 3) at the respective focal distance (¶30,40,87-90, Fig. 3,10, and 13, “focused object image data”, used in generating “parallax image data”, with “color saturation correction processing” of step S8 performed to adjust the color saturation according to “the focused object distances”, disclosed in Fig. 3,10, and 13) thereof in a multi-focal-plane display. (¶30 and 40, “display device 16” displays a “three-dimensional image based on the parallax image data”)
Claim(s) 24 rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1) in view of YOKOKAWA; Masatoshi et al. (US 20200007760 A1) in view of ALREGIB; Ghassan et al. (US 20120120192 A1)
Regarding claim 24, which depends from claim 15, it is the system claim corresponding to method claim 6, which depends from claim 1. Refer to the rejection of claim 6 for the rejection of claim 24.
Claim(s) 11 is rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1), and further in view of Sasaki; Takashi (US 20180259743 A1).
Regarding claim 11, Iwabuchi in view of Lakshman teaches the limitation of claim 1,
But does not explicitly disclose the additional limitation of claim 11,
However, Sasaki teaches additionally,
amount by which the pixel in the texture image is in focus (¶42, “302 calculates a defocus amount for each target pixel position” using the “A image and the B image”) is determined based at least in part on a defocus map generated from the texture image. (¶42, “defocus map generating unit (hereinafter, simply referred to as a “map generating unit” 302 calculates a defocus amount for each target pixel position” such that defocus amount is information related to “the distance distribution of the object, and represent the value of the defocus map data”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the map generation of Sasaki which generates a defocus map. The defocus map can represent spatial distribution of defocused areas where low exposure occurred.
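For illustration only, one way a defocus map could be converted into a per-pixel amount of focus is sketched below; the reciprocal mapping and the scale parameter are assumptions for the sketch and are not taken from Sasaki.

    import numpy as np

    def focus_amount_from_defocus(defocus_map, scale=1.0):
        # defocus_map: per-pixel defocus amount generated from the texture image
        # (signed distance from best focus). Zero defocus maps to 1.0 (fully in
        # focus); larger defocus amounts map to smaller focus values.
        return 1.0 / (1.0 + scale * np.abs(defocus_map))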
Claim(s) 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1), and further in view of Das; Sujata (US 20170127046 A1).
Regarding claim 12, Iwabuchi in view of Lakshman teaches the limitation of claim 1,
But does not explicitly disclose the additional limitation of claim 12,
However, Das teaches additionally,
generating a virtual viewpoint by shifting (¶97, “pixels are shifted horizontally between left and right images”) at least one of the focal plane images by an amount inversely proportional to the display focal distance of the respective focal plane image. (¶63 and 97, “applying depth associated with the plane having the fit to the at least one area to shift pixels in the two-dimensional image” by “shifting inversely proportional to the depth of the pixel from the viewer”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the depth correction of Das which shifts images based on the proportionality of the depth of the pixel from the viewer. This allows for producing a complete stereo image pair.
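For illustration only, shifting a focal plane image by an amount inversely proportional to its display focal distance, as mapped above, may be sketched as follows; the baseline parameter and the wrap-around shift are assumptions for the sketch, not details of Das.

    import numpy as np

    def shift_focal_plane(plane, focal_distance, baseline=1.0):
        # The horizontal shift in pixels is inversely proportional to the plane's
        # display focal distance, so nearer planes are displaced more than farther ones.
        shift = int(round(baseline / focal_distance))
        return np.roll(plane, shift, axis=1)  # shift columns; edges wrap around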
Regarding claim 13, Iwabuchi in view of Lakshman with Das teaches the limitation of claim 12,
Das teaches additionally,
displaying the generated virtual viewpoint as one of a stereo pair of viewpoints. (¶97, “apply the depth of the planes or masks to the areas in the image to produce a stereoscopic image, e.g., anaglyph, or stereoscopic image pair for display on visual output 120 by viewer 170”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the depth correction of Das which shifts images based on the proportionality of the depth of the pixel from the viewer. This allows for producing a complete stereo image pair.
Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1), further in view of Das; Sujata (US 20170127046 A1), and further in view of Kroon; Bart (US 20140118509 A1).
Regarding claim 14, Iwabuchi in view of Lakshman with Das teaches the limitation of claim 12,
But does not explicitly disclose the additional limitation of claim 14,
However, Kroon teaches additionally,
displaying the generated virtual viewpoint in response to viewer head motion to emulate motion parallax. (¶96 and 99, “user moves his head, the presented images follow this movement to provide a motion parallax effect and a natural 3D experience” such that rendering viewpoint “changes continuously to follow the viewer's head movements”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the depth correction of Das with the display rendering viewpoint of Kroon which follows the movement of a user’s head. This allows for producing a natural user experience with a strong motion parallax effect.
Claim(s) 18 is rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1), further in view of YOKOKAWA; Masatoshi et al. (US 20200007760 A1), and further in view of Das; Sujata (US 20170127046 A1).
Regarding claim 18, which depends from claim 15, it is the system claim corresponding to method claim 12, which depends from claim 1. Refer to the rejection of claim 12 for the rejection of claim 18.
Claim(s) 21 is rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1), and further in view of Akeley; Kurt et al. (US 20160307368 A1).
Regarding claim 21, Iwabuchi in view of Lakshman teaches the limitation of claim 19,
But does not explicitly disclose the additional limitation of claim 21,
However, Akeley teaches additionally,
the plurality of respective weight values associated with a respective texture image of the plurality of texture images add up to 1. (¶151, “weighted-color arithmetic may be used to combine the remapped colors, with weights chosen such that they sum to one, and are in inverse proportion to the distance of the hull-image RCoP from the view RCoP”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the weight arithmetic of Akeley, where weights are chosen so that they sum to one. This technique helps avoid large changes.
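For illustration only, normalizing a set of per-pixel weight values so that, for a given texture image position, they add up to 1, as in the mapping of claim 21, may be sketched as follows; the function name and the epsilon guard are assumptions for the sketch.

    def normalize_weights(weights, eps=1e-8):
        # weights: list of H x W weight maps (e.g., NumPy arrays), one per texture
        # image. After normalization, the weights at each pixel position sum to one
        # across the plurality of texture images.
        total = sum(weights) + eps
        return [w / total for w in weights]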
Claim(s) 27 is rejected under 35 U.S.C. 103 as being unpatentable over IWABUCHI; Hiroshi et al. (US 20110316981 A1) in view of LAKSHMAN; Haricharan et al. (US 20180359489 A1), and further in view of Vondran, JR.; Gary Lee et al. (US 20150104074 A1).
Regarding claim 27, Iwabuchi in view of Lakshman teaches the limitation of claim 1,
But does not explicitly teach the additional limitation of claim 27,
However, Vondran teaches additionally,
corresponding depth maps comprises (¶69-70, captured “multiple images”) an indication of a sharp transition between content at different depths. (¶69-70, captured “multiple images” having different “focus points and/or different depths of field” then selectively “blending or combining sharp and blurred regions of the multiple images to simulate the effect of refocusing to a particular depth”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging device of Iwabuchi with the blending of Lakshman with the multiple images of Vondran, which are captured at different focus points and depths of field. This allows the arrangement to blend sharp and blurred regions of the multiple images to simulate the effect of refocusing to a particular depth in the scene.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIMMY S LEE whose telephone number is (571) 270-7322. The examiner can normally be reached Monday through Friday, 10 AM-8 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph G. Ustaris can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH G USTARIS/Supervisory Patent Examiner, Art Unit 2483
/JIMMY S LEE/Examiner, Art Unit 2483