Detailed Action
1. Claims 1-26 are pending in this Application.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless -
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
3. Claims 1-6, 8-12, 19-22, and 25-26 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by SUZUKI HIROSHI (hereafter SUZUKI), JP2007329619 (A), pub. 12/20/2007.
As to claim 1, SUZUKI teaches an image processing apparatus comprising: a processor, wherein the processor is configured to: acquire distance information data related to distance information between an image sensor and a subject ([0045], the distance information calculation unit 108 uses an external infrared sensor 109 to calculate distance information indicating the distance between the imaging device and the subject);
output first image data indicating a first image obtained by imaging with the image sensor ([0045]-[0046], the video signal is transferred to the buffer 105 in the same manner as in the pre-photographing. The video signal in the buffer 105 is transferred to a photometry evaluation unit 106);
output first brightness information data indicating first brightness information created based on a first signal of the first image data ([0046], the video signal in the buffer 105 is transferred to a photometry evaluation unit 106, a distance information calculation unit 108, and a signal processing unit 110. The photometry evaluation unit 106 obtains the luminance level in the video signal), for each of a plurality of regions into which the first image is classified according to the distance information ([0074], FIG. 10 is an explanatory diagram showing an example of a division pattern for photometric evaluation, in which the image is divided into 13 regions, and a luminance value (ai, i = 1 to 13) is obtained for each region); and
in a case in which a first instruction to adjust the first brightness information for at least one of the plurality of regions is received by a reception device, perform first processing of reflecting a content of the first instruction in the first image and/or the first brightness information ([0039], sequentially extracting local areas centered on a pixel of interest using the signal-processed video signal; creating a histogram (luminance) of the extracted area; performing clipping processing on the histogram based on the correction coefficient; generating a gradation conversion curve by accumulating and normalizing the histogram after the clipping processing; and performing gradation conversion (adjustment) processing on the pixel of interest based on the gradation conversion curve).
As to claim 2, SUZUKI teaches the first brightness information is a first histogram ([0050], FIG. 3A shows the original histogram from the histogram creation unit 203 and the clip value, with frequency on the vertical axis, where the frequency represents the number of pixels, and the horizontal axis represents luminance).
As to claim 3, SUZUKI teaches the first histogram indicates a relationship between a signal value and the number of pixels ([0050], histogram calculation means for calculating a histogram of the target pixel and a neighboring region; FIG. 3A shows the original histogram from the histogram creation unit 203 and the clip value, with frequency on the vertical axis, where the frequency represents the number of pixels, and the horizontal axis represents luminance).
As to claim 4, SUZUKI teaches the processor is configured to output second brightness information data indicating second brightness information created based on a second signal of the first image data for a second region among the plurality of regions ([0074], FIG. 10 is an explanatory diagram showing an example of a division pattern for photometric evaluation, in which the image is divided into 13 regions, and a luminance value (ai, i = 1 to 13) is obtained for each region. The subject distribution estimation unit 301 calculates, for example, parameters of equation (5) as photometric information from the luminance value ai of each region. This calculates the luminance distribution.), and
the first processing includes second processing of prohibiting the content of the first instruction from being reflected in the second region and/or the second brightness information ([0029], inherent luminance values of each of the 13 divided regions are generated independently based on the gradation conversion curve for luminance).
As to claim 5, SUZUKI teaches the processor is configured to output third brightness information data indicating third brightness information created based on a third signal of the first image data for a third region among the plurality of regions ([0074], FIG. 10 is an explanatory diagram showing an example of a division pattern for photometric evaluation, in which the image is divided into 13 regions, and a luminance value (ai, i = 1 to 13) is obtained for each region. The subject distribution estimation unit 301 calculates, for example, parameters of equation (5) as photometric information from the luminance value ai of each region. This calculates the luminance distribution.), and
the first processing includes third processing of reflecting the content of the first instruction in the third region and/or the third brightness information ([0029], inherent luminance values of each of the 13 divided regions (i.e., ai, i = 1 to 13) are adjusted independently based on the gradation conversion curve for luminance).
As to claim 6, SUZUKI teaches the first processing is processing of changing the first signal according to the content of the first instruction, the third processing is processing of changing the third signal according to the content of the first instruction, and a change amount of a first signal value included in the first signal is different from a change amount of a second signal value included in the third signal ([0074], FIG. 10; as discussed in claim 5 above, SUZUKI teaches a method of dividing the image into 13 regions and adjusting a luminance value (ai, i = 1 to 13) of each region using the gradation conversion curve for luminance; the signal applied to adjust the luminance value of each region is different, since each region has a different luminance value).
As to claim 8, SUZUKI teaches the first processing is processing of changing the first signal according to the content of the first instruction, the third processing is processing of changing the third signal according to the content of the first instruction, and a change amount of a second signal value included in the third signal varies depending on distances between a plurality of second pixels corresponding to the third region and the subject ([0010], a video signal processing device that performs gradation conversion processing is characterized by having a distance information acquisition means that acquires distance information between the object to be photographed and the photographing means, and a gradation conversion means that performs gradation conversion processing on a target pixel of the video signal using the distance information. As discussed in claim 5 above, the luminance value of each of the 13 divided regions (i.e., ai, i = 1 to 13) is adjusted independently based on the gradation conversion curve for luminance).
As to claim 9, SUZUKI teaches the first processing is processing of changing the first signal according to the content of the first instruction ([0029], [0039], a video signal processing program for performing gradation conversion processing on a video signal; a video signal processing program that performs high-quality gradation conversion according to the subject).
As to claim 10, SUZUKI teaches the first instruction is an instruction to change a form of the first brightness information ([0029], a predetermined standard gradation conversion curve for luminance is read into the gradation conversion curve ROM 208, and the gradation curve modification unit 209 converts (changes) the gradation conversion curve using the correction coefficients calculated by the correction coefficient calculation unit 112).
As to claim 11, SUZUKI teaches wherein the first brightness information is a second histogram having a plurality of bins, and the first instruction is an instruction to move a bin corresponding to a third signal value selected based on the first instruction among the plurality of bins ([0059], [0072], the conversion unit 111 sets a tone conversion curve using the histogram of the local region and the correction coefficient calculated by the correction coefficient calculation unit 112, and performs tone conversion processing on the video signal. For example, if the tone transformation curve for an input luminance value i (i=0 to 1) is given as γ(i), the corrected tone transformation curve γ'(i) is expressed by equation (4). Examiner Note: adjusting luminance in image processing and digital photography involves manipulating a luminance histogram, which acts as a map of image brightness divided into bins).
As to claim 12, SUZUKI teaches the processor is configured to output second image data indicating a second image in which the plurality of regions are divided in different aspects according to the distance information ([0039], calculating distance information using the signal-processed video signal; calculating a correction coefficient based on the distance information; sequentially extracting local areas centered on a pixel of interest using the signal-processed video signal; creating a histogram of the extracted area; performing clipping processing on the histogram based on the correction coefficient; and generating a gradation conversion curve by accumulating and normalizing the histogram after the clipping processing).
As to claim 19, SUZUKI teaches the first image data is moving image data ([0035], the step of obtaining a video signal of an object to be imaged by an image pickup means; thus, the object is a moving object).
As to claim 20, SUZUKI teaches the image processing apparatus is an imaging apparatus ([0010], the video signal of the object to be imaged obtained from the imaging means).
As to claim 21, SUZUKI teaches the processor is configured to output the first image data and/or the first brightness information data to a display destination ([0046], the object is photographed and displayed. For example, the tree is photographed based on the exposure conditions determined by the photometry evaluation unit 106 and the focusing conditions determined by the lens control unit 107).
As to claim 22, SUZUKI teaches the first processing is processing of changing a display aspect of the first image and/or the first brightness information displayed on the display destination ([0059], the gradation curve modification unit 209 corrects the gradation transformation curve read from the gradation transformation curve ROM 208 based on the correction coefficients of the correction coefficient calculation unit 112. For example, if the tone transformation curve for an input luminance value i (i=0 to 1) is given as γ(i), the corrected tone transformation curve γ'(i) is expressed by equation (4)).
Claim 25 is rejected the same as claim 1, except that claim 25 is a method claim. All the limitations of claim 25 are addressed in claim 1. Thus, argument analogous to that presented above for claim 1 is applicable to claim 25.
As to claim 26, SUZUKI teaches a non-transitory computer-readable storage medium storing a program for causing a computer to execute a process comprising ([0039], the video signal processing program of the present invention is characterized in that it causes a computer to execute the following steps).
Regarding the remaining limitations of claim 26, all the remaining limitations are rejected the same as claim 1, except that claim 26 is directed to a computer program claim. All the remaining limitations of claim 26 are addressed in claim 1. Thus, argument analogous to that presented above for claim 1 is applicable to claim 26.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 13-18 are rejected under 35 U.S.C. 103 as being unpatentable over SUZUKI, JP2007329619 (A), in view of TSUTSUMI et al. (hereafter TSUTSUMI), JP 2015188251 A, pub. 10/29/2015.
Regarding claim 13, while SUZUKI teaches the processor is configured to: output third image data and output fourth image data (as discussed in claim 5 above, where SUZUKI teaches an apparatus equipped with the image sensor, and a display device that displays the 13 divided regions of the image (i.e., ai, i = 1 to 13) and the adjusted luminance value of each of the 13 divided regions), SUZUKI fails to teach the underlined limitation of claim 13: “output third image data indicating a distance map image representing a distribution of the distance information with respect to an angle of view of a first imaging apparatus equipped with the image sensor; and output fourth image data indicating a reference distance image representing a reference distance for classifying the plurality of regions.”
On the other hand, TSUTSUMI teaches output third image data indicating a distance map image representing a distribution of the distance information with respect to an angle of view of a first imaging apparatus equipped with the image sensor; and output fourth image data indicating a reference distance image representing a reference distance for classifying the plurality of regions ([0050], FIG. 10A is a diagram showing an example of a single-viewpoint image, and FIG. 10B shows a distance map as distance information derived from the single-viewpoint image of FIG. 10A. In the scene obtained by the single-viewpoint image shown in FIG. 10A, three types of subjects (a person, a building, and a mountain) exist at different distances from the imaging device. The distance map shown in FIG. 10B is displayed in shades of gray according to the distance from the imaging device, with the subject "person" that is close to the imaging device being displayed in the darkest color, the subject "mountain" that is far from the imaging device being displayed in the brightest color, and the subject "building" that is between the person and the mountain being displayed in a color of intermediate brightness. Thus, in this step, distance information of the subject in the scene is derived).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the well-known method of determining a distance map as distance information derived from a single-viewpoint image, taught by TSUTSUMI, into SUZUKI.
The suggestion/motivation for doing so would have been to allow a user of the modified SUZUKI to produce accurate, geometrically correct, and perspective-aware distance metrics (depth maps) from a single image.
As to claim 14, TSUTSUMI teaches the reference distance image is an image showing a scale bar and a slider, the scale bar indicates a plurality of distance ranges corresponding to the plurality of regions, the slider is provided on the scale bar, and a position of the slider indicates the reference distance ([0080], (a) and (b) of Figure 14 are examples of focus information display images generated based on the rearranged image shown in (a) of Figure 12, and display information on the focus position 2801, focus control range 2802, and depth of field 2803. A focus position 2801 indicated by a black square mark on the slider bar indicates the focus position of the lens at the time of image capture included in the optical parameters, or the focus position specified by a user input, which will be described later…. The focus control range 2802 indicated by diagonal lines on the slider bar indicates the range from subject distance d1 (d1') to subject distance d2 (d2')).
As to claim 15, TSUTSUMI teaches the scale bar is one scale bar collectively indicating the plurality of distance ranges ([0080], the focus control range 2802 indicated by diagonal lines on the slider bar indicates the range from subject distance d1 (d1') to subject distance d2 (d2')).
As to claim 16, TSUTSUMI teaches the scale bar is a plurality of scale bars separately indicating the plurality of distance ranges ([0084], the user inputs data via a touch screen or the operation unit 1505 of the imaging device. For example, possible methods include directly specifying the subject to be newly focused on the rearranged image, directly specifying the subject distance to the new focus position, or manipulating the mark representing the focus position 2801 on the slide bar).
As to claim 17, SUZUKI teaches the processor is configured to, in a case where the reception device receives a second instruction to output the third image data and/or the fourth image data, output fifth image data indicating a third image in which the plurality of regions are divided in different aspects according to the distance information ([0074], FIG. 10 is an explanatory diagram showing an example of a division pattern for photometric evaluation, in which the image is divided into 13 regions, and a luminance value (ai, i=1 to 13) is obtained for each region. The subject distribution estimation unit 301 calculates, for example, parameters of equation (5) as photometric information from the luminance value ai of each region. This calculates the luminance distribution).
However, it is noted that SUZUKI does not specifically teach “plurality of regions are divided in different aspects according to the distance information.”
On the other hand, TSUTSUMI teaches the plurality of regions are divided in different aspects according to the distance information ([0050], FIG. 10A is a diagram showing an example of a single-viewpoint image, and FIG. 10B shows a distance map as distance information derived from the single-viewpoint image of FIG. 10A. In the scene obtained by the single-viewpoint image shown in FIG. 10A, three types of subjects (a person, a building, and a mountain) exist at different distances from the imaging device).
As to claim 18, TSUTSUMI teaches the processor is configured to, in a case where the reception device receives a third instruction related to the reference distance, perform fourth processing of reflecting a content of the third instruction in the reference distance image, and change the reference distance according to the content of the third instruction ([0041], the rearranged image generating unit 2205 performs processing to change the position (coordinates) of each subject area extracted by the subject area extracting unit 2204 in accordance with the distance information supplied from the distance deriving unit 2203. This process rearranges each subject area according to the distance from the imaging device, creating an image (hereinafter referred to as a “rearranged image”) that makes it easier to grasp the sense of distance for each subject area).
5. Claims 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over SUZUKI, JP2007329619 (A), in view of HOSONO et al. (hereafter HOSONO), US 20170359524 A1, pub. 12/14/2017.
Regarding claim 23, SUZUKI teaches the limitations of claim 1 but fails to teach the limitation of claim 23.
On the other hand, in the same field of endeavor, the image processing apparatus of HOSONO teaches the image sensor has a plurality of phase difference pixels, and the processor is configured to acquire the distance information data based on phase difference pixel data output from the phase difference pixels ([0088], with this example, the image sensor 15 has an image plane Phase Difference AF function (capability of detecting distance information at each pixel of the image sensor), and it is possible to ascertain subject focus information (subject distance information from the camera) for each block. The region information processing section 201 carries out division of image regions based on subject distance information for each block).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the well-known Phase Difference AF (PDAF) function taught by HOSONO into SUZUKI.
The suggestion/motivation for doing so would have been to allow a user of SUZUKI to maximize the speed and accuracy of tracking a moving object.
As to claim 24, HOSONO teaches the phase difference pixel is a pixel for selectively outputting non-phase difference pixel data and the phase difference pixel data, the non-phase difference pixel data is pixel data obtained by performing photoelectric conversion on an entire region of the phase difference pixel ([0044], in the event that the image sensor 15 is utilizing an image plane phase difference imager, ranging is carried out over the entire pixel region. Subject brightness and ranging are carried out in increments of each region designated in the next step and the data is associated with its respective region.), and the phase difference pixel data is pixel data obtained by performing photoelectric conversion on a partial region of the phase difference pixel ([0088], this limitation is discussed in claim 23 above).
6. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over SUZUKI, JP2007329619 (A), in view of IKUTA SATOSHI et al. (hereafter IKUTA), JP2013135308A, pub. 08/07/2013.
Regarding claim 7, SUZUKI teaches the limitations of claim 6 but fails to teach the limitation of claim 7.
On the other hand, in the same field of endeavor, the imaging apparatus of IKUTA teaches wherein, in a case where a range of distances between a plurality of first pixels corresponding to the first region and the subject is set as a first distance range, and a range of distances between a plurality of second pixels corresponding to the third region and the subject is set as a second distance range, the change amount of the first signal value is constant in the first distance range, and the change amount of the second signal value is constant in the second distance range (page 6, last par. - page 7, 1st par., three distance ranges of “short distance”, “medium distance”, and “far distance” are applied as a plurality of distance ranges divided by a specific threshold. In this embodiment, “short distance” is set to a distance range in which the distance to the subject is 0 to 3 m. “Medium distance” is set to a distance range of 3 to 8 m from the subject. “Far distance” is set to a distance range in which the distance to the subject is 8 m to ∞. The histogram generation unit 112 determines whether the distance information of each pixel is “short distance”, “medium distance”, or “far distance”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of calculating distances between the digital camera and the subject using the calculated defocus amount and information on the lens and the aperture, taught by IKUTA, into SUZUKI.
The suggestion/motivation for doing so would have been to allow a user of SUZUKI to achieve fast and accurate autofocus (AF) without requiring specialized hardware.
Additional Prior Art
The following prior art is additional prior art identified by the examiner as relevant but not used as the formal basis for rejecting claims under 35 U.S.C. 102 or 103.
“ADJUSTMENT OF IMAGING PROPERTIES FOR AN IMAGING ASSEMBLY HAVING LIGHT-FIELD OPTICS”, US 20130002928, pub. 01/03/2013, to Imai, Francisco, discloses:
Image capture using an image capture device which includes an imaging assembly having a spectral sensitivity tunable in accordance with a spectral capture mask and light-field optics for projecting a light-field of a scene onto the imaging assembly. A first spectral capture mask is applied to the imaging assembly and preview image data of a scene is captured under the first capture mask. A designation of a region of interest, and a designation of a capture preference in the region of interest, are received. A second spectral capture mask is calculated by calculations which use the preview image data and the capture preference for the region of interest. The second spectral capture mask is applied to the imaging assembly, and light-field image data of the scene is captured under the second spectral capture mask (see abstract, claim 1).
[media_image1.png: 654 × 476 greyscale image]
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mekonen Bekele, whose telephone number is (469) 295-9077. The examiner can normally be reached Monday-Friday from 9:00 AM to 6:50 PM Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, George Eng, can be reached at (571) 272-7495. The fax phone number for the organization where the application or proceeding is assigned is 571-237-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/MEKONEN T BEKELE/Primary Examiner, Art Unit 2699