Detailed Action
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Acknowledgements
2. Applicant’s arguments/remarks, filed on 11/24/2025, are acknowledged. The amendment of claims 1, 15, and 18 and the cancellation of claims 2, 16, and 19 are acknowledged. Claims 1, 3-15, 17-18, and 20 remain pending and have been examined.
Response to Arguments
3. Applicant’s arguments, filed on 11/24/2025, have been fully considered but they are not persuasive.
Applicant argues, see pgs. 7-8, with respect to the rejection of independent claim 1, noting the distinguishing features of claim 1 of the instant application, wherein “…the features of the plurality of subject regions are determined based on motion amounts and noise amounts in the plurality of subject regions…”,
that Matsuoka fails to disclose the features of the plurality of subject regions are determined based on noise amounts.
Additionally, the Applicant, on pg. 9, states that the processing, as taught by Matsuoka, is based on motion vectors and does not account for noise amounts.
4. Response
The Examiner respectfully disagrees for the following reasons:
In the Office action dated 08/27/2025, claim 2 was addressed as being taught by Matsuoka; claim 2 recited:
“…wherein the feature of the plurality of subject regions are determined based on:
at least one of motion amounts and noise amounts in the plurality of subject regions…”. Matsuoka is referenced as teaching:
“…as stated in [0031], image processor 107 performs a process of correcting blur in an object imaged that is caused by the motion of the digital camera; further, [0033] teaches that a motion vector detector 202 detects a motion vector of an object image in a target image for each region obtained by the image divider 201…”.
Paragraph [0031] specifies blur correction caused by the motion of a digital camera. As such, blur correction caused by the motion of the camera can be viewed as correcting a determined blur (noise) caused by motion during shooting; [0031] further states that image processor 107 receives a target image to be corrected in which the blur (noise) is detected (relative to a reference image used for correction). Further, [0033] teaches the detection of a motion vector of an object image in the target image for each region, based on a movement of an image having the same pattern between the reference image and a target image.
Therefore, Matsuoka does teach the detection of blur in a target image, as stated in [0031], and further teaches motion vector detection of an object image within the target image.
Claim Rejections - 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1, 5-7, 9, 11, 14-15, and 18 are rejected under 35 U.S.C. 103 as being
unpatentable over Matsuoka (US 2012/0013752 A1) in view of Parnell-Brookes
(Generate a Sense of Speed With Path Blur in Photoshop CC; further referred to as
Parnell-Brookes).
7. Regarding claim 1, an apparatus comprising:
at least one processor (…Matsuoka, in [0031], teaches an image processor 107; Fig. 1…); and
a memory that stores a program which, when executed by the at least one processor, causes the at least one processor to function (…[0029] teaches ROM 102, a rewritable non-volatile memory, which stores the operating program for each block included in the digital camera 100 and parameters required to operate each block, such as the number of regions into which an image is divided; Fig. 1…) as:
an acquisition unit configured to acquire an image (…[0028], digital camera 100, Fig.
1…);
a region determination unit configured to determine a plurality of subject regions in the
image (…[0032] teaches an image divider 201 which divides an input image into regions
of a predetermined size…); and
a combining unit configured to combine a plurality of images obtained by an imaging unit
configured to perform image capturing under a predetermined imaging condition (…[0075]
teaches an affine transformer 207 which combines images of regions; further as stated in
[0008] a determination unit configured to determine whether or not a difference between
the correction amount obtained from the coordinate transformation coefficient for the
local region calculated by the correction amount calculation unit, and the correction
amount obtained from the coordinate transformation coefficient for the neighboring
region calculated by the correction amount calculation unit, falls within a predetermined
limit value…), wherein
the combining unit has different subject blur correction characteristics
corresponding to features of the plurality of subject regions, and performs a combining process
with reference to the different subject blur correction characteristics in the plurality of subject
regions (…[0074], element 207 calculates correction amounts for all regions using affine
coefficients calculated by an affine coefficient interpolator 204 or affine coefficients for all the regions newly calculated by the affine coefficient modifier 206 and corrects blur in
the object image for each region…), and
wherein the features of the plurality of subject regions are determined based on
motion amounts and noise amounts in the plurality of subject regions (…as stated in [0031], image processor 107 performs a process of correcting blur in an object image that is caused by the motion of the digital camera; [0031] further states that image processor 107 receives a target image to be corrected in which the blur (noise) is detected (relative to a reference image used for correction). [0033] teaches that a motion vector detector 202 detects a motion vector of an object image in a target image for each region obtained by the image divider 201…).
Matsuoka, as cited above, teaches image correction caused by motion of an imaging device ([0008]) but doesn’t further teach:
wherein the subject blur correction characteristics in the plurality of subject regions are
configured such that, for a first region among the plurality of subject regions, both camera-shake
blur and local subject blur are corrected (…wherein camera-shake correction is taught by Matsuoka; Parnell-Brookes teaches blur correction that is particular to a region(s); with reference to the images depicted on pg. 3 (initial image) and pg. 10 (corrected image), a determined region is corrected, wherein the portion with a motor-bicyclist (on the image on pg. 10) may correspond to a first region…), and for a second region among the plurality of subject regions, only camera-shake blur is corrected (…wherein the regions not including the motor-bicyclist may correspond to a second region wherein no blur correction (due to object motion) is performed.
In another example, the motor-bicyclist can be viewed as a first region and the
spokes region of the motor-bicycle can be viewed as a second region; demonstrating
that blur correction can be applied in narrower or broader terms of choice.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that object motion within an image can either be corrected or be left uncorrected, so as to reflect a sense of motion within a captured image…).
8. Regarding claim 5, Matsuoka in view of Parnell-Brookes teaches the apparatus
according to claim 1 (see claim 1 above), wherein:
the combining unit includes a position correction unit configured to correct a local
positional shift between a plurality of images (…[0056] teaches that an affine coefficient
calculator 203 uses motion vectors for the regions in the target image that have been
detected by the motion vector detector 202 to calculate affine coefficients that are
coordinate transformation coefficients that are used to correct regions. Specifically, the
affine coefficient calculator 203 calculates affine coefficients for each of local regions
that are defined as regions…).
9. Regarding claim 6, Matsuoka in view of Parnell-Brookes teaches the apparatus
according to claim 5 (see claim 5 above), wherein:
the combining unit combines the plurality of images after the local positional shift is
corrected by the position correction unit (…[0075] teaches that the affine transformer 207
combines images of regions where adjacent regions overlap each other, by linearly
changing the combination ratio, thereby causing the positional offset of the object image
between regions to be less noticeable…).
10. Regarding claim 7, Matsuoka in view of Parnell-Brookes teaches the apparatus
according to claim 6 (see claim 6 above), wherein:
the position correction unit determines the amount of correction of the local positional shift based on at least any of the features of the images in a first subject region and a subject
region adjacent to the first subject region (…[0036] states that a motion vector 1013
corresponding to a positional offset between a target block 1001 and a motion
compensation block 1012 is calculated. Further, [0056] teaches that an affine coefficient
calculator 203 uses motion vectors for the regions in the target image that have been
detected by the motion vector detector 202 to calculate affine coefficients that are
coordinate transformation coefficients that are used to correct regions …).
11. Regarding claim 9, Matsuoka in view of Parnell-Brookes teaches the apparatus
according to claim 7 (see claim 7 above), wherein:
the first subject region is a region where subject blur is to be suppressed among the
plurality of subject regions (…[0031] states that in order to detect blur in an object image
that is caused by a motion of digital camera 100 during shooting, the image
processor 107 receives a target image to be corrected in which the blur is detected, and a
reference image that is used as a reference for correction. [0032] teaches that an input
image is divided into regions and amount of correction of the input image to the image
divider is calculated. Thus, relative to the target image, [0036] states that a motion vector
1013 corresponding to a positional offset between a target block 1001 and a motion
compensation block 1012 is calculated…).
12. Regarding claim 11, Matsuoka in view of Parnell-Brookes teaches the apparatus
according to claim 1 (see claim 1 above), wherein:
the plurality of subject regions is determined based on a feature amount of the image
(…[0026] describes an image processing device that can correct blur in a captured image of an object that is caused by motion of the imaging device on a region-by-region basis. Further, [0033] teaches that a motion vector detector 202 detects a motion vector of the object image in the target image for each of the regions obtained by the image divider 201. As such, in accordance with the specification of the instant application, in [0033], motion vector information and noise amount are deemed the feature amounts in selected regions…).
13. Regarding claim 14, an imaging apparatus comprising:
an imaging unit configured to capture a subject image formed via an optical system and
output the image (…Fig 1 depicts a block diagram of a digital camera 100 including an
image sensing unit 105 that is an image capture device that photoelectrically converts an
optical image formed on the image capture device by an optical system 104…); and
the apparatus according to claim 1 (see claim 1 above).
14. Claim 15 is rejected for reasons related to claim 1.
15. Claim 18 is rejected for reasons related to claim 1.
16. Claims 3-4, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Matsuoka (US 2012/0013752 A1) in view of Parnell-Brookes (Generate a Sense of Speed With Path Blur in Photoshop CC; further referred to as Parnell-Brookes) and further in view of Tsutsumi (JP2011109619A).
17. Regarding claim 3, Matsuoka in view of Parnell-Brookes teaches the apparatus
according to claim 1 (see claim 1 above), further comprising:
an imaging condition determination unit configured to determine the predetermined
imaging condition (…[0072] teaches an affine coefficient modifier 206 which determines whether or not a difference in correction amount calculated by a correction amount
difference calculator 205 falls within a limit value that is previously determined to be a
value that causes a positional offset of an object image not to be noticeable between
corrected regions…),
a number of images to be combined corresponding to the different subject blur
correction characteristics for each of the plurality of subject regions based on attributes of the
plurality of subject regions in the image and amounts of motion in the plurality of subject regions
(…Matsuoka in [0074] teaches that the affine transformer 207 performs affine
transformation for each of the regions in the target image using affine coefficients for the
region to calculate positions where the target image is to be corrected, and corrects blur
in the object image for each region (and combines images of regions, [0075])…).
Matsuoka doesn’t further teach a determination unit that determines an exposure time.
However, Tsutsumi teaches an image processing method and apparatus involving a blurring detection means that acquires multiple captured images with multiple exposure patterns under control of an exposure control unit and detects a blurring amount (blurring relative to movements of the camera). Therein, Tsutsumi teaches:
wherein the imaging condition determination unit determines an exposure time
(…wherein Tsutsumi in [0023] teaches a plurality of exposure conditions used at the time
of shooting which are defined in advance by the exposure condition defining unit 106;
wherein h(t) is a function of exposure condition taking on a value of 1 or 0, [0025].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that exposure conditions (timing) used during shooting can be related to previously determined and stored exposure conditions, thereby allowing calculation of the correction functions that are used for blur correction…).
18. Regarding claim 4, Matsuoka in view of Parnell-Brookes and further in view of Tsutsumi
teaches the apparatus according to claim 3 (see claim 3 above), wherein
the attributes of the subject regions are discriminated by a recognition process (…Matsuoka, in [0033], teaches a motion vector detector 202 which detects a motion vector of the object image in the target image for each of the regions obtained by the image divider 201; the motion vectors thus allow the regions to be detected (recognized) by the motion vector detector…).
19. Claim 17 is rejected for reasons related to claim 3.
20. Claim 20 is rejected for reasons related to claim 3.
21. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Matsuoka (US 2012/0013752 A1) in view of Parnell-Brookes (Generate a Sense of Speed With Path Blur in Photoshop CC; further referred to as Parnell-Brookes) and further in view of Naito (US 2010/0220222 A1).
22. Regarding claim 8, Matsuoka in view of Parnell-Brookes teaches the apparatus according to claim 1 (see claim 1 above). However, the combination doesn’t further specify wherein:
the combining unit determines a number of images to be combined based on a
difference of noise level between the first subject region and the subject region adjacent to the
first subject region.
However, Naito teaches a similar image processing device wherein:
the combining unit determines a number of images to be combined based on a
difference of noise level between the first subject region and the subject region adjacent to the
first subject region (…[0045], in part, describes how one combined image is formed by
carrying out four exposure times, where a maximum of four images are subjected to a combining process (as depicted in Fig. 2).
Paired reference and target frames (images) are compared, wherein movement information of the paired frames is computed and given to a correction unit of a combining process unit 105, as described in [0049].
Paired and aligned frames are output to the noise-level calculating units of a noise-level estimating unit, as described in [0050]. The noise-level calculating units compute the noise levels of the pixels in frame 1 and frame 2 on the basis of the relationship between the pixel values and the amounts of noise defined in advance, after which the results are output to a maximum-value calculating unit, as stated in [0051].
The maximum-value calculating unit compares the noise level of each pixel in the frames (1 and 2) and determines from the difference of the noise levels whether or not the alignment of the target frame 2 with respect to reference frame 1 is successful, and thereby estimates the noise level.
Further, aligned pixels are made equal to a determined noise level (if they have different noise levels), after which the determined noise level is output to a combining ratio determining unit, as stated in [0052].
The combining ratio determining unit determines the combining ratio of the pixels in the target frame with respect to the pixels in the reference frame on the basis of the noise levels. Further, the determined combining ratio is output to a weighted-averaging processing unit. The weighted-averaging processing unit carries out weighted averaging processing on the reference frame and the target frame on the basis of the input combining ratio and generates a combined image.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that noise level difference determination, as taught by Naito, could have been implemented in the image processing device of Matsuoka to combine images, thereby forming one image with less visual distortion between boundaries of combined images…).
23. Claims 10, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Matsuoka (US 2012/0013752 A1) in view of Parnell-Brookes (Generate a Sense of Speed With Path Blur in Photoshop CC; further referred to as Parnell-Brookes) and further in view of Kaida (JP2020080518A).
24. Regarding claim 10, Matsuoka in view of Parnell-Brookes teaches the apparatus
according to claim 7 (see claim 7 above). However, Matsuoka doesn’t further teach wherein:
the first subject region is a region where subject blur is to be reproduced among the
plurality of subject regions.
However, Kaida teaches:
the first subject region is a region where subject blur is to be reproduced among the
plurality of subject regions (…[0015] teaches a motion blur control unit 200 which adds
motion blur to an image data recorded in a recording unit 108 to generate a motion blur
image; including a motion vector calculation unit 201, a reference motion vector
identification unit 202 as a reference motion blur identification unit, a motion blur
conversion characteristic calculation unit 203, a motion blur addition unit 204, and a
motion blur correction target area identification unit 205. [0017] details the functions of unit 200, giving an example in Fig. 4, where movements of a subject are identified through frames of taken images (the Nth frame and the N+1th frame); the movements of the front and rear feet of a running dog are different from those of the dog's torso, and therefore change between the Nth frame and the N+1th frame; thus, unit 200 adds motion blur to the captured image of the Nth frame and displays the motion blur image on a display unit. Further, [0018] states that, based on a user’s instruction regarding the motion blur, the blur is adjusted or no adjustment takes place.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a motion blur adjustment mechanism, wherein blur is added to the output or adjusted in accordance with a user’s instruction, as taught by Kaida, could be included in the teachings of Matsuoka’s image processing, so as to present a controlled motion blur adjustment in accordance with a user’s preference…).
25. Regarding claim 12, Matsuoka teaches the apparatus according to claim 1 (see claim 1 above). Although Matsuoka teaches a predetermined limit value of a correction amount difference calculated by a correction amount difference calculator, Matsuoka doesn’t specify wherein the predetermined imaging condition includes:
at least one of shutter speed, aperture value, International Organization for
Standardization (ISO) sensitivity, exposure time, and a number of images to be combined
(…Kaida teaches an image processing apparatus with motion blur and further teaches, in
[0016], a control unit 101 which determines exposure time when an image capturing unit
captures an image; wherein unit 105 captures images based on a determined exposure time and records the images. The basis of capturing the images, relative to exposure time, is hence predetermined.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a predetermination of exposure time allows for a purposeful processing determination relative to motion in photography, wherein shorter or longer exposure times may set a range of blurring within a captured image…).
26. Regarding claim 13, Matsuoka teaches the apparatus according to claim 1 (see claim 1 above). However, Matsuoka doesn’t further teach the apparatus comprising:
a display unit configured to display the image, wherein the plurality of subject regions is
determined by a user on the display unit.
However, Kaida teaches an image processing apparatus comprising:
a display unit configured to display the image, wherein the plurality of subject regions is
determined by a user on the display unit (…[0017] details what takes place in Fig. 4, wherein the movements of the front and rear feet of a running dog are different from those of the dog’s torso and thus change in consecutively taken image frames. In step S302 of Fig. 3, motion blur control adds motion blur to the captured image and displays the motion blur image on display unit 109. Further, as stated in [0018], in step S303, a user confirms the motion blur image displayed. Thus, the motion blur is subject to a user’s confirmation and selection. As such, [0039] gives an example of a motion blur correction target wherein motion vectors 1011 and 1012, in Fig. 10B, are given as separate motion blur correction regions.
Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention that a user-confirmed motion blur area, as taught by Kaida, can further be employed within the image processing teaching of Matsuoka, thus giving the user of an image taking device the option to use processing resources only when desired, instead of an automated system that may complete the process and thereby consume more battery power…).
Conclusion
27. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SURAFEL YILMAKASSAYE whose telephone number is (703)756-1910. The examiner can normally be reached Monday-Friday 8:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TWYLER HASKINS can be reached at (571)272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SURAFEL YILMAKASSAYE/Examiner, Art Unit 2639
/TWYLER L HASKINS/Supervisory Patent Examiner, Art Unit 2639