Prosecution Insights
Last updated: April 19, 2026
Application No. 18/600,285

DEBLURRING IMAGES

Non-Final OA (§102, §103)
Filed: Mar 08, 2024
Examiner: BOYAR, NOAH WILLIAM
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift in resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 5 across all art units (5 currently pending)

Statute-Specific Performance

§101: 13.3% (-26.7% vs TC avg)
§102: 26.7% (-13.3% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§112: 13.3% (-26.7% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention ("Deblurring Images") is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. In paragraph 18, "debluring" should read "deblurring".

Claim Interpretation

The claims will be read under the broadest reasonable interpretation standard outlined in MPEP § 2111.01. The examiner interprets "identify the edge based on an angle between the edge and the motion direction" as recited by claims 4 and 16 to include the determination of an edge through a filtration process of candidate pixels, as recited in paragraph 129 of the specification.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 8-11, 13-14, and 20 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Cho et al. (US 20160267629 A1) (hereinafter "Cho").

As to claim 1, Cho discloses an apparatus for deblurring images, the apparatus comprising: at least one memory ([0171]); at least one processor coupled to the at least one memory and configured to ([0171]): identify, using motion analysis, a portion of an image ([0088] "A selected or suitable region may be a region that includes edges of objects in the image (e.g., an outline of a person, a tree, etc.), over-exposed pixels within the image, under-exposed pixels within the image, and/or any features within the image that may break the linearity of motion causing blurring of the image (e.g., blurring due to camera movement)"); identify an edge associated with the portion ([0103]); determine an amount of blur of the edge ([0098] "The metrics module 930 may be configured to determine metrics for at least two of the plurality of regions based on a number of edge orientations within a region. The metrics module 930 may perform various algorithmic processes to identify image characteristics and/or features associated with favorable or suitable regions within the selected image. A suitable region may be any region determined to be favorable for deblurring but may not necessarily be the most suitable region within the blurred image. Such a region deemed suitable may be based on analyzing the image to identify contrast edges of many different directions, and/or few or no corrupted pixels (e.g., saturated pixels) within the selected image."); based on the amount of blur of the edge exceeding a blur threshold ([0102]-[0103]): deblur the portion of the image to generate a deblurred portion of the image (Fig. 11); and combine the deblurred portion of the image with other image data to generate a deblurred image ([0128] "Different blur kernels may then be used in other regions and the deblurred results may be combined or stitched together to form a final deblurred image (e.g., see FIG. 15)"; Fig. 15).
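To make the claim 1 mapping easier to follow, here is a minimal Python/NumPy sketch of the claimed flow (identify a portion, measure edge blur, deblur only when a threshold is exceeded, then composite). The helper names edge_blur_width and deblur_fn are hypothetical stand-ins, not Cho's implementation:

    import numpy as np

    def edge_blur_width(patch):
        # Crude proxy for "amount of blur of the edge": a weak maximum
        # gradient suggests a soft, blurred edge (illustrative only).
        gy, gx = np.gradient(patch.astype(float))
        return 1.0 / (np.hypot(gx, gy).max() + 1e-6)

    def deblur_region_if_needed(image, region, blur_threshold, deblur_fn):
        # region = (y0, y1, x0, x1): the portion identified by motion analysis.
        y0, y1, x0, x1 = region
        patch = image[y0:y1, x0:x1]
        if edge_blur_width(patch) > blur_threshold:
            patch = deblur_fn(patch)   # e.g. deconvolution with a blur kernel
        out = image.copy()             # "other image data": the rest of the frame
        out[y0:y1, x0:x1] = patch      # combine into the final deblurred image
        return out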
As to claim 2, Cho discloses the apparatus of claim 1, wherein the edge is associated with an object depicted in the image and wherein the portion of the image includes the object ([0088] "A selected or suitable region may be a region that includes edges of objects in the image (e.g., an outline of a person, a tree, etc.), over-exposed pixels within the image, under-exposed pixels within the image, and/or any features within the image that may break the linearity of motion causing blurring of the image (e.g., blurring due to camera movement)").

As to claim 8, Cho discloses the apparatus of claim 1, wherein, to combine the deblurred portion of the image with the other image data, the at least one processor is configured to blend pixels of edges of the deblurred portion of the image with corresponding pixels of the other image data (Fig. 15; [0128]; [0144] "In the example blurred image 1500, blurred regions 1510 and 1530 are shown by way of example to overlap in region 1528. In such circumstances where two or more blurred regions overlap, the deblurring module 1340 may perform a variety of different techniques when blending deconvolution results to create a final, deblurred image. For example, FIG. 16A shows a diagram 1600 that displays a blending region 1615 between a First region 1610 occupied by a first blur kernel (e.g., blur kernel K1) and a second region 1620 occupied by a second blur kernel (blur kernel K2), wherein the first and second regions 1610, 1620 overlap. The first region 1610 is shown to be centered at (x1,y1) in the image and the second region 1620 is centered at (x2,y2) in the image. For example, using the methodology described herein, deconvolution results L1 and L2 may be determined using blur kernels K1 and K2, having centers at pixels (x1, y1) and (x2, y2), respectively, in the blurred image.").
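Cho's passage leaves the blending arithmetic open. One plausible reading of the Fig. 16A geometry, offered here only as an assumption, is inverse-distance weighting between the two kernel centers, so that each deconvolution result dominates near its own center:

    import numpy as np

    def blend_overlap(L1, L2, c1, c2):
        # L1, L2: deconvolution results from blur kernels K1, K2, whose
        # centers sit at pixels c1 = (x1, y1) and c2 = (x2, y2).
        h, w = L1.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        d1 = np.hypot(xs - c1[0], ys - c1[1])
        d2 = np.hypot(xs - c2[0], ys - c2[1])
        w1 = d2 / (d1 + d2 + 1e-6)     # weight toward L1 where d1 is small
        if L1.ndim == 3:               # broadcast weights over color channels
            w1 = w1[..., None]
        return w1 * L1 + (1.0 - w1) * L2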
As to claim 9, Cho discloses the apparatus of claim 1, wherein the at least one processor is configured to: compare the deblurred image with the image to determine a final image (Fig. 17 discloses a user interface wherein the image can be previewed following various deblur operations to form intermediate image(s) that can be reset back to the original image for comparison purposes via drawn slider, preview toggle, and cancel functions; the user may then make further or different changes to decide upon a final image); and at least one of: store the final image; display the final image; process the final image; or transmit the final image (Fig. 17).

As to claim 10, Cho discloses the apparatus of claim 1, wherein the final image is determined based on at least one of: a comparison of signal-to-noise ratios of the image and the deblurred image; a comparison of facial landmarks in the image and the deblurred image; or a comparison of human recognizability in the image and the deblurred image (Fig. 17; [0155] "The blur kernel zone 1727 is shown by way of example to include a blur trace pounds slider, which may specify a size of the blur kernel (e.g., if the slider is set to 41, then the size of the blur kernel is set to 41×41 pixels), a smoothing slider, which may specify how smooth the deblurred result will be, or how much noise is suppressed in the deblurred result, and an artifact suppression slider, which may specify how much of the deblurring artifacts (e.g., ringing artifacts) will be suppressed.").

As to claim 11, Cho discloses the apparatus of claim 1, wherein the other image data comprises image data from the image (Fig. 15).

As to claim 13, the language is directed to a method which performs steps identical to the apparatus of claim 1. Accordingly, claim 13 is rejected under Cho as outlined above. As to claim 14, the language is directed to a method which performs steps identical to the apparatus of claim 2. Accordingly, claim 14 is rejected under Cho as outlined above. As to claim 20, the language is directed to a method which performs steps identical to the apparatus of claim 8. Accordingly, claim 20 is rejected under Cho as outlined above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Cho in view of Higaki et al. (US 20050147277 A1) (hereinafter "Higaki").

With respect to claim 3, Cho teaches the apparatus of claim 1 upon which claim 3 depends. Cho does not explicitly teach a processor configured to: identify a moving object based on a motion analysis of multiple images including the image; and identify edges of the moving object using an edge-detection technique. However, Higaki, in the same field of endeavor of object detection, teaches a processor ([0128]) configured to: identify a moving object based on a motion analysis of multiple images including the image ([0037] "In order to solve the problem previously described, the moving object detection program that has a function to detect moving objects by means of plural video images, including image acquisition objects, taken by plural synchronized cameras, comprises…a motion information generating subprogram that generates motion information regarding motion of said moving objects on a basis of differences between video images input in time-series by one of said cameras.") and identify edges of the moving object using an edge-detection technique ([0073]; [0121] "The contour detector 24 is to detect the contour of the moving object in the range (the object image area) of the moving object image area determination module 23 by using an existing contour technology. An example of existing contour technologies may be a dynamic contour model called SNAKES. The detection is carried out by deforming and shrinking a closed curve such that the predetermined energy is minimized thereon. A dynamic process such that the energy is computed in the region of moving object (object image are) is adopted and therefore it is possible to reduce the volume of the computation to detect the contour."; Fig. 3).

It would have been obvious to one of ordinary skill in the art as of the effective filing date of the claimed invention, to modify Cho to include motion analysis of multiple images and edge detection of moving objects as taught by Higaki. One of ordinary skill in the art would be motivated to combine Higaki with Cho to provide additional functionality: the identification of moving objects in multiple images. By doing so, an additional means to evaluate blur becomes available. Furthermore, the systems of Higaki are readily integrated into Cho without compromising underlying operation.
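Higaki's own contour stage uses an active-contour model (SNAKES); the sketch below substitutes a much simpler frame-difference mask and morphological boundary, purely to illustrate the two claimed steps (motion analysis over multiple images, then edge detection on the moving object). The threshold value is arbitrary:

    import numpy as np
    from scipy import ndimage

    def moving_object_edges(frames, diff_thresh=25):
        # Motion analysis: difference the two most recent frames.
        prev = frames[-2].astype(float)
        curr = frames[-1].astype(float)
        moving = np.abs(curr - prev) > diff_thresh   # coarse motion mask
        moving = ndimage.binary_opening(moving)      # drop isolated noise
        # Edge detection on the moving object: mask pixels whose
        # eroded neighborhood leaves the mask form its boundary.
        return moving & ~ndimage.binary_erosion(moving)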
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Cho in view of Higaki and Nojima et al. (US 20110129167 A1) (hereinafter "Nojima").

With respect to claim 4, Cho teaches the apparatus of claim 1 upon which claim 4 ultimately depends. Higaki teaches the elements of claim 3 upon which claim 4 depends, as discussed above. Higaki further teaches the apparatus of claim 3, wherein, to identify the edge associated with the portion, the at least one processor is configured to: identify a motion direction associated with the moving object based on a motion analysis of multiple images including the image ([0012]; [0125] "The contour detector 24 evaluates and outputs the observation information (the barycenter, moving direction (azimuthal angle), etc.) of the moving object in the contour."; Fig. 10).

Cho and Higaki do not explicitly teach a processor configured to: identify the edge based on an angle between the edge and the motion direction. However, Nojima, in the same field of endeavor of image correction, teaches the same ([0046]; Claim 10, "An image correction apparatus, comprising: a motion vector calculation unit to calculate a motion vector of an image based on a plurality of images sharing a shooting area; an edge detection unit to detect an edge of an object or a texture in an input image obtained from the plurality of images; a gradient direction detection unit to detect a pixel value gradient direction for each pixel positioned on the edge detected by the edge detection unit; an extraction unit to extract a pixel having the pixel value gradient direction, which forms a predetermined angle with respect to a direction of the motion vector, from among pixels positioned on the edge; and a correction unit to correct a pixel value of the pixel extracted by the extraction unit"; [0068]; Fig. 3; Fig. 4).

It would have been obvious to one of ordinary skill in the art as of the effective filing date of the claimed invention, to modify the system of Cho and Higaki to include the angular and motion-based edge identification of Nojima. Doing so allows one to integrate blur correction into the moving object detection system of Higaki. Higaki already encourages the correction of its image edge outputs ([0011]; [0098]-[0103] "In other words, the foot portions of the smoothed curves do not expand and keep sharpness with being different from the dotted lines as shown in FIG. 7B. By keeping the sharpness, it is avoided that the neighboring two persons are merged in the histogram HI."). With the increased capabilities for edge detection offered by Cho and Nojima, the system of Higaki is predictably improved to generate sharper edges as well. The underlying system of Cho also gains the ability to analyze objects in motion in a plurality of images, in such a way that can be readily integrated with its system of image stitching and thresholding.
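Nojima's claim 10 extracts edge pixels whose pixel-value gradient direction forms a predetermined angle with the motion vector. A minimal sketch of that filter follows; the 95th-percentile edge test and the 90-degree default target are this sketch's assumptions, not Nojima's parameters:

    import numpy as np

    def edge_pixels_by_angle(image, motion_vec, target_deg=90.0, tol_deg=10.0):
        # Gradient direction per pixel, plus a crude edge-strength mask.
        gy, gx = np.gradient(image.astype(float))
        mag = np.hypot(gx, gy)
        on_edge = mag > np.percentile(mag, 95)
        grad_deg = np.degrees(np.arctan2(gy, gx))
        motion_deg = np.degrees(np.arctan2(motion_vec[1], motion_vec[0]))
        # Angle between gradient direction and motion direction,
        # folded into [0, 90] so orientation sign does not matter.
        diff = np.abs(grad_deg - motion_deg) % 180.0
        diff = np.minimum(diff, 180.0 - diff)
        return on_edge & (np.abs(diff - target_deg) <= tol_deg)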
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Cho in view of Sijde et al. (US 20200166646 A1) (hereinafter "Sijde").

With respect to claim 5, Cho teaches the apparatus of claim 1 upon which claim 5 depends. Cho does not explicitly teach the apparatus of claim 1, wherein the amount of blur of the edge is based on a number of pixels that are based on light reflected from a moving object and light reflected from a background behind the moving object. However, Sijde, in the same field of endeavor of blur correction, teaches the same ([0051]-[0052]; [0038] "The brightness image and the IR image are then analysed using pattern recognition or edge detection techniques. Where an edge is identified in the brightness image as well as the IR image Mmono, no correction is needed in the corresponding region of the RGB image MRGB. Where an edge is blurred in the brightness image but sharp in the IR image Mmono, the RGB image MRGB will be corrected using information from the IR image Mmono."; [0039] "For an object B moving across the scene D, the colour in a motion blur region Bblur is essentially a blend of object colour and background colour, and more background is "mixed in" when the object B is moving quickly. Similarly, if the object B is moving towards the camera, the flash light will become more relevant and will highlight the object colour more than the background colour. The colour of any pixel X in blurred area of the image can be reconstructed from RGB values RBG, GBG, BBG of neighbouring background pixels (i.e. background pixels that are adjacent to the blurred image region), and image regions of the moving object B can be assigned the colour of the blurred region, corrected—for the mixing in of a suitable background colour. For the image pixels of the blurred object Bblur, colour is preferably reconstructed within regions corresponding to the moving object B identified in the "common" region R, as well as in regions Rblur").

It would have been obvious to one of ordinary skill in the art as of the effective filing date of the claimed invention, to modify the system of Cho to include the background and moving object light detection methods of Sijde. The motivation for doing so would have been to better distinguish objects in motion from a background scene in challenging circumstances. As Sijde teaches ([0004]-[0005]), "…The digital camera of a handheld device may be unable to capture a sharp image of a moving object in dim or poorly-lit conditions. Instead, the image can be blurred and unsatisfactory. Therefore, it is an object of the invention to provide a way of improving the quality of images captured by such an imaging arrangement." These methods readily improve the underlying system of Cho by predictably improving the edge definition, a goal of Cho.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Cho in view of Nojima.

With respect to claim 6, Cho teaches the apparatus of claim 1 upon which claim 6 depends. Cho does not explicitly teach the apparatus of claim 1, wherein the amount of blur of the edge is based on a transition width. As explained in the specification of the claimed invention ([0132]), "In some aspects, the amount of blur of the edge is based on a transition width. For example, edge-width determiner 808 of FIG. 8 may determine edge width 1118 of FIG. 11 based on edge width 1118 between low pixel value 1114 and high pixel value 1116". However, Nojima, in the same field of endeavor of image correction, teaches edge blur amount based on a transition width ([0005] "To sharpen an edge, for example, as illustrated in FIG. 1, a brightness level of each pixel is decreased in an area (area A) where the brightness level is lower than a central level, whereas the brightness level of each pixel is increased in an area (area B) where the brightness level is higher than the central level. Note that the brightness level is not corrected outside the ramp area. With such corrections, the width of the ramp area is narrowed to sharpen the edge. This method is disclosed, for example, by J.-G Leu, Edge sharpening through ramp width reduction, Image and Vision Computing 18 (2000) 501-514"; Fig. 1).

It would have been obvious to one of ordinary skill in the art as of the effective filing date of the claimed invention, to modify the system of Cho to include blur detection based on transition width, as taught by Nojima and the related art that Nojima incorporates. Nojima explicitly advocates this method as a means of sharpening the edges of images ([0005] "With such corrections, the width of the ramp area is narrowed to sharpen the edge. This method is disclosed, for example, by J.-G Leu, Edge sharpening through ramp width reduction, Image and Vision Computing 18 (2000) 501-514."). One of ordinary skill in the art would consult Nojima to provide additional means to identify and correct edges in a blurred image.
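The transition-width idea from the specification's [0132] and Nojima's ramp area is easy to state concretely: walk a 1-D profile across the edge and count the pixels between a near-low and a near-high level. The 10%/90% levels and the rising-profile assumption below are conventional choices for this sketch, not taken from either document:

    import numpy as np

    def transition_width(profile, lo_frac=0.1, hi_frac=0.9):
        # profile: pixel values sampled along a line crossing the edge,
        # assumed to rise from the low side to the high side.
        p = np.asarray(profile, dtype=float)
        lo, hi = p.min(), p.max()
        lo_level = lo + lo_frac * (hi - lo)
        hi_level = lo + hi_frac * (hi - lo)
        first_above_lo = np.argmax(p > lo_level)
        first_above_hi = np.argmax(p > hi_level)
        return first_above_hi - first_above_lo      # ramp width in pixels

    # A sharp step has near-zero width; a blurred ramp is wider:
    assert transition_width([0, 0, 0, 255, 255]) == 0
    assert transition_width([0, 32, 96, 160, 224, 255]) == 4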
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Cho in view of Ollila (US 20240056564 A1) (hereinafter "Ollila").

With respect to claim 7, Cho teaches the apparatus of claim 1 upon which claim 7 depends. Cho does not explicitly teach the apparatus of claim 1, wherein, to deblur the portion of the image, the at least one processor is configured to process the portion of the image using a machine-learning model that is trained to deblur images. However, Ollila, in the same field of endeavor of image correction, teaches the same ([0034]-[0035]; [0045] "Optionally, the EDOF correction is applied by utilising at least one of: defocus map estimation, blind image deblurring deconvolution, non-blind image deblurring deconvolution. The defocus map estimation may utilize a defocussed (i.e., blurred) image to detect edges around said image, estimate an amount of blur around the edges, and interpolate the estimated amount of blur to determine the amount of blur in homogenous regions of the defocussed image. The blind image deblurring deconvolution may utilize a blur kernel that may be estimated based on a regularisation. The non-blind image deblurring deconvolution utilizes a point spread function (PSF) for image restoration. Thus, the EDOF correction may employ at least one of: a restoration filter based on deconvolving with a wiener filter, a constrained least-squares image restoration filter, a Lucy-Richardson deconvolution algorithm, an artificial neural network (ANN)-based image restoration algorithm. These techniques are well-known in the art.").

It would have been obvious to one of ordinary skill in the art as of the effective filing date of the claimed invention, to modify the system of Cho to include well-known techniques of machine-learning deblurring. Cho encourages the usage of automation using well-known techniques ([0047] "Therefore, automatically estimating and/or determining the size of blur kernels may provide users with suitable blur kernels of appropriate size to adequately deblur images (e.g., via existing deblurring algorithms and/or the algorithms described herein). In an example embodiment, the automated estimation of the size of the blur kernel is calculated electronically without human input influencing the size of the blur kernel. In another example embodiment, the size of the blur kernel may at least partially be determined based on algorithms that do not depend on user input"). Ollila explicitly discloses the usage of machine-learning via artificial neural networks to achieve these ends. One of ordinary skill could readily consult Ollila to supplement known techniques into the system of Cho, which readily accommodates machine learning.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Cho in view of Lelescu et al. (US 20120147205 A1) (hereinafter "Lelescu").

With respect to claim 12, Cho teaches the apparatus of claim 1 upon which claim 12 depends. Cho does not explicitly teach the apparatus of claim 1, wherein the image comprises a first image and wherein the other image data comprises image data from a second image. However, Lelescu, in the same field of endeavor of image correction, teaches the same ([0007] "Systems and methods are disclosed that use super-resolution (SR) processes to fuse information from a plurality of low resolution images captured by an imager array to synthesize a high resolution image. In many embodiments, the objective and subjective quality of the obtained super-resolution image is increased through signal restoration. In several embodiments, the SR process incorporates cross-channel fusion. In a number of embodiments, the imager array includes imagers having different fields of view. In many embodiments, aliasing is introduced into the low resolution images to enable improved recovery of high frequency information through SR processing").

It would have been obvious to one of ordinary skill in the art as of the effective filing date of the claimed invention, to modify the system of Cho to include the synthesis of second image information as taught by Lelescu. Cho expressly advocates the blending of image data to create deblurred images ([0143]). Accordingly, one of ordinary skill in the art would understand that data from separate images could also be used for such a process with predictable and successful integration into Cho's system. Lelescu explicitly teaches the conventional technique of super-resolution processing to achieve such an end. As a result, Cho is enhanced by obtaining a greater array of reference data for use.
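Lelescu's super-resolution fusion is far more involved, but the basic claim 12 idea (other image data drawn from a second image) can be illustrated with a per-pixel merge of two pre-aligned frames; the gradient-energy weighting is this sketch's assumption, not Lelescu's SR math:

    import numpy as np

    def fuse_two_frames(first, second):
        # first, second: pre-aligned grayscale frames of the same scene.
        def grad_energy(img):
            gy, gx = np.gradient(img.astype(float))
            return gx ** 2 + gy ** 2 + 1e-6
        w1, w2 = grad_energy(first), grad_energy(second)
        # The locally sharper observation dominates at each pixel.
        return (w1 * first + w2 * second) / (w1 + w2)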
Claims 15-19 recite claims 3-7 in the form of a method rather than an apparatus. The apparatus is capable of performing the method, and as such, claims 15-19 are rejected in line with claims 3-7 discussed above.

Additional References

Additionally cited references (see attached PTO-892) otherwise not relied upon above have been made of record in view of the manner in which they evidence the general state of the art.

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOAH WILLIAM BOYAR, whose telephone number is (571) 272-8392. The examiner can normally be reached 8:30-5:00 EST, Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NOAH W BOYAR/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

Mar 08, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
