Prosecution Insights
Last updated: April 19, 2026
Application No. 18/721,111

APPARATUS AND METHOD FOR ENHANCING A WHITEBOARD IMAGE

Status: Non-Final OA (§103)
Filed: Jun 17, 2024
Examiner: NGUYEN, DAVID VAN
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Canon U.S.A., Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Grants only 0% of cases.

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift in resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 14 across all art units (career history; 14 currently pending)

Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 78.6% (+38.6% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities: In Paragraph 20, Line 5: “masks .” should be written as “masks.” An extra space was added between the word and the period. Appropriate correction is required.

Claim Objections

Claim 4 is objected to because of the following informalities: In Claim 4, Line 5: “correct” should be read as “corrected” to clarify that the first corrected image generated in Claim 1 is what should be retrieved prior to performing blending processing. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5, 7, 9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Yarvis et al (US 20170372449 A1), Zhou et al (US 20210019892 A1), and Akiba et al (US 20220101487 A1), hereinafter Yarvis, Zhou, and Akiba respectively.
Regarding claim 1, Yarvis teaches a control apparatus that performs image processing, the control apparatus comprising: one or more processors (“While computing system 500 is illustrated with a single processor, it may include multiple processors and/or co-processors” – Par 150, Lines 5-7); and one or more memories storing instructions that, when executed, configure the one or more processors (“Computing system 500 may further include random access memory (RAM) or other dynamic storage device 520 (referred to as main memory), coupled to bus 505 and may store information and instructions that may be executed by processor 510” – Par 150, Lines 9-13) to: receive a captured video from a camera capturing a meeting room (“may be used for capturing any amount and type of visual data, such as images (e.g., photos, videos, movies, audio/video streams, etc.)” – Par 80, Lines 7-10, Fig 3A-D);

[Figure: Screenshot of Yarvis et al (US 20170372449 A1), Fig 3A]

extract and store a predefined region of the video as extracted image data (“where identifying the target board includes extracting a region encompassing the target board.” – Abstract); generate a first corrected image by performing first image correction processing on the extracted image data to correct noise (“rectification logic 205 may then be used to rectify the whiteboard image of whiteboard 270 by correcting any geometric distortions of the identified whiteboard region to provide a frontal-view of whiteboard 270” – Par 55, Lines 1-4. [NOTE: Yarvis teaches a first correction that adjusts an image to counteract the distortion caused by the camera’s angle relative to the whiteboard (keystone correction).]).

Yarvis does not teach generating a binary mask of the first corrected image; generating a filtered image based on the binary mask of the first corrected image and the first corrected image; or generating a second corrected image by performing second image correction processing on the filtered image.

However, Zhou teaches generating a binary mask of the first corrected image (“In block 226, a binary mask for the frame is obtained by performing fine segmentation based on the color data, the trimap, and the weight map” – Par 82, Lines 1-3. [NOTE: After the combination, the binary mask generation taught by Zhou can be applied to the first corrected image taught by Yarvis to teach generating a binary mask of the first corrected image.]); generating a filtered image based on the binary mask of the first corrected image and the first corrected image (“In block 230, a Gaussian filter may be applied to the binary mask as part of performing fine segmentation. The Gaussian filter may smooth the segmentation boundary in the binary mask. Applying the Gaussian filter can provide alpha matting, e.g., can ensure that the binary mask separates hairy or fuzzy foreground objects from the background” – Par 88. [NOTE: The first corrected image taught by Yarvis can be combined with Zhou’s method of obtaining the binary mask to obtain the binary mask of the first corrected image. Applying the Gaussian filter to that binary mask, as disclosed by Zhou, then creates a filtered image that smooths the segmentation in the binary mask.]); and generating a second corrected image by performing second image correction processing on the filtered image (“Considering the color continuity of the background of whiteboard 270, in one embodiment, enhancement logic 207 may be used to identify and enhance the unclear contents of whiteboard 270 with less noises, such as by designing an adaptive enhancement function of the contrast” – Yarvis: Par 78, Lines 1-5. [NOTE: Yarvis discloses that the contrast enhancement logic (second correction) can take place after the rectification logic (first correction), Yarvis: Par 55. After the combination, Zhou’s generation of a filtered image based on the binary mask of the first corrected image prior to the contrast enhancement logic teaches generating a second corrected image by performing second image correction processing on the filtered image.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yarvis to incorporate the teachings of Zhou to generate a binary mask of the first corrected image, create a filtered image from it, and generate a second corrected image by performing a second correction on the filtered image. Binary masks are known in the art as a technique that isolates the region of interest (the whiteboard). The resulting filtered image allows the second correction to be performed on an image with less noise, creating an even sharper image.

Yarvis in view of Zhou still does not teach performing blending processing that combines the second corrected image with the first corrected image to generate a final corrected image. However, Akiba teaches performing blending processing that combines the second corrected image with the first corrected image to generate a final corrected image (“As shown in the middle part and the right part of FIG. 4, the synthesizing section 133 generates a display image DSIM by synthesizing the first correction image CIM1 and the second correction image CIM2” – Par 62, Lines 1-4, Fig 4. [NOTE: The first and second corrected images as taught by Yarvis can be blended similarly to how Akiba blends a first and second corrected image to produce a final display image.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yarvis in view of Zhou to incorporate the teachings of Akiba to perform a blending process that combines the first and second corrected images to generate a final corrected image. Blending the first corrected image, which corrects the geometric distortion of the whiteboard, with the second corrected image, which adjusts the contrast of the whiteboard contents to make them clearer, would provide the predictable result of a sharper, enhanced view of the whiteboard that uses the advantages of both corrected images.

Regarding claim 7, the claim describes a method that lists the steps of the functions performed by the apparatus of claim 1. Therefore, method claim 7 corresponds to the apparatus of claim 1 and is rejected for the same reasons of obviousness as used above.
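[Editor's illustration] To make the mapped claim 1 pipeline concrete, here is a minimal Python/OpenCV sketch of the combined flow as the rejection reads it onto Yarvis, Zhou, and Akiba: rectify the extracted region, build and Gaussian-smooth a binary mask, apply a second (contrast) correction to the filtered image, and blend the two corrected images. The function name, Otsu thresholding, kernel size, gain/offset, and blend weights are illustrative assumptions, not details taken from the cited references.

import cv2
import numpy as np

def enhance_whiteboard(frame, region_corners, out_size=(1280, 720)):
    w, h = out_size
    # First correction: keystone (perspective) rectification of the
    # extracted whiteboard region to a frontal, rectangular view.
    # region_corners is assumed in TL, TR, BR, BL order.
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(region_corners), dst)
    first_corrected = cv2.warpPerspective(frame, M, (w, h))

    # Binary mask of the first corrected image: dark strokes vs. bright board.
    gray = cv2.cvtColor(first_corrected, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Gaussian-smooth the mask and use it to build the filtered image
    # (content kept, everything else pushed toward white).
    soft = cv2.GaussianBlur(mask, (5, 5), 0).astype(np.float32) / 255.0
    white = np.full_like(first_corrected, 255)
    filtered = (first_corrected * soft[..., None] +
                white * (1.0 - soft[..., None])).astype(np.uint8)

    # Second correction: contrast/intensity enhancement of the filtered image.
    second_corrected = cv2.convertScaleAbs(filtered, alpha=1.4, beta=-40)

    # Blending: combine the two corrected images into the final output.
    return cv2.addWeighted(first_corrected, 0.4, second_corrected, 0.6, 0)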
Regarding claim 3, Yarvis in view of Zhou and Akiba teaches the apparatus of claim 1. Yarvis further teaches wherein the first image correction processing is keystone correction that generates a substantially rectangular image of the extracted predefined region (“For example, the boundary of the identified whiteboard region may be refined by finding a minimum-bounding rectangle having edges that are aligned with vanishing lines in the scene captured in the input image, where whiteboard 270 may then be rectified as a frontal-view image by using an aspect ratio that is estimated, as facilitated by estimation logic 203, using the whiteboard indicators 271 on whiteboard 270.” – Par 51).

It would have been obvious to one of ordinary skill in the art to further incorporate the teachings of Yarvis to have the first image correction be a keystone correction. Keystone correction is a common technique in the art for adjusting the image to deliver a rectangular view rather than the distorted view captured at an angle by the camera, with the known result of providing a proper image shape for a clearer view of the content.

Regarding claim 9, the claim describes a method that lists the steps of the functions performed by the apparatus of claim 3. Therefore, method claim 9 corresponds to the apparatus of claim 3 and is rejected for the same reasons of obviousness as used above.
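[Editor's illustration] As a sketch of the keystone correction mapped at claims 3 and 9, the snippet below locates a board-like region and rectifies it to a substantially rectangular image, loosely following the minimum-bounding-rectangle refinement quoted from Yarvis Par 51. The Canny thresholds, largest-contour heuristic, and corner ordering are assumptions for illustration only, not the reference's method.

import cv2
import numpy as np

def rectify_largest_quad(image):
    # Edge map and external contours (OpenCV 4 return signature).
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return image
    # Assume the board is the largest contour; fit a minimum-area
    # bounding rectangle around it.
    board = max(contours, key=cv2.contourArea)
    rect = cv2.minAreaRect(board)
    corners = cv2.boxPoints(rect)  # four corners as float32
    w, h = int(rect[1][0]), int(rect[1][1])
    if w == 0 or h == 0:
        return image
    # Map the four corners to an upright w x h rectangle; the ordering
    # below assumes boxPoints' bottom-left-first convention.
    dst = np.float32([[0, h], [0, 0], [w, 0], [w, h]])
    M = cv2.getPerspectiveTransform(corners, dst)
    return cv2.warpPerspective(image, M, (w, h))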
Regarding claim 5, Yarvis in view of Zhou and Akiba teaches the apparatus of claim 1. Yarvis further teaches wherein the second image correction processing corrects color and intensity of the first corrected image (“Embodiments provide for enhancement logic 207 to facilitate contrast enhancement for a rectified whiteboard image from a panoramic camera of camera(s) 241 so that the whiteboard contents may be clearly viewed and shared with more meeting participants under various lighting conditions.” – Par 77, Lines 1-11. [NOTE: Yarvis discloses that color enhancement is done to remedy the small difference in color and/or intensity between the contents of the whiteboard and the background. This enhancement is done after the first rectification logic.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date to further incorporate Yarvis to use the second image correction to correct the color and intensity of the first corrected image. As disclosed by Yarvis, one motivation for performing the contrast enhancement is to adjust the color and/or intensity so that the whiteboard contents may be clearly viewed under various lighting conditions. Applying this technique to the first corrected image would have the predictable result of an enhanced view of the whiteboard with a clear view of its contents.

Regarding claim 11, the claim describes a method that lists the steps of the functions performed by the apparatus of claim 5. Therefore, method claim 11 corresponds to the apparatus of claim 5 and is rejected for the same reasons of obviousness as used above.

Claims 2 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Yarvis, Zhou, Akiba, and Port (US 11489886 B2), hereinafter Port.

Regarding claim 2, Yarvis in view of Zhou and Akiba teaches the apparatus of claim 1. Yarvis in view of Zhou and Akiba does not teach wherein execution of the instructions further configures the one or more processors to: store, in memory, a predetermined number of masks of the first corrected image; compute the average of the stored predetermined number of masks; and generate an average mask, wherein the filtered image is generated using the average mask.

However, Port teaches wherein execution of the instructions further configures the one or more processors to: store, in memory, a predetermined number of masks of the first corrected image (“For example, N is in the range of 4-32 frames, e.g. 10, 15 or 20.” – Col 8, Lines 50-51. [NOTE: Port discloses that the variable N determines the number of frames, which include respective masks for each image. Port does not explicitly say that the masks are of the first corrected image. After the combination, the first corrected image as disclosed by Yarvis and the predetermined number of masks as taught by Port teach storing, in memory, a predetermined number of masks of the first corrected image.]); compute the average of the stored predetermined number of masks (“In step 218, an average is calculated of the intermediate masks of the N last video frames. In other words, an average is calculated of the intermediate mask of the current video frame and the intermediate masks of the N−1 previous video frames.” – Col 8, Lines 47-51. [NOTE: Port discloses that the average is calculated from the masks of each video frame image. Port does not explicitly state that the images are the first corrected image. After the combination, the calculation of the average of the number of masks as taught by Port can be based on the first corrected image as taught by Yarvis to teach computing the average of the stored predetermined number of masks.]); and generate an average mask (“The examples of FIGS. 3A and 3B refer to an averaging operation for generating the pen stroke mask” – Col 9, Lines 11-12. [NOTE: Port defines the pen stroke mask as the average mask after computing the average of the intermediate masks.]).

After the combination, the predetermined number of masks as taught by Port can be generated based on the first corrected image as taught by Yarvis. The masks can then be averaged to create an average mask, and the Gaussian filter as taught by Zhou can be applied to the average mask to create the filtered image, teaching that the filtered image is generated using the average mask.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Yarvis in view of Zhou and Akiba to incorporate the teachings of Port to store a predetermined number of masks of the first corrected image and compute their average to generate an average mask used to create the filtered image. Generating an average mask from multiple binary masks is a common technique for creating a consistent binary mask that accurately depicts each segment, such as the region of interest (the whiteboard) and the background.

Regarding claim 8, the claim describes a method that lists the steps of the functions performed by the apparatus of claim 2. Therefore, method claim 8 corresponds to the apparatus of claim 2 and is rejected for the same reasons of obviousness as used above.
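[Editor's illustration] A minimal sketch of the claim 2 limitation as mapped to Port: store a predetermined number N of masks of the first corrected image and compute their per-pixel average to generate an average mask. Port gives N in the range of 4-32; n=10 and the 0.5 re-binarization threshold here are assumed values.

from collections import deque
import numpy as np

class MaskAverager:
    """Keeps the N most recent binary masks and yields their average."""

    def __init__(self, n=10):
        # Bounded history: old masks fall out once n masks are stored.
        self.history = deque(maxlen=n)

    def update(self, mask):
        # mask: uint8 binary mask (0 or 255) of the first corrected image.
        self.history.append(mask.astype(np.float32) / 255.0)
        # Per-pixel average over the stored masks, then re-binarize.
        avg = np.mean(np.stack(tuple(self.history)), axis=0)
        return (avg >= 0.5).astype(np.uint8) * 255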
Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Yarvis, Zhou, Akiba, and Kim et al (US 20190110077 A1), hereinafter Kim.

Regarding claim 4, Yarvis in view of Zhou and Akiba teaches the apparatus of claim 1. Yarvis in view of Zhou and Akiba does not teach wherein execution of the instructions further configures the one or more processors to: store, in memory, a copy of the first corrected image; and prior to performing the blending processing, retrieve the stored copy of the first corrected image.

However, Kim teaches wherein execution of the instructions further configures the one or more processors to store, in memory, a copy of the first corrected image (“one processor (e.g., at least one of the processor 120 or the processor 264) may be configured to acquire a raw image of an external object by using the camera (e.g., the camera module 180 and/or the image sensor 230), to generate a first corrected image for the raw image and store the first corrected image in the memory 230” – Par 96, Lines 2-7. [NOTE: Kim does not explicitly state that the first corrected image stored in memory is a copy. However, one of ordinary skill in the art could simply configure the processors to store a copy of the generated first corrected image, as this is a very standard technique.]), and Akiba teaches, prior to performing the blending processing, retrieving the stored copy of the first corrected image (Akiba: “the synthesizing section 133 performs α blending of the first correction image CIM1 and the second correction image CIM2 and outputs the display image DSIM” – Par 69. [NOTE: The synthesizing section as disclosed by Akiba is configured to take the first and second correction images and blend them to produce a final image. After the combination, one of ordinary skill in the art could have the synthesizing section disclosed by Akiba retrieve from memory the first corrected image stored as disclosed by Kim in order to perform the α blending with the second corrected image, teaching storing, in memory, a copy of the first corrected image and, prior to performing the blending processing, retrieving the stored copy of the first corrected image.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Yarvis in view of Zhou and Akiba to incorporate the teachings of Kim to store a copy of the first corrected image and, prior to any blending processing, retrieve the stored copy of the first corrected image. It is a common technique to store a copy of data in memory to preserve it in case it needs to be used in a future process. Storing the first corrected image to be retrieved later for blending allows the blending operation to incorporate the strengths of the first image’s corrections into the new blended image.

Regarding claim 10, the claim describes a method that lists the steps of the functions performed by the apparatus of claim 4. Therefore, method claim 10 corresponds to the apparatus of claim 4 and is rejected for the same reasons of obviousness as used above.
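[Editor's illustration] For claims 4 and 10, a small sketch of the mapped behavior: keep a copy of the first corrected image in memory and retrieve it just before the blending processing, shown here as the α blending Akiba describes in Par 69. The class shape and the 0.5 α weight are illustrative assumptions.

import cv2

class BlendStage:
    """Stores a copy of the first corrected image for later blending."""

    def __init__(self):
        self._first_copy = None

    def store_first(self, first_corrected):
        # Keep an independent copy in memory for a future process.
        self._first_copy = first_corrected.copy()

    def blend(self, second_corrected, alpha=0.5):
        # Retrieve the stored copy prior to performing the blending
        # processing, then alpha-blend it with the second corrected image.
        if self._first_copy is None:
            raise ValueError("first corrected image was never stored")
        return cv2.addWeighted(self._first_copy, alpha,
                               second_corrected, 1.0 - alpha, 0)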
Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Yarvis, Zhou, Akiba, Pettigrew et al (US 8891864 B2), and Baek (US 11334970 B2), hereinafter Pettigrew and Baek respectively.

Regarding claim 6, Yarvis in view of Zhou and Akiba teaches the apparatus of claim 5. Yarvis further teaches wherein execution of the instructions further configures the one or more processors to correct the color and intensity of the corrected image by converting the first corrected image from a first color space to a second color space (“At block 1103, a hue, saturation, and value (HSV) channel splitting is performed, where given a color image for the rectified whiteboard, the color image is first converted into an HSV color space.” – Par 137, Lines 7-11. [NOTE: Yarvis implies that if an image is not already in HSV color space, a conversion is performed before processing.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to further incorporate the teachings of Yarvis to convert the first corrected image from a first color space to a second color space. Converting from one color space to another gives the known result of obtaining more control over how channels such as saturation and intensity are manipulated.

Yarvis in view of Zhou and Akiba does not teach, for each pixel in the first corrected image having a first value, applying a predetermined saturation setting and a predetermined intensity setting. However, Pettigrew teaches, for each pixel in the first corrected image having a first value, applying a predetermined saturation setting and a predetermined intensity setting (“The color masking tool of some embodiments applies color correction operations (e.g., invoked by a user through selection of a GUI item provided by the media-editing application) to a portion or region of the image (also referred to as secondary color corrections) by using the color mask to isolate pixels in the image that have particular color values and applying color correction operations (e.g., hue adjustments, saturation adjustments, brightness adjustments, etc.) to the isolated pixels.” – Col 2, Lines 27-35. [NOTE: Pettigrew discloses that user-selected color correction operations are applied to pixels based on their mask values. This can be combined with Zhou’s binary mask to define the value of each pixel in the first corrected image and apply the color adjustments based on those values.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to modify Yarvis in view of Zhou and Akiba to incorporate the teachings of Pettigrew to apply a predetermined saturation and intensity setting based on the value of each pixel. The binary mask defines the region of interest (the whiteboard) and its contents. Using the values that the binary mask assigns to each pixel, the saturation and intensity settings could be applied to the pixels, resulting in the predicted outcome of a clearer visual of the contents on the whiteboard.

Yarvis, Zhou, Akiba, and Pettigrew still do not teach, for each pixel not having the first value, setting the pixel to be white. However, Baek teaches, for each pixel not having the first value, setting the pixel to be white (“Upon generating the binary map, enhancement manager 508 can map the mask against a digital image (e.g., input image) by using pixels from the digital image wherever the map is black and use white elsewhere” – Col 19, Lines 35-38. [NOTE: The binary map as disclosed by Baek will set the pixels to white to distinguish them from the black region of interest.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Yarvis in view of Zhou, Akiba, and Pettigrew to incorporate the teachings of Baek to set pixels that do not have the first value to white. Setting those pixels to white will have the predicted outcome of distinguishing the foreground from the background, which results in a clearer image of the contents of the whiteboard.

Regarding claim 12, the claim describes a method that lists the steps of the functions performed by the apparatus of claim 6. Therefore, method claim 12 corresponds to the apparatus of claim 6 and is rejected for the same reasons of obviousness as used above.
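[Editor's illustration] To illustrate the claim 6 mapping, the sketch below converts the first corrected image from a first color space (BGR) to a second color space (HSV, per Yarvis), applies a predetermined saturation setting and a predetermined intensity setting to pixels the binary mask marks with the first value (per Pettigrew), and sets every other pixel to white (per Baek). The mask is assumed to be a uint8 array the size of the image, and the specific saturation/value numbers are assumptions.

import cv2
import numpy as np

def colorize_content(first_corrected, mask, sat=200, val=60):
    # First color space (BGR) to second color space (HSV).
    hsv = cv2.cvtColor(first_corrected, cv2.COLOR_BGR2HSV)
    content = mask > 0                # pixels having the "first value"
    hsv[content, 1] = sat             # predetermined saturation setting
    hsv[content, 2] = val             # predetermined intensity setting
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    out[~content] = (255, 255, 255)   # every other pixel set to white
    return out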
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID V. NGUYEN whose telephone number is (571) 272-6111. The examiner can normally be reached M-F 7:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Y Poon, can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID VAN NGUYEN/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617

Prosecution Timeline

Jun 17, 2024
Application Filed
Feb 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573160
INTIMACY-BASED MASKING OF THREE DIMENSIONAL (3D) FACE LANDMARKS
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
