Prosecution Insights
Last updated: April 19, 2026
Application No. 18/778,944

STORAGE MEDIUM, METHOD OF CONTROLLING IMAGE PROCESSING APPARATUS, AND IMAGE PROCESSING APPARATUS

Non-Final OA: §102, §103, §112
Filed: Jul 20, 2024
Examiner: ALMAGHAYREH, KHALID M
Art Unit: 2492
Tech Center: 2400 (Computer Networks)
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (208 granted / 248 resolved), +25.9% vs TC avg (above average)
Interview Lift: strong, +25.2% among resolved cases with interview
Typical Timeline: 2y 8m avg prosecution; 13 currently pending
Career History: 261 total applications across all art units

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 248 resolved cases
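The headline figures above follow directly from the examiner's career counts; a quick arithmetic check (all values taken from this page):

```python
# Sanity-check the dashboard figures against the examiner's career counts.
granted = 208
resolved = 248

allow_rate = granted / resolved           # career allow rate
print(round(allow_rate * 100, 1))         # -> 83.9 (displayed as 84%)

# The page reports +25.9% vs the Tech Center average, which implies:
tc_average = allow_rate - 0.259
print(round(tc_average * 100, 1))         # -> 58.0 (implied TC average)
```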

Office Action

§102 §103 §112
DETAILED ACTION

This communication is responsive to Application No. 18/778,944, filed on July 20, 2024. Claims 1-19 are pending and are directed towards STORAGE MEDIUM, METHOD OF CONTROLLING IMAGE PROCESSING APPARATUS, AND IMAGE PROCESSING APPARATUS.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 10/18/2024, 11/15/2024 and 12/05/2024 are acknowledged. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 9 and 10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 9 recites the limitation “displaying, on the display unit, the plurality of the types of the preset designation based on the number of usage by a user on the screen to perform designation by the fourth method” which is vague and not understood. It is not understood what is meant by the number of usage by a user on the screen, or how the number of usage is related to the plurality of the types of the preset designation. For examination purposes, the examiner interpreted the limitation to display a plurality of the types of the preset designation.
Claim 10 recites the limitation “displaying, on the display unit, the plurality of the types of the preset designation based on the last usage date and time on the screen to perform designation by the fourth method” which is vague and not understood. It is not understood what is meant by the last usage date and time on the screen, or how the last usage is related to the plurality of the types of the preset designation. For examination purposes, the examiner interpreted the limitation to display a plurality of the types of the preset designation.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-2, 4-7 and 11-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim US 2020/0404122 A1 (hereinafter “Kim”).

As per claim 1, Kim teaches a non-transitory computer readable storage medium storing an application which causes a computer to execute a method of controlling an image processing apparatus (the image scanning apparatus 100 may include the image sensor 110, the memory 120, the processor 130, a communicator 140, a display 150, a manipulation inputter 160, and an engine 170.
Kim, para [0056]), the method comprising: displaying, on a display unit, a screen to designate a region where masking processing is performed using a method selected from a plurality of different methods, in which the region where the masking processing is performed in image data obtained by scanning a document by a scanner included in the image processing apparatus is designated (the expression “image forming apparatus” as used herein refers to an apparatus that prints the printing data generated at a terminal such as a computer onto a recording medium. Examples of an image forming apparatus may include a copy machine, a printer, a facsimile, or a multi-function printer (MFP) implementing functions of the above. The printer, the scanner, the fax machine, the multi-function printer (MFP), a display apparatus, or the like may represent any apparatus that can perform the image forming job. Kim, para [0016])( The processor 130 may control to display a preview image corresponding to the masked scan image. At this time, the processor 130 may generate a user interface window including a first area for displaying a preview image corresponding to the masking process, a second area for receiving an option related to the masking process, and control to display the generated user interface window. Kim, para [0047]); and performing the masking processing on the image data based on designation on the screen (The personal information included in the generated scan image is masked in operation S1120. For example, OCR may be performed on the generated scan image, and an area including personal information is set as a masking area based on the character recognition result. The masking process can be performed on the masking area. 
Kim, para [0128]), wherein the plurality of different methods include a first method in which the region where the masking processing is performed is designated by selecting a type of a character string included in the region where the masking processing is performed (A user may select types of personal information for performing protection for personal information using the user interface window 500. In the illustrated example, only the resident registration number, the address, and the telephone number are shown as examples of personal information, but various personal information such as a name, an e-mail address, a job title, a passbook number, or the like can be used in implementation. In implementation, a masking method, a masking area (e.g., all or part of personal information) may be separately set for each type of personal information. Kim, para [0090-0091]), and a second method in which the region where the masking processing is performed is designated by designating a position of the region where the masking processing is performed (The area to add masking 730 provides for selection of an option for adding masking processing. If the area to add masking 730 is selected, the user can additionally set types of personal information or set an area for performing masking. Kim, para [0101]) (The processor 130 may perform an OCR process 810 on the scan image 820 and set an area containing the personal information as a masking area. For example, the processor 130 can confirm the numeric text format of xxxxxx-xxxxxxx as personal information and set a partial area of the detected numeric text as a masking area. As an alternative, the entire area of the numeric text may be set as the masking area. Kim, para [0110]).
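The Kim passages mapped to claim 1 describe an OCR-then-mask flow: recognize text, match personal-information patterns such as the xxxxxx-xxxxxxx resident-number format, and treat the matching areas as masking regions. A minimal sketch of that "first method" (designation by character-string type) follows; the `OcrWord` structure and the regex patterns are illustrative assumptions, not Kim's actual implementation:

```python
import re
from dataclasses import dataclass

@dataclass
class OcrWord:
    """One OCR result: recognized text plus its bounding box (assumed structure)."""
    text: str
    box: tuple  # (x, y, width, height)

# Illustrative patterns for user-selectable character-string types.
PATTERNS = {
    "phone":    re.compile(r"\d{3}-\d{4}-\d{4}"),
    "resident": re.compile(r"\d{6}-\d{7}"),   # the xxxxxx-xxxxxxx format Kim cites
    "email":    re.compile(r"\S+@\S+\.\S+"),
}

def find_mask_regions(words, selected_types):
    """Return bounding boxes of OCR words matching the user-selected types."""
    regions = []
    for word in words:
        for t in selected_types:
            if PATTERNS[t].search(word.text):
                regions.append(word.box)
                break
    return regions

words = [OcrWord("John", (10, 10, 40, 12)),
         OcrWord("010-1234-5678", (60, 10, 90, 12)),
         OcrWord("123456-1234567", (10, 30, 100, 12))]
print(find_mask_regions(words, ["phone", "resident"]))
# -> [(60, 10, 90, 12), (10, 30, 100, 12)]
```

The point of contention in the rejection is not this pipeline itself but whether selecting a *type* (first method) and designating a *position* (second method) are both disclosed as distinct user-facing methods.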
As per claim 2, Kim teaches the non-transitory computer readable storage medium according to claim 1, wherein the plurality of the different methods further include a third method in which the region where the masking processing is performed is designated by designating a character string included in the region where the masking processing is performed (The processor 130 may mask personal information included in the generated scan image. For example, when it is determined that masking of personal information is required, the processor 130 may perform optical character recognition (OCR) on the scan image and determine whether personal information exists using the character recognition result…if keywords related to personal information (e.g., TEL, MOBILE, Email, resident number, “.com”, “/co.kr”, “Seoul,” etc.) are detected, or a number text (For example, xxxxxx-xxxxxxx, xxx-xxxx-xxxx) or character text of a predetermined type is detected, it can be determined that personal information may exist. Kim, para [0034-0035]) (If the additional information is text information, the processor 130 may generate information on the OCR result, a text font, a size, or the like representing the personal information of the scan image as text information or generate the vectorized information on the text in the masked area into text. Here, the vectorized information is information that defines a symbol constituting a text as a straight line or a curved line. Kim, para [0043]).

As per claim 4, Kim teaches the non-transitory computer readable storage medium according to claim 2, wherein the method further comprising: in a case where the selected method to designate the region where the masking processing is performed is the third method, receiving a user input to designate the character string on a screen to perform designation by the third method (The manipulation inputter 160 may receive an input through the control menu displayed on the display 150. The manipulation inputter 160 may be implemented as a plurality of buttons, a keyboard, a mouse, or the like, and may be implemented as a touch screen that can simultaneously perform functions of the display 150 as described above. The manipulation inputter 160 may receive a setting of an option related to personal information protection from a user or receive a setting of additional processing of a currently masked scan image. Kim, para [0068-0070]) (If the additional information is text information, the processor 130 may generate information on the OCR result, a text font, a size, or the like representing the personal information of the scan image as text information or generate the vectorized information on the text in the masked area into text. Here, the vectorized information is information that defines a symbol constituting a text as a straight line or a curved line. Kim, para [0043]) (The processor 130 may mask personal information included in the generated scan image. For example, when it is determined that masking of personal information is required, the processor 130 may perform optical character recognition (OCR) on the scan image and determine whether personal information exists using the character recognition result…if keywords related to personal information (e.g., TEL, MOBILE, Email, resident number, “.com”, “/co.kr”, “Seoul,” etc.) are detected, or a number text (For example, xxxxxx-xxxxxxx, xxx-xxxx-xxxx) or character text of a predetermined type is detected, it can be determined that personal information may exist. Kim, para [0034-0035]); and performing the masking processing on a character string region including the character string (The personal information included in the generated scan image is masked in operation S1120. For example, OCR may be performed on the generated scan image, and an area including personal information is set as a masking area based on the character recognition result. The masking process can be performed on the masking area. Kim, para [0128]).

As per claim 5, Kim teaches the non-transitory computer readable storage medium according to claim 1, wherein the method further comprising: in a case where the selected method to designate the region where the masking processing is performed is the first method, receiving a user input to select a type of the character string on a screen to perform designation by the first method (A user may select types of personal information for performing protection for personal information using the user interface window 500. In the illustrated example, only the resident registration number, the address, and the telephone number are shown as examples of personal information, but various personal information such as a name, an e-mail address, a job title, a passbook number, or the like can be used in implementation. In implementation, a masking method, a masking area (e.g., all or part of personal information) may be separately set for each type of personal information. Kim, para [0090-0091]); and performing the masking processing on a character string region including a character string corresponding to the designated type of the character string (The personal information included in the generated scan image is masked in operation S1120. For example, OCR may be performed on the generated scan image, and an area including personal information is set as a masking area based on the character recognition result. The masking process can be performed on the masking area. Kim, para [0128]).
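Claims 4 and 5 both conclude with performing the masking processing on the designated character-string region; once a region is designated by any of the methods, the final step is a raster fill (Kim's paragraph [0112], quoted under claims 14-15 below, mentions black, white, or background-color covering). A toy sketch of that fill step; the nested-list grayscale image representation is an assumption for illustration:

```python
def mask_region(image, region, fill=0):
    """Fill a rectangular region of a row-major grayscale image in place.

    image:  list of rows (lists of pixel values)
    region: (x, y, width, height) as designated on the screen
    fill:   0 = black covering; a sampled background value would give
            background-color masking instead
    """
    x, y, w, h = region
    for row in image[y:y + h]:
        row[x:x + w] = [fill] * w
    return image

# 4x6 all-white page; mask a 3x2 region at (1, 1) with black.
page = [[255] * 6 for _ in range(4)]
mask_region(page, (1, 1, 3, 2), fill=0)
print(page[1])  # -> [255, 0, 0, 0, 255, 255]
```

Whether the fill is black (claim 14) or the background color (claim 15) changes only the `fill` value, which is why the examiner maps both claims to the same Kim paragraph.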
As per claim 6, Kim teaches the non-transitory computer readable storage medium according to claim 1, wherein the type of the character string includes a company name, a name, a phone number, an address, a FAX number, an e-mail address, a uniform resource locator (URL), a credit card number, and a one dimensional or two dimensional code image (Personal information may be a resident registration number, a phone number, address, email address, job title, company name, etc. Kim, para [0031]).

As per claim 7, Kim teaches the non-transitory computer readable storage medium according to claim 1, wherein the method further comprising: in a case where the selected method to designate the region where the masking processing is performed is the second method, receiving a user input to designate a position of the region to be masked on a screen to perform designation by the second method (The manipulation inputter 160 may receive an input through the control menu displayed on the display 150. The manipulation inputter 160 may be implemented as a plurality of buttons, a keyboard, a mouse, or the like, and may be implemented as a touch screen that can simultaneously perform functions of the display 150 as described above. The manipulation inputter 160 may receive a setting of an option related to personal information protection from a user or receive a setting of additional processing of a currently masked scan image. Kim, para [0068-0070]) (a masking area (e.g., all or part of personal information) may be separately set for each type of personal information. Kim, para [0091]) (The area to add masking 730 provides for selection of an option for adding masking processing. If the area to add masking 730 is selected, the user can additionally set types of personal information or set an area for performing masking. Kim, para [0101]); and performing the masking processing on a region in the designated position (The personal information included in the generated scan image is masked in operation S1120. For example, OCR may be performed on the generated scan image, and an area including personal information is set as a masking area based on the character recognition result. The masking process can be performed on the masking area. Kim, para [0128]).

As per claim 11, Kim teaches the non-transitory computer readable storage medium according to claim 1, wherein the method further comprising: displaying a preview screen on the display unit before the masking processing is performed (The preview image area 710 is an area for displaying a preview image of a mask-processed scan image. A user may confirm a state of the mask-processed scan image using the preview image area 710. Kim, para [0099]).

As per claim 12, Kim teaches the non-transitory computer readable storage medium according to claim 2, wherein the method further comprising: displaying a preview screen on the display unit before the masking processing is executed (the display 150 may display a preview image indicating a masked scan image and may display a user interface window for setting an additional processing method for the masked scan image. Kim, para [0066]).

As per claim 13, Kim teaches the non-transitory computer readable storage medium according to claim 12, wherein the preview screen includes a screen to input the character string designated in the third method, a screen to select the type of the character string selected in the first method, and a screen to designate the position of the region where the masking processing is performed in the second method (The display 150 may display a user interface window for receiving settings related to privacy protection.
In addition, the display 150 may display a preview image indicating a masked scan image and may display a user interface window for setting an additional processing method for the masked scan image. Kim, para [0066]) (The option area 720 is an area for setting an additional processing option for the preview image, including an area to add masking 730, an area to remove masking 740, and an area to recover to an original image 750. Kim, para [0100]).

As per claim 14, Kim teaches the non-transitory computer readable storage medium according to claim 1, wherein the masking processing is processing to mask a designated region in the image data with black color (The processor 130 may perform a masking process on the set masking area to generate a masked scan image 830. For example, the processor 130 may perform a black covering over the masked area. As an alternative, various image processing such as mosaic processing or replacing with a preset image may be applied as well as a simple image processing method in which the image is covered with white or covered with the background color of the scan image. Kim, para [0112]).

As per claim 15, Kim teaches the non-transitory computer readable storage medium according to claim 1, wherein the masking processing is processing to mask a designated region in the image data with a background color of the image data (The processor 130 may perform a masking process on the set masking area to generate a masked scan image 830. For example, the processor 130 may perform a black covering over the masked area. As an alternative, various image processing such as mosaic processing or replacing with a preset image may be applied as well as a simple image processing method in which the image is covered with white or covered with the background color of the scan image. Kim, para [0112]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 3 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Kim US 2020/0404122 A1 (hereinafter “Kim”) in view of Lim US 2017/0180605 A1 (hereinafter “Lim”).

As per claim 3, Kim teaches the non-transitory computer readable storage medium according to claim 1.
Kim does not explicitly teach wherein the method further comprising: saving information of the region designated on the screen to a memory of the image processing apparatus as preset designation; and performing the masking processing on other image data obtained by scanning another document by the scanner by using the saved preset designation.

However, Lim teaches saving information of the region designated on the screen to a memory of the image processing apparatus as preset designation (The apparatus may further include a storage configured to store position and size information of a transcript name area and a personal information area, wherein the at least one processor may detect a transcript area within the scanned image and detect a personal information area within the detected transcript area using position information of personal information corresponding to the determined type of transcript. Lim, para [0010]) (The storage 140 may store information on a position and size of a personal information area according to a transcript name area, name of a transcript and a type of a transcript. To be specific, information on a position and size of a transcript name area and a personal information area may be included in the stored database, and information on a position and size of an area set by a user using the manipulation inputter 150. Lim, para [0066]); and performing the masking processing on other image data obtained by scanning another document by the scanner by using the saved preset designation (an image forming apparatus may detect a personal information area within a scanned image based on prestored information. For example, the image forming apparatus may detect a transcript area 621 within a scanned image using difference of brightness and darkness, and detect a preset area disposed at a preset distance from an edge of a transcript area as a personal information area. To be specific, the image forming apparatus may detect that, from an upper edge of the transcript area, an area of a preset size positioned at a distance of 5.5 cm from an upper edge and 2.2 cm from a left edge is a fingerprint area 622. Lim, para [0088]) (The processor 130 may correct a scanned image by blurring (or blurring out) the detected personal information area. At this time, blurring or blurring out can mean changing a detected personal information area to a blank image or making it a mosaic. Lim, para [0062]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the teaching of Kim in view of Lim. One would be motivated to do so, to enhance the privacy of the system by masking personal information at predetermined locations.

As per claim 8, Kim teaches the non-transitory computer readable storage medium according to claim 3. Kim does not explicitly teach wherein the plurality of the different methods further include a fourth method in which the region where the masking processing is performed is designated by using the preset designation saved in the memory, the method further comprising: in a case where the selected method to designate the region where the masking processing is performed is the fourth method, receiving preset designation selected from a plurality of types of preset designation displayed on a screen to perform designation by the fourth method; and performing the masking processing based on the selected preset designation.
However, Lim teaches wherein the plurality of different methods further include a fourth method in which the region where the masking processing is performed is designated by using the preset designation saved in the memory (The apparatus may further include a storage configured to store position and size information of a transcript name area and a personal information area, wherein the at least one processor may detect a transcript area within the scanned image and detect a personal information area within the detected transcript area using position information of personal information corresponding to the determined type of transcript. Lim, para [0010]) (The storage 140 may store information on a position and size of a personal information area according to a transcript name area, name of a transcript and a type of a transcript. To be specific, information on a position and size of a transcript name area and a personal information area may be included in the stored database, and information on a position and size of an area set by a user using the manipulation inputter 150. Lim, para [0066]), the method further comprising: in a case where the selected method to designate the region where the masking processing is performed is the fourth method, receiving preset designation selected from a plurality of types of preset designation displayed on a screen to perform designation by the fourth method (The storage 140 may store information on a position and size of a personal information area according to a transcript name area, name of a transcript and a type of a transcript. To be specific, information on a position and size of a transcript name area and a personal information area may be included in the stored database, and information on a position and size of an area set by a user using the manipulation inputter 150. Lim, para [0066]) (an image forming apparatus may detect a personal information area within a scanned image based on prestored information. For example, the image forming apparatus may detect a transcript area 621 within a scanned image using difference of brightness and darkness, and detect a preset area disposed at a preset distance from an edge of a transcript area as a personal information area. To be specific, the image forming apparatus may detect that, from an upper edge of the transcript area, an area of a preset size positioned at a distance of 5.5 cm from an upper edge and 2.2 cm from a left edge is a fingerprint area 622. Lim, para [0088]); and performing the masking processing based on the selected preset designation (The processor 130 may correct a scanned image by blurring (or blurring out) the detected personal information area. At this time, blurring or blurring out can mean changing a detected personal information area to a blank image or making it a mosaic. Lim, para [0062]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the teaching of Kim in view of Lim. One would be motivated to do so, to enhance the privacy of the system by masking personal information at predetermined locations.

As per claim 9, Kim teaches the non-transitory computer readable storage medium according to claim 8. Kim does not explicitly teach wherein the method further comprising: displaying, on the display unit, the plurality of the types of the preset designation based on the number of usage by a user on the screen to perform designation by the fourth method. However, Lim teaches wherein the method further comprising: displaying, on the display unit, the plurality of the types of the preset designation based on the number of usage by a user on the screen to perform designation by the fourth method (Fig. 6B shows a plurality of types of preset location of personal information that’s to be blurred “address change and fingerprint area”. Lim, Fig. 6B elements 621 and 622). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the teaching of Kim in view of Lim. One would be motivated to do so, to enhance the privacy of the system by masking personal information at predetermined locations.

As per claim 10, Kim teaches the non-transitory computer readable storage medium according to claim 8. Kim does not explicitly teach wherein the method further comprising: displaying, on the display unit, the plurality of the types of the preset designation based on the last usage date and time on the screen to perform designation by the fourth method. However, Lim teaches wherein the method further comprising: displaying, on the display unit, the plurality of the types of the preset designation based on the last usage date and time on the screen to perform designation by the fourth method (Fig. 6B shows a plurality of types of preset location of personal information that’s to be blurred “address change and fingerprint area”. Lim, Fig. 6B elements 621 and 622). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the teaching of Kim in view of Lim. One would be motivated to do so, to enhance the privacy of the system by masking personal information at predetermined locations.

Claim(s) 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kim US 2020/0404122 A1 (hereinafter “Kim”) in view of Shimahashi et al. US 2016/0094755 A1 (hereinafter “Shimahashi”).

As per claim 16, Kim teaches the non-transitory computer readable storage medium according to claim 2.
Kim does not explicitly teach the method further comprising: in a case where it is detected that the already-obtained image data and other image data newly obtained by scanning another document by the scanner are misaligned, obliquely rotating the newly-obtained other image data such that four sides of the newly-obtained other image data erect; and correcting the obliquely rotated other image data such that an origin of the obliquely rotated other image data matches an origin of the already-obtained image data.

However, Shimahashi teaches these limitations (The CPU 410 identifies the first edge 22L by analyzing the first scanned image 20L. Any edge detection process known in the art, such as a process employing filters or a process employing a Hough transform, may be used to identify the first edge 22L. If the identified first edge 22L is sloped relative to the first direction D1, the CPU 410 executes a skew correction process to rotate the first scanned image 20L until the first edge 22L is aligned with the first direction D1. Shimahashi, para [0049]) (the CPU 410 rotates the second scanned image 20Ra 180 degrees. FIG. 10 is a schematic diagram showing the first scanned image 20La and the rotated second scanned image 20Ra. At this time, the top of the left region 10L in the first scanned image 20La matches the top of the right region 10R in the second scanned image 20Ra, as shown in FIG. 10. Note that the first scanned image 20La may include a margin area. Sometimes the left region 10L is skewed in the first scanned image 20La relative to directions D1 and D2. In such cases, the CPU 410 executes the same skew correction process described in S100 and S110 of FIG. 6. For example, the CPU 410 may execute an edge detection process known in the art to identify the left edge SL (i.e., side SL) of the left region 10L. If the identified left edge SL is sloped relative to the first direction D1, the CPU 410 rotates the first scanned image 20La so that the left edge SL is aligned in the first direction D1. Similarly, if the right region 10R in the second scanned image 20Ra is skewed, the CPU 410 rotates the second scanned image 20Ra so that the right edge RE is aligned with the first direction D1. Shimahashi, para [0067-0068]) (the CPU 410 may instead set the rotated angles of the first and second scanned images after analyzing the images. For example, the CPU 410 may set overlapping regions for four rotated angles, including 0°, 90°, 180°, and 270°, and may use the rotated angle at which the overlapping region has the largest second similarity S2 among the four overlapping regions. Shimahashi, para [0142]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kim in view of Shimahashi. One would be motivated to do so to enhance the accuracy of the masking process by ensuring that the scanned images are correctly aligned with the original template image.

Claims 17-19 have limitations similar to those treated in the above rejection, are met by the references as discussed above, and are rejected for the same reasons and rationales as used above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

A. McCorkindale et al., US 2013/0272523 A1, directed to mobile field-level encryption of private documents.

B. Okamoto et al.,
US 2008/027678 A1, directed to image processing apparatus.

C. Salgado et al., US 2008/0239365 A1, directed to masking of text in document reproduction.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHALID M ALMAGHAYREH, whose telephone number is (571) 272-0179. The examiner can normally be reached Monday-Thursday, 8 AM-5 PM EST, and Friday, variable.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RUPAL DHARIA, can be reached at (571) 272-3880. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

Respectfully submitted,
/KHALID M ALMAGHAYREH/
Primary Examiner, Art Unit 2492
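The Shimahashi passages the examiner relies on, deskewing a scan and choosing among 0°, 90°, 180°, and 270° rotations by similarity of the overlapping region, can be sketched roughly as follows. The array representation and the pixel-agreement similarity measure are illustrative assumptions, not Shimahashi's actual implementation:

```python
import numpy as np

def best_rotation(reference: np.ndarray, scanned: np.ndarray) -> int:
    """Try 0/90/180/270-degree rotations of `scanned` and return the angle
    whose result best matches `reference` (highest fraction of equal
    pixels), mirroring the angle-selection idea in Shimahashi para [0142]."""
    best_angle, best_score = 0, -1.0
    for k in range(4):                       # k quarter-turns = k * 90 degrees
        rotated = np.rot90(scanned, k)
        if rotated.shape != reference.shape:
            continue                         # 90/270 change shape for non-square pages
        score = float(np.mean(rotated == reference))
        if score > best_score:
            best_angle, best_score = k * 90, score
    return best_angle

ref = np.zeros((4, 4), dtype=np.uint8)
ref[0, :] = 1                                # mark the "top" edge of the template
upside_down = np.rot90(ref, 2)               # simulate a 180-degree misaligned scan
angle = best_rotation(ref, upside_down)      # selects 180
```

After selecting the rotation, the origin-matching step of the claim would amount to translating the rotated image so its top-left corner coincides with the reference's; Shimahashi's edge-based skew correction additionally handles rotations that are not multiples of 90°.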

Prosecution Timeline

Jul 20, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596848
METHOD OF VERIFYING INTEGRITY OF DATA FROM A DEVICE UNDER TEST
2y 5m to grant Granted Apr 07, 2026
Patent 12587840
AUTHENTICATION MANAGEMENT IN A WIRELESS NETWORK ENVIRONMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12587386
CHECKOUT WITH MAC
2y 5m to grant Granted Mar 24, 2026
Patent 12579328
SYSTEM ON A CHIP AND METHOD GUARANTEEING THE FRESHNESS OF THE DATA STORED IN AN EXTERNAL MEMORY
2y 5m to grant Granted Mar 17, 2026
Patent 12572699
Using Memory Protection Data
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+25.2%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 248 resolved cases by this examiner. Grant probability derived from career allow rate.
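As a sanity check, the headline grant probability follows directly from the case counts shown above (a minimal sketch; the with-interview figure and the +25.2% lift are reported separately and are not derived here):

```python
# Career counts reported above for this examiner (AU 2492).
granted, resolved = 208, 248

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.0%}")   # prints "Career allow rate: 84%"
```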
