Prosecution Insights
Last updated: April 19, 2026
Application No. 17/547,568

DATA BLURRING

Final Rejection: §101, §102, §103, §112
Filed: Dec 10, 2021
Examiner: ZARRINEH, SHAHRIAR
Art Unit: 2496
Tech Center: 2400 (Computer Networks)
Assignee: Business Objects Software Ltd.
OA Round: 5 (Final)
Grant Probability: 79% (Favorable)
Predicted OA Rounds: 6-7
Predicted Time to Grant: 2y 8m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 79% (341 granted / 433 resolved; +20.8% vs TC avg; above average)
Interview Lift: +7.8% on resolved cases with interview (moderate, roughly +8%)
Typical Timeline: 2y 8m average prosecution; 59 applications currently pending
Career History: 492 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§103: 52.2% (+12.2% vs TC avg)
§112: 16.2% (-23.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 433 resolved cases
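The headline examiner figures above are internally consistent; a quick sketch (variable names are my own, and the Tech Center average is back-solved from the displayed delta rather than taken from USPTO data) shows the arithmetic:

```python
# Sketch: reproducing the headline examiner statistics from the career
# counts shown above. The TC-average figure is an estimate derived from
# the displayed +20.8% delta, not an official USPTO number.
granted, resolved = 341, 433

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # prints 78.8%, shown rounded as 79%

tc_delta = 20.8
tc_avg_estimate = allow_rate - tc_delta
print(f"Implied TC 2400 average: {tc_avg_estimate:.1f}%")
```

The same back-solving applies to the statute-specific rows: each "vs TC avg" delta plus the displayed rate implies the corresponding Tech Center baseline.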

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In communications filed on 08/11/2025, claims 3 and 14 are cancelled. Claims 1-2, 4-13, and 15-22 are pending in this examination. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. This examination is in response to US Patent Application No. 17/547,568.

Examiner Note

Applicant's replacement of the title obviates the previously raised Specification objection.

Response to Arguments

Applicant's arguments filed 08/11/2025 have been fully considered but they are not persuasive.

Examiner respectfully disagrees with Applicant's argument on pages 8-9 regarding the rejections of claims 1, 12, and 17 under 35 USC 101, abstract idea. Examiner refers Applicant to the Claim Rejections - 35 USC § 101 section of this Office action for more details and clarification. Examiner maintains the rejection.

Examiner respectfully disagrees with Applicant's argument on pages 9-12 regarding the rejections of claims 1, 12, and 17 under 35 USC 112(a), first paragraph. On page 10 of the Arguments/Remarks submitted on 08/11/2025, Applicant stated that, as discussed above, the document template, and thus all of the identifiers of data elements that it contains, are accessed by the processors of the application server; that this claim element thus serves to identify a particular data element as “a first data element” for reference later in the claim; and that, for the purposes of this discussion, $Income [3] is considered as the identifier of the first data element.
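For readers outside prosecution, the blurring limitation at the center of this dispute (determining the number of characters of the first value, then generating a length-preserving second string) can be sketched in a few lines. This is an illustrative toy, loosely modeled on the digit-to-letter concealment the Office Action attributes to Sharifi, not the Applicant's actual implementation; the function name and replacement scheme are hypothetical.

```python
# Illustrative sketch (not the claimed implementation): generate a
# length-preserving "blurred" second string for a sensitive first value,
# in the spirit of Sharifi's digit-to-letter concealment cited below.
def blur_value(first_string: str) -> str:
    length = len(first_string)          # "determining a number of characters"
    out = []
    for ch in first_string:
        if ch.isdigit():
            out.append(chr(ord('A') + int(ch)))  # replace digits [0-9] with letters
        elif ch.isalnum():
            out.append('X')
        else:
            out.append(ch)              # keep separators so layout is preserved
    second_string = ''.join(out)        # "generating a second string"
    assert len(second_string) == length  # same number of characters as the first
    return second_string

print(blur_value("123-45-6789"))  # prints BCD-EF-GHIJ
```

Nothing in the sketch depends on which data element the value came from; that independence is exactly what the examiner's §112 question below probes.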
Examiner would like to know whether, if the Applicant chose another data element such as Name or City, the Applicant would be able to perform the same analysis as has been done with the Income data element. The claim set filed on 08/11/2025 states: “accessing, by the one or more processors, an identifier of a first data element of the plurality of data elements; based on the identifier of the first data element, accessing a first data value of the first data element from a data source”; however, it does not indicate what the “first data element” is. Examiner maintains the rejection.

Applicant submits on pages 12-14 of the Remarks filed on 08/11/2025, regarding the rejections of claims 1, 12, and 17 under Claim Rejections - 35 USC § 102, that a prima facie case has not been made that Sharifi discloses the claimed limitations below. Examiner respectfully disagrees with Applicant's argument on pages 12-14 regarding the rejections of claims 1, 12, and 17 under Claim Rejections - 35 USC § 102. Sharifi discloses a method comprising: accessing, by one or more processors of a device, a document template that references a plurality of data elements [see FIG. 1, #114, ¶28… the type of information may be based on an application module 124 associated with the information, a data structure or format of the information, or the content of the information. In some examples, the information to be displayed by GUI 114 may include employee name, employee start date, salary, and social security number. In these examples, IPM 230 may assign a privacy level of “very private” for the social security information, a privacy level of “semi-private” for the salary information, and a privacy level of “non-private” for the start date], and accessing, by the one or more processors, an identifier of a first data element of the plurality of data elements [see FIG.
1, #114, Employee name, start date, salary, social security number]; and based on the identifier of the first data element, accessing a first data value of the first data element from a data source [see FIG. 1, #114, Employee name, start date, salary, social security number], and [¶71, as illustrated by graphical user interface 114 of FIG. 1, OM 232 may determine that personally identifiable information (e.g., social security numbers) should be obscured by replacing digits [0-9] with letters, and that the financial information (e.g., salary) should be obscured by removing reference to whether the salary is hourly, annually, or some other unit]; and based on the determined number of characters, generating a second string [¶37, UI module 120 may cause the information to be obscured by obfuscating or concealing the information…. UI module 120 may conceal information by replacing certain alphanumerical characters with different alphanumerical characters or symbols. For instance, as shown in GUI 114, social security numbers are obscured by replacing numbers of characters with letters], and [¶71, as illustrated by graphical user interface 114 of FIG. 1, OM 232 may determine that personally identifiable information (e.g., social security numbers) should be obscured by replacing digits [0-9] with letters]; and based on the identifier of the first data element and the document template, determining to not include a second data value that depends on the first data value; and generating, based on the document template and the second string, a document that excludes the second data value [¶90, Responsive to determining the obfuscation levels for the respective portions of the information, PMM 126 may output the various portions of information and an indication of the respective obfuscation levels to UI module 120.
UI module 120 may receive the information and the respective obfuscation levels and may cause PSD 112 to display the information according to the obfuscation levels. Thus, some portions of information may be obscured in a manner different than the method used to obscure other portions of information. For instance, as illustrated by graphical user interface 414B, health information 402, 404 may be obfuscated (e.g., blurred), address information 406 may be truncated, personally identifiable information 408 and 410 (e.g., social security number and phone number, respectively) may be concealed (e.g., may have characters replaced with placeholder characters), and financial information 412 (e.g., salary) may have characters replaced], and [¶93, Computing device 110 may receive consent to store user data (500). Computing device 110 may only store information associated with a user of computing device 110 if the user affirmatively consents to such collection of information. Computing device 110 may further provide opportunities for the user to withdraw consent, in which case computing device 110 may cease collecting or otherwise retaining the information associated with that particular user. Responsive to receiving user consent to store user data, computing device 110 may store contextual information, such as action usage information and/or application usage information], and [¶¶69-70]. Examiner maintains the rejection.

Applicant submits on pages 14-16 of the Remarks filed on 08/11/2025, regarding the rejections of claims 1, 12, and 17 under Claim Rejections - 35 USC § 103, that a prima facie case has not been presented for the claims below. Examiner respectfully disagrees with Applicant's argument on pages 14-16 regarding the rejections of claims 1, 12, and 17 under Claim Rejections - 35 USC § 103. Fan discloses: accessing, by one or more processors of a device, a document template that references a plurality of data elements [¶¶25-26, FIG.
3 illustrates example text and formatting changes to a document in accordance with some embodiments. The example text changes and formatting changes in FIG. 3 are illustrative, and not limiting. The examples shown in FIG. 3 may be determined and edited automatically. For example, a state of a list may be captured, such as when a user first navigates (e.g., places a cursor) at or within the list (equated to a document template) The list may be captured again after the user edits the list (e.g., edits a first example). The list may be captured when the user performs an additional action, such as a second edit example, placing a cursor at a location to begin the second edit example (but before the second edit example is actually performed), or performing a portion of the second edit. These captures may be sent to a machine learning component to learn examples. Training data for these examples may be marked (e.g., this is an edit that should be made, this is a list, etc.). In an example, when the first capture and a subsequent capture are different, the captures may be used in the machine learning component for training. The two captures that differ may be testing data (e.g., not labeled) and used in the machine learning component for unsupervised training. Example formatting changes may include…. redacting certain types of content (names, SSN, etc.) …. template document creation…]; and accessing, by the one or more processors, an identifier of a first data element of the plurality of data elements [¶16, FIGS. 1A-1G illustrate a user interface 100 for presenting a document including automatic text modification in accordance with some embodiments. The document displayed on user interface 100 is modified in each of the FIGS. 1A-1G, and the modifications may be user modifications or automatic modifications. 
The document depicts a set of names with social security numbers]; and based on the identifier of the first data element, accessing a first data value of the first data element from a data source [¶16, FIGS. 1A-1G illustrate a user interface 100 for presenting a document including automatic text modification in accordance with some embodiments. The document displayed on user interface 100 is modified in each of FIGS. 1A-1G, and the modifications may be user modifications or automatic modifications. The document depicts a set of names with social security numbers]; and based on the identifier of the first data element and the document template, determining to not include a second data value that depends on the first data value. While Fan discloses: [¶9, …The automation may speed up document processing and present unique and specific edits on a user interface. Some common repetitive tasks may include applying a specific direct formatting to all headings repetitively, redacting a series of numbers to retain confidentiality, modifying a list of entries in a similar way such as capitalizing a first letter, adding a period to an end of a sentence, etc.], and [¶14, The edits performed automatically based on a user example may include string modification (e.g., changing text, such as redacting numbers or letters, changing spelled out words to abbreviations, etc.), or formatting modifications (e.g., capitalizing letters, font changes, such as underline, italicize, bolding, size, etc.). Some automatic edits may include both format changes and string changes (e.g., redacting and changing font type)], and [¶18, In an example, after the edit is completed, an additional action by the user may be required before triggering automatic edits. For example, placement of the cursor 102 at the location shown in FIG. 1C may trigger the automatic edits.
A system may identify, from the cursor 102 placement that the user is attempting to perform a second edit (on Jake Craig's number) corresponding to the completed edit. Other actions of this sort may trigger the automatic edits, such as a keystroke (e.g., a press of a spacebar), entry of text or deletion of text (e.g., when XXXX is entered before the numbers are deleted), highlighting text (e.g., with the cursor, highlighting the numbers to delete/change to XXXX)], and [¶28, Example find and replace changes may include: extrapolation in tables; extrapolation in lists; consistent list structure (e.g., Capitalize, add period at the end); redact content such as SSN in a list; extrapolation for “date” entity; apply the change to all words/phrases representing the same entity when the user changes one of these words/phrases; content in table cell gets formatted automatically based on column/row names; for example: all numbers with “revenue” in column name are formatted as “currency”; recognize fillable blanks in a document, and automatically populate suggestions based on previous blanks the user filled in other documents (e.g. form filling); or the like]. Fan does not explicitly disclose, however Blenkhorn discloses: [¶¶54-59, The operator uses the GUI 100 to load an image of a scanned document into a first panel 110. The text in the first panel 110 is a sample of a possible image file that the operator can load. The particular printed information in the first panel 110 is not limited to any specific embodiment. The image information in the first panel 110 is representative of a particular form. This representative image information is hereafter referred to as the "template image." The operator uses the template image for finding and redacting similar forms. The operator may load a similar image file created from the same form into a second panel 120. As shown in FIG. 1, the second panel 120 includes an image of a completed or filled version of the sample form. 
The particular printed information in the second panel 120 is not limited to the illustrated example. The image information or file in the second panel 120 is representative of the same form as the template file shown in the first panel 110. The representative image in the second panel 120 is hereafter referred to as the "test image," since its purpose is to allow the matching-algorithm of the comparator to test whether a corresponding portion of the test image matches the select portion of the template image. The operator uses controls within the panel 130 to select or otherwise define one or more identification rectangles 111 on the template image information in the first panel 110. The matching algorithm of the comparator uses the portion of the template image inside these identification rectangles 111 to identify corresponding areas 121 in the test image as shown in the second panel 120. When a match is found between the two image files, by comparing the image information in the identification rectangle 111 of the template image with the corresponding pixel information of the corresponding area 121 of the test image, the system for identifying matching images may be further configured to take one or more actions on the test image information. For example, a redactor may be configured to modify or replace pixel information from one or more operator identified portions of the test image information. Such operator identified portions of the test image include the redaction rectangle 122 and redaction rectangle 123. In the illustrated embodiment, the modified or redacted portions of the test image are illustrated in a solid black color. This is indicative of the replacement of the corresponding pixel information by all zero digital values. Thus, a saved version of the modified or redacted image cannot be reverse engineered to determine the image information that was present in the original test image. 
Alternatively, the system could replace the pixel information corresponding to redaction rectangle 122 and/or the pixel information corresponding to redaction rectangle 123 with alternating patterns of zeros and ones, or all ones…], and [¶25]. [Image: media_image1.png] Examiner maintains the rejection.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The claimed invention is not directed to patent-eligible subject matter. Based upon consideration of all of the relevant factors with respect to the claims as a whole, claims 1-2, 4-13, and 15-22 are determined to be directed to an abstract idea. Claims 1-2, 4-13, and 15-22 are rejected under 35 USC 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 2a, prong 1: Claim 1 recites the steps of “A method comprising: accessing, by one or more processors of a device, a document template that references a plurality of data elements; accessing, by the one or more processors, an identifier of a first data element of the plurality of data elements; based on the identifier of the first data element, accessing a first data value of the first data element from a data source; determining a number of characters of a first string for the first data value; based on the determined number of characters, generating a second string; based on the identifier of the first data element and the document template, determining to not include a second data value that depends on the first data value; and generating, based on the document template and the second string, a document that excludes the second data value”.
As drafted, claim 1 is a method that, under its broadest reasonable interpretation, covers performance of the limitations in the mind and is broad enough to encompass performance by a human using pen and paper. For example, one of ordinary skill in the art, in the context of the claims, could manually (i.e., by using pen and paper and/or in the human mind) access a document template, access the identifier of a data element included in the document template, access a first data value for the first data element, determine the number of characters of the first string, generate a second string, and generate a document that excludes the second data value based on the document template and the second string, in a data processing system. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, and/or includes some additional physical steps such that it encompasses performance by a human using pen and paper but for the recitation of generic computer components (e.g., one or more processors), then it falls within the “Mental Processes” grouping of abstract ideas (this is a mental process as described in MPEP 2106.04(a)(2)(III)). Accordingly, the claim recites an abstract idea.

Step 2a, prong 2: No. This judicial exception is not integrated into a practical application because the claim recites an additional element, such as a processor (data processing system). This element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer, which may include components such as a processor. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. See MPEP 2106.04(d).

Step 2B: No.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitations are specified at a high level of generality relative to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)). Thus, claim 1 is not patent eligible. The independent claims 12 and 17 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter for the same reasons addressed above for independent claim 1. Thus, claims 1-2, 4-13, and 15-22 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter, as the claims do not contain any element or combination of elements that is sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the ineligible concept itself. See Alice, 134 S. Ct. at 2360. Under Alice, that is not sufficient “to transform an abstract idea into a patent-eligible invention.” See Electric Power Group, CyberSource, and Classen (Fed. Cir. 2011).

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-2, 4-13, and 15-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The independent claims 1, 12, and 17 contain “accessing, by the one or more processors, an identifier of a first data element of the plurality of data elements; based on the identifier of the first data element, accessing a first data value of the first data element from a data source…based on the determined number of characters, generating a second string; based on the identifier of the first data element and the document template, determining to not include a second data value that depends on the first data value; and generating, based on the document template and the second string, a document that excludes the second data value.” Applicant has argued on pages 9-12 of the Arguments/Remarks submitted on 08/11/2025 for the case in which the first data element is considered to be an Income element; however, the first data element in the claim set is written broadly and does not indicate that the first data element is an Income element. Examiner is unable to locate a description in the specification analyzing how the first data element is chosen from among the other elements in the template, such as Name and City, and how it is analyzed under the limitations of claim 1. Hence, it is not reasonably conveyed to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
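The claim element the Examiner finds unsupported (accessing a value by whatever identifier the template references, whether $Income, Name, or City) is generic enough to sketch briefly. All identifiers, values, and structures below are hypothetical illustrations, not the Applicant's disclosure:

```python
# Generic sketch of the disputed limitation: the template references data
# elements only by identifier, and the value is fetched from a data source
# by that identifier. Nothing here is specific to $Income; "$Name" or
# "$City" would flow through the same path. All names are hypothetical.
template = {"fields": ["$Name", "$City", "$Income"]}   # a document template
data_source = {"$Name": "Jane Doe", "$City": "Dublin", "$Income": "84500"}

first_identifier = template["fields"][2]          # e.g. "$Income"
first_value = data_source[first_identifier]       # access by identifier

print(first_identifier, "->", first_value)
```

This genericity cuts both ways in the dispute: the Applicant reads it as showing the element needs no Income-specific description, while the Examiner reads it as leaving the chosen element unexplained.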
Applicant is kindly requested to show the Examiner support in the original disclosure for the new or amended claims. See MPEP 714.02 and 2163.06 (“Applicant should specifically point out the support for any amendments made to the disclosure”). Claims 2, 4-11, 21-22, 13, 15-16, and 18-20 do not cure the deficiency of claims 1, 12, and 17 and are rejected under 35 USC 112(a) for their dependency upon claims 1, 12, and 17.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103, are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4-7, 9-12, 15-19, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. US 2019/0311022 to Fan in view of Blenkhorn et al. (US 2012/0033892), hereinafter, “Blenkhorn”.
First set of rejections: Regarding claims 1, 12, and 17, Fan discloses a method comprising: accessing, by one or more processors of a device, a document template that references a plurality of data elements [¶¶25-26, FIG. 3 illustrates example text and formatting changes to a document in accordance with some embodiments. The example text changes and formatting changes in FIG. 3 are illustrative, and not limiting. The examples shown in FIG. 3 may be determined and edited automatically. For example, a state of a list may be captured, such as when a user first navigates (e.g., places a cursor) at or within the list (equated to a document template) The list may be captured again after the user edits the list (e.g., edits a first example). The list may be captured when the user performs an additional action, such as a second edit example, placing a cursor at a location to begin the second edit example (but before the second edit example is actually performed), or performing a portion of the second edit. These captures may be sent to a machine learning component to learn examples. Training data for these examples may be marked (e.g., this is an edit that should be made, this is a list, etc.). In an example, when the first capture and a subsequent capture are different, the captures may be used in the machine learning component for training. The two captures that differ may be testing data (e.g., not labeled) and used in the machine learning component for unsupervised training. Example formatting changes may include…. redacting certain types of content (names, SSN, etc.) …. template document creation…]; and accessing, by the one or more processors, an identifier of a first data element of the plurality of data elements [¶16, FIGS. 1A-1G illustrate a user interface 100 for presenting a document including automatic text modification in accordance with some embodiments. The document displayed on user interface 100 is modified in each of the FIGS. 
1A-1G, and the modifications may be user modifications or automatic modifications. The document depicts a set of names with social security numbers]; and based on the identifier of the first data element, accessing a first data value of the first data element from a data source [¶16, FIGS. 1A-1G illustrate a user interface 100 for presenting a document including automatic text modification in accordance with some embodiments. The document displayed on user interface 100 is modified in each of FIGS. 1A-1G, and the modifications may be user modifications or automatic modifications. The document depicts a set of names with social security numbers]; and determining a number of characters of a first string for the first data value [see FIG. 1A, employees' names and their social security numbers], and [see FIGS. 4A-B, for social security number of 123-45-678]; and based on the determined number of characters, generating a second string [see FIG. 3, for before and after redaction done on the social security number], and [¶14, The edits performed automatically based on a user example may include string modification (e.g., changing text, such as redacting numbers or letters, changing spelled out words to abbreviations, etc.), or formatting modifications (e.g., capitalizing letters, font changes, such as underline, italicize, bolding, size, etc.). Some automatic edits may include both format changes and string changes (e.g., redacting and changing font type)], and [¶18, In an example, after the edit is completed, an additional action by the user may be required before triggering automatic edits. For example, placement of the cursor 102 at the location shown in FIG. 1C may trigger the automatic edits. A system may identify, from the cursor 102 placement, that the user is attempting to perform a second edit (on Jake Craig's number) corresponding to the completed edit.
Other actions of this sort may trigger the automatic edits, such as a keystroke (e.g., a press of a spacebar), entry of text or deletion of text (e.g., when XXXX is entered before the numbers are deleted), highlighting text (e.g., with the cursor, highlighting the numbers to delete/change to XXXX)]; and based on the identifier of the first data element and the document template, determining to not include a second data value that depends on the first data value; and generating, based on the document template and the second string, a document that excludes the second data value. While Fan discloses: [¶9, …The automation may speed up document processing and present unique and specific edits on a user interface. Some common repetitive tasks may include applying a specific direct formatting to all headings repetitively, redacting a series of numbers to retain confidentiality, modifying a list of entries in a similar way such as capitalizing a first letter, adding a period to an end of a sentence, etc.], and [¶14, The edits performed automatically based on a user example may include string modification (e.g., changing text, such as redacting numbers or letters, changing spelled out words to abbreviations, etc.), or formatting modifications (e.g., capitalizing letters, font changes, such as underline, italicize, bolding, size, etc.). Some automatic edits may include both format changes and string changes (e.g., redacting and changing font type)], and [¶18, In an example, after the edit is completed, an additional action by the user may be required before triggering automatic edits. For example, placement of the cursor 102 at the location shown in FIG. 1C may trigger the automatic edits. A system may identify, from the cursor 102 placement, that the user is attempting to perform a second edit (on Jake Craig's number) corresponding to the completed edit.
Other actions of this sort may trigger the automatic edits, such as a keystroke (e.g., a press of a spacebar), entry of text or deletion of text (e.g., when XXXX is entered before the numbers are deleted), highlighting text (e.g., with the cursor, highlighting the numbers to delete/change to XXXX)], and [¶28, Example find and replace changes may include: extrapolation in tables; extrapolation in lists; consistent list structure (e.g., Capitalize, add period at the end); redact content such as SSN in a list; extrapolation for “date” entity; apply the change to all words/phrases representing the same entity when the user changes one of these words/phrases; content in table cell gets formatted automatically based on column/row names; for example: all numbers with “revenue” in column name are formatted as “currency”; recognize fillable blanks in a document, and automatically populate suggestions based on previous blanks the user filled in other documents (e.g. form filling); or the like]. Fan does not explicitly disclose, however Blenkhorn discloses: [¶¶54-59, The operator uses the GUI 100 to load an image of a scanned document into a first panel 110. The text in the first panel 110 is a sample of a possible image file that the operator can load. The particular printed information in the first panel 110 is not limited to any specific embodiment. The image information in the first panel 110 is representative of a particular form. This representative image information is hereafter referred to as the "template image." The operator uses the template image for finding and redacting similar forms. The operator may load a similar image file created from the same form into a second panel 120. As shown in FIG. 1, the second panel 120 includes an image of a completed or filled version of the sample form. The particular printed information in the second panel 120 is not limited to the illustrated example. 
The image information or file in the second panel 120 is representative of the same form as the template file shown in the first panel 110. The representative image in the second panel 120 is hereafter referred to as the "test image," since its purpose is to allow the matching-algorithm of the comparator to test whether a corresponding portion of the test image matches the select portion of the template image. The operator uses controls within the panel 130 to select or otherwise define one or more identification rectangles 111 on the template image information in the first panel 110. The matching algorithm of the comparator uses the portion of the template image inside these identification rectangles 111 to identify corresponding areas 121 in the test image as shown in the second panel 120. When a match is found between the two image files, by comparing the image information in the identification rectangle 111 of the template image with the corresponding pixel information of the corresponding area 121 of the test image, the system for identifying matching images may be further configured to take one or more actions on the test image information. For example, a redactor may be configured to modify or replace pixel information from one or more operator identified portions of the test image information. Such operator identified portions of the test image include the redaction rectangle 122 and redaction rectangle 123. In the illustrated embodiment, the modified or redacted portions of the test image are illustrated in a solid black color. This is indicative of the replacement of the corresponding pixel information by all zero digital values. Thus, a saved version of the modified or redacted image cannot be reverse engineered to determine the image information that was present in the original test image. 
Alternatively, the system could replace the pixel information corresponding to redaction rectangle 122 and/or the pixel information corresponding to redaction rectangle 123 with alternating patterns of zeros and ones, or all ones…], and [¶25]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Fan by incorporating “A user interface and interactive application for redacting digital documents”, as taught by Blenkhorn. One could have been motivated to do so in order to allow an operator to perform document recognition and redaction on a small number of representative files and receive feedback on the accuracy of these processes before committing to potentially long and processor-intensive redaction of a larger collection of files. [Blenkhorn, Abstract]. Regarding claims 4 and 15, Fan discloses receiving a request that comprises an account identifier; and determining, based on the account identifier, to generate the second string [see FIG. 3, for before and after redaction done on the social security number], and [¶18, In an example, after the edit is completed, an additional action by the user may be required before triggering automatic edits. For example, placement of the cursor 102 at the location shown in FIG. 1C may trigger the automatic edits. A system may identify, from the cursor 102 placement that the user is attempting to perform a second edit (on Jake Craig's number) corresponding to the completed edit. Other actions of this sort may trigger the automatic edits, such as a keystroke (e.g., a press of a spacebar), entry of text or deletion of text (e.g., when XXXX is entered before the numbers are deleted), highlighting text (e.g., with the cursor, highlighting the numbers to delete/change to XXXX)].
Regarding claims 5 and 16, Fan discloses further comprising: receiving a second request that comprises a second account identifier; and in response to the second request, generating, based on the document template, a second document comprising the first string [¶18, In an example, after the edit is completed, an additional action by the user may be required before triggering automatic edits. For example, placement of the cursor 102 at the location shown in FIG. 1C may trigger the automatic edits. A system may identify, from the cursor 102 placement that the user is attempting to perform a second edit (on Jake Craig's number) corresponding to the completed edit. Other actions of this sort may trigger the automatic edits, such as a keystroke (e.g., a press of a spacebar), entry of text or deletion of text (e.g., when XXXX is entered before the numbers are deleted), highlighting text (e.g., with the cursor, highlighting the numbers to delete/change to XXXX)], and [see FIGS. 1D-1F and corresponding text for more details]. Regarding claims 6 and 18, Fan discloses determining a second number of characters of a third string representation of the second data value; and based on the determined second number of characters, generating a fourth string; wherein the generating of the document is further based on the fourth string [¶¶13-14, … the systems and methods may automatically detect that the user is conducting a repeatable task. After identification of the repeatable task, the systems and methods may be used to scan the document to identify whether there are other places to apply the same edit. When identified, the systems and methods may automatically surface suggestions to modify the document or perform the edits automatically.
The edits performed automatically based on a user example may include string modification (e.g., changing text, such as redacting numbers or letters, changing spelled out words to abbreviations, etc.), or formatting modifications (e.g., capitalizing letters, font changes, such as underline, italicize, bolding, size, etc.). Some automatic edits may include both format changes and string changes (e.g., redacting and changing font type)], and [¶26, Example formatting changes may include: formatting headings based on a first example; capitalizing names; redacting certain types of content (names, SSN, etc.); positioning pictures based on examples, wrapping, size, etc.; table layout clean up, such as fixing column widths across tables; proofing, such as automating changes in spelling of a word, applied everywhere else automatically; forms creation, such as turning all similar types of data in to fields automatically; template document creation…]. Regarding claims 7 and 19, Fan discloses determining a font size of the first string; wherein the generating of the second string is further based on the font size [¶¶13-14, … the systems and methods may automatically detect that the user is conducting a repeatable task. After identification of the repeatable task, the systems and methods may be used to scan the document to identify whether there are other places to apply the same edit. When identified, the systems and methods may automatically surface suggestions to modify the document or perform the edits automatically. The edits performed automatically based on a user example may include string modification (e.g., changing text, such as redacting numbers or letters, changing spelled out words to abbreviations, etc.), or formatting modifications (e.g., capitalizing letters, font changes, such as underline, italicize, bolding, size, etc.).
Some automatic edits may include both format changes and string changes (e.g., redacting and changing font type)], and [¶26, Example formatting changes may include: formatting headings based on a first example; capitalizing names; redacting certain types of content (names, SSN, etc.); positioning pictures based on examples, wrapping, size, etc.; table layout clean up, such as fixing column widths across tables; proofing, such as automating changes in spelling of a word, applied everywhere else automatically; forms creation, such as turning all similar types of data in to fields automatically; template document creation…]. Regarding claim 9, this claim is interpreted and rejected for the same rationale set forth in claim 7. Regarding claim 10, Fan discloses determining that the first string comprises a number of digits; wherein the generating of the second string is further based on the number of digits [see FIGS. 1B-1D, and corresponding text for more details, for example John Smith SSN 123-45-6789, and 123-45-XXXX, etc.]. Regarding claim 11, Fan discloses determining that the first string comprises a number of letters; wherein the generating of the second string is further based on the number of letters [¶14, The edits performed automatically based on a user example may include string modification (e.g., changing text, such as redacting numbers or letters, changing spelled out words to abbreviations, etc.), or formatting modifications (e.g., capitalizing letters, font changes, such as underline, italicize, bolding, size, etc.). Some automatic edits may include both format changes and string changes (e.g., redacting and changing font type)].
Regarding claim 22, Fan discloses: accessing an identifier of a third data element of the plurality of data elements; based on the identifier of the third data element, accessing a name of an individual from the data source; determining a second number of characters of a third string for the name of the individual; and based on the determined second number of characters, generating a fourth string [see FIGS. 1A-1G, 2A and corresponding text for more details], and [¶26, Example formatting changes may include: formatting headings based on a first example; capitalizing names; redacting certain types of content (names, SSN, etc.); positioning pictures based on examples, wrapping, size, etc.; table layout clean up, such as fixing column widths across tables; proofing, such as automating changes in spelling of a word, applied everywhere else automatically; forms creation, such as turning all similar types of data in to fields automatically; template document creation, such as changing a customer name throughout a document after a first user replacement; footnotes by example; creating a table of contents based on a first manually typed example; automatically creating cross-references based on what the user types (e.g., “see table on page 2”); reorganizing research content to move each portion of content that is highlighted a color to be near one another, or alternatively to move everything that's “green” to a new document; content editing, such as replacing two spaces after period with one or putting a quotation mark around a name/number; changing a date format; formatting a list; recovering a “paste” function, by learning what a next paste recovery for a next example is, based on a first example; or the like. In an example, formatting changes may include object positioning or modification changes. For example, setting a wrapping style of an object (e.g., an image, a text box, etc.), alignment, cropping an image, inserting or modifying a shape, a 3D model, or the like].
Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US Patent Application Publication No. 2019/0311022), in view of Blenkhorn et al. (US 2012/0033892), hereinafter “Blenkhorn”, and further in view of Suriyanarayanan (US Patent No. 10,331,950), hereinafter “Vikram”. Regarding claims 2 and 13, Fan and Blenkhorn do not explicitly disclose, however, Vikram discloses: wherein: the generating of the second string comprises randomly generating the second string using the same number of characters as the first string [Col. 12 lines 3-41, In block 340, the system may normalize each extractable data entry 108. For example, if the extractable data entry 108 comprises an SSN in the form 123-45-7890, it may be normalized to 123457890 according to a business ruleset. The business ruleset may include standardization routines for commonly detected data types… In block 350, when the extractable data entry 108 includes sensitive data (e.g., SSN on a Social Security card, account number on a bank statement data, etc.), the extractable data entry 108 may be tokenized before proceeding. The data quality and tokenization unit 114 may, in some embodiments, tokenize data entries 108 that include sensitive data entries (e.g., the entire data entry 108 or at least the portion of the data entry 108 containing sensitive data). For example, the data quality and tokenization unit 114 may tokenize an SSN such as 123456789 by encrypting the sensitive data entry 108 with an encryption key stored securely on the system 100 and returning to the data quality and tokenization unit 114 ihoBOe3hH. …]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Fan and Blenkhorn by incorporating “file processing device 110) which extract at least one extractable data entry from each uploaded document”, as taught by Vikram.
One could have been motivated to do so in order to tokenize the SSN by encrypting the sensitive data entry with an encryption key for the protection of the customer’s sensitive information [Vikram, Col. 12 lines 22-41]. Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US 2019/0311022), in view of Blenkhorn (US 2012/0033892), and further in view of Sharifi (US 2018/0285592). Regarding claims 8 and 20, Fan and Blenkhorn do not explicitly disclose, however, Sharifi discloses determining a color of the first string; wherein the generating of the second string is further based on the color [¶37, In some examples, UI module 120 may cause the information to be obscured by obfuscating or concealing the information. For example, UI module 120 may cause the information to be obfuscated by dimming the display or adjusting a font (e.g., reducing font size, changing the color, or changing the font type) or blurring the information. UI module 120 may conceal information by replacing certain alphanumerical characters with different alphanumerical characters or symbols. For instance, as shown in GUI 114, social security numbers are obscured by replacing numbers of characters with letters. In some instances, UI module 120 may cause information to be obscured by refraining from displaying characters. For instance, as shown in GUI 114, UI module 120 may refrain from outputting a scale of the employee salary. By refraining from including a salary scale, individuals other than the active user may not be able to ascertain whether the salary is an hourly wage, or yearly salary, or some other indication of an employee salary], and [¶69].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Fan and Blenkhorn by incorporating “adjusting a font (e.g., reducing font size, changing the color, or changing the font type) or blurring the information”, as taught by Sharifi. One could have been motivated to do so in order to enable a computing device to selectively obscure private information [Sharifi, ¶¶2, 37]. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Fan (US 2019/0311022), in view of Blenkhorn (US 2012/0033892), and further in view of Liu (US 2020/0104539). Regarding claim 21, Fan and Blenkhorn do not explicitly disclose, however, Liu discloses wherein the first data element is an income of an individual and the second data value is a total income of a plurality of individuals including the individual [¶46, The sensitive data can be obscured utilizing any suitable technique that can prevent an individual from being able to read and/or decipher the contents of a set of sensitive data displayed on a display 102. Examples of obscuring sensitive data include, but are not limited to, blacking out the sensitive data, blurring the sensitive data, removing or blanking the sensitive data, making the sensitive data invisible, fuzzing out the sensitive data, encrypting the sensitive data, and/or scrambling the sensitive data, etc., among other techniques that can prevent an individual from reading and/or deciphering the contents of a set of sensitive data], and [¶55, In various embodiments, a learning module 208 can automatically learn/determine sensitive data and/or can receive user input identifying types of sensitive data.
Examples of sensitive data can include, but are not limited to, bank account information (account number, balance, transaction history, etc.), credit/debit card information (account number, balance, transaction history, personal identification number (PIN), etc.), social security number, passport number, salary/income, tax information, contact information (e.g., physical address, phone number, email address, etc.), license information (e.g., driver's license, professional license, etc.), legal information (e.g., title, ownership, citizenship, lawsuits, etc.), and personal information (e.g., relatives, date of birth, maiden name, mother's maiden name, birth

Prosecution Timeline

Dec 10, 2021
Application Filed
Apr 04, 2024
Non-Final Rejection — §101, §102, §103
Apr 26, 2024
Interview Requested
May 06, 2024
Examiner Interview Summary
May 06, 2024
Applicant Interview (Telephonic)
Jul 09, 2024
Response Filed
Oct 10, 2024
Final Rejection — §101, §102, §103
Dec 16, 2024
Final Rejection — §101, §102, §103
Jan 14, 2025
Interview Requested
Jan 23, 2025
Applicant Interview (Telephonic)
Jan 25, 2025
Examiner Interview Summary
Jan 29, 2025
Notice of Allowance
Mar 28, 2025
Response after Non-Final Action
Mar 31, 2025
Response after Non-Final Action
Jun 20, 2025
Non-Final Rejection — §101, §102, §103
Jul 15, 2025
Interview Requested
Aug 01, 2025
Examiner Interview Summary
Aug 01, 2025
Applicant Interview (Telephonic)
Aug 11, 2025
Response Filed
Nov 04, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587392
SECURE COMMUNICATION METHOD AND APPARATUS IN PASSIVE OPTICAL NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12549527
MULTI-FACTOR AUTHENTICATION OF CLOUD-MANAGED SERVICES
2y 5m to grant Granted Feb 10, 2026
Patent 12547755
TECHNIQUES FOR SECURELY EXECUTING ATTESTED CODE IN A COLLABORATIVE ENVIRONMENT
2y 5m to grant Granted Feb 10, 2026
Patent 12543044
SYSTEMS AND METHODS OF AUTOMATIC OUT-OF-BAND (OOB) RESTRICTED CELLULAR CONNECTIVITY FOR SET UP PROVISIONING OF MANAGED CLIENT INFORMATION HANDLING SYSTEMS
2y 5m to grant Granted Feb 03, 2026
Patent 12511435
DEVICE AND METHOD FOR ENFORCING A DATA POLICY
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

6-7
Expected OA Rounds
79%
Grant Probability
87%
With Interview (+7.8%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 433 resolved cases by this examiner. Grant probability derived from career allow rate.
