Prosecution Insights
Last updated: April 19, 2026
Application No. 18/180,910

PERSONAL INFORMATION DETECTION REINFORCEMENT METHOD USING MULTIPLE FILTERING AND PERSONAL INFORMATION DETECTION REINFORCEMENT APPARATUS USING THE SAME

Non-Final OA (§102, §103)
Filed
Mar 09, 2023
Examiner
STORK, KYLE R
Art Unit
2128
Tech Center
2100 — Computer Architecture & Software
Assignee
AhnLab CloudMate Inc.
OA Round
1 (Non-Final)
64%
Grant Probability
Moderate
1-2
OA Rounds
4y 0m
To Grant
92%
With Interview

Examiner Intelligence

Grants 64% of resolved cases
64%
Career Allow Rate
554 granted / 865 resolved
+9.0% vs TC avg
Strong +28.3% interview lift
Interview Lift (allowance in resolved cases with vs. without an interview)
Typical timeline
4y 0m
Avg Prosecution
51 currently pending
Career history
916
Total Applications
across all art units
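The headline figures in this panel follow from simple arithmetic on the examiner's record. A minimal sketch of that derivation (the 554 granted / 865 resolved counts and the +28.3% lift are taken from this page; treating the lift as additive percentage points is an assumption for illustration):

```python
# Hypothetical sketch of how the dashboard's headline figures could be derived.
# Counts and lift come from the page; the additive-lift model is an assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(554, 865)       # ~64.0% career allow rate
with_interview = career + 28.3      # reported interview lift, in percentage points

print(f"Career allow rate: {career:.1f}%")        # ~64.0%
print(f"With interview:    {with_interview:.1f}%")  # ~92.3%, shown as 92%
```

This reproduces the 64% grant probability and the 92% with-interview figure shown elsewhere on the page.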

Statute-Specific Performance

§101
14.9%
-25.1% vs TC avg
§103
58.5%
+18.5% vs TC avg
§102
12.1%
-27.9% vs TC avg
§112
6.1%
-33.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 865 resolved cases
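The per-statute deltas can be sanity-checked against the Tech Center baseline: subtracting each delta from the examiner's rate should recover the TC average estimate. A quick sketch (rates and deltas are from the table above; the subtraction is the only logic):

```python
# Sanity check of the statute-specific deltas shown in the table.
# Each examiner rate minus its "vs TC avg" delta yields the implied TC baseline.

stats = {            # statute: (examiner rate %, delta vs TC avg in points)
    "§101": (14.9, -25.1),
    "§103": (58.5, +18.5),
    "§102": (12.1, -27.9),
    "§112": (6.1, -33.9),
}

for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
# Every row implies the same ~40.0% Tech Center baseline.
```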

Office Action

§102 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This non-final office action is in response to the application filed 9 March 2023. Claims 1-10 are pending. Claims 1 and 10 are independent claims.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d).

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 30 June 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The examiner accepts the drawings filed 9 March 2023.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 5, and 10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ghosh et al. (US 11640345, filed 15 May 2020, hereafter Ghosh).
As per independent claim 1, Ghosh discloses a personal information detection reinforcement method using multiple filtering, the personal information detection reinforcement method being performed by an apparatus and comprising:

performing first filtering of input data using record data and pattern data (Figure 1B; column 4, line 57 - column 6, line 49: Here, a subject matter expert, or other source, provides a filtered set of available options to a prediction model (column 4, lines 62-67). This filtering by the subject matter expert is based upon their expertise and includes identifying records and patterns based upon data unknown to the model (column 2, lines 11-23). These options are provided to an options filtering module (Figure 1B, item 112) to constrain the decisions (column 4, line 67 - column 5, line 5));

classifying a class of the first-filtered input data using a previously constructed supervised learning model (Figure 2; column 6, line 50 - column 9, line 62: Here, the option filtering module (Figure 1B, item 112) receives a set of potential options from a subject matter expert (Figure 2, item 202). A prediction model, a machine learning model, is used to predict results and can be trained, tuned, or modified to incorporate additional information as it becomes available (column 6, lines 62-67). This machine learning prediction model estimates gain for the potential options (Figure 2, item 214) and provides an output set of feasible options (Figure 2, item 224). This set of potential options constitutes a class of data);

performing second filtering of the first-filtered input data using an unsupervised-based algorithm based on the classified class (Figure 3; column 9, line 63 - column 11, line 22: Here, a second filtering of the first-filtered data is performed using the option selecting module. The option selection module receives the output of the option filtering module (Figure 1B and Figure 3, item 302).
The option selection module evaluates options and determines the predicted result for each option. This includes determining if an option is "dominated" (Figure 3, item 308). An option is "dominated" if another option in the set of feasible options has both a same or better predicted result and a higher predicted information gain (column 10, lines 16-24). This determination of whether an option is "dominated" results in a filtering of the option. Finally, the option with the best future results is selected (Figure 3, item 320));

updating the supervised learning model based on the second-filtered results data (Figure 1B; column 6, lines 13-31: Here, based upon results provided by the decision implementer and the subject matter expert, the decision constraints may be revised to update the safe reinforcement learning model service (item 110)).

As per dependent claim 5, Ghosh discloses wherein the performing of the second filtering comprises performing an unsupervised-based algorithm for the first-filtered input data (Figure 3; column 9, line 63 - column 11, line 22: Here, a second filtering of the first-filtered data is performed using the option selecting module. The option selection module receives the output of the option filtering module (Figure 1B and Figure 3, item 302)), based on the classified class, and determining whether the classified class is correct for the first-filtered input data (Figure 3; column 9, line 63 - column 11, line 22: Here, it is determined whether each option in the set of options (class) is the best future result. If an option yields the best future results, it is maintained as an option; if the option is "dominated", it is not selected as an option).

With respect to independent claim 10, the claim recites the apparatus for implementing the method of claim 1.
Ghosh further discloses a communication unit (Figure 4, item 406), a memory storing at least one process for reinforcing personal information detection using the multiple filtering (Figure 4, item 420), and a processor configured to operate depending on the at least one process (Figure 4, item 402).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C.
102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Ghosh and further in view of Patthak et al. (US 11971898, filed 2 Dec 2021, hereafter Patthak).

As per dependent claim 2, Ghosh discloses the limitations similar to those in claim 1, and the same rejection is incorporated herein. Ghosh discloses wherein the performing of the first filtering comprises comparing the input data with the record data being previously collected based on a predicted result of the supervised learning model to determine whether the input data corresponds to the record data (Figure 2; column 6, line 50 - column 9, line 62: Here, the option filtering module uses the constraints from the subject matter expert to classify options as "feasible options" or not feasible options. This is performed using a prediction model, a gain estimation metric, and decision constraints).

Ghosh fails to specifically disclose performing regular expression pattern inspection of data which does not correspond to the record data and determining whether there is pattern data corresponding to a type of the input data among the pieces of pattern data previously stored about a data type. However, Patthak, which is analogous to the claimed invention because it is directed toward implementing machine learning classifications, discloses performing regular expression pattern inspection of data which does not correspond to the record data and determining whether there is pattern data corresponding to a type of the input data among the pieces of pattern data previously stored about a data type (claim 1; column 26, line 47 - column 27, line 2: Here, a regular expression classifier is used to match contents against a regular expression pattern to cause the item to be included in the classification).
It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Patthak with Ghosh, with a reasonable expectation of success, as it would have allowed for applying multiple classifiers in order to improve classification quality. This includes retaining items matching a defined regular expression for further evaluation to ensure that relevant items are evaluated by the additional filtering.

As per dependent claim 3, Ghosh and Patthak disclose the limitations similar to those in claim 2, and the same rejection is incorporated herein. Patthak discloses determining a class corresponding to the pattern data as a class of input data in which the pattern data is present, with respect to the input data in which the pattern data is present (claim 1; column 26, line 47 - column 27, line 2: Here, a regular expression classifier is used to match contents against a regular expression pattern to cause the item to be included in the classification). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Patthak with Ghosh, with a reasonable expectation of success, as it would have allowed for applying multiple classifiers in order to improve classification quality. This includes retaining items matching a defined regular expression for further evaluation to ensure that relevant items are evaluated by the additional filtering.

As per dependent claim 4, Ghosh and Patthak disclose the limitations similar to those in claim 2, and the same rejection is incorporated herein.
Ghosh discloses wherein the classifying of the class comprises applying input data in which the pattern data is not present to the supervised learning model to classify a class of the input data in which the pattern data is not present (Figure 2; column 6, line 50 - column 9, line 62: Here, the option filtering module uses the constraints from the subject matter expert to classify options as "feasible options" or not feasible options. This is performed using a prediction model, a gain estimation metric, and decision constraints).

Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Ghosh and further in view of Chorakhalikar et al. (US 2023/0162021, filed 24 November 2021, hereafter Chorakhalikar).

As per dependent claim 6, Ghosh discloses the limitations similar to those in claim 5, and the same rejection is incorporated herein. Ghosh discloses wherein the determining of whether the class is correct comprises determining that the classified class is not correct, when a feature value of the first-filtered input data deviates from a predetermined range with respect to a data statistics value for the classified class (column 8, lines 21-54). Ghosh fails to specifically disclose measuring a similarity between the first-filtered input data and data of each of a plurality of classes learned by the supervised learning model and selecting a class with the largest similarity value among the plurality of classes as a class of the first-filtered input data to calibrate the classified class.
However, Chorakhalikar, which is analogous to the claimed invention because it is directed toward classification based on similarity values, discloses measuring a similarity between the first-filtered input data and data of each of a plurality of classes learned by the supervised learning model and selecting a class with the largest similarity value among the plurality of classes as a class of the first-filtered input data to calibrate the classified class (paragraph 0051: Here, data is classified based upon a similarity score representing the confidence that a data item should be classified in a specific class). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Chorakhalikar with Ghosh, with a reasonable expectation of success, as it would have allowed for classifying data among a plurality of different possible categories/classifications (Chorakhalikar: paragraph 0051).

As per dependent claim 8, Ghosh and Chorakhalikar disclose the limitations similar to those in claim 6, and the same rejection is incorporated herein. Ghosh discloses adding the data as training data of the supervised learning model to update the supervised learning model (Figure 1B; column 6, lines 13-31: Here, based upon results provided by the decision implementer and the subject matter expert, the decision constraints may be revised to update the safe reinforcement learning model service (item 110)). Ghosh fails to specifically disclose calibrated class data; however, Chorakhalikar discloses calibrated class data (paragraph 0051: Here, data is classified based upon a similarity score representing the confidence that a data item should be classified in a specific class).
It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Chorakhalikar with Ghosh, with a reasonable expectation of success, as it would have allowed for improving the model used for classifying data among a plurality of different possible categories/classifications.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ghosh and Chorakhalikar further in view of Lengevin (US 2023/0420084, filed 26 August 2020).

As per dependent claim 7, Ghosh and Chorakhalikar disclose the limitations similar to those in claim 6, and the same rejection is incorporated herein. Ghosh fails to specifically disclose wherein the predetermined range is set based on a data characteristic and wherein the data characteristic includes a length distribution of data, a character number distribution of data, and a learning score distribution. However, Lengevin, which is analogous to the claimed invention because it is directed toward distribution learning, discloses wherein the predetermined range is set based on a data characteristic and wherein the data characteristic includes a length distribution of data, a character number distribution of data, and a learning score distribution (paragraphs 0004, 0038, and 0045: Here, distribution learning is used to determine the probability of a distribution of data items over characters/tokens of the string (paragraph 0038) and for a length of the sample (paragraph 0045)). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Lengevin with Ghosh-Chorakhalikar, with a reasonable expectation of success, as it would have allowed for applying generative models to optimize goal-oriented learning (Lengevin: paragraph 0004).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Ghosh.
As per dependent claim 9, Ghosh discloses the limitations similar to those in claim 1, and the same rejection is incorporated herein. Ghosh discloses updating a previously constructed model based on the second-filtered result data (Figure 1B; column 6, lines 13-31: Here, based upon results provided by the decision implementer and the subject matter expert, the decision constraints may be revised to update the safe reinforcement learning model service (item 110)). However, Ghosh fails to specifically disclose updating a record-based model, a pattern-based model, and a statistics-based model. However, the examiner takes official notice that it was notoriously well-known in the art at the time of the applicant's effective filing date that updating models may include using and updating a record-based model, a pattern-based model, and a statistics-based model. It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Ghosh's teaching of updating a model with the use of well-known models, as it would have allowed for updating multiple models to reflect the results of the training.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Seo et al. (A reinforcement learning agent for personalized information filtering, 9 January 2000): Discloses reinforcement learning to learn from profiles of individual users (Abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK whose telephone number is (571) 272-4130. The examiner can normally be reached 8am - 2pm; 4pm - 6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R STORK/
Primary Examiner, Art Unit 2128
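The claim-1 method the Office Action maps against Ghosh is a staged filter chain: first filtering against record data and pattern data, supervised classification of what survives, unsupervised validation of the assigned class, then a model update. A minimal Python sketch of that pipeline shape, where every name, pattern, and rule is hypothetical and merely stands in for the application's actual models:

```python
# Illustrative sketch of the claim-1 pipeline shape only; all data, names, and
# rules here are hypothetical and do not reflect the application's implementation.
import re

RECORDS = {"555-01-2345"}                              # previously collected record data
PATTERNS = {"ssn": re.compile(r"\d{3}-\d{2}-\d{4}")}   # pattern data keyed by data type

def first_filter(value: str):
    """First filtering: record lookup, then regular-expression pattern inspection."""
    if value in RECORDS:
        return "record"
    for cls, pattern in PATTERNS.items():
        if pattern.search(value):
            return cls
    return None

def classify(value: str) -> str:
    """Stand-in for the previously constructed supervised learning model."""
    return "ssn" if any(ch.isdigit() for ch in value) else "other"

def second_filter(value: str, cls: str) -> bool:
    """Stand-in for the unsupervised class check, e.g. flagging values whose
    length deviates from the statistics expected for the classified class."""
    return len(value) == 11 if cls == "ssn" else True

def detect(value: str):
    cls = first_filter(value)
    if cls is None:
        cls = classify(value)           # supervised classification
    ok = second_filter(value, cls)      # unsupervised validation of the class
    return cls, ok

print(detect("123-45-6789"))   # ('ssn', True)
```

In the claimed method, the second-filter outcomes would also feed back as training data to update the supervised model; that feedback loop is omitted here for brevity.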

Prosecution Timeline

Mar 09, 2023
Application Filed
Dec 14, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585935
EXECUTION BEHAVIOR ANALYSIS TEXT-BASED ENSEMBLE MALWARE DETECTOR
2y 5m to grant Granted Mar 24, 2026
Patent 12585937
SYSTEMS AND METHODS FOR DEEP LEARNING ENHANCED GARBAGE COLLECTION
2y 5m to grant Granted Mar 24, 2026
Patent 12585869
RECOMMENDATION PLATFORM FOR SKILL DEVELOPMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12579454
PROVIDING EXPLAINABLE MACHINE LEARNING MODEL RESULTS USING DISTRIBUTED LEDGERS
2y 5m to grant Granted Mar 17, 2026
Patent 12579412
SPIKE NEURAL NETWORK CIRCUIT INCLUDING SELF-CORRECTING CONTROL CIRCUIT AND METHOD OF OPERATION THEREOF
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
92%
With Interview (+28.3%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 865 resolved cases by this examiner. Grant probability derived from career allow rate.
