Prosecution Insights
Last updated: April 18, 2026
Application No. 18/111,049

APPARATUS, METHOD AND COMPUTER PROGRAM FOR GENERATING DE-IDENTIFIED TRAINING DATA FOR CONVERSATIONAL SERVICE

Non-Final OA · §101 §103 §112
Filed
Feb 17, 2023
Examiner
JOSEPH, NEDGE DARLEN'S
Art Unit
2127
Tech Center
2100 — Computer Architecture & Software
Assignee
Tunib Inc.
OA Round
1 (Non-Final)
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved cases; -55.0% vs TC average)
Interview Lift: +0.0% (minimal; no resolved cases with interview)
Avg Prosecution: 3y 3m typical timeline; 1 application currently pending
Career History: 1 total application across all art units

Statute-Specific Performance

§103: 33.3% (-6.7% vs TC average)
§102: 33.3% (-6.7% vs TC average)
§112: 33.3% (-6.7% vs TC average)

Note: TC average figures are Tech Center average estimates; statistics are based on career data from 0 resolved cases.

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-19 are pending and have been examined. Claims 1-19 are rejected.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The present application claims foreign priority to Korean patent application No. KR10-2022-0021195, filed on 2/18/2022. The examiner acknowledges that a certified copy of Korean patent application No. KR10-2022-0021195 has been retrieved (on 4/12/2023, in Korean) as required by 37 CFR 1.55. The examiner notes that a translation of Korean patent application No. KR10-2022-0021195 does not appear to have been furnished to date.

Claim Objections

Claims 1-19 are objected to because of the following informalities: in each of the recitations identified below, the phrase "for conversational service" is grammatically incorrect because an article is missing between "for" and "conversational". If supported by the original specification, the examiner suggests that one possible way to address each objection would be to amend the recitation as indicated. Appropriate correction is required.

- Claim 1, line 1 ("An apparatus for generating de-identified training data for conversational service"): amend "for conversational service" to read "for a conversational service".
- Claims 2, 3, 5, 6, 7, and 8, line 1 of each ("The apparatus for generating de-identified training data for conversational service of Claim 1"), and claim 4, line 1 (reciting "of Claim 3"): amend "for conversational service" to read "for the conversational service".
- Claim 10, line 1 ("A method for generating de-identified training data for conversational service"): amend "for conversational service" to read "for a conversational service".
- Claim 11, line 1 ("A method for generating de-identified training data for conversational service of Claim 10"), claims 12, 14, 15, 16, 17, and 18, line 1 of each (reciting "The method ... of Claim 10"), and claim 13, line 1 (reciting "of Claim 12"): amend "for conversational service" to read "for the conversational service".
- Claim 19, line 2 ("A non-transitory computer-readable storage medium storing a computer program including a sequence of instructions to generate de-identified training data for conversational service"): amend "for conversational service" to read "for a conversational service".
Also, claims 2-9 and 11-18, which depend directly or indirectly from claims 1 and 10, respectively, are objected to based on their respective dependencies from claims 1 and 10.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "sentence detection unit", "de-identification target sentence detection unit", "search unit", and "training data generation unit" in claim 1; "sentence detection unit" in claims 2 and 3; "de-identification target sentence detection unit" in claim 4; and "training data generation unit" in claims 5-8.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the specification shows that the corresponding structure is not described in the specification for the 35 U.S.C. 112(f) "unit" limitations recited in claims 1-8. The written description of "unit" is in the Specification at page 6, ¶ 23:

Throughout this document, the term "unit" may refer to a unit implemented by hardware, software, and/or a combination thereof. As examples only, one unit may be implemented by two or more pieces of hardware or two or more units may be implemented by one piece of hardware.
However, the "unit" is not limited to the software or the hardware and may be stored in an addressable storage medium or may be configured to implement one or more processors.

The drawing of FIG. 1 merely shows a high-level black box for a "unit" designed to perform the entire claimed function. There is no description, flowchart, pseudo-code, or logic for the operations performed by the "unit" disclosed in applicant's specification. Accordingly, for these claim limitations, the written description fails to disclose both an algorithm and special-purpose computer hardware to perform the algorithm. For more information, see MPEP § 2181. As such, the specification describes the claimed units by their functions without disclosing any specific structure performing the claimed functions.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-9 are rejected under 35 U.S.C.
112(a) as failing to comply with the written description requirement. Independent claim 1 and dependent claims 2-8 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.

In particular, and as previously noted, the claim limitations below invoke 35 U.S.C. 112(f): "sentence detection unit", "de-identification target sentence detection unit", "search unit", and "training data generation unit" in claim 1; "sentence detection unit" in claims 2 and 3; "de-identification target sentence detection unit" in claim 4; and "training data generation unit" in claims 5-8.

The written description of "unit" is in the Specification at page 6, ¶ 23:

Throughout this document, the term "unit" may refer to a unit implemented by hardware, software, and/or a combination thereof. As examples only, one unit may be implemented by two or more pieces of hardware or two or more units may be implemented by one piece of hardware. However, the "unit" is not limited to the software or the hardware and may be stored in an addressable storage medium or may be configured to implement one or more processors.

The drawings merely show a black box for a "unit" designed to perform the entire claimed function (see, e.g., FIG. 1). There is no description, flowchart, pseudo-code, or logic for the operations performed by the "unit" disclosed in applicant's specification. As such, the specification describes the claimed units by their functions without disclosing any specific structure performing the claimed functions.
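By way of illustration only, the kind of algorithm-level disclosure the examiner describes as missing (a description, flowchart, pseudo-code, or logic for the operations of each "unit") could be as simple as the following Python sketch of the claimed pipeline. The function names, token patterns, and tag format below are hedged assumptions made for this note; none of them appears in the application.

```python
import re

# Hypothetical "predefined de-identification target tokens"; the actual
# application does not disclose these patterns.
TARGET_TOKENS = {
    "PHONE": re.compile(r"\b\d{3}-\d{3,4}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def detect_target_sentence(sentence: str) -> bool:
    """Stand-in for the 'personal information identification model':
    flag a sentence when any target-token pattern matches."""
    return any(p.search(sentence) for p in TARGET_TOKENS.values())

def de_identify(sentence: str) -> str:
    """Replace text matching each searched target token with a tag."""
    for tag, pattern in TARGET_TOKENS.items():
        sentence = pattern.sub(f"<{tag}>", sentence)
    return sentence

def generate_training_data(conversation: list[str]) -> list[str]:
    """Detect de-identification target sentences in the conversation
    and emit training data with the matched text de-identified."""
    return [de_identify(s) if detect_target_sentence(s) else s
            for s in conversation]

convo = ["Hi, my number is 010-1234-5678.", "What time do you open?"]
print(generate_training_data(convo))
# ['Hi, my number is <PHONE>.', 'What time do you open?']
```

A sketch of roughly this specificity (which unit performs which step, and by what logic) is the sort of corresponding algorithm MPEP § 2181 contemplates for a computer-implemented § 112(f) limitation.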
However, as noted above, the written description of the current application fails to disclose the corresponding structure, material, or acts for performing each of the above-identified claimed functions and to clearly link the structure, material, or acts to the function. In particular, for each of the claimed functions, the written description fails to disclose both an algorithm and special-purpose computer hardware to perform the algorithm. For more information, see MPEP § 2181. Accordingly, claims 1-9 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement.

Also, claims 2-9, which depend directly or indirectly from claim 1 (claim 4 depending directly from claim 3), are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement under the same rationale as claims 1 and 3.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-9 and 14-16 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
As previously noted, the claim limitations "sentence detection unit", "de-identification target sentence detection unit", "search unit", and "training data generation unit" in claim 1; "sentence detection unit" in claims 2 and 3; "de-identification target sentence detection unit" in claim 4; and "training data generation unit" in claims 5-8 invoke 35 U.S.C. 112(f). However, as also discussed above with regard to the rejections of claims 1-8 under 35 U.S.C. 112(a), the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.

In particular, the specification fails to clearly link the structure, material, or acts to the function for the limitations "sentence detection unit configured to ...", "de-identification target sentence detection unit configured to ...", "search unit configured to ...", and "training data generation unit configured to ..." in claim 1; "sentence detection unit configured to" in claims 2 and 3; "de-identification target sentence detection unit configured to" in claim 4; and "training data generation unit configured to" in claims 5-8. As noted above, there is insufficient disclosure in the specification of algorithms and specific computer hardware for implementing the claimed units. As such, the above-noted limitations recited in claims 1-8 are indefinite, and claims 1-8 are therefore rejected under 35 U.S.C. 112(b).
Regarding claim 5 (line 3), claim 6 (line 3), claim 7 (line 4), claim 14 (line 3), claim 15 (line 4), and claim 16 (line 5), the phrase "such as" renders each claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d). Appropriate correction is required.

Claim 6, line 4, recites the limitation "the same tag set". There is insufficient antecedent basis for this limitation in the claim. No "same tag set" was previously introduced in this claim or its base claim, independent claim 1. For examination purposes, "the same tag set" is being interpreted as any "same tag set". Appropriate correction is required.

Claim 15, line 4, recites the limitation "the same tag set". There is insufficient antecedent basis for this limitation in the claim.
No "same tag set" was previously introduced in this claim or its base claim, independent claim 10. Appropriate correction is required.

Also, claims 2-9 all depend directly or indirectly from independent claim 1, and claim 4 depends directly from claim 3, so claims 1-9 are all rejected under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-9 are directed to an apparatus, corresponding to a machine, one of the statutory categories. Claims 10-18 are method claims, corresponding to a process, one of the statutory categories. Claim 19 is directed to a non-transitory computer-readable medium storing instructions, corresponding to an article of manufacture, one of the statutory categories. Therefore, claims 1-19 are directed to either a process, a machine, or an article of manufacture.
With respect to claim 1:

2A Prong 1: The claim recites, inter alia:

- detect at least one sentence including personal information in a conversation between a user device and a chatbot (mental process of evaluation/judgment/opinion - a user can mentally detect/identify at least one sentence including personal information in an observed conversation between a user device and a chatbot, or with the aid of pen and paper);
- input conversational data including the at least one sentence into a personal information identification model (mental process of evaluation/judgment/opinion - a user can input conversational data including the at least one sentence into a personal information identification model with the aid of pen and paper);
- search a predefined de-identification target token from the conversational data when a de-identification target sentence is detected from the conversational data (mental process of judgment - a user can mentally search a predefined de-identification target token from the conversational data when a de-identification target sentence is detected from the observed conversational data with pen and paper);
- generate training data on the conversational data by de-identifying text corresponding to the searched de-identification target token (mental process of judgment - a user can perform this task mentally with the aid of pen and paper).

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

- a sentence detection unit configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f));
- a de-identification target sentence detection unit (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f));
- a search unit configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f));
- a training data generation unit configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f));
- detect a de-identification target sentence through the personal information identification model (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).

The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements:

- a sentence detection unit configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f));
- a de-identification target sentence detection unit (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f));
- a search unit configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f));
- a training data generation unit configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f));
- detect a de-identification target sentence through the personal information identification model (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)).
The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.

With respect to claim 2:

2A Prong 1: Claim 2 is directed to an apparatus as depending from claim 1; thus the analysis for patent eligibility of claim 1 is incorporated herein.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

- the sentence detection unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f));
- can understand intention of the sentences based on context of the sentences stored sequentially in the buffer (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f));
- wherein sentences in the conversation are stored sequentially in a buffer (this step is directed to receiving and storing information, which is understood to be insignificant extra-solution activity and necessary data outputting and storage - see MPEP 2106.05(g)).

Additionally, the above recitation of "wherein sentences in the conversation are stored sequentially in a buffer" recites insignificant extra-solution activity of mere data storage. The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity in combination with generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: the sentence detection unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). can understand intention of the sentences based on context of the sentences stored sequentially in the buffer (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)). wherein sentences in the conversation are stored sequentially in a buffer (This step is directed to receiving information, which is understood to be insignificant extra-solution activity and data gathering - see MPEP 2106.05(d)). The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere insignificant extra-solution activity in combination with generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP § 2106.05(d)(II) (“The courts have recognized the following computer functions as well-understood, routine, and conventional functions… i. Receiving or transmitting data over a network… iv. Storing and retrieving information in memory”) (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
Therefore, the recitation of “wherein sentences in the conversation are stored sequentially in a buffer” is directed to the well-understood, routine, and conventional activities of receiving or transmitting data over a network and storing information in memory, as discussed in MPEP § 2106.05(d).

With respect to claim 3: Claim 3 is directed to an apparatus depending from claim 1; thus the analysis for patent eligibility of claim 1 is incorporated herein. 2A Prong 1: The claim recites: calculate a first probability that the at least one sentence will include the personal information (mental process of evaluation/judgment/opinion - a user can mentally calculate/determine a first probability that the observed at least one sentence will include the personal information with pen or paper). 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: the sentence detection unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: the sentence detection unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.
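For context, the "calculate a first probability" operation recited in claim 3 can be sketched in code. The following is a purely illustrative toy scoring function, not the applicant's claimed model or Medalion's BiLSTM; the patterns and the mapping to a probability are invented here for illustration only.

```python
import re

# Illustrative sketch only: estimate a "first probability" that a sentence
# contains personal information by combining simple pattern cues into a
# score in [0, 1]. A trained model would output this probability directly.
PII_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",          # SSN-like pattern
    r"\b\d{16}\b",                     # bare card-number-like digits
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",    # email address
    r"\b\d{3}[-.]\d{3,4}[-.]\d{4}\b",  # phone-number-like pattern
]

def first_probability(sentence: str) -> float:
    """Return a crude probability that the sentence includes personal info."""
    hits = sum(1 for p in PII_PATTERNS if re.search(p, sentence))
    # Map the hit count onto [0, 1): more matching cues, higher score.
    return hits / (hits + 1)

print(first_probability("My email is jane@example.com"))  # 0.5
print(first_probability("The weather is nice today"))     # 0.0
```

A sentence with no matching cues scores 0.0, which is why the examiner characterizes the operation as one a person could also perform mentally or with pen and paper.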
With respect to claim 4: Claim 4 is directed to an apparatus depending from claim 3; thus the patent eligibility analyses of claim 3 and base claim 1 are incorporated herein. 2A Prong 1: The claim recites: detect the de-identification target sentence using the first probability and the second probability (mental process of judgment - a user can detect the de-identification target sentence using the first probability and the second probability in the mind or with pen and paper). 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: wherein a second probability that each sentence will include the personal information is output from the personal information identification model (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)). the de-identification target sentence detection unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein a second probability that each sentence will include the personal information is output from the personal information identification model (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
the de-identification target sentence detection unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.

With respect to claim 5: 2A Prong 1: The claim recites: generate the training data by de-identifying the text corresponding to the de-identification target token, such as deleting the text or replacing the text with a special character (mental process of judgment - a user can generate/create the training data by evaluation/judgment/opinion to de-identify the text corresponding to the observed de-identification target token with pen and paper). 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: the training data generation unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: the training data generation unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)).
The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.

With respect to claim 6: 2A Prong 1: The claim recites: generate the training data by de-identifying first text corresponding to the de-identification target token, such as replacing the first text with second text included in the same tag set as the first text (mental process of judgment - a user can generate the training data by de-identifying the text corresponding to the de-identification target token with pen and paper). 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: the training data generation unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: the training data generation unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.
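For context, the de-identification formats recited in claims 5 and 6 can be sketched as follows. This is a hedged, hypothetical illustration: the tag sets, names, and function signatures are invented here and do not come from the application or the cited art.

```python
# Hypothetical sketch of the two de-identification formats at issue:
# claim 5 style: delete the target text or mask it with a special character;
# claim 6 style: replace the first text with second text drawn from the
# same tag set. The tag sets below are invented for illustration.
TAG_SETS = {"NAME": ["Alex", "Sam"], "CITY": ["Springfield", "Riverton"]}

def mask(text: str, target: str) -> str:
    """Claim 5 style: replace the target text with special characters."""
    return text.replace(target, "*" * len(target))

def swap_same_tag(text: str, target: str, tag: str) -> str:
    """Claim 6 style: replace with second text from the same tag set."""
    replacement = next(v for v in TAG_SETS[tag] if v != target)
    return text.replace(target, replacement)

print(mask("Call Jordan today", "Jordan"))               # Call ****** today
print(swap_same_tag("Alex lives here", "Alex", "NAME"))  # Sam lives here
```

The sketch also makes the examiner's point concrete: each format is a simple substitution that a person could perform with pen and paper.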
With respect to claim 7: 2A Prong 1: The claim recites: generate tag information based on attribute information of the text corresponding to the de-identification target token, and generate the training data by de-identifying the text, such as replacing the text with the tag information (mental process of judgment - a user can generate the training data by de-identifying the text corresponding to the de-identification target token with pen and paper). 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: the training data generation unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: the training data generation unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.

With respect to claim 8: Claim 8 is directed to an apparatus depending from claim 1; thus the analysis for patent eligibility of claim 1 is incorporated herein.
2A Prong 1: The claim recites: generate different training data for each conversational service by de-identifying the text corresponding to the de-identification target token in a different format based on type of the conversational service (mental process of judgment - a user can generate the training data by de-identifying the text corresponding to the de-identification target token with pen and paper). 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: the training data generation unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: the training data generation unit is configured to (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)). The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.

With respect to claim 9: 2A Prong 1: Claim 9 is directed to an apparatus depending from claim 1; thus the analysis for patent eligibility of claim 1 is incorporated herein. 2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: wherein the personal information identification model is trained based on a dataset including the conversational data and a labelling of the de-identification target sentence (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: wherein the personal information identification model is trained based on a dataset including the conversational data and a labelling of the de-identification target sentence (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)). The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.

Regarding independent claim 10: The claim recites a method that performs steps corresponding to the operations recited in apparatus claim 1. Therefore, claim 10 is rejected for the same reasons as discussed above with regard to claim 1.

Regarding claims 11-18: Dependent claims 11-18 recite a method with steps that are similar to the operations recited in apparatus claims 2-9.
As such, claims 11-18 are directed to the same abstract ideas as recited in claims 2-9. Please see above for the rationale.

Regarding independent claim 19: The claim recites a non-transitory computer-readable storage medium storing a computer program including a sequence of instructions that performs operations similar to the method steps recited in claim 1. Therefore, claim 19 is rejected for the same reasons as discussed above with regard to claim 1. The limitations for additional elements of claim 19 are analyzed below: 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: A non-transitory computer-readable storage medium storing a computer program including a sequence of instructions to generate de-identified training data for conversational service (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)). The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions that are implemented to perform the disclosed abstract idea above. 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. A non-transitory computer-readable storage medium storing a computer program including a sequence of instructions to generate de-identified training data for conversational service (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)).
The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions being implemented with generic computer elements at a high level of generality to perform the disclosed abstract idea above.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 9-10, 12-14, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Medalion et al. (US 20210125615 A1, hereinafter "Medalion") in view of Kneller et al. (US 20210256417 A1, hereinafter "Kneller").

Regarding claim 1: Medalion discloses the invention as claimed, including an apparatus for generating de-identified training data for conversational service (see, e.g., ¶ 18, "Aspects of the present disclosure provide apparatuses … for automatic detection and removal of personally identifiable information (PII).",
¶ 39, “The … transcripts, and text-based chat transcripts may be saved.” [i.e., an apparatus for generating de-identified/redacted training data without PII for a conversational service/chat], ¶ 91, "flagger/redactor 110 may generate redacted text strings, which are sent back to production data collection 122 to replace the un-redacted text strings that include PII. Further, the redacted text strings may be provided for other uses 308, such as training, generation of other models”, ¶ 107); a sentence detection unit configured to detect at least one sentence including personal information in a conversation between a user device and a chatbot (see, e.g., ¶ 34, "The BiLSTM [i.e., bidirectional long short-term memory] neural network models described herein are trained on labelled datasets including PII to identify the PII both directly and by context, which mitigates the rigidness of existing pattern-based techniques, such as regular expressions.", ¶ 36, "the BiLSTM neural network models may be configured to predict: (1) whether a string of text (e.g., a sentence in a transcript) contains PII by looking at the full sentence; (2) whether a specific term (or token) contains PII based on the context of the term; and (3) whether the specific term contains PII based on the term itself."); a de-identification target sentence detection unit configured to input conversational data including the at least one sentence into a personal information identification model and detect a de-identification target sentence through the personal information identification model (see, e.g., ¶ 36, "the BiLSTM neural network models may be configured to predict: (1) whether a string of text (e.g., a sentence in a transcript) contains PII by looking at the full sentence", ¶ 55, "PII detection system 100 also includes pattern-based matchers 106, which may match known patterns for PII.
For example, social security numbers, credit card numbers, passport numbers, and other known PII patterns may be detected in data repository 120 using pattern-based matchers 106.", ¶ 78, “the model prediction of whether the ith training sample (e.g., a text string) includes PII … the model prediction of whether the ith training sample (e.g., a specific data element) is PII; and … the model prediction of whether the ith training sample (e.g., the specific data element) is PII based on the context around the specific data element” [see, e.g., Equation 1, which calculates these predictions]); a search unit configured to search a predefined de-identification target token from the conversational data when a de-identification target sentence is detected from the conversational data (see, e.g., ¶ 36, “the BiLSTM neural network models may be configured to predict: (1) whether a string of text (e.g., a sentence in a transcript) contains PII by looking at the full sentence; (2) whether a specific term (or token) contains PII based on the context of the term”, ¶ 39, ¶ 51, ¶ 55, “PII detection system 100 also includes pattern-based matchers 106, which may match known patterns for PII. For example, social security numbers, credit card numbers, passport numbers, and other known PII patterns may be detected in data repository 120 using pattern-based matchers 106.", ¶ 83, ¶ 85, ¶ 94, "the original text strings from production data collection 122, or filtered and/or tokenized text strings, may be provided to pattern-based matchers 106, which may flag potential PII data using, for example, regular expressions." [i.e., the flagger/search unit detects PII/de-identification target tokens]).
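The pattern-based matching Medalion describes in ¶ 55 can be sketched as a minimal "search unit". This is an illustrative assumption, not code from either reference: the pattern list and function name are invented here, and real matchers would cover many more token types.

```python
import re

# Minimal sketch in the spirit of Medalion's pattern-based matchers (para. 55):
# scan conversational data for predefined de-identification target tokens
# using regular expressions. The patterns below are illustrative only.
TARGET_TOKENS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def search_target_tokens(conversation: str):
    """Return (label, matched_text) pairs for every predefined token found."""
    found = []
    for label, pattern in TARGET_TOKENS.items():
        for match in pattern.finditer(conversation):
            found.append((label, match.group()))
    return found

print(search_target_tokens("My SSN is 123-45-6789."))
# [('ssn', '123-45-6789')]
```

As Medalion itself notes, this kind of regular-expression matching is rigid, which is why the reference pairs it with the context-aware BiLSTM predictions.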
a training data generation unit configured to generate training data on the conversational data by de-identifying text corresponding to the searched de-identification target token (see, e.g., ¶ 39, “phone-base support lines may be recorded and transcribed for later use by the organization, such as … for training various sorts of models …. The recordings of the support calls, associated transcripts, and text-based chat transcripts may be saved”, ¶ 83, “user 302 may request support … through a text-based support session, such as a live chat.”, ¶ 85, “In the case of a text-based support session, the text strings (e.g., a live chat log) may be sent directly to production data collection 122.”, ¶ 91, "flagger/redactor 110 may generate redacted text strings, which are sent back to production data collection 122 to replace the un-redacted text strings that include PII. Further, the redacted text strings may be provided for other uses 308, such as training, generation of other models”). Although Medalion substantially discloses the claimed invention, Medalion is not relied on for explicitly disclosing a sentence detection unit configured to detect at least one sentence … in a conversation between a user device and a chatbot. However, in the same field of endeavor, the analogous art Kneller teaches: a sentence detection unit configured to detect at least one sentence … in a conversation between a user device and a chatbot (see, e.g., Abstract, “A system and method for creating input data to be used to train a conversational bot may include receiving a set of conversations or interaction transcripts, each conversation including sentences, classifying each sentence into a dialog act taken from a number of dialog acts, for each set of sentences classified into a dialog act, clustering the set of sentences into clusters based on the content (e.g.
text) of the sentences, each cluster having a cluster name or label, and generating a language model (LM) based on the cluster labels.”); Medalion and Kneller are analogous art because they are both from the same field of endeavor and are both related to techniques and systems for creating training data for training machine learning models (see, e.g., Medalion, Abstract, and Kneller, Abstract). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to implement the function of Kneller in the method of Medalion to incorporate the teaching of a conversation between a user device and a chatbot (i.e., a conversational service) to provide techniques “for creating input data to be used to train a conversational bot may include receiving a set of conversations or interaction transcripts, each conversation including sentences, classifying each sentence into a dialog" (see, e.g., Kneller, ¶ 4). Doing so would have allowed Medalion to use Kneller’s techniques for “creating input data to be used to train a conversational bot” (i.e., generating training data for a conversational service) in order to "improve the process of training conversational bots using human chat logs and audio calls transcripts" and "create automated computer processes capable of engaging in conversations with people", as suggested by Kneller (see, e.g., ¶ 5).

Regarding claim 3: As discussed above, Medalion in view of Kneller teaches the apparatus of claim 1. Medalion further teaches: wherein the sentence detection unit is configured to calculate a first probability that the at least one sentence will include the personal information (see,
e.g., ¶ 72, "For example, this output ŷ1 may take the form of a probability, which may be compared to a threshold to make the determination … Equation 1”, and ¶ 78 and ¶ 101, “the output from the BiLSTM neural network model comprises at least: a first prediction of whether the respective text string comprising the respective text data element comprises personally identifiable information; a second prediction of whether the respective text data element comprises personally identifiable information based on the text data element; and a third prediction of whether the respective text data element comprises PII based on a forward context and a backward context associated with the respective text data element.”).

Regarding claim 4: As discussed above, Medalion in view of Kneller teaches the apparatus of claim 3. Medalion further teaches: wherein a second probability that each sentence will include the personal information is output from the personal information identification model (see, e.g., ¶ 76, “both ŷ2 and ŷ3 may be used to determine whether or not the specific text data element (here, “1234”) is PII as part of determination block 220. For example, these outputs may take the form of probabilities”), and the de-identification target sentence detection unit is configured to detect the de-identification target sentence using the first probability and the second probability (see, e.g., ¶ 72, “this output ŷ1 may take the form of a probability, which may be compared to a threshold to make the determination.” [i.e., using ŷ1, the first probability], ¶ 88, “Machine learning model(s) 108 may output predicted PII text, or likelihoods or probabilities that analyzed text data elements (e.g., terms or tokens) are PII” [i.e., using probabilities including the second probability], ¶ 51, "text data elements, such as text strings" [i.e., detecting the target sentence using the first and second probabilities]).
Regarding claim 5: As discussed above, Medalion in view of Kneller teaches the apparatus of claim 1. Medalion further teaches: wherein the training data generation unit is configured to generate the training data by de-identifying the text corresponding to the de-identification target token, such as deleting the text or replacing the text with a special character (see, e.g., ¶ 61, "PII detection system 100 also includes flagger/redactor 110, which is configured to redact and/or flag text data elements, such as terms or tokens, that are predicted to be PII by machine learning model(s) 108 … redaction may involve simply deleting the predicted PII text, while in other cases, the predicted PII text may be replaced with placeholder text to maintain some context/readability in the text strings.", ¶ 91, "flagger/redactor 110 may generate redacted text strings, which are sent back to production data collection 122 to replace the un-redacted text strings that include PII. Further, the redacted text strings may be provided for other uses 308, such as training, generation of other models”, ¶ 107).

Regarding claim 9: As discussed above, Medalion in view of Kneller teaches the apparatus of claim 1. Medalion further teaches: wherein the personal information identification model is trained based on a dataset including the conversational data and a labelling of the de-identification target sentence (see, e.g., ¶ 34, “The BiLSTM neural network models described herein are trained on labelled datasets including PII to identify the PII both directly and by context, which mitigates the rigidness of existing pattern-based techniques, such as regular expressions.”, ¶ 39, ¶ 65, “training data collection 126 may include text-based transcripts of support sessions with users of application 130 (or other applications), which have been labeled based on the presence of PII and/or whether PII has been redacted”).
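The claim 9 limitation, training an identification model on conversational data paired with de-identification labels, can be illustrated with a deliberately trivial sketch. This is not the claimed model or Medalion's BiLSTM; the dataset, the bag-of-words "training", and all names are invented here to show only the shape of label-supervised training.

```python
from collections import Counter

# Purely illustrative: "train" a trivial bag-of-words scorer on a labelled
# dataset pairing conversational sentences with de-identification labels
# (1 = de-identification target sentence, 0 = not a target).
labelled_dataset = [
    ("my number is 555-1234", 1),
    ("thanks for your help", 0),
    ("email me at a@b.com", 1),
    ("have a nice day", 0),
]

def train(dataset):
    """Count how often each word appears in labelled target sentences."""
    target_words = Counter()
    for sentence, label in dataset:
        if label == 1:
            target_words.update(sentence.split())
    return target_words

def predict(model, sentence) -> int:
    """Label a sentence as a target if it shares any word with target data."""
    return int(any(w in model for w in sentence.split()))

model = train(labelled_dataset)
print(predict(model, "my email please"))  # 1
print(predict(model, "good day"))         # 0
```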
Regarding independent claim 10: Claim 10 is substantially similar to claim 1 and therefore is rejected on the same ground as claim 1, discussed above. In particular, claim 10 is a method claim that recites steps that correspond to the apparatus operations of claim 1. In addition, Medalion further teaches: A method for generating de-identified training data for conversational service, which is performed by a training data generation apparatus, comprising (see, e.g., ¶ 7, ¶ 9, “A non-transitory computer-readable medium comprising computer-executable instructions, which, when executed by a processing system, cause the processing system to perform a method for detecting personally identifiable information”, ¶ 83, “user 302 may request support … through a text-based support session, such as a live chat.”, ¶ 85, “In the case of a text-based support session, the text strings (e.g., a live chat log) may be sent directly to production data collection 122.”, ¶ 94, "the original text strings from production data collection 122, or filtered and/or tokenized text strings, may be provided to pattern-based matchers 106, which may flag potential PII data using, for example, regular expressions." [i.e., the flagger/search unit detects PII/de-identification target tokens], ¶ 107). The motivation to combine Medalion and Kneller is the same as discussed above with respect to claim 1.

Regarding claims 12-14 and 18: Claims 12-14 and 18 are substantially similar to claims 3-5 and 9 and therefore are rejected on the same grounds as claims 3-5 and 9, discussed above. In particular, claims 12-14 and 18 are method claims that recite steps that correspond to the apparatus operations of claims 3-5 and 9.

Regarding independent claim 19: Claim 19 is substantially similar to claim 1 and therefore is rejected on the same ground as claim 1, discussed above.
In particular, claim 19 is a non-transitory computer-readable storage medium claim that recites instructions that correspond to the apparatus operations of claim 1. Medalion further teaches: A non-transitory computer-readable storage medium storing a computer program including a sequence of instructions to generate de-identified training data for conversational service, wherein the computer program includes a sequence of instructions that, when executed by a computing device, cause the computing device to (see, e.g., ¶ 9, “A non-transitory computer-readable medium comprising computer-executable instructions, which, when executed by a processing system, cause the processing system to perform a method for detecting personally identifiable information”, ¶ 39, ¶ 91, ¶ 107). The motivation to combine Medalion and Kneller is the same as discussed above with respect to claim 1.

Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Medalion in view of Kneller and further in view of Wang et al. (US 20240111967 A1, hereinafter “Wang”).

Regarding claim 2: As discussed above, Medalion in view of Kneller teaches the apparatus of claim 1. Medalion further teaches: wherein sentences in the conversation are stored sequentially (see, e.g., ¶ 39, “phone-base support lines may be recorded and transcribed for later use by the organization, such as … for training various sorts of models ….
The recordings of the support calls, associated transcripts, and text-based chat transcripts may be saved”, ¶ 20, “BiLSTM models are thus well-suited to classifying, processing, and making predictions based on sequential data, such as text, spoken words, time-series, and the like.”, ¶ 62, “data repository 120 includes a plurality of data collections (i.e., datasets), including production data collection 122, flagged data collection 124, and training data collection 126.”, ¶ 118, “Storage 510 also includes flagged data 534, which may be flagged data collection 124 in FIG. 1”) and the sentence detection unit is configured to understand intention of the sentences based on context of the sentences stored sequentially … and detect the at least one sentence (see, e.g., Fig. 2, ¶ 71 [e.g., showing the teaching of storing sentences sequentially for the sentence detection], ¶ 78, “In Equation 1, above: ŷ1i is the model prediction of whether the ith training sample (e.g., a text string) includes PII; ŷ2i is the model prediction of whether the ith training sample (e.g., a specific data element) is PII; and ŷ3i is the model prediction of whether the ith training sample (e.g., the specific data element) is PII based on the context around the specific data element; y1, y2, and y3 are the known labels”). Medalion in view of Kneller does not specifically disclose sentences in the conversation are stored … in a buffer and sentences stored … in the buffer. However, in the same field, analogous art Wang teaches sentences in the conversation are stored sequentially … in a buffer and sentences stored … in the buffer (see, e.g., ¶ 49, “chunk-end detecting device 170 for detecting a new chunk-end of the word sequence stored in input buffer 164 and outputting a chunk-end detection signal; and a sentence-end detecting device 174 for detecting an end of sentence of the word sequence stored in input buffer 164 and outputting a sentence-end detection signal.”).
Medalion, Kneller and Wang are analogous art because they are each related to techniques and systems for creating training data for training machine learning models (see, e.g., Medalion, Abstract, Kneller, Abstract, and Wang, ¶ 2). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the function of Wang in the method of Medalion and Kneller to incorporate the teaching of a buffer, and thereby to modify Medalion and Kneller to incorporate the teachings of Wang to provide techniques in which the de-identification is done in a way that avoids the problem of the model being “too sluggish to follow the topic” (see, e.g., Wang, ¶ 11). Doing so would have allowed Medalion and Kneller to use Wang’s buffer techniques in order to improve the performance of the model, as suggested by Wang (see, e.g., ¶ 13-14). A person of ordinary skill in the art would have been motivated to combine Kneller’s teaching of a conversational chatbot with Medalion’s teaching of a machine learning model for anonymizing personally identifiable data and creating training data, in order to improve the performance of the model on a more real-time basis (see, e.g., Wang, ¶ 13-14).

Regarding claim 11

Claim 11 recites substantially the same limitations as claim 2, except that it is directed to a method. Therefore, claim 11 is rejected under the same rationale as addressed above. The motivation to combine Medalion in view of Kneller and further in view of Wang is the same as discussed above with respect to claim 2.

Regarding claim 6

As discussed above, Medalion in view of Kneller teaches the apparatus of claim 1.
Medalion further teaches: wherein the training data generation unit is configured to generate the training data by de-identifying first text corresponding to the de-identification target token, such as replacing the first text with second text … (see, e.g., ¶ 61, "PII detection system 100 also includes flagger/redactor 110, which is configured to redact and/or flag text data elements, such as terms or tokens, that are predicted to be PII by machine learning model(s) 108 … redaction may involve simply deleting the predicted PII text, while in other cases, the predicted PII text may be replaced with placeholder text to maintain some context/readability in the text strings."). Medalion in view of Kneller does not specifically disclose generate the training data by de-identifying first text … such as replacing the first text with second text included. However, in the same field, analogous art Ardhanari teaches generate the training data by de-identifying first text … such as replacing the first text with second text included in the same tag set as the first text (see, e.g., ¶ 196 [training data generation], ¶ 305, “FIG. 26A is a simplified diagram of a process 2600 for obfuscating private information according to some embodiments. Illustrative input text 2610 and output text 2620 is shown. … the obfuscation process 2600 replaces tagged entities (e.g., names, locations and organizations) with suitable surrogates … FIG. 26C depicts an illustrative comparison between the use of placeholders to mask detected private information (upper branch) and the use of surrogates to obfuscate both detected and undetected private information (lower branch).”). Medalion, Kneller and Ardhanari are analogous art because they are each related to techniques and systems for creating training data for training machine learning models (see, e.g., Medalion, Abstract, Kneller, Abstract, and Ardhanari, Abstract).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the function of Ardhanari in the method of Medalion and Kneller to incorporate the teaching of replacing personally identifiable information, such as words or text, with use of its respective categorical information, and thereby to modify Medalion and Kneller to incorporate the teachings of Ardhanari to provide techniques for replacing “tagged entities (e.g., names, locations and organizations) with suitable surrogates” (see, e.g., Ardhanari, ¶ 305). Doing so would have allowed Medalion and Kneller to use Ardhanari’s techniques to replace tagged entities (e.g., names, locations and organizations) with suitable surrogates and thereby “improve the performance and efficiency of the obfuscation process”, as suggested by Ardhanari (see, e.g., ¶ 307). A person of ordinary skill in the art would have been motivated to combine Kneller’s teaching of a conversational chatbot with Medalion’s teaching of a machine learning model for anonymizing personally identifiable data and creating training data, in order to improve the “performance and efficiency of the obfuscation process” through the use of surrogates for tagged entities, as suggested by Ardhanari (see, e.g., ¶ 307).

Regarding claim 7

As discussed above, Medalion in view of Kneller teaches the apparatus of claim 1.
Medalion further teaches: wherein the training data generation unit is configured to generate … information of the text corresponding to the de-identification target token (see, e.g., ¶ 61, "PII detection system 100 also includes flagger/redactor 110, which is configured to redact and/or flag text data elements, such as terms or tokens, that are predicted to be PII by machine learning model(s) 108 … redaction may involve simply deleting the predicted PII text, while in other cases, the predicted PII text may be replaced with placeholder text to maintain some context/readability in the text strings."), and generate the training data by de-identifying the text, such as replacing the text … (see, e.g., ¶ 91, "flagger/redactor 110 may generate redacted text strings, which are sent back to production data collection 122 to replace the un-redacted text strings that include PII. Further, the redacted text strings may be provided for other uses 308, such as training, generation of other models”, ¶ 107). Medalion in view of Kneller does not specifically disclose generate tag information based on attribute information and generate the training data … such as replacing the text with the tag information. However, in the same field, analogous art Ardhanari teaches: generate tag information based on attribute information (see, e.g., ¶ 35, ¶ 305, “FIG. 26A is a simplified diagram of a process 2600 for obfuscating private information according to some embodiments. Illustrative input text 2610 and output text 2620 is shown. … the obfuscation process 2600 replaces tagged entities (e.g., names, locations and organizations) with suitable surrogates … FIG. 26C depicts an illustrative comparison between the use of placeholders to mask detected private information (upper branch) and the use of surrogates to obfuscate both detected and undetected private information (lower branch).”).
and generate the training data … such as replacing the text with the tag information (see, e.g., ¶ 196, “Once a specific pre-trained model is chosen for an entity type, the model is fine-tuned with the bootstrap training set 1201 … bootstrap training set 1201 may be updated using an iterative process whereby training samples are continuously added to the initial set of training samples, further described below in connection with FIG. 14.” [explaining the training data generation with tag information], ¶ 35, ¶ 183, ¶ 305). The motivation to combine Medalion in view of Kneller and further in view of Ardhanari is the same as discussed above with respect to claim 6.

Regarding claim 8

As discussed above, Medalion in view of Kneller teaches the apparatus of claim 1. Medalion further teaches: wherein the training data generation unit is configured to generate … training data for … conversational service (see, e.g., ¶ 61, "PII detection system 100 also includes flagger/redactor 110, which is configured to redact and/or flag text data elements, such as terms or tokens, that are predicted to be PII by machine learning model(s) 108 … redaction may involve simply deleting the predicted PII text, while in other cases, the predicted PII text may be replaced with placeholder text to maintain some context/readability in the text strings.", ¶ 91, "flagger/redactor 110 may generate redacted text strings, which are sent back to production data collection 122 to replace the un-redacted text strings that include PII. Further, the redacted text strings may be provided for other uses 308, such as training, generation of other models” [i.e., generate training data]).
Medalion in view of Kneller does not specifically disclose generate different training data for each conversational service. However, in the same field, analogous art Ardhanari teaches: generate different training data for each conversational service (see, e.g., ¶ 333, “training a first neural network model using the first curated set of health records; … curating the first uncurated set of health records using the trained first neural network model, yielding a second curated set of health records; and training a second neural network model using the second curated set of health records,” [curated sets created as different sets of training data]). The motivation to combine Medalion in view of Kneller and further in view of Ardhanari is the same as discussed above with respect to claim 6.

Regarding claims 15-17

With respect to claims 15-17, claims 15-17 are substantially similar to claims 6-8 and are therefore rejected on the same grounds as claims 6-8, discussed above. In particular, claims 15-17 are method claims that recite steps that correspond to the apparatus operations of claims 6-8. The motivation to combine Medalion in view of Kneller and further in view of Ardhanari is the same as discussed above with respect to claim 6.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NEDGE D JOSEPH, whose telephone number is (571) 272-2777. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Kawsar, can be reached at 571-272-3169.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.D.J./
Examiner, Art Unit 2127

/KAMRAN AFSHAR/
Supervisory Patent Examiner, Art Unit 2125

Prosecution Timeline

Feb 17, 2023
Application Filed
Apr 03, 2026
Non-Final Rejection — §101, §103, §112 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
