Prosecution Insights
Last updated: April 19, 2026
Application No. 18/751,457

Privacy Data Augmentation

Non-Final OA (§101, §103)
Filed: Jun 24, 2024
Examiner: HO, DAO Q
Art Unit: 2432
Tech Center: 2400 (Computer Networks)
Assignee: Verizon Patent and Licensing Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average); 565 granted / 679 resolved; +25.2% vs TC avg
Interview Lift: +32.5% (resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 31 currently pending
Career History: 710 total applications across all art units

Statute-Specific Performance

§101: 11.6% (-28.4% vs TC avg)
§103: 36.3% (-3.7% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)
Based on career data from 679 resolved cases; Tech Center averages are estimates.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is a reply to the application filed on 6/24/2024, in which claims 1-20 are pending.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Drawings

The drawings filed on 6/24/2024 are accepted by the Examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-20 are directed to a method and system. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Based upon consideration of all of the relevant factors with respect to the claims as a whole, the claims are held to claim an unpatentable abstract idea and are therefore rejected as ineligible subject matter under 35 U.S.C. § 101.

Inventions for a "new and useful process, machine, manufacture, or composition of matter" generally constitute patent-eligible subject matter. 35 U.S.C. § 101. However, the U.S. Supreme Court has long interpreted 35 U.S.C. § 101 to include implicit exceptions: "[l]aws of nature, natural phenomena, and abstract ideas" are not patentable. Alice Corp. v. CLS Bank Int'l, 573 U.S. 208, 216 (2014). The Supreme Court, in Alice, reiterated the two-step framework previously set forth in Mayo Collaborative Services v. Prometheus Laboratories, Inc., 566 U.S. 66 (2012), "for distinguishing patents that claim laws of nature, natural phenomena, and abstract ideas from those that claim patent-eligible applications of those concepts." Alice Corp., 573 U.S. at 217. The first step in that analysis is to "determine whether the claims at issue are directed to one of those patent-ineligible concepts." Id. If the claims are not directed to a patent-ineligible concept, e.g., an abstract idea, the inquiry ends. Otherwise, the inquiry proceeds to the second step, where the elements of the claims are considered "individually and 'as an ordered combination'" to determine whether there are additional elements that "'transform the nature of the claim' into a patent-eligible application." Id. (quoting Mayo, 566 U.S. at 79, 78). This is "a search for an 'inventive concept' - i.e., an element or combination of elements that is 'sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the [ineligible concept] itself.'" Id. at 217-18 (alteration in original).

The USPTO published revised guidance on January 7, 2019, for use by USPTO personnel in evaluating subject matter eligibility under 35 U.S.C. § 101. 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (Jan. 7, 2019) (the "2019 Revised Guidance").
That guidance revised the USPTO's examination procedure with respect to the first step of the Mayo/Alice framework by (1) "[p]roviding groupings of subject matter that [are] considered an abstract idea"; and (2) clarifying that a claim is not "directed to" a judicial exception if the judicial exception is integrated into a practical application of that exception. Id. at 50.

The first step, as set forth in the 2019 Revised Guidance (i.e., Step 2A), is thus a two-prong test. In Step 2A, Prong One, we look to whether the claim recites a judicial exception, e.g., one of the following three groupings of abstract ideas: (1) mathematical concepts; (2) certain methods of organizing human activity, e.g., fundamental economic principles or practices, commercial or legal interactions; and (3) mental processes. See 2019 Revised Guidance, 84 Fed. Reg. at 54; MPEP §§ 2106.04(II)(A)(1), 2106.04(a). If so, we next determine, in Step 2A, Prong Two, whether the claim as a whole integrates the recited judicial exception into a practical application of that exception, i.e., whether the additional elements recited in the claim beyond the judicial exception apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. See 2019 Revised Guidance, 84 Fed. Reg. at 54-55; MPEP §§ 2106.04(II)(A)(2), 2106.04(d). Only if the claim (1) recites a judicial exception and (2) does not integrate that exception into a practical application do we conclude that the claim is "directed to" the judicial exception, e.g., an abstract idea. See 2019 Revised Guidance, 84 Fed. Reg. at 54-55; MPEP § 2106.04(II)(A)(2).
If the claim is determined to be directed to a judicial exception under Step 2A, we next evaluate the additional elements, individually and in combination, in Step 2B, to determine whether they provide an inventive concept, i.e., whether the additional elements or combination of elements amounts to significantly more than the judicial exception itself; only then is the claim patent eligible. See 2019 Revised Guidance, 84 Fed. Reg. at 56; MPEP § 2106.05.

Step One of the Mayo/Alice Framework (2019 Revised Guidance, Step 2A)

2019 Revised Guidance, Step 2A, Prong 1

The abstract idea to which claims 1-20 are directed is a mental process, such as concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), and mathematical relationships/calculations. In particular, the claims recite the following abstract concepts:

"selecting a first augmentation pipeline to process first data based upon a data type of the first data;" (i.e., an abstract idea of observation and judgment under mental processes; the selecting can be done via human activity)

"performing entity tagging to assign tags to tokens within the first data to create tagged tokens that are tagged as either being entity tokens or non-entity tokens" (i.e., an abstract idea of evaluation and judgment under mental processes; the performing can be done via human activity)

"generating a first contextual prompt for a model based upon the tagged tokens and privacy regulations of at least one of a source region or a destination region;" (i.e., an abstract idea of evaluation and judgment under mental processes; the generating can be done via human activity)

"processing the first contextual prompt using the model to identify one or more tagged tokens to mask;" (i.e., an abstract idea of a mental process of detecting, analyzing data, and data recognition and storage, as found abstract by the courts in TLI Comms, Digitech, SmartGene, Bancorp Servs., Electric Power Group, Classen, FairWarning, and Cybersource)

"masking the one or more tagged tokens within the first data to create augmented first data;" (i.e., an abstract idea of a mental process of organizing and manipulating the collected information, as found abstract by the courts in TLI Comms, Digitech, SmartGene, Bancorp Servs., Electric Power Group, Classen, FairWarning, and Cybersource)

"transmitting the augmented first data to a computing device within the destination region." (i.e., an abstract idea of a mental process of informing, notifying, or displaying the result of data processing to an entity, as found abstract by the courts in FairWarning and Content Extraction. The court has noted that "merely presenting the results of abstract processes of collecting and analyzing information, without more (such as identifying a particular tool for presentation), is abstract as an ancillary part of such collection and analysis." See, e.g., Electric Power Group, 830 F.3d 1350, 1351, 1353-54.)

The Supreme Court and Federal Circuit have identified abstract ideas in patent claims by making comparisons to concepts found in past decisions to be judicial exceptions to eligibility. The 2019 IEG summarizes concepts the courts have considered to be abstract ideas by associating eligibility decisions with judicial descriptors (e.g., "an idea of itself," "certain methods of organizing human activities," "mathematical relationships and formulas") based on common characteristics. These associations define the judicial descriptors in a manner that stays within the confines of the judicial precedent, with the understanding that these associations are not mutually exclusive, i.e., some concepts may be associated with more than one judicial descriptor. The claims are directed to a system and method of selecting, performing, generating, processing, and masking (i.e., the abstract ideas of mental processes and human activity) and providing augmented data, as defined by the claimed steps above.
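As a reading aid only, the claim 1 flow recited above (entity tagging, contextual prompt generation, model-driven masking) can be sketched in a few lines. The regex "tagger," the stub "model," and every name below are hypothetical illustrations, not material from the application or the cited references:

```python
import re

# Minimal sketch of the claimed flow, under the assumption that a
# simple SSN-like regex stands in for entity tagging and a stub
# stands in for the model that picks tokens to mask.

SSN_LIKE = re.compile(r"\d{3}-\d{2}-\d{4}")

def tag_tokens(text):
    """Tag each whitespace token as an entity token or non-entity token."""
    return [(tok, "ENTITY" if SSN_LIKE.fullmatch(tok) else "NON_ENTITY")
            for tok in text.split()]

def build_contextual_prompt(tagged, src_region, dst_region):
    """Fold the tagged tokens and both regions' privacy rules into one prompt."""
    entities = [tok for tok, tag in tagged if tag == "ENTITY"]
    return (f"Mask tokens restricted by {src_region}/{dst_region} "
            f"privacy regulations: {entities}")

def stub_model(prompt):
    """Stand-in for the model: mask every SSN-like entity named in the prompt."""
    return SSN_LIKE.findall(prompt)

def augment(text, src_region, dst_region):
    tagged = tag_tokens(text)
    prompt = build_contextual_prompt(tagged, src_region, dst_region)
    to_mask = set(stub_model(prompt))
    return " ".join("[MASKED]" if tok in to_mask else tok
                    for tok, _ in tagged)

print(augment("SSN 123-45-6789 on file", "EU", "US"))
# prints: SSN [MASKED] on file
```

The sketch only orders the claimed steps; it takes no position on whether they are abstract.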
The present claims, as a whole and as individual limitations, recite the abstract concept of analyzing, comparing, and masking data. As such, the claims are analogous to Digitech; Content Extraction; FairWarning, 839 F.3d at 1093-94 (concluding the claims were directed to mental processes within the abstract-idea category); Electric Power Group; and TLI Comms. Note that merely using well-known and commonly used data modeling in a generic and superficial manner to categorize the data does not convert a known abstract idea (i.e., data categorization) into eligible subject matter. See Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266, 1277-78 (Fed. Cir. 2012).

Looking at the steps of the claims, for each of the claims, data is simply being analyzed, compared, and masked, which was ruled abstract in:

a. Collecting and comparing known information (Classen);
b. Comparing information regarding a sample or test subject to a control or target data (Ambry/Myriad CAFC);
c. Collecting and analyzing information to detect misuse and notifying a user when misuse is detected (FairWarning);
d. Data recognition and storage (Content Extraction);
e. Obtaining and comparing intangible data (Cybersource);
f. Collecting, selecting, categorizing, analyzing, and displaying certain results of the collection and analysis (Electric Power Group);
g. Organizing and manipulating information through mathematical correlations (Digitech);
h. Virus screening (Intellectual Ventures v. Symantec, '610 patent);
i. A mathematical formula for calculating parameters indicating an abnormal condition (Grams).

Furthermore, the invention is nothing more than the analyzing, comparing, and masking of data as described in the claims, which can be performed mentally (or with pen and paper). The steps are similar to concepts and ideas that have been identified as abstract by the courts.
For example: collecting information, analyzing it, and displaying certain results of the collection and analysis (Electric Power Group); masking/removing particular data (Digitech); and obtaining and comparing intangible data (Cybersource). While the specific facts of the case differ from these cases, the claims are still directed to analyzing, comparing, and masking/removing information and providing known information.

2019 Revised Guidance, Step 2A, Prong 2

The 2019 Revised Guidance sets forth a non-exhaustive listing of considerations indicative that an additional element or combination of elements may have integrated a recited judicial exception into a practical application. See 2019 Revised Guidance, 84 Fed. Reg. at 55; MPEP § 2106.04(d). In particular, the Guidance describes that an additional element may have integrated the judicial exception into a practical application if, inter alia, the additional element reflects an improvement in the functioning of a computer or an improvement to other technology or a technical field. Id. At the same time, the Guidance makes clear that merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; adding insignificant extra-solution activity to the judicial exception; or only generally linking the use of the judicial exception to a particular technological environment or field are not sufficient to integrate the judicial exception into a practical application. Id.

The claims are directed to a system and method of data processing to detect personal information (i.e., the abstract idea of a mental process) and providing the masked/augmented data (i.e., the abstract idea of human activity), as defined by the claimed steps. The claims do not require an arguably inventive set of components, methods, or algorithms.
The recitation of a first augmentation pipeline to manipulate/classify the information describes a solution merely at the level of software that tags data. The abstract idea is implemented using generic, off-the-shelf computing elements ("computers, programs, medium") that do not integrate the abstract idea into a practical application (Step 2A, Prong 2). Accordingly, even in combination, these additional generic computing elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claims recite a mental process, i.e., an abstract idea, and the additional elements recited in the claims beyond the abstract idea are no more than generic computer components used as tools to perform the recited abstract idea and insignificant extra-solution activity. As such, they do not integrate the abstract idea into a practical application. See Alice Corp., 573 U.S. at 223-24 ("[W]holly generic computer implementation is not generally the sort of 'additional featur[e]' that provides any 'practical assurance that the process is more than a drafting effort designed to monopolize the [abstract idea] itself.'" (quoting Mayo, 566 U.S. at 77)); 2019 Revised Guidance, 84 Fed. Reg. at 55 (identifying "an additional element [that] adds insignificant extra-solution activity to the judicial exception" and "an additional element [that] does no more than generally link the use of a judicial exception to a particular technological environment or field of use" as examples in which a judicial exception has not been integrated into a practical application).
Step Two of the Mayo/Alice Framework (2019 Revised Guidance, Step 2B)

Step 2B: Considering Additional Elements

The considerations are whether the claim includes:

- Improvements to another technology or technical field;
- Improvements to the functioning of the computer itself;
- Applying the judicial exception with, or by use of, a particular machine;
- Effecting a transformation or reduction of a particular article to a different state or thing;
- Adding a specific limitation other than what is well-understood, routine, and conventional in the field, or adding unconventional steps that confine the claim to a particular useful application;
- Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment;
- Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer;
- Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception;
- Adding insignificant extra-solution activity to the judicial exception;
- Generally linking the use of the judicial exception to a particular technological environment or field of use.

The relevant question under Step 2B is whether the claim includes an additional element or combination of elements that adds specific limitations beyond the judicial exception that are not "well-understood, routine, conventional activity" in the field, or whether it simply appends well-understood, routine, conventional activities previously known to the industry to the judicial exception. Here, the additional elements of the claims beyond the abstract idea, namely "computer hardware," "programs," and a "machine learning model," are conventional computing equipment and algorithms used in a well-understood, routine, and conventional manner.
These additional elements do not provide an inventive concept; rather, they simply append well-understood, routine, conventional activities previously known to the industry to the judicial exception. Applying the test to the claims in the application: the structural elements of the claims, which include a computer, when taken in combination with the functional elements (selecting a pipeline; analyzing, generating, and masking data by a first augmentation pipeline; and providing augmented data), together do not offer "significantly more" than the abstract idea itself, because the claims do not recite an improvement to another technology or technical field, an improvement to the functioning of any computer itself, or meaningful limitations beyond generally linking an abstract idea to a particular technological environment (a general purpose computer and/or the environment of the user).

When considered as an ordered combination, the Examiner does not find any combination of the additional elements that amounts to more than the sum of the parts. The Examiner finds that the individual elements of the claims are performing their intended roles and functions. In most cases, the additional elements are applied merely to carry out data processing, as discussed above, and fall under well-understood, routine, and conventional functions of generic computers in our common day-to-day interactions. Therefore, the claimed interactions of the various generically recited methods/devices lack an unconventional step that confines the claim to a particular useful application, in the sense that the result is equivalent to purely human activity, e.g., masking a particular item from content. The dependent claims do not add an inventive step to the abstract idea of the independent claims and are therefore rejected based on the rationale discussed above.
Dependent claims 2-9, 11-17, and 19-20 pertain to rules based on privacy regulations and to analyzing and identifying data from different sources, without adding any inventive concept, using an unconventional computing element, or improving the underlying computer technology.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-12 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Carmi et al. (US 20180089313 A1; hereinafter Carmi) in view of Ardhanari et al.
(US 20210248268 A1; hereinafter Ardhanari).

Regarding claims 1, 10, and 18, Carmi discloses a method, comprising:

selecting a first augmentation pipeline to process first data based upon a data type of the first data (selecting the transcriber to transcribe the data from various sources [Carmi; ¶13-15; Figs. 1, 3 and associated text]);

performing, by the first augmentation pipeline, entity tagging to assign tags to tokens within the first data to create tagged tokens that are tagged as either being entity tokens or non-entity tokens (the private information may be identified, as described in further detail with respect to the method in FIG. 3, through the application of private information rules; additionally, embodiments may receive an ontology, which may include the private information rules described in further detail herein, but may also include other language models or interpretation techniques or tools specifically crafted or tailored to a domain of the communication data to be analyzed; the identification of private information exemplarily results in tagging of the identified pieces of private information; in an embodiment, the private information is tagged with an identification of the specific type of private information that the actual text in the transcription represents; in non-limiting embodiments such tags may identify whether the private information is a phone number, credit card number, social security number, an account number, a birth date, or a password [Carmi; ¶13-15; Figs. 1, 3 and associated text]);

generating a first contextual prompt for a model based upon the tagged tokens and privacy regulations of at least one of a source region or a destination region (processing the transcribed data using the transcription and ontology, which may include the private information rules described in further detail herein, but may also include other language models or interpretation techniques or tools specifically crafted or tailored to a domain of the communication data to be analyzed; the identification of private information exemplarily results in tagging of the identified pieces of private information; in an embodiment, the private information is tagged with an identification of the specific type of private information that the actual text in the transcription represents, enabling the automated review of either previously redacted communication information or the review and analysis of recorded and un-redacted communication data, in order to meet compliance with privacy and confidential information standards, laws, and regulations [Carmi; ¶13-15; Figs. 1, 3 and associated text]);

processing the first contextual prompt using the model to identify one or more tagged tokens to mask (the tagged transcription is then provided to a rule engine to evaluate the compliance of the transcription with internal, legal, or regulatory private and confidential information standards; these standards may each be different, and the application of differing standards may depend upon the manner of use of the transcription or the intended manner of storage [Carmi; ¶13-15; Figs. 1, 3 and associated text]);

masking the one or more tagged tokens within the first data to create augmented first data (the tagged private information may be removed completely, while in an alternative embodiment, the tagged private information may be replaced with the name of the tag such that the context of the private information conveyed in the interpersonal communication is maintained while the private information is removed from the file [Carmi; ¶16-18; Figs. 1, 3 and associated text]).

Carmi discloses that removal of private information may include: receiving a transcript of communication data; applying a private information rule to the transcript in order to identify private information in the transcript; tagging the identified private information with a tag comprising an identification of the private information; applying a compliance rule to the tagged transcript in order to evaluate a compliance of the transcript with privacy standards; removing the identified private information from the transcript to produce a redacted transcript; and storing the redacted transcript.

Carmi does not explicitly disclose transmitting the augmented first data to a computing device within the destination region; however, Ardhanari teaches this feature. In particular, Ardhanari teaches filtering and transmitting the data based on the region regulations [Ardhanari; ¶145, 178]. It would have been obvious before the effective filing date of the claimed invention to modify Carmi in view of Ardhanari, with the motivation to share data based on set region regulations.

Regarding claim 2, the Carmi-Ardhanari combination discloses the method of claim 1, comprising: tokenizing, by the first augmentation pipeline, the first data to identify the tokens (transcribing the content, identifying the private information, and tagging the identified information [Carmi; ¶13-15; Figs. 1, 3 and associated text]); performing, by the first augmentation pipeline, part of speech tagging to tag the tokens with part of speech tags to create tagged tokens (applying speech analytics to transcription data and tagging information based on rules and compliance [Carmi; ¶13-15; Figs. 1, 3 and associated text]); and processing raw text of the first data and the tagged tokens to identify the entity tokens and the non-entity tokens (processing the transcribed data using the transcription and ontology, which may include the private information rules described in further detail herein, but may also include other language models or interpretation techniques or tools specifically crafted or tailored to a domain of the communication data to be analyzed; the identification of private information exemplarily results in tagging of the identified pieces of private information; in an embodiment, the private information is tagged with an identification of the specific type of private information that the actual text in the transcription represents, enabling the automated review of either previously redacted communication information or the review and analysis of recorded and un-redacted communication data, in order to meet compliance with privacy and confidential information standards, laws, and regulations [Carmi; ¶13-15; Figs. 1, 3 and associated text]).
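The claim 2 steps mapped above (tokenizing, part of speech tagging, then entity/non-entity classification from the raw text plus tags) can likewise be sketched; the toy tokenizer, coarse POS rules, and phone-number pattern are assumptions for illustration, not the application's pipeline:

```python
import re

# Minimal sketch of claim 2's three steps, assuming a regex tokenizer,
# three coarse POS classes, and a phone-number rule for entity tagging.

PHONE = re.compile(r"\d{3}-\d{3}-\d{4}")

def tokenize(text):
    """Split raw text into word-like and punctuation tokens."""
    return re.findall(r"[\w-]+|[^\w\s]", text)

def pos_tag(tokens):
    """Toy part-of-speech tagging: numeric, capitalized, or other."""
    tags = []
    for tok in tokens:
        if re.fullmatch(r"[\d-]+", tok):
            tags.append((tok, "NUM"))
        elif tok[:1].isupper():
            tags.append((tok, "PROPN"))
        else:
            tags.append((tok, "OTHER"))
    return tags

def classify_entities(raw_text, tagged):
    """Mark NUM tokens that look like phone numbers as entity tokens."""
    return [(tok, "ENTITY" if pos == "NUM" and PHONE.fullmatch(tok)
             else "NON_ENTITY")
            for tok, pos in tagged]

raw = "Call Alice at 555-867-5309 today"
ents = classify_entities(raw, pos_tag(tokenize(raw)))
print(ents)
```

Here the POS tags merely gate the entity rule, mirroring how the claim feeds both raw text and tagged tokens into the classification step.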
Regarding claim 3, the Carmi-Ardhanari combination discloses the method of claim 1, comprising: evaluating source privacy regulations of the source region and destination privacy regulations of the destination region to identify a set of entities to mask; and in response to a tagged token corresponding to an entity within the set of entities to mask, masking the tagged token (the tagged private information may be removed completely, while in an alternative embodiment, the tagged private information may be replaced with the name of the tag such that the context of the private information conveyed in the interpersonal communication is maintained while the private information is removed from the file, based on a compliance rule [Carmi; ¶14-18; Figs. 1, 3 and associated text]).

Regarding claim 4, the Carmi-Ardhanari combination discloses the method of claim 1, comprising: utilizing a large language model as the model for processing the first contextual prompt (using large vocabulary continuous speech recognition (LVCSR) transcription techniques, or a transcription of previously recorded audio data using any of a variety of speech-to-text transcription techniques [Carmi; ¶14-15; Figs. 1, 3 and associated text]).

Regarding claim 5, the Carmi-Ardhanari combination discloses the method of claim 1, comprising: selecting a second augmentation pipeline to process second data based upon a data type of the second data (selecting the transcriber to transcribe the data from various sources [Carmi; ¶13-15; Figs. 1, 3 and associated text]); identifying, by the second augmentation pipeline, objects within the second data (the private information may be identified, as described in further detail with respect to the method in FIG. 3, through the application of private information rules; additionally, embodiments may receive an ontology, which may include the private information rules described in further detail herein, but may also include other language models or interpretation techniques or tools specifically crafted or tailored to a domain of the communication data to be analyzed; the identification of private information exemplarily results in tagging of the identified pieces of private information; in an embodiment, the private information is tagged with an identification of the specific type of private information that the actual text in the transcription represents; in non-limiting embodiments such tags may identify whether the private information is a phone number, credit card number, social security number, an account number, a birth date, or a password [Carmi; ¶13-15; Figs. 1, 3 and associated text]); classifying the objects with labels identifying the objects to create labeled objects (a specific type of the private information is selected from the group [Carmi; ¶13-15; Figs. 1, 3 and associated text]); identifying a set of entities to mask based upon the privacy regulations (the tagged transcription is then provided to a rule engine to evaluate the compliance of the transcription with internal, legal, or regulatory private and confidential information standards; these standards may each be different, and the application of differing standards may depend upon the manner of use of the transcription or the intended manner of storage [Carmi; ¶13-15; Figs. 1, 3 and associated text]); and processing, by a masking engine, the second data and the set of entities to mask to generate augmented second data to transmit to a destination computing device at the destination region (the tagged private information may be removed completely, while in an alternative embodiment, the tagged private information may be replaced with the name of the tag such that the context of the private information conveyed in the interpersonal communication is maintained while the private information is removed from the file [Carmi; ¶16-18; Figs. 1, 3 and associated text]; filtering and transmitting the data based on the region regulations [Ardhanari; ¶145, 178]). The motivation would have been to share data based on set region regulations.

Regarding claim 6, the Carmi-Ardhanari combination discloses the method of claim 5, wherein the second data comprises visual data, and wherein a subset of the visual data is masked to create the augmented second data (a visual representation or snapshot of the data [Ardhanari; ¶289-295]). The motivation would have been to share data based on set region regulations.

Regarding claim 7, the Carmi-Ardhanari combination discloses the method of claim 5, comprising: inputting the augmented second data into at least one of image classification functionality, image segmentation functionality, object tracking functionality, pose estimation functionality, image parsing functionality, or process automations functionality (the augmented curation of health records converts a raw health record, such as an electronic health record (EHR), a patient chart, or the like, into a structured representation of a patient phenotype (e.g., a snapshot of a patient's symptoms, diagnoses, treatments, or the like); the structured representation may then be visualized, used as an input for statistical or machine learning analysis, or the like [Ardhanari; ¶289-295]). The motivation would have been to share data based on set region regulations.
Regarding claim 8, the Carmi-Ardhanari combination discloses the method of claim 1, comprising: inputting the augmented first data into at least one of a chatbot, an intent identification model, a churn propensity model, market analysis functionality, variable regression, or functionality that generates instructions for controlling network equipment of a communication network (The data may be visualized, used as an input for statistical or machine learning analysis, or the like [Ardhanari; ¶289-295]). The motivation is to share data based on the set region regulations.

Regarding claim 9, the Carmi-Ardhanari combination discloses the method of claim 1, wherein the first data comprises text, and wherein a subset of the text is masked to create the augmented first data (the tagged private information may be replaced with the name of the tag such that the context of the private information conveyed in the interpersonal communication is maintained while the private information is removed from the file [Carmi; ¶16-18; Figs. 1, 3 and associated text]).

Regarding claim 11, the Carmi-Ardhanari combination discloses the system of claim 10, wherein the operations further comprise: inputting the augmented first data into at least one of image classification functionality, image segmentation functionality, object tracking functionality, pose estimation functionality, image parsing functionality, or process automations functionality (The data may be visualized, used as an input for statistical or machine learning analysis, or the like [Ardhanari; ¶289-295]). The motivation is to share data based on the set region regulations.
Regarding claim 12, the Carmi-Ardhanari combination discloses the system of claim 10, wherein the first data comprises visual data, and wherein a subset of the visual data is masked to create the augmented first data (the tagged private information may be replaced with the name of the tag such that the context of the private information conveyed in the interpersonal communication is maintained while the private information is removed from the file [Carmi; ¶16-18; Figs. 1, 3 and associated text]; one approach to healthcare data is to anonymize or mask the private data attributes, e.g., mask social security numbers before the data is processed or analyzed. In some embodiments of the present disclosure, methods may be employed for masking and de-identifying personal information from healthcare records. Using these methods, a dataset containing healthcare records may have various portions of its data attributes masked or de-identified. The resulting dataset thus may not contain any personal or private information that can identify one or more specific individuals [Ardhanari; ¶173-176]). The motivation is to share data based on the set region regulations.

Regarding claim 14, the Carmi-Ardhanari combination discloses the system of claim 10, wherein the operations further comprise: utilizing a neural network model to segment boundaries within the first data to identify the objects (The data may be visualized, used as an input for statistical or machine learning analysis, or the like [Ardhanari; ¶289-295]). The motivation is to share data based on the set region regulations.
Regarding claim 15, the Carmi-Ardhanari combination discloses the system of claim 10, wherein the operations further comprise: generating a contextual prompt for a model based upon the privacy regulations; and processing the contextual prompt using the model to identify the set of entities (receive an ontology, which may include the private information rules described in further detail herein, but may also include other language models or interpretation techniques or tools specifically crafted or tailored to a domain of the communication data to be analyzed [Carmi; ¶14-15; Figs. 1, 3 and associated text]).

Regarding claim 16, the Carmi-Ardhanari combination discloses the system of claim 10, wherein the operations further comprise: selecting a second augmentation pipeline to process second data based upon a data type of the second data (selecting the transcriber to transcribe the data via various sources [Carmi; ¶13-15; Figs. 1, 3 and associated text]); identifying, by the second augmentation pipeline, objects within the second data (the private information may be identified as described in further detail with respect to the method in FIG. 3 through the application of private information rules. Additionally, embodiments may receive an ontology, which may include the private information rules described in further detail herein, but may also include other language models or interpretation techniques or tools specifically crafted or tailored to a domain of the communication data to be analyzed. The identification of private information exemplarily results in tagging of the identified pieces of private information. In an embodiment, the private information is tagged with an identification of the specific type of private information that the actual text of the transcription represents.
In non-limiting embodiments such tags may identify whether the private information is a phone number, credit card number, social security number, an account number, a birth date, or a password [Carmi; ¶13-15; Figs. 1, 3 and associated text]); classifying the objects with labels identifying the objects to create labeled objects (specific type of the private information is selected from the group [Carmi; ¶13-15; Figs. 1, 3 and associated text]); and identifying a set of entities to mask based upon the privacy regulations (The tagged transcription is then provided to a rule engine to evaluate the compliance of the transcription with internal, legal, or regulatory private and confidential information standards. These standards may each be different, and the application of differing standards may depend upon the manner of use of the transcription or the intended manner of storage [Carmi; ¶13-15; Figs. 1, 3 and associated text]); and processing, by a masking engine, the second data and the set of entities to mask to generate augmented second data to transmit to a destination computing device at the destination region (the tagged private information may be removed completely, while in an alternative embodiment, the tagged private information may be replaced with the name of the tag such that the context of the private information conveyed in the interpersonal communication is maintained while the private information is removed from the file [Carmi; ¶16-18; Figs. 1, 3 and associated text], filtering and transmitting the data based on the region regulations [Ardhanari; ¶145, 178]). The motivation is to share data based on the set region regulations.
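The rule-engine step mapped above (evaluating the regulations of the source and destination regions to decide which entity types must be masked before transmission) reduces to a lookup over per-region rule sets. The region names and rule contents below are hypothetical assumptions for illustration; real regulations are far richer than this table, and neither Carmi nor Ardhanari is quoted here.

```python
# Hypothetical per-region masking requirements (illustrative only).
REGION_RULES = {
    "EU": {"PHONE_NUMBER", "SSN", "BIRTH_DATE"},
    "US": {"SSN", "CREDIT_CARD"},
}

def entities_to_mask(source_region, destination_region):
    """An entity type must be masked if either the source or the destination
    region's regulations require it, i.e. the union of both rule sets."""
    return (REGION_RULES.get(source_region, set())
            | REGION_RULES.get(destination_region, set()))
```

Taking the union is the conservative design choice: data crossing a regional boundary satisfies the stricter of the two regimes.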
Regarding claim 17, the Carmi-Ardhanari combination discloses the system of claim 16, wherein the operations further comprise: inputting the augmented second data into at least one of a chatbot, an intent identification model, a churn propensity model, market analysis functionality, variable regression, or functionality that generates instructions for controlling network equipment of a communication network (The data may be visualized, used as an input for statistical or machine learning analysis, or the like [Ardhanari; ¶289-295]). The motivation is to share data based on the set region regulations.

Regarding claim 19, the Carmi-Ardhanari combination discloses the non-transitory computer-readable medium of claim 18, wherein the operations further comprise: inputting the augmented data into at least one of a chatbot, an intent identification model, a churn propensity model, market analysis functionality, variable regression, or functionality that generates instructions for controlling network equipment of a communication network (The data may be visualized, used as an input for statistical or machine learning analysis, or the like [Ardhanari; ¶289-295]). The motivation is to share data based on the set region regulations.
Regarding claim 20, the Carmi-Ardhanari combination discloses the non-transitory computer-readable medium of claim 18, wherein the operations further comprise: evaluating source privacy regulations of the source region and destination privacy regulations of the destination region to identify a set of entities to mask; and in response to a tagged token corresponding to an entity within the set of entities to mask, masking the tagged token (the tagged private information may be removed completely, while in an alternative embodiment, the tagged private information may be replaced with the name of the tag such that the context of the private information conveyed in the interpersonal communication is maintained while the private information is removed from the file based on a compliance rule [Carmi; ¶14-18; Figs. 1, 3 and associated text]).

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over the Carmi-Ardhanari combination in view of Lokesh et al. (US 20250363224 A1; hereinafter Lokesh).

Regarding claim 13, the Carmi-Ardhanari combination does not explicitly disclose the system of claim 10, wherein the operations further comprise: detecting a gradient shift within the first data; creating a bounding box around an object based upon the gradient shift; and assigning the label to the bounding box; however, in a related and analogous art, Lokesh teaches these features. In particular, Lokesh teaches gradient modeling using methods such as gradient boosting. When the object detection model is a learning model, accuracy of the model may be improved over time through iterations of training (and/or fine-tuning), receipt of user feedback, etc. Further, training (and/or fine-tuning) the object detection model may include application of a training algorithm. As an example, a decision tree (e.g., a Gradient Boosting Decision Tree) may be used to train the object detection model.
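As a rough illustration of the claimed claim-13 step (detecting a gradient shift within image data and creating a bounding box around it), a pure-Python sketch follows. The finite-difference gradient and the 0.5 threshold are illustrative assumptions for this sketch and do not reflect Lokesh's boosted-tree detection model.

```python
def gradient_magnitude(img):
    """Finite-difference gradient magnitude of a 2D grayscale image,
    given as a list of lists of floats."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = img[y][x + 1] - img[y][x]
            dy = img[y + 1][x] - img[y][x]
            g[y][x] = (dx * dx + dy * dy) ** 0.5
    return g

def bounding_box(img, threshold=0.5):
    """Return [x0, y0, x1, y1] enclosing every pixel whose gradient
    magnitude exceeds the threshold (the 'gradient shift'), or None
    if no such pixel exists."""
    g = gradient_magnitude(img)
    hits = [(x, y) for y, row in enumerate(g)
            for x, v in enumerate(row) if v > threshold]
    if not hits:
        return None
    xs = [x for x, _ in hits]
    ys = [y for _, y in hits]
    return [min(xs), min(ys), max(xs), max(ys)]
```

A label (e.g., an entity type from the masking rules) could then be attached to the returned box; in a learned detector, the thresholding step would be replaced by the trained model's predictions.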
In doing so, one or more types of decision tree algorithms may be applied for generating any number of decision trees to fine-tune the object detection model. In one or more embodiments, training of the object detection model may further include generating an ML/AI model that is tuned to reflect specific metrics for accuracy, precision, and/or recall before the trained ML/AI model is exposed for real-time (or near real-time) usage [Lokesh; ¶159-161; Fig. 6 and associated text]. It would have been obvious before the effective filing date of the claimed invention to modify the Carmi-Ardhanari combination in view of Lokesh for gradient modeling, with the motivation to improve the accuracy of the object detection model and to fine-tune the "pre-trained" model using the testing data (to increase the accuracy of the model in terms of (i) sensitive object and sensitive action detection/recognition and (ii) taking necessary corrective actions/measures regarding the detected sensitive objects and sensitive actions (e.g., blurring an image of a sensitive object, masking an image of a sensitive object, sending an alert to a corresponding user for taking necessary measures, providing preventive guidance to the user for the future, etc.)) [Lokesh; ¶158].

Internet Communications

Applicant is encouraged to submit a written authorization for Internet communications (PTO/SB/439, http://www.uspto.gov/sites/default/files/documents/sb0439.pdf) in the instant patent application to authorize the examiner to communicate with the applicant via email. The authorization will allow the examiner to better practice compact prosecution. The written authorization can be submitted via one of the following methods only: (1) Central Fax, which can be found in the Conclusion section of this Office action; (2) regular postal mail; (3) EFS-Web; or (4) the service window on the Alexandria campus.
EFS-Web is the recommended way to submit the form since this allows the form to be entered into the file wrapper within the same day (system dependent). Written authorization submitted via other methods, such as direct fax to the examiner or email, will not be accepted. See MPEP § 502.03.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAO Q HO, whose telephone number is (571) 270-5998. The examiner can normally be reached from 7:00am to 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey Nickerson, can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAO Q HO/
Primary Examiner, Art Unit 2432

1 The MANUAL OF PATENT EXAMINING PROCEDURE ("MPEP") incorporates the revised guidance and subsequent updates at § 2106 (9th ed. Rev. 10.2019, rev. June 2020).

Prosecution Timeline

Jun 24, 2024: Application Filed
Jan 07, 2026: Non-Final Rejection (§101, §103)
Apr 07, 2026: Applicant Interview (Telephonic)
Apr 07, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603778: APPARATUS AND METHOD FOR GENERATING AN NFT VAULT (2y 5m to grant; granted Apr 14, 2026)
Patent 12598169: System and Method for Early Detection of Duplicate Security Association of IPsec Tunnels (2y 5m to grant; granted Apr 07, 2026)
Patent 12587852: METHOD AND APPARATUS FOR MANAGING LICENSES FOR DATA IN M2M SYSTEM (2y 5m to grant; granted Mar 24, 2026)
Patent 12585736: SYSTEMS AND METHODS FOR AUTHENTICATION AND AUTHORIZATION FOR SOFTWARE LICENSE MANAGEMENT (2y 5m to grant; granted Mar 24, 2026)
Patent 12572378: SECURE ARBITRATION MODE TO BUILD AND OPERATE WITHIN TRUST DOMAIN EXTENSIONS (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 99% (+32.5%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 679 resolved cases by this examiner. Grant probability derived from career allow rate.
