Prosecution Insights
Last updated: April 19, 2026
Application No. 18/808,845

GLOBAL CONTENT MANAGEMENT

Non-Final OA: §101, §103
Filed: Aug 19, 2024
Examiner: COLUCCI, MICHAEL C
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: PayPal Inc.
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 76% — above average (749 granted / 990 resolved; +13.7% vs TC avg)
Interview Lift: +15.3% on resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 41 applications currently pending
Career History: 1,031 total applications across all art units

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)

Tech Center averages are estimates; based on career data from 990 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 8-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea, a law of nature, or a natural phenomenon) without significantly more and without integration into a practical application. Specifically, the claims 1) do not integrate a judicial exception into a practical application (see explanation below), and 2) do not recite elements that would amount to significantly more than the judicial exception (see explanation below). Accordingly, claims 8-14 are directed to patent-ineligible subject matter under 35 U.S.C. 101.

The independent claims: Taking the current claim limitations of the present invention, the claims are directed to editing a spreadsheet or document. Regarding the claim limitations of claim 8 as recited:

8.
A method for generating user-specific content, the method comprising: receiving, from a device, a form document; processing the form document to identify a plurality of content units; tagging, by a machine learning model, each of the plurality of content units with a first or second label; deriving a user-specific parameter from the device; generating, for each of the plurality of content units tagged with the first label, a replacement content unit based on the user-specific parameter; assembling an updated document with a plurality of replacement content units and the plurality of content units tagged with the second label; and transmitting the updated document to the device for display on a graphical user interface (GUI) of the device.

Step 1: Is the claim directed to a process, machine, manufacture, or composition of matter? Yes.

Step 2A, Prong One: Is the claim directed to a law of nature, a natural phenomenon (product of nature), or an abstract idea? Yes.

Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. Regarding the independent claims: analogous to Solutran, Inc. v. Elavon, Inc., 931 F.3d 1161, 2019 USPQ2d 281076 (Fed. Cir. 2019), the claims are directed to editing a document or spreadsheet and lack a clear improvement to function or technology, with the GUI and the model amounting to extra-solution activity. As demonstrated in Solutran, the claims there were to methods for electronically processing paper checks, all of which contained limitations setting forth receiving merchant transaction data from a merchant, crediting a merchant's account, and receiving and scanning paper checks after the merchant's account is credited. In part one of the Alice/Mayo test, the Federal Circuit determined that the claims were directed to the abstract idea of crediting the merchant's account before the paper check is scanned.
The court first determined that the recited limitation of "crediting a merchant's account as early as possible while electronically processing a check" is a "long-standing commercial practice" like those in Alice and Bilski. 931 F.3d at 1167, 2019 USPQ2d 281076, at *5 (Fed. Cir. 2019). The Federal Circuit then continued its analysis under part one of the Alice/Mayo test, finding that the claims are not directed to an improvement in the functioning of a computer or an improvement to another technology. In particular, the court determined that the claims "did not improve the technical capture of information from a check to create a digital file or the technical step of electronically crediting a bank account," nor did the claims "improve how a check is scanned." Id.

Regarding the December 5, 2025 Memo in light of the September 26, 2025 Appeals Review Panel Decision in Ex parte Desjardins, Appeal 2024-000567, Application 16/319,040, which addresses whether a recited abstract idea directs the entire claim, considered as a whole, to an abstract idea: the claim that demonstrated improvements to technology and/or function recites "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." The decision recites that "We are persuaded that constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation." When considering the limitation decided upon, there are clear improvements to machine learning that are not rudimentary or a long-standing practice; adjusting for optimization and protection of performance, as claimed, are improvements to a machine learning model's operations, not simply a generic mathematical recitation, but rather an improvement to function.
Specifically, Ex parte Desjardins explained the following: Enfish ranks among the Federal Circuit's leading cases on the eligibility of technological improvements. In particular, Enfish recognized that "[m]uch of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes." 822 F.3d at 1339. Moreover, because "[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can," the Federal Circuit held that eligibility determinations should turn on whether "the claims are directed to an improvement to computer functionality versus being directed to an abstract idea." Id. at 1336. (Desjardins, page 8.)

Further, specifically: Paragraph 21 of the Specification, which the Appellant cites, identifies improvements in training the machine learning model itself. Of course, such an assertion in the Specification alone is insufficient to support a patent eligibility determination, absent a subsequent determination that the claim itself reflects the disclosed improvement. See MPEP § 2106.05(a) (citing Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016)). Here, however, we are persuaded that the claims reflect such an improvement. For example, one improvement identified in the Specification is to "effectively learn new tasks in succession whilst protecting knowledge about previous tasks." Spec. ¶ 21. The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to "us[e] less of their storage capacity" and enables "reduced system complexity." Id.
When evaluating the claim as a whole, we discern at least the following limitation of independent claim 1 that reflects the improvement: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." We are persuaded that constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation. Under a charitable view, the overbroad reasoning of the original panel below is perhaps understandable given the confusing nature of existing § 101 jurisprudence, but troubling, because this case highlights what is at stake. Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology. Yet, under the panel's reasoning, many AI innovations are potentially unpatentable, even if they are adequately described and nonobvious, because the panel essentially equated any machine learning with an unpatentable "algorithm" and the remaining additional elements as "generic computer components," without adequate explanation. Dec. 24. Examiners and panels should not evaluate claims at such a high level of generality.

Further, in Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements that were disclosed in the patent application specification.
Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept). In Step 2A Prong Two, the ARP then determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting" encountered in continual learning systems. Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task" reflected the improvement disclosed in the specification. Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and therefore the claims were not directed to the judicial exception. The claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel"). See, e.g., Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), in which the specification identified the improvement to machine learning technology by explaining how the machine learning model is trained to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting," and in which the claims reflected the improvement identified in the specification.
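For context, the "optimize performance on the second task while protecting performance on the first task" limitation credited in Desjardins describes a regularized training objective. The sketch below illustrates one such objective using an elastic-weight-consolidation-style quadratic penalty; the penalty form, the importance weights, and all names are illustrative assumptions, not taken from the Desjardins application.

```python
import numpy as np

def penalized_loss(task2_loss, params, anchor_params, importance, lam=1.0):
    """Loss on the second task plus a quadratic penalty that 'protects'
    performance on the first task by discouraging drift in parameters
    that were important to it (illustrative form only)."""
    penalty = 0.5 * lam * np.sum(importance * (params - anchor_params) ** 2)
    return task2_loss + penalty

# Illustrative numbers: parameter 0 matters for task 1 (high importance),
# so it is pulled back toward its task-1 value; parameter 1 does not,
# so it is nearly free to adapt to task 2.
theta_star = np.array([1.0, 0.0])   # values learned on task 1
theta = np.array([0.9, 2.0])        # candidate values while training task 2
fisher = np.array([10.0, 0.01])     # assumed per-parameter importance
loss = penalized_loss(0.25, theta, theta_star, fisher, lam=1.0)  # 0.25 + 0.07
```

The trade-off parameter lam tunes how strongly first-task knowledge is protected relative to second-task optimization, which is the balance the quoted claim limitation recites.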
Indeed, enumerated improvements identified in the Desjardins specification included disclosures of the effective learning of new tasks in succession in connection with specifically protecting knowledge concerning previously accomplished tasks; allowing the system to reduce use of storage capacity; and the enablement of reduced complexity in the system. Such improvements were tantamount to how the machine learning model itself would function in operation and therefore not subsumed in the identified mathematical calculation.

The second paragraph of MPEP § 2106.05(a), subsection I, is revised to add new examples xiii and xiv to the list of examples that may show an improvement in computer functionality:

xiii. An improved way of training a machine learning model that protected the model's knowledge about previous tasks while allowing it to effectively learn new tasks; Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential); and

xiv. Improvements to computer component or system performance based upon adjustments to parameters of a machine learning model associated with tasks or workstreams; Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. The claims are directed to steps that can be performed on known computer devices to alter a document or spreadsheet, with a model analogous to the human mind and a GUI as extra-solution activity, consistent with concepts the courts have identified as abstract:

• Collecting and comparing known information (Classen)
• Collecting, displaying, and manipulating data (Intellectual Ventures v. Cap One Financial)
• Collecting information, analyzing it, and displaying certain results of the collection and analysis (Electric Power Group; West View†)
• Comparing new and stored information and using rules to identify options (SmartGene†)
• Data recognition and storage (Content Extraction)
• Delivering user-selected media content to portable devices (Affinity Labs v. Amazon.com)

Assistance for Applicant in amending to overcome § 101: Limitations that the courts have found to qualify as "significantly more" when recited in a claim with a judicial exception include:

i. Improvements to the functioning of a computer, e.g., a modification of conventional Internet hyperlink protocol to dynamically produce a dual-source hybrid webpage, as discussed in DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258-59, 113 USPQ2d 1097, 1106-07 (Fed. Cir. 2014) (see MPEP § 2106.05(a));

ii. Improvements to any other technology or technical field, e.g., a modification of conventional rubber-molding processes to utilize a thermocouple inside the mold to constantly monitor the temperature and thus reduce under- and over-curing problems common in the art, as discussed in Diamond v. Diehr, 450 U.S. 175, 191-92, 209 USPQ 1, 10 (1981) (see MPEP § 2106.05(a));

iii. Applying the judicial exception with, or by use of, a particular machine, e.g., a Fourdrinier machine (which is understood in the art to have a specific structure comprising a headbox, a paper-making wire, and a series of rolls) that is arranged in a particular way to optimize the speed of the machine while maintaining quality of the formed paper web, as discussed in Eibel Process Co. v. Minn. & Ont. Paper Co., 261 U.S. 45, 64-65 (1923) (see MPEP § 2106.05(b));

iv. Effecting a transformation or reduction of a particular article to a different state or thing, e.g., a process that transforms raw, uncured synthetic rubber into precision-molded synthetic rubber products, as discussed in Diehr, 450 U.S. at 184, 209 USPQ at 21 (see MPEP § 2106.05(c));

v. Adding a specific limitation other than what is well-understood, routine, conventional activity in the field, or adding unconventional steps that confine the claim to a particular useful application, e.g., a non-conventional and non-generic arrangement of various computer components for filtering Internet content, as discussed in BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1350-51, 119 USPQ2d 1236, 1243 (Fed. Cir. 2016) (see MPEP § 2106.05(d)); or

vi. Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment, e.g., an immunization step that integrates an abstract idea of data comparison into a specific process of immunizing that lowers the risk that immunized patients will later develop chronic immune-mediated diseases, as discussed in Classen Immunotherapies Inc. v. Biogen IDEC, 659 F.3d 1057, 1066-68, 100 USPQ2d 1492, 1499-1502 (Fed. Cir. 2011) (see MPEP § 2106.05(e)).

To help in amending the claims and for analysis purposes, example claims 3 and 4 from the courts are listed below; however, potential amendments are not limited to these examples, and alternative amendments are possible using items i-vi above. The examples below, which are based on the MPEP and not on the current claim set, show the difference between an ineligible claim (example claim 3) and an eligible claim (example claim 4): the general changing of font size in claim 3, versus the significant step of conditionally changing font size tied to hardware in claim 4, illustrates "significantly more."

Ineligible

3.
A computer-implemented method of resizing textual information within a window displayed in a graphical user interface, the method comprising:
(not significant) generating first data for describing the area of a first graphical element;
(not significant) generating second data for describing the area of a second graphical element containing textual information;
(not significant) calculating, by the computer, a scaling factor for the textual information which is proportional to the difference between the first data and second data.

The claim recites that the step of calculating a scaling factor is performed by "the computer" (referencing the computer recited in the preamble). Such a limitation gives "life, meaning and vitality" to the preamble and, therefore, the preamble is construed to further limit the claim. (See MPEP 2111.02.) However, the mere recitation of "computer-implemented" is akin to adding the words "apply it" in conjunction with the abstract idea. Such a limitation is not enough to qualify as significantly more. With regards to the graphical user interface limitation, the courts have found that simply limiting the use of the abstract idea to a particular technological environment is not significantly more. (See, e.g., Flook.)

Whereas in similar claim 4:

Eligible

4.
A computer-implemented method for dynamically relocating textual information within an underlying window displayed in a graphical user interface, the method comprising:
displaying a first window containing textual information in a first format within a graphical user interface on a computer screen;
displaying a second window within the graphical user interface;
constantly monitoring the boundaries of the first window and the second window to detect an overlap condition where the second window overlaps the first window such that the textual information in the first window is obscured from a user's view;
determining the textual information would not be completely viewable if relocated to an unobstructed portion of the first window;
calculating a first measure of the area of the first window and a second measure of the area of the unobstructed portion of the first window;
calculating a scaling factor which is proportional to the difference between the first measure and the second measure;
scaling the textual information based upon the scaling factor;
(significant step) automatically relocating the scaled textual information, by a processor, to the unobscured portion of the first window in a second format during an overlap condition so that the entire scaled textual information is viewable on the computer screen by the user;
(significant step) automatically returning the relocated scaled textual information, by the processor, to the first format within the first window when the overlap condition no longer exists.

These limitations are not merely attempting to limit the mathematical algorithm to a particular technological environment. Instead, these claim limitations recite a specific application of the mathematical algorithm that improves the functioning of the basic display function of the computer itself.
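The eligibility of example claim 4 turns on concrete, conditional computation: a scaling factor derived from the two window measures, applied only while an overlap condition exists and reversed when it ends. A minimal sketch of that logic follows, reading the scaling factor as the fraction of the window left unobstructed; the function names and the exact formula are illustrative choices, not taken from the MPEP example.

```python
def scaling_factor(first_measure, second_measure):
    """Scaling factor derived from the two area measures: here, the
    fraction of the first window that remains unobstructed (one plausible
    reading of 'proportional to the difference between the measures')."""
    return second_measure / first_measure

def display_text(font_size, window_area, unobstructed_area, overlap):
    """Scale and relocate the text only while an overlap condition exists;
    restore the first (original) format when the overlap ends."""
    if overlap and unobstructed_area < window_area:
        factor = scaling_factor(window_area, unobstructed_area)
        return {"font_size": font_size * factor, "relocated": True}
    # Overlap condition no longer exists: return to the first format.
    return {"font_size": font_size, "relocated": False}
```

The conditional branch mirrors the two "(significant step)" limitations: scaling and relocation happen during the overlap, and the original format is automatically restored afterward.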
As discussed above, the scaling and relocating of the textual information in overlapping windows improves the ability of the computer to display information and interact with the user.

The dependent claims 9-14 are rejected for the same reasoning, as being directed to patent-ineligible subject matter under 35 U.S.C. 101 and as not adding eligible subject matter to the respective parent claim. The same context applies as in claim 8, for the specifics of labeling fields/text, correction, translation, and identifying data in a document/form.

Claims 1-7 and 15-20 have not been rejected under 35 U.S.C. 101 because, although the claimed invention recites an abstract idea, the claims demonstrate significantly more than the judicial exception. Regarding claims 1-7 and 15-20:

Step 1: Is the claim directed to a process, machine, manufacture, or composition of matter? Yes.

Step 2A, Prong One: Is the claim directed to a law of nature, a natural phenomenon (product of nature), or an abstract idea? Yes.

Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? Yes, if the claims are alternatively construed to be abstract in Prong One. The claims utilize layered models to improve user electronic documents via auto-population and replacement through AI models, which at least significantly speeds the development of adapted content for user-facing documents; this is supported by the specification and reflected by the claims (see, e.g., Spec. ¶¶ 0009 and 0013). In other words, the claims enable the invention to improve the functionality of a backend computing system by enabling the backend system, at runtime, to efficiently present dynamic content to users without a large storage or computing burden that slows response times to the user. This is supported by the following: In Finjan Inc. v. Blue Coat Systems, Inc., 879 F.3d 1299, 125 USPQ2d 1282 (Fed. Cir.
2018), the claimed invention was a method of virus scanning that scans an application program, generates a security profile identifying any potentially suspicious code in the program, and links the security profile to the application program. 879 F.3d at 1303-04, 125 USPQ2d at 1285-86. The Federal Circuit noted that the recited virus screening was an abstract idea, and that merely performing virus screening on a computer does not render the claim eligible. 879 F.3d at 1304, 125 USPQ2d at 1286. The court then continued with its analysis under part one of the Alice/Mayo test by reviewing the patent’s specification, which described the claimed security profile as identifying both hostile and potentially hostile operations. The court noted that the security profile thus enables the invention to protect the user against both previously unknown viruses and “obfuscated code,” as compared to traditional virus scanning, which only recognized the presence of previously-identified viruses. The security profile also enables more flexible virus filtering and greater user customization. 879 F.3d at 1304, 125 USPQ2d at 1286. The court identified these benefits as improving computer functionality, and verified that the claims recite additional elements (e.g., specific steps of using the security profile in a particular way) that reflect this improvement. Accordingly, the court held the claims eligible as not being directed to the recited abstract idea. 879 F.3d at 1304-05, 125 USPQ2d at 1286-87. This analysis is equivalent to the Office’s analysis of determining that the additional elements integrate the judicial exception into a practical application at Step 2A Prong Two, and thus that the claims were not directed to the judicial exception (Step 2A: NO). Examples of claims that improve technology and are not directed to a judicial exception include: Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1339, 118 USPQ2d 1684, 1691-92 (Fed. Cir. 
2016) (claims to a self-referential table for a computer database were directed to an improvement in computer capabilities and not directed to an abstract idea); McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1315, 120 USPQ2d 1091, 1102-03 (Fed. Cir. 2016) (claims to automatic lip synchronization and facial expression animation were directed to an improvement in computer-related technology and not directed to an abstract idea); Visual Memory LLC v. NVIDIA Corp., 867 F.3d 1253, 1259-60, 123 USPQ2d 1712, 1717 (Fed. Cir. 2017) (claims to an enhanced computer memory system were directed to an improvement in computer capabilities and not an abstract idea); Finjan Inc. v. Blue Coat Systems, Inc., 879 F.3d 1299, 125 USPQ2d 1282 (Fed. Cir. 2018) (claims to virus scanning were found to be an improvement in computer technology and not directed to an abstract idea); SRI Int'l, Inc. v. Cisco Systems, Inc., 930 F.3d 1295, 1303 (Fed. Cir. 2019) (claims to detecting suspicious activity by using network monitors and analyzing network packets were found to be an improvement in computer network technology and not directed to an abstract idea). Additional examples are provided in MPEP § 2106.05(a).

Regarding the December 5, 2025 Memo in light of the September 26, 2025 Appeals Review Panel Decision in Ex parte Desjardins, Appeal 2024-000567, Application 16/319,040, which addresses whether a recited abstract idea directs the entire claim, considered as a whole, to an abstract idea: Paragraph 21 of the Specification, which the Appellant cites, identifies improvements in training the machine learning model itself. Of course, such an assertion in the Specification alone is insufficient to support a patent eligibility determination, absent a subsequent determination that the claim itself reflects the disclosed improvement. See MPEP § 2106.05(a) (citing Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016)).
Here, however, we are persuaded that the claims reflect such an improvement. For example, one improvement identified in the Specification is to "effectively learn new tasks in succession whilst protecting knowledge about previous tasks." Spec. ¶ 21. The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to "us[e] less of their storage capacity" and enables "reduced system complexity." Id. When evaluating the claim as a whole, we discern at least the following limitation of independent claim 1 that reflects the improvement: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." We are persuaded that constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation. Under a charitable view, the overbroad reasoning of the original panel below is perhaps understandable given the confusing nature of existing § 101 jurisprudence, but troubling, because this case highlights what is at stake. Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology. Yet, under the panel's reasoning, many AI innovations are potentially unpatentable, even if they are adequately described and nonobvious, because the panel essentially equated any machine learning with an unpatentable "algorithm" and the remaining additional elements as "generic computer components," without adequate explanation. Dec. 24. Examiners and panels should not evaluate claims at such a high level of generality.

Specifically, Ex parte Desjardins explained the following: Enfish ranks among the Federal Circuit's leading cases on the eligibility of technological improvements.
In particular, Enfish recognized that "[m]uch of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes." 822 F.3d at 1339. Moreover, because "[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can," the Federal Circuit held that eligibility determinations should turn on whether "the claims are directed to an improvement to computer functionality versus being directed to an abstract idea." Id. at 1336. (Desjardins, page 8.)

Further, in Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements that were disclosed in the patent application specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept). In Step 2A Prong Two, the ARP then determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting" encountered in continual learning systems.
Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task" reflected the improvement disclosed in the specification. Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and therefore the claims were not directed to the judicial exception. The claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel"). See, e.g., Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), in which the specification identified the improvement to machine learning technology by explaining how the machine learning model is trained to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting," and in which the claims reflected the improvement identified in the specification. Indeed, enumerated improvements identified in the Desjardins specification included disclosures of the effective learning of new tasks in succession in connection with specifically protecting knowledge concerning previously accomplished tasks; allowing the system to reduce use of storage capacity; and the enablement of reduced complexity in the system. Such improvements were tantamount to how the machine learning model itself would function in operation and therefore not subsumed in the identified mathematical calculation.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-11, 13, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200327116 A1 to PERLICK, David A. et al. (hereinafter PERLICK) in view of US 20250272682 A1 to Pai, Aditya et al. (hereinafter Pai).

Re claim 1, PERLICK teaches:

1. A system comprising: a processor; and a non-transitory computer readable medium having stored thereon instructions that are executable by the processor to cause the system to perform operations comprising: (figs. 13-14, requiring a computer and components)

receiving a document from a user computing device, the document comprising a plurality of text portions; (a form with fields, or a document per se, containing text portions, edited and filled in by a user as in figs. 13-16, some of which are fixed/static and others to be altered or added)

tagging… each of the plurality of text portions as either dynamic or static based on at least one characteristic of the respective text portion; (tags or attributes such as Name, City, State, SSN, etc., ¶¶ 0144-0146, within a form with fields or a document containing text portions, edited and filled in by a user as in figs. 13-16, some of which are fixed/static and others to be altered or added; fields can be automatically populated or user-added, but some are fixed, such as in a legal contract, e.g. figs.
13-14 and 0033) receiving, from the user computing device, an indication of a content parameter; (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033)

However, while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: …via a first machine learning model… (Pai: a first model, such as a prediction model, to detect an error in a data field, e.g. claim 6 with 0053, with fig. 1 and prediction 0078) generating, via a second machine learning model for each of the plurality of text portions tagged as dynamic, a replacement portion based on the content parameter; and (Pai: a second model, such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field, claim 6, with replacement thereof using the GAN model, 0020-0025 and 0034-0035) transmitting, to the user computing device, an updated document comprising a plurality of replacement portions and the plurality of text portions tagged as static. (Pai: once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig.
1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g. name, city, etc., and wherein such use of prediction and GAN models helps auto-complete entries via text prediction based on user history as well as correct incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.

Re claims 2 and 16: while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: 2. The system of claim 1, wherein the first machine learning model comprises a predictive artificial intelligence (AI) model, and the second machine learning model comprises a generative AI model.
(Pai: once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g. name, city, etc., and wherein such use of prediction and GAN models helps auto-complete entries via text prediction based on user history as well as correct incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.

Re claims 3 and 17: while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: 3.
The system of claim 1, wherein the first machine learning model is trained by: receiving a training dataset comprising a plurality of training text portions and a plurality of labels corresponding to the plurality of training text portions; (Pai: training based on user input; once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035) generating, by the first machine learning model, a plurality of predicted labels corresponding to the plurality of training text portions; and (Pai: using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078, claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035) adjusting the first machine learning model based on a comparison of the plurality of predicted labels to the plurality of labels in the training dataset. (Pai: the learning model inherently learns and uses a user history, showing that previous data is used; using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078, claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g.
name, city, etc., and wherein such use of prediction and GAN models helps auto-complete entries via text prediction based on user history as well as correct incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.

Re claims 4 and 18, PERLICK teaches: 4. The system of claim 1, wherein: the content parameter comprises a location-specific regulation, and generating the replacement portion comprises: (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033)

However, while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: identifying at least one of the plurality of text portions tagged as dynamic as including content that conflicts with the location-specific regulation; and (Pai: detecting an error per se… e.g. name, city, state.
Etc., training based on user input; once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035) generating a text portion that removes the conflicting content or replaces the conflicting content with non-conflicting content. (Pai: replacing after detecting an error per se… e.g. name, city, state, etc.; training based on user input; once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g. name, city, etc., and wherein such use of prediction and GAN models helps auto-complete entries via text prediction based on user history as well as correct incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data.
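For context on the flow being mapped in this rejection (flag a dynamic text portion whose content conflicts with a location-specific rule, then generate a portion that removes or replaces the conflicting content), a minimal sketch follows. Every name, rule, and heuristic here is a hypothetical illustration for the reader; none of it is drawn from PERLICK, Pai, or the application's disclosure.

```python
# Hypothetical sketch of the claim-4 style flow: label text portions,
# flag dynamic ones that conflict with a location-specific rule, and
# substitute non-conflicting content. Names and rules are illustrative only.

LOCATION_RULES = {
    # assumed example: a locale that forbids a particular disclosure phrase
    "DE": {"forbidden": "data may be sold", "replacement": "data is never sold"},
}

def tag_portions(portions):
    """Label each portion 'dynamic' or 'static' (stand-in for the first
    model; here a trivial heuristic keyed on a placeholder marker)."""
    return [(text, "dynamic" if "{" in text else "static") for text in portions]

def generate_replacements(tagged, location):
    """For dynamic portions, replace content that conflicts with the
    location-specific rule (stand-in for the second, generative model)."""
    rule = LOCATION_RULES.get(location)
    updated = []
    for text, label in tagged:
        if label == "dynamic" and rule and rule["forbidden"] in text:
            text = text.replace(rule["forbidden"], rule["replacement"])
        updated.append((text, label))
    return updated

portions = ["Welcome!", "{notice: data may be sold to partners}"]
doc = generate_replacements(tag_portions(portions), "DE")
```

Under these assumptions, static portions pass through untouched while the conflicting dynamic portion is rewritten, which is the shape of the limitation the Office Action maps to Pai's detect-then-replace models.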
GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.

Re claims 6 and 19, PERLICK teaches: 6. The system of claim 1, wherein: the content parameter comprises a user identifier, and (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16)

However, while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: generating the replacement portion comprises: identifying at least one of the plurality of text portions tagged as dynamic as including user-identity content; and (Pai: training based on user input; once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035) generating the replacement portion as a text portion that replaces the user-identity content with the user identifier.
(Pai: once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g. name, city, etc., and wherein such use of prediction and GAN models helps auto-complete entries via text prediction based on user history as well as correct incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.

Re claims 7 and 20, PERLICK teaches: 7. The system of claim 1, wherein the transmitting the updated document comprises: assembling the updated document by replacing each of the plurality of text portions tagged as dynamic in the received document with the generated replacement portions; (the user replaces the generic field… in Fig.
14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033) checking the updated document according to the received content parameter; and (the form is checked for correct format after data is entered, as in 0147; the user also has the option to add syntax to the form, as in fig. 14-15) checking the updated document for proper syntax. (the form is checked for correct format after data is entered, as in 0147; the user also has the option to add syntax to the form, as in fig. 14-15)

Re claim 8, PERLICK teaches: 8. A method for generating user-specific content, the method comprising: (within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033) receiving, from a device, a form document; (within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033) processing the form document to identify a plurality of content units; (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc.
0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033) deriving a user-specific parameter from the device; (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033) generating, for each of the plurality of content units tagged with the first label, a replacement content unit based on the user-specific parameter; (the user, not the model, enables the replacement; in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033) assembling an updated document with a plurality of replacement content units and the plurality of content units tagged with the second label; and (similarly, the document is updated by the user, not the model; in Fig.
14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033) transmitting the updated document to the device for display on a graphical user interface (GUI) of the device. (displayed where the document is updated by the user, not the model; in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033)

However, while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: tagging, by a machine learning model, each of the plurality of content units with a first or second label (Pai: using a prediction model for the initial phase in the data field, 0053 with fig.
1 and prediction 0078, claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g. name, city, etc., and wherein such use of prediction and/or GAN models helps auto-complete entries via text prediction based on user history as well as correct incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.

Re claim 9: while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: 9.
The method of claim 8, wherein the machine learning model is trained by: receiving a training dataset comprising a plurality of training content units and a plurality of labels corresponding to the plurality of training content units; (Pai: training based on user input; once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035) generating, by the machine learning model, a plurality of predicted labels corresponding to the plurality of training content units; and (Pai: using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078, claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035) adjusting the machine learning model based on a comparison of the plurality of predicted labels to the plurality of labels in the training dataset. (Pai: the learning model inherently learns and uses a user history, showing that previous data is used; using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078, claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g. name, city, etc.,
and wherein such use of prediction and GAN models helps auto-complete entries via text prediction based on user history as well as correct incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.

Re claim 10, PERLICK teaches: 10. The method of claim 8, further comprising receiving, via the GUI of the device, an input regarding the updated document, (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig. 13-16, some of which are fixed/static and others to be altered or added, which can be automatically populated or user-added; however, some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033)

However, while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: wherein the machine learning model is trained according to the received input. (Pai: training based on user input; once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig.
1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g. name, city, etc., and wherein such use of prediction and GAN models helps auto-complete entries via text prediction based on user history as well as correct incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.

Re claim 11, PERLICK teaches: 11. The method of claim 8, wherein: the user-specific parameter comprises a location of the device, (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147) the method further comprises determining a location-specific regulation associated with the location of the device, and (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147) generating the replacement content unit comprises: (in Fig.
14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147)

However, while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: identifying at least one of the plurality of content units tagged with the first label as including content that conflicts with the location-specific regulation; and (Pai: the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035) generating a content unit that removes the conflicting content or replaces the conflicting content with non-conflicting content. (Pai: removal, and thereafter training based on user input; once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig.
1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g. name, city, etc., and wherein such use of prediction and GAN models helps auto-complete entries via text prediction based on user history as well as correct incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.

Re claim 13, PERLICK teaches: 13. The method of claim 8, wherein: the user-specific parameter comprises a user identifier, and generating the replacement content unit comprises: (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document per se containing text portions, edited and filled in by a user as in fig.
13-16)

However, while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models per se to bolster the act of adding the tags and replacing portions; that is, while automated, PERLICK fails to teach: identifying at least one of the plurality of content units tagged with the first label as including user-identity content; and (Pai: training based on user input; once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035) generating a content unit that replaces the user-identity content with the user identifier. (Pai: once corrected, the data will have an override code sent and replacement of the error data takes place via a second model such as a learning or GAN (generative) or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via analogous use of field/form or document-field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI-model correction of existing attributes/tags, e.g. name, city, etc.,
wherein such use of prediction and GAN models help auto-complete entries for text prediction base on user history as well as correcting incorrect entries via GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data, such that GANs can be trained on large amounts of unlabeled data to learn patterns of "correct" entries per se, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data. Re claim 14, PERLICK teaches 14. The method of claim 8, wherein the assembling the updated document comprises: checking the updated document according to the received user-specific parameter; and (the form is checked for correct format after data is entered as in 0147, user also has the option to add syntax to the form as in fig. 14-15) checking the updated document for proper syntax. (the form is checked for correct format after data is entered as in 0147, user also has the option to add syntax to the form as in fig. 14-15) Claims 5 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20200327116 A1 PERLICK; David A. et al. (hereinafter PERLICK) in view of US 20250272682 A1 Pai; Aditya et al. (hereinafter Pai) and further in view of US 20140129457 A1 Peeler; Matthew Scott (hereinafter Peeler). Re claims 5 and 12, PERLICK in view of Pai, while teaching location as a label or attribute, fails to teach translation: 5. The system of claim 4, wherein the transmitting the updated document further comprises translating a language of the updated document based on a location associated with the location-specific regulation. 
(Peeler 0091: using GPS to determine location for a form in an application; the user can translate based on the location thereof, such as when the region's default language is an option or is selected, or otherwise to any language by the user as a profile regulation)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK in view of Pai to incorporate the above claim limitations as taught by Peeler, to allow for combining prior art elements according to known methods to yield predictable results via the use of location detection (whether by the text in the form, by a region's metadata when in that country, by GPS, or by user selection), thereby enabling users from various locales and with various languages to complete a form.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over US 20200327116 A1 PERLICK; David A. et al. (hereinafter PERLICK) in view of US 20250272682 A1 Pai; Aditya et al. (hereinafter Pai) and further in view of US 20210329081 A1 Singh; Manbinder Pal (hereinafter Singh).

Re claim 15, PERLICK teaches:

15. A system comprising: a processor; and a non-transitory computer readable medium having stored thereon instructions that are executable by the processor to cause the system to perform operations comprising: (fig. 13 and 14, all hardware necessary)

processing a document to identify a plurality of text portions; (a form with fields or a document containing text portions, edited and filled in by a user as in fig. 13-16; some portions are fixed/static and others are to be altered or added, which can be automatically populated or user-added, while some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033)

tagging… each of the plurality of text portions as either dynamic or static based on at least one characteristic of the respective text portion; (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document containing text portions, edited and filled in by a user as in fig. 13-16; some portions are fixed/static and others are to be altered or added, which can be automatically populated or user-added, while some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033)

receiving, from a first device, an indication of a first user parameter; (in Fig. 14-15 the user provides the parameters for the tag/attribute as well as the actual content to be filled in the tag/attribute… tags or attributes such as Name, City, State, SSN, etc., 0144-0147, within a form with fields or a document containing text portions, edited and filled in by a user as in fig. 13-16; some portions are fixed/static and others are to be altered or added, which can be automatically populated or user-added, while some are fixed, such as in a legal contract, e.g. fig. 13-14 and 0033)

However, while PERLICK teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach the use of models to bolster the act of adding the tags and replacing portions; while automated in PERLICK, it fails to teach:

via a first machine learning model (Pai: using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078; claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

generating, via a second machine learning model for the plurality of text portions tagged as dynamic, a first set of replacement portions based on the first user parameter; (Pai: replacement of the error data takes place via a second model such as a learning, GAN (generative), or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

transmitting, to the user computing device, a first updated document comprising the first set of replacement portions and the plurality of text portions tagged as static; (Pai: once corrected, the data will have an override code sent, and replacement of the error data takes place via a second model such as a learning, GAN (generative), or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

generating, via the second machine learning model for the plurality of text portions tagged as dynamic, a second set of replacement portions based on the second user parameter; and (Pai: iterative for any entries; user history is still stored and ready for the next iteration… replacement of the error data takes place via a second model such as a learning, GAN (generative), or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

transmitting, to the user computing device, a second updated document comprising the second set of replacement portions and the plurality of text portions tagged as static. (Pai: iterative for any entries; user history is still stored and ready for the next iteration… replacement of the error data takes place via a second model such as a learning, GAN (generative), or neural network model, 0051, for replacement of a predicted error in a data field (such as using a prediction model for the initial phase in the data field, 0053 with fig. 1 and prediction 0078), claim 6 with replacement thereof using the GAN model, 0020-0025 and 0034-0035)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK to incorporate the above claim limitations as taught by Pai, to allow for combining prior art elements according to known methods to yield predictable results via the analogous use of field/form or document field alterations in both PERLICK and Pai, wherein the editing of PERLICK can be improved by AI model correction of existing attributes/tags (e.g. name, city, etc.). Such use of prediction and GAN models helps auto-complete entries based on user history, as well as correct incorrect entries via the GAN, which improves data accuracy by learning the underlying structure of correct data. GANs excel at handling noisy or incomplete data and can be trained on large amounts of unlabeled data to learn the patterns of "correct" entries, which is highly beneficial when labeled data (correctly filled forms) is scarce and global bias is not desired for personalized data.
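For orientation only, the workflow this rationale attributes to the PERLICK/Pai combination (tag each text portion as static or dynamic, let a first prediction model propose values for the dynamic portions from user parameters/history, let a second corrective model such as a GAN replace erroneous entries, then reassemble the document) could be sketched roughly as follows. Every name here (`TextPortion`, `assemble_updated_document`, the toy `predict`/`correct` stand-ins) is an illustrative assumption, not drawn from either reference.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TextPortion:
    text: str
    tag: str          # "static" or "dynamic"
    label: str = ""   # attribute name for dynamic portions, e.g. "Name", "City"

def assemble_updated_document(
    portions: list,
    user_params: dict,
    predict_value: Callable,   # first model: proposes a value from user parameters/history
    correct_value: Callable,   # second model: corrects/replaces a bad proposed entry
) -> str:
    """Pass static portions through unchanged; fill dynamic portions in two stages."""
    out = []
    for p in portions:
        if p.tag == "static":
            out.append(p.text)                               # fixed text, e.g. contract boilerplate
        else:
            proposed = predict_value(p.label, user_params)   # stage 1: prediction
            out.append(correct_value(p.label, proposed))     # stage 2: correction
    return " ".join(out)

# Toy stand-ins for the two models (pure assumptions, not APIs from either reference):
predict = lambda label, params: params.get(label, "")
correct = lambda label, value: value.strip().title() if label == "Name" else value

doc = [
    TextPortion("This agreement is made by", "static"),
    TextPortion("", "dynamic", label="Name"),
    TextPortion("of", "static"),
    TextPortion("", "dynamic", label="City"),
]
print(assemble_updated_document(doc, {"Name": "  jane doe ", "City": "Austin"}, predict, correct))
# prints: This agreement is made by Jane Doe of Austin
```

A real second stage would be a trained generative/corrective model rather than a string normalizer; the sketch only shows where each model sits in the pipeline relative to the static/dynamic tagging.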
However, while the combination teaches learning models trained to predict text for tags, as well as models to correct errors, and also teaches replacement with content that a user provides for tags/attributes, which inherently updates the document/form, it fails to teach multiple users or user profiles thereof:

receiving, from a second device, an indication of a second user parameter; (Singh 0030 and 0116 with fig. 1a)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PERLICK in view of Pai to incorporate the above claim limitations as taught by Singh, to allow for combining prior art elements according to known methods to yield predictable results, wherein the accessibility of PERLICK can be improved by allowing multiple users to access forms, and wherein restricted fields are provided to allow both users to access/view forms and to edit when allowed, such as for adding information and signing documents electronically, thereby adding a security layer for a multi-party document viewing system.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Gonzalez; William et al., US 20240020466 A1 (multiple-user form access).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C COLUCCI, whose telephone number is (571) 270-1847. The examiner can normally be reached M-F, 9 AM - 5 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at (571) 272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL COLUCCI/
Primary Examiner, Art Unit 2655
(571) 270-1847
Examiner FAX: (571) 270-2847
Michael.Colucci@uspto.gov
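The location-driven language selection discussed above for claims 5 and 12 (default the document's language to the detected region's, while still letting the user override) might look like the following sketch. The region table and the function name are hypothetical illustrations, not taken from Peeler.

```python
# Hypothetical mapping from a detected region code (e.g. via GPS or region
# metadata) to that region's default language. Purely illustrative values.
REGION_DEFAULT_LANGUAGE = {"US": "en", "MX": "es", "FR": "fr", "DE": "de"}

def select_form_language(region_code, user_choice=None):
    """Prefer an explicit user selection; otherwise fall back to the
    region's default language, then to English."""
    if user_choice:
        return user_choice
    return REGION_DEFAULT_LANGUAGE.get(region_code, "en")

print(select_form_language("FR"))        # prints: fr  (region default)
print(select_form_language("FR", "en"))  # prints: en  (user override wins)
```

The design point mirrors the rejection's rationale: location detection only sets a default, and an explicit user selection always takes precedence.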

Prosecution Timeline

Aug 19, 2024
Application Filed
Feb 19, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592240
ENCODING AND DECODING OF ACOUSTIC ENVIRONMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12586570
CHUNK-WISE ATTENTION FOR LONGFORM ASR
2y 5m to grant Granted Mar 24, 2026
Patent 12573405
WORD CORRECTION USING AUTOMATIC SPEECH RECOGNITION (ASR) INCREMENTAL RESPONSE
2y 5m to grant Granted Mar 10, 2026
Patent 12573380
MANAGING AMBIGUOUS DATE MENTIONS IN TRANSFORMING NATURAL LANGUAGE TO A LOGICAL FORM
2y 5m to grant Granted Mar 10, 2026
Patent 12567414
SYSTEM AND METHOD FOR DETECTING A WAKEUP COMMAND FOR A VOICE ASSISTANT
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
91%
With Interview (+15.3%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 990 resolved cases by this examiner. Grant probability derived from career allow rate.
