DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-20 are presented for examination. Applicant filed a response to the non-final Office action on 02/17/2026 amending claims 1, 7-10, and 14-20. In light of Applicant's amendments, Examiner has withdrawn the previous objection to claims 1, 8, and 14; the previous § 112(a) rejection of claims 1-20; and the previous grounds of rejection under § 103 of claims 1-20. Examiner has, however, established a new § 112(a) rejection and a new § 101 rejection of claims 1-20 in the instant Office action. Since the new § 112(a) rejection and the new § 101 rejection were necessitated by Applicant's amendment of the claims, the instant rejection of claims 1-20 is a FINAL rejection of the claims.
Examiner's Remarks
Patent Eligibility under § 101: Applicant has deleted the following claim limitation that previously appeared in independent claims 1, 8, and 14:
regenerating, using the machine learning model retrained on the plurality of historical tax question-and-answer pairs, the ordering of the plurality of tax questions, and the completed tax return, the optimization algorithm for minimizing the tax questions asked to the user based on the generated ordering of the tax questions, wherein the machine learning model is retrained at predetermined intervals.
Since claims 1-20 were deemed patent eligible based on the now-deleted limitation (in combination with other limitations), Examiner has established a new § 101 rejection of claims 1-20 in the instant Office action.
Prior Art Rejection: The closest prior art reference, Mascaro (US 10,176,534 B1), teaches generally assisting a user on tax return preparation. Mascaro, however, fails to disclose – alone or in combination with other references – the following claim limitations found in independent claims 1, 8, and 14, as an ordered combination of steps with other claim limitations:
generating, using a rules generation engine trained with a plurality of historical tax question-and-answer pairs, an optimization algorithm for minimizing tax questions asked to the user based on a generated ordering of the tax questions, wherein the optimization algorithm comprises a first set of ordering rules, [and] wherein the rules generation engine is a machine learning model;
generating an ordering of a plurality of tax questions based on the first set of ordering rules of the optimization algorithm, the ordering including an order in which the plurality of tax questions are presented to the user such that a number of tax questions presented to the user is minimized;
refining the ordering of the plurality of tax questions based on the one or more tax data variables and one or more ordering rules from the first set of ordering rules of the optimization algorithm, comprising: applying the one or more ordering rules from the first set of ordering rules to the plurality of tax questions, wherein the one or more ordering rules applied to the plurality of tax questions are based on the one or more tax data variables;
applying the optimization algorithm to compute one or more value distributions corresponding to one or more unknown tax data variables, wherein each value distribution from the one or more value distributions represents a likelihood of one or more potential outcomes occurring; and
regenerating the optimization algorithm for minimizing the tax questions asked to the user based on the generated ordering of the tax questions, comprising: reingesting, at predetermined intervals, the first set of ordering rules and the one or more tax data variables into the rules generation engine, wherein the rules generation engine is operable to determine a new efficiency based on at least reingesting the first set of ordering rules and the one or more tax data variables into the rules generation engine, [and] wherein a second set of ordering rules is obtained from regenerating the optimization algorithm.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. § 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-20 are rejected under 35 U.S.C. § 112(a) as failing to comply with the written description requirement. The claims contain new subject matter which was not described in the original disclosure filed 06/30/2023 in such a way as to reasonably convey to one skilled in the relevant art that the inventor, at the time the application was filed, had possession of the claimed invention.
The newly amended limitations are:
(1) a rules generation engine trained with a plurality of historical tax question-and-answer pairs; and
(2) the rules generation engine is a machine learning model.
Examiner was unable to find any support in the Applicant’s specification for a rules generation engine that is trained with a plurality of historical tax question-and-answer pairs, or for a rules generation engine being the same as a machine learning model. Applicant’s specification recites:
[0033] In some embodiments, user tax data store 210 and external tax data store 212 then provide user tax data and external tax data to rules generation engine 214. Rules generation engine 214 processes user tax data from user tax data store 210 as well as information from external tax data store 212 to generate ordering rules for later use by ordering model 208. A person of skill in the art will appreciate that such a calculation, particularly on a large data set, is only possible with the aid of computer-assisted machine-learning algorithms and techniques such as multivariate analysis and/or cluster analysis. In some embodiments, big-data techniques, including generalized linear modeling and k-means clustering can be used to generate rules. In other embodiments, tree-based algorithms such as gradient-boosting machines can be used.
This recitation, however, does not support either limitation (1) or (2) above. It merely states that the rules generation engine processes user tax data and generates ordering rules for later use by the ordering model, and that the calculations can be done using machine-learning techniques in the case of large data sets. It is not clear from the specification which calculations are meant: generating the ordering rules or the later use by the ordering model. In any case, according to [0033], the rules generation engine is not the same as the machine learning model. Therefore, limitations (1) and (2) are deemed to be new matter not supported by Applicant’s specification.
Applicant is required to either show where the support for limitations (1) and (2) can be found in the Applicant’s specification or cancel the new matter.
Claim Rejections - 35 USC § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 USC § 101 because they are directed to non-statutory subject matter. The rationale for this finding is explained below.
The Supreme Court in Mayo laid out a framework for determining whether an applicant is seeking to patent a judicial exception itself or a patent-eligible application of the judicial exception. See Alice Corp., 134 S. Ct. at 2355, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. 66, 101 USPQ2d 1961). This framework, which is referred to as the Mayo test or the Alice/Mayo test (“the test”), is described in detail in the Manual of Patent Examining Procedure (“MPEP”) (see MPEP § 2106(III) for further guidance). Step 1 of the test: It must be determined whether the claims are directed to patent-eligible (i.e., statutory) subject matter under 35 USC § 101. Step 2A of the test: If the claims are found to be directed to statutory subject matter, the next step is to determine whether the claims are directed to a judicial exception, i.e., a law of nature, a natural phenomenon, or an abstract idea (Prong 1). If the claims are found to be directed to an abstract idea, it must be determined whether the claims recite additional elements that integrate the judicial exception into a practical application (Prong 2). Step 2B of the test: If the claims are directed to a judicial exception, the next and final step is to determine whether the claims recite additional elements that amount to significantly more than the judicial exception.
Step 1 of the Test:
When considering subject matter eligibility under 35 USC § 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. Here, the claimed invention of claims 1-7 is a system, and, thus, one of the statutory categories of invention. Further, the claimed invention of claims 8-13 is a series of steps, which is a method (i.e., a process), which is also one of the statutory categories of invention. Still further, the claimed invention of claims 14-20 is one or more non-transitory computer-readable media, which is also one of the statutory categories of invention.
Conclusion of Step 1 Analysis: Therefore, claims 1-20 are statutory under 35 USC § 101 in view of step 1 of the test.
Step 2A of the Test:
Prong 1: Claims 1-20, however, recite an abstract idea of assisting a user on tax return preparation. The concept of assisting a user with tax return preparation, as recited in independent claims 1, 8, and 14, belongs to certain methods of organizing human activity (i.e., legal interactions – legal obligations) that have been found by the courts to be abstract ideas. The limitations in independent claims 1, 8, and 14 that set forth or describe the recited abstract idea are found in the following steps:
“generating an optimization algorithm for minimizing tax questions asked to the user based on a generated ordering of the tax questions, wherein the optimization algorithm comprises a first set of ordering rules, [and] wherein the rules generation engine is a machine learning model” (claims 1, 8, and 14);
“generating an ordering of a plurality of tax questions based on the first set of ordering rules of the optimization algorithm, the ordering including an order in which the plurality of tax questions are presented to the user such that a number of tax questions presented to the user is minimized” (claims 1, 8, and 14);
“generating a first output to be presented to the user, wherein the first output is a first question of the plurality of questions based on the ordering” (claims 1, 8, and 14);
“updating the one or more tax data variables based on the one or more inputs from the user” (claims 1, 8, and 14);
“refining the ordering of the plurality of tax questions based on the one or more tax data variables and one or more ordering rules from the first set of ordering rules of the optimization algorithm, comprising: applying the one or more ordering rules from the first set of ordering rules to the plurality of tax questions, wherein the one or more ordering rules applied to the plurality of tax questions are based on the one or more tax data variables” (claims 1, 8, and 14);
“applying the optimization algorithm to compute one or more value distributions corresponding to one or more unknown tax data variables, wherein each value distribution from the one or more value distributions represents a likelihood of one or more potential outcomes occurring” (claims 1, 8, and 14);
“determining whether the one or more value distributions exceed a predetermined threshold” (claims 1, 8, and 14);
“responsive to the one or more value distributions exceeding the predetermined threshold, terminating the tax return preparation such that a completed tax return is obtained” (claims 1, 8, and 14); and
“regenerating the optimization algorithm for minimizing the tax questions asked to the user based on the generated ordering of the tax questions, comprising: reingesting, at predetermined intervals, the first set of ordering rules and the one or more tax data variables into the rules generation engine, wherein the rules generation engine is operable to determine a new efficiency based on at least reingesting the first set of ordering rules and the one or more tax data variables into the rules generation engine, [and] wherein a second set of ordering rules is obtained from regenerating the optimization algorithm” (claims 1, 8, and 14).
Prong 2: In addition to abstract steps recited above in Prong 1, independent claims 1 and 14 recite additional elements:
“at least one processor” (claim 1);
“a user tax data store, wherein the user tax data store comprises one or more tax data variables corresponding to the user” (claim 1);
“a rules generation engine trained with a plurality of historical tax question-and-answer pairs” (claims 1, 8, and 14); and
“one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the at least one processor, perform a method of assisting the user on the tax return preparation” (claims 1 and 14).
These additional elements are recited at a high level of generality (i.e., as a generic processor performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components. Also, the following limitation recites insignificant extra solution activity (for example, data gathering):
“receiving one or more inputs from the user in response to the first output” (claims 1, 8, and 14).
These additional elements/limitation do not integrate the abstract idea into a practical application because they do not impose a meaningful limit on the judicial exception. The additional elements/limitation of independent claims 1, 8, and 14, here do not render improvements to the functioning of a computer or to any other technology or technical field (see MPEP § 2106.05(a)), nor do they integrate the abstract idea into a practical application under MPEP § 2106.05(b) (particular machine); MPEP § 2106.05(c) (particular transformations); or MPEP § 2106.05(e) (other meaningful limitations). Further, the combination of these additional elements/limitations is no more than mere instructions to apply the exception using a generic device. Accordingly, even in combination, these additional elements/limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Conclusion of Step 2A Analysis: Therefore, independent claims 1, 8, and 14, are non-statutory under 35 USC § 101 in view of step 2A of the test.
Step 2B of the Test: The additional elements of independent claims 1, 8, and 14 (see above under Step 2A – Prong 2) are well-understood, routine, and conventional elements that amount to no more than implementing the abstract idea with a computerized system. The Applicant’s Specification describes these additional elements in the following terms:
Operational Environment for Embodiments of The Invention
[0001] Turning first to FIG. 1, an exemplary hardware platform for certain embodiments of the invention is depicted. Computer 102 can be a desktop computer, a laptop computer, a server computer, a mobile device such as a smartphone or tablet, or any other form factor of general- or special-purpose computing device. Depicted with computer 102 are several components for illustrative purposes. In some embodiments, certain components may be arranged differently or absent. Additional components may also be present. Included in computer 102 is system bus 104, via which other components of computer 102 can communicate with each other. In certain embodiments, there may be multiple busses or components that may communicate with each other directly. Connected to system bus 104 is central processing unit (CPU) 106. Also attached to system bus 104 are one or more random-access memory (RAM) modules 108. Also attached to system bus 104 is graphics card 110. In some embodiments, graphics card 110 may not be a physically separate card, but rather may be integrated into the motherboard or the CPU 106. In some embodiments, graphics card 110 has a separate graphics-processing unit (GPU) 112, which can be used for graphics processing or for general purpose computing (GPGPU). Also, on graphics card 110 is GPU memory 114. Connected (directly or indirectly) to graphics card 110 is display 116 for user interaction. In some embodiments no display is present, while in others, it is integrated into computer 102. Similarly, peripherals such as keyboard 118 and mouse 120 are connected to system bus 104. Like display 116, these peripherals may be integrated into computer 102 or absent. Also connected to system bus 104 is local storage 122, which may be any form of computer-readable media and may be internally installed in computer 102 or externally and removably attached.
[0002] Computer-readable media include both volatile and nonvolatile media, removable and nonremovable media, and contemplate media readable by a database. For example, computer-readable media include (but are not limited to) RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD), holographic media or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage, and other magnetic storage devices. These technologies can store data temporarily or permanently. However, unless explicitly specified otherwise, the term "computer-readable media" should not be construed to include physical, but transitory, forms of signal transmission such as radio broadcasts, electrical signals through a wire, or light pulses through a fiber-optic cable, but only non-transitory data storage. Examples of stored information include computer-useable instructions, data structures, program modules, and other data representations. In particular, computer- readable media may store computer-executable instructions that, when executed by at least one processor such as CPU 106, perform methods in accordance with embodiments of the present disclosure.
[0003] Finally, network interface card (NIC) 124 is also attached to system bus 104 and allows computer 102 to communicate over a network such as local network 126. NIC 124 can be any form of network interface known in the art, such as Ethernet, ATM, fiber, Bluetooth, or Wi-Fi (i.e., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards). NIC 124 connects computer 102 to local network 126, which may also include one or more other computers, such as computer 128, and network storage, such as data store 130. Generally, a data store such as data store 130 may be any repository from which information can be stored and retrieved as needed. Examples of data stores include relational or object-oriented databases, spreadsheets, file systems, flat files, directory services such as LDAP and Active Directory, or email storage systems. A data store may be accessible via a complex API (such as, for example, Structured Query Language), a simple API providing only read, write, and seek operations, or any level of complexity in between. Some data stores may additionally provide management functions for data sets stored therein such as backup or versioning. Data stores can be local to a single computer such as computer 128, accessible on a local network such as local network 126, or remotely accessible over public Internet 132. Local network 126 is, in turn, connected to public Internet 132, which connects many networks such as local network 126, remote network 134 or directly attached computers such as computer 136. In some embodiments, computer 102 can itself be directly connected to public Internet 132.
[0033] In some embodiments, user tax data store 210 and external tax data store 212 then provide user tax data and external tax data to rules generation engine 214. Rules generation engine 214 processes user tax data from user tax data store 210 as well as information from external tax data store 212 to generate ordering rules for later use by ordering model 208. A person of skill in the art will appreciate that such a calculation, particularly on a large data set, is only possible with the aid of computer-assisted machine-learning algorithms and techniques such as multivariate analysis and/or cluster analysis. In some embodiments, big-data techniques, including generalized linear modeling and k-means clustering can be used to generate rules. In other embodiments, tree-based algorithms such as gradient-boosting machines can be used.
This is a description of a general-purpose computer. Further, the additional limitation of “receiving” data amounts to no more than a mere instruction to apply the exception using a generic computer component. For the same reason, this element is not sufficient to provide an inventive concept. The additional limitation of “receiving” data was considered insignificant extra-solution activity in Step 2A – Prong 2. Re-evaluated here in Step 2B, it is also determined to be well-understood, routine, and conventional activity in the field. Similarly to OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network), and buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network), the additional limitation of independent claims 1, 8, and 14 “receives” data over a network in a merely generic manner. The courts have recognized receiving data from a user device as a well-understood, routine, and conventional function when claimed in a merely generic manner. Therefore, the additional elements/limitations of independent claims 1, 8, and 14 are well-understood, routine, and conventional. Further, taken in combination, the additional elements/limitations add nothing more than what is present when the elements are considered individually. There is no indication that the combination provides any effect regarding the functioning of the computer or any improvement to another technology.
Conclusion of Step 2B Analysis: Therefore, independent claims 1, 8, and 14, are non-statutory under 35 USC § 101 in view of step 2B of the test.
Dependent Claims: Dependent claims 2-7 depend on independent claim 1; dependent claims 9-13 depend on independent claim 8; and dependent claims 15-20 depend on independent claim 14. The elements in dependent claims 2-7, 9-13, and 15-20, which set forth or describe the abstract idea, are:
“transmitting, to a third party, the completed tax return corresponding to the user” (claim 2: insignificant extra solution activity);
“providing the first output as an audio signal to the user; and receiving the one or more inputs as audio from the user” (claim 3: both “providing” step and “receiving” step are insignificant extra solution activity);
“generating the first output using natural language generation; and providing the first output as an audio signal” (claim 4: “generating” step is further narrowing the recited abstract idea; and “providing” step is insignificant extra solution activity);
“receiving the one or more inputs as an audio signal; and translating the one or more inputs using natural language processing” (claim 5: “receiving” step is insignificant extra solution activity; and “translating” step is further narrowing the recited abstract idea);
“refining the ordering of the plurality of questions comprises: receiving the one or more inputs; and applying the one or more ordering rules; and applying the one or more ordering rules determines the ordering of the plurality of questions” (claim 6: wherein “receiving” step is insignificant extra solution activity; and “applying” step is further narrowing the recited abstract idea);
“generating the optimization algorithm comprising the first set of ordering rules is based on at least a portion of the one or more tax data variables in the user tax data store and external tax data in an external tax data store” (claim 7: further narrowing the recited abstract idea);
“refining the ordering of the plurality of questions comprises: receiving the one or more tax data variables; and storing the one or more tax data variables in a user tax data store, wherein the one or more ordering rules applied to the plurality of tax questions are based on at least the one or more tax data variables and the one or more inputs from the user” (claim 9: insignificant extra solution activity);
“generating the first set of ordering rules is based on at least a first portion of the one or more tax data variables in the user tax data store and at least a second portion of external tax data in an external tax data store” (claim 10: further narrowing the recited abstract idea);
“transmitting, to a third party, a finalized tax return corresponding to the user” (claim 11: insignificant extra solution activity);
“recomputing one or more values associated with the one or more tax data variables utilizing the one or more inputs from the user” (claim 12: further narrowing the recited abstract idea);
“refining the ordering of the plurality of questions occurs responsive to a plurality of user inputs being received, wherein the plurality of user inputs corresponds to one or more fields associated with a tax form” (claim 13: further narrowing the recited abstract idea);
“the machine learning model is trained to perform at least one of cluster analysis or multivariate analysis, wherein the optimization algorithm comprises one or more ordering rules” (claim 15: further narrowing the recited abstract idea);
“generating the first output using natural language generation; and providing the first output as an audio signal” (claim 16: “generating” step is further narrowing the recited abstract idea; and “providing” step is insignificant extra solution activity);
“receiving the one or more inputs from the user as an audio signal; and translating the one or more inputs using natural language processing” (claim 17: “receiving” step is insignificant extra solution activity; and “translating” step is further narrowing the recited abstract idea);
“applying the optimization algorithm to compute one or more uncertainty levels corresponding to one or more unknown tax data variables” (claim 18: further narrowing the recited abstract idea);
“responsive to a value distribution from the one or more value distributions exceeding the predetermined threshold, populating a tax return field associated with the value distribution” (claim 19: further narrowing the recited abstract idea); and
“regenerating the ordering of the plurality of questions occurs responsive to terminating the tax return preparation” (claim 20: further narrowing the recited abstract idea).
Conclusion of Dependent Claims Analysis: Dependent claims 2-7, 9-13, and 15-20, do not correct the deficiencies of independent claims 1, 8, and 14, and they are, thus, rejected on the same basis.
Conclusion of the 35 USC § 101 Analysis: Therefore, claims 1-20 are rejected as directed to an abstract idea without “significantly more” under 35 USC § 101.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wang (US 10,157,426 B1) discloses: “Computer-implemented methods, systems and articles of manufacture for rendering paginated interview screens that include arrangements of tax questions or topics representative of question or topic relevancy and that are selected and arranged based at least in part upon determined question or topic categorization and/or ranking. Different pagination structures may be utilized to visually indicate tax question or topic relevancy and to encourage or enable users of a tax return preparation application to answer more pertinent questions or topics before others, while also allowing the user to navigate or scroll questions or topics of paginated screens and to select questions or topics to answer or address at the user's discretion, without being restricted by a pre-determined sequence of interview questions or hard-coded question-and-answer frameworks. Interview screen paginations and content thereof are dynamically updated as runtime data of the electronic tax return is received or updated.”
Ioannou (US 10,628,894 B1) discloses: “A method and system provides a personalized question response to a tax-related question that is received from a user of an electronic tax return preparation system, according to one embodiment. The method and system include receiving user data that is associated with the user of the electronic tax return preparation system, according to one embodiment. The method and system include monitoring interactions between the user and a user interface of the electronic tax return preparation system to determine user analytics, according to one embodiment. The method and system include receiving a tax-related question having question content, according to one embodiment. The method and system include determining a question response that satisfies the tax-related question, according to one embodiment. The method and system include providing the question response to the user through the user interface, according to one embodiment.”
Goldman (AU 2018271296 A1) discloses: “Methods, systems and articles of manufacture for using one or more predictive models to predict which tax matters are relevant to a particular taxpayer during preparation of an electronic tax return. A tax return preparation system accesses taxpayer data such as personal data and/or tax data regarding the particular taxpayer. The system executes a predictive model which receives the taxpayer data as inputs to the predictive model. The predictive model generates as output(s) one or more predicted tax matters which are determined to be likely to be relevant to the taxpayer. The system may then determine tax questions to present to the user based at least in part upon the predicted tax matters determined by the predictive model.”
Lin, Paul, and Tom Hazelbaker. "Meeting the Challenge of Artificial Intelligence." CPA Journal 89.6 (2019).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIRPI H. KANERVO whose telephone number is 571-272-9818. The examiner can normally be reached on Monday – Friday, 10 am – 6 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor Abhishek Vyas can be reached on 571-270-1836. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIRPI H KANERVO/Primary Examiner, Art Unit 3691