DETAILED ACTION
Status of the Claims
The following is a Final Office Action in response to amendments and remarks filed 19 September 2025.
Claims 1, 3-6 and 8-9 have been amended.
Claims 2 and 7 have been cancelled.
Claims 1, 3-6 and 8-9 are pending and have been examined.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicants argue that the 35 U.S.C. 101 rejection under Alice Corp. v. CLS Bank Int’l should be withdrawn; however, the Examiner respectfully disagrees. The Examiner notes that, in order to be patent eligible under 35 U.S.C. 101, the claims must be directed to a patent-eligible concept, to which the instant claims are not directed. Contrary to Applicants’ assertion that the claims are not a mental process/judgment or a business relation/fundamental economic practice/commercial or legal interaction/managing personal behavior, the Examiner notes that analyzing documents for business actions and generating recommendations based upon words extracted from “unstructured natural language text” is a function that teams, collaborators, managers, etc. have traditionally performed for users by reading documents and building from past experience. Next, the claims are not directed to a practical application of the concept. The claims do not result in improvements to the functioning of a computer or to any other technology or technical field. They do not effect a particular treatment for a disease. They are not applied with or by a particular machine. They do not effect a transformation or reduction of a particular article to a different state or thing. And they are not applied in some other meaningful way beyond generally linking the use of the judicial exception (i.e., analyzing documents for business actions and generating recommendations) to a particular technological environment (i.e., with the use of computing components, generic computers, or machine learning). Here, again as noted in the previous rejection, mere instructions to apply an exception using a generic computer component cannot provide an inventive concept – MPEP 2106.05(f). The claims’ recitation of “by a machine learning” only generally links the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h). 
The claims are not patent eligible, and the arguments are not persuasive.
Applicant next argues that the claims are directed to an improvement; however, the Examiner respectfully disagrees. This argument turns on whether the use of a computer or computing components for increased speed and efficiency results in an improvement, and it does not. Nor, in addressing the second step of Alice, does claiming the improved speed or efficiency inherent in applying the abstract idea on a computer provide a sufficient inventive concept. See Bancorp Servs., LLC v. Sun Life Assurance Co. of Can., 687 F.3d 1266, 1278 (Fed. Cir. 2012) (“[T]he fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter.”); CLS Bank, Int’l v. Alice Corp., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc), aff’d, 134 S. Ct. 2347 (2014) (“[S]imply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.” (citations omitted)). As such, the arguments are not persuasive and the rejection is not withdrawn.
Applicant’s remarks with respect to the prior art have been fully considered but are moot in view of the new grounds of rejection, which were necessitated by the amendments.
With respect to arguments directed to any dependent claims that have not been individually addressed, all rejections of those dependent claims are maintained because Applicants did not distinctly and specifically point out the supposed errors in the Examiner's prior Office action (37 CFR 1.111). The Examiner notes that Applicants argue only that the dependent claims should be allowable because the independent claims are unobvious and patentable over the prior art.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-6 and 8-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to a process (an act, or series of acts or steps), a machine (a concrete thing, consisting of parts, or of certain devices and combination of devices), and a manufacture (an article produced from raw or prepared materials by giving these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery). Thus, each of the claims falls within one of the four statutory categories (Step 1). While the claims recite an apparatus and a method, the claims recite analyzing documents for business actions and generating recommendations, which is an abstract idea of a mental process as well as of organizing human activity.
The limitations of “an analysis process for analyzing a dependency structure of a sentence contained in an electronic document comprising unstructured natural language text prepared by a user concerning a project task the analysis comprising dividing the sentence into a plurality of morphemes and assigning a part of speech to each of the morphemes using a morphological analysis model constructing a dependency graph of grammatical relationships between the morphemes using a syntactic analysis model, the dependency graph enabling the identification of a verb for extraction as the task verb a target extraction process for extracting, from the sentence, at least one word indicative of a target of a business action, wherein the target comprises at least one of a client name or an event related to a client inputting the sentence to a first trained model generated by machine learning using training data comprising sentences with ground truth tags assigned to words indicative of client names or client-related events receiving, as an output from the first trained model, the at least one word indicative of the target a task verb extraction process for, based on a result of the analysis process, extracting, as a word indicative of the target, at least one verb having a dependency relation with the at least one word extracted in the target extraction process, generating a recommendation for a future business action by inputting the extracted target and task verb to a second trained model that is generated using historical training data that includes labels indicating whether past projects have succeeded; and displaying the recommendation via a display” as drafted, is a process that, under its broadest reasonable interpretation, covers a mental process—concepts performed in the human mind (including an observation, evaluation, judgment, opinion) and/or organizing human activities--fundamental economic principles or practices (including hedging, insurance, mitigating risk); 
commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) but for the recitation of generic computer components (Step 2A Prong 1). That is, other than reciting “An information processing apparatus comprising at least one processor, the at least one processor carrying out:,” (or “...each being carried out by at least one processor” in claim 8 or “A non-transitory computer-readable storage medium storing therein a program for causing a computer to function as a business action extraction apparatus, the program causing the computer to carry out:” in claim 9) nothing in the claim element precludes the step from practically being performed in the mind or from falling within the methods of organizing human interactions grouping. For example, but for the “An information processing apparatus comprising at least one processor, the at least one processor carrying out:,” (or “...each being carried out by at least one processor” in claim 8 or “A non-transitory computer-readable storage medium storing therein a program for causing a computer to function as a business action extraction apparatus, the program causing the computer to carry out:” in claim 9) language, “analyzing,” “dividing,” “constructing,” “inputting,” “receiving,” “generating,” and “displaying,” in the context of this claim encompass the user manually reading a document and finding certain words and sentences that relate to some sort of business action, which is a mental process/judgment or business relation/fundamental economic practice/commercial or legal interaction/managing personal behavior. 
The limitations are considered together as a single abstract idea rather than as a plurality of separate abstract ideas to be analyzed individually. “For example, in a claim that includes a series of steps that recite mental steps as well as a mathematical calculation, an examiner should identify the claim as reciting both a mental process and a mathematical concept for Step 2A, Prong One to make the analysis clear on the record.” MPEP 2106.04, subsection II.B. Under such circumstances, however, the Supreme Court has treated such claims in the same manner as claims reciting a single judicial exception. Id. (discussing Bilski v. Kappos, 561 U.S. 593 (2010)). Here, because the claim limitations, under their broadest reasonable interpretation and but for the recitation of generic computer components, cover performance of the limitations in the mind and/or as methods of organizing human activity, they fall within the groupings of abstract ideas (Step 2A, Prong One: YES). Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application (Step 2A Prong Two). In particular, the claim only recites one additional element – using a processor or computer to perform the steps. The processor or computer in the steps is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Specifically, the claims amount to nothing more than an instruction to apply the abstract idea using a generic computer, invoking computers as tools by adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea – see MPEP 2106.04(d)(I) discussing MPEP 2106.05(f). The recitation of “by a machine learning” in the limitations also merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element “by machine learning” limits the identified judicial exceptions, this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Accordingly, the combination of these additional elements does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea, even when considered as a whole (Step 2A Prong Two: NO).
The claim does not include a combination of additional elements sufficient to amount to significantly more than the judicial exception (Step 2B). As discussed above with respect to integration of the abstract idea into a practical application (Step 2A Prong Two), the combination of additional elements of using a processor or computer to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Also, as discussed above, the use of “machine learning” simply indicates a field of use, which does not amount to significantly more. Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claim. As such, the claims are not patent eligible, even when considered as a whole (Step 2B: NO).
Claim 3 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 3 recites the same abstract idea of “analyzing documents for business actions.” The claim recites the additional limitation of further including mathematical concepts (the use of models), which is not an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 8, and 9, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claim does not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 4-6 are dependent on claim 1 and include all the limitations of claim 1. Therefore, claims 4-6 recite the same abstract idea of “analyzing documents for business actions.” The claims recite additional limitations further limiting which words/sentences are extracted, which is still directed to the abstract idea previously identified and is not an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 8, and 9, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 1, 3-6 and 8-9 are therefore not eligible subject matter, even when considered as a whole.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 3-6 and 8-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lopez Garcia et al. (US PG Pub. 2023/0297784) and further in view of Agarwal et al. (US PG Pub. 2016/0307210).
As per claims 1, 8, and 9, Lopez Garcia discloses an information processing apparatus comprising at least one processor, the at least one processor carrying out; a business action extraction method comprising; a non-transitory computer-readable storage medium storing therein a program for causing a computer to function as a business action extraction apparatus, the program causing the computer to carry out: (system, computer readable program instructions, data processing apparatus, devices, the flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present inventive concept. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s), Lopez Garcia ¶89-¶92):
an analysis process for analyzing a dependency structure of a sentence contained in an electronic document comprising unstructured natural language text prepared by a user concerning a project task, (The text corpus may include a policy document (e.g., bill text, statutes, business compliance policies, rules, user agreement terms and conditions, etc.). The text corpus may be plural (text corpora), such as co-referencing and/or related policy documents within a broader policy framework (e.g., car safety regulations, occupational safety regulations, import/export tax schemes, etc.). The text corpus may include criteria of determination (e.g., decisions, rules, evidence, etc.) for a policy decision (e.g., applicability of a benefit, penalty, compliance, etc.). The decision modelling from text program 134 may obtain the text corpus from an automated or user-initiated web-crawl (e.g., an internet search for specific policy goals, new policies, policy names/frameworks, specific terms, etc.); a user provided reference source (e.g., a hyperlink); and/or a text corpus manually uploaded by the user (e.g., a scan, typed characters, speech, PDF, etc.). The user may narrow an obtained text corpus’ text by selecting specific portions or delineating included/excluded portions with an annotation and/or an identifier (e.g., page, section, paragraph, sentence, header, sub-header, numeric/roman numeral, unique bullet-point type, letter, and/or date, etc.), Lopez Garcia ¶33-¶35; The decision modelling from text program 134 may perform discourse and sentence level semantic parsing (step 208). Sentence level semantic parsing may be used to identify categories of policy information (e.g., semantic roles, topics, decision points, decision-level type (eligibility v. entitlement amount), rules, rule expressions and conditions, etc.) in the text corpus. 
Semantic roles may be used to determine to whom (e.g., policy subject), for what (e.g., benefit, penalty, compliance certification, etc.), when (e.g., eligible time periods), how (e.g., identify and group conditions, such as for income, participant eligibility, etc.), etc. The identified categories of sentences may be based on a mutual repeated word or inferred from included words (e.g., using the KG). Inferred categories may include an overarching general category, such as occupation, income, and required information. Cross-sentence and/or text span dependencies, decisions, rules, and/or conditions (e.g., and /or, if/when, exclusions, etc.) may also be identified, ¶43; raw text, ¶30; obtain text corpus via web-crawl, ¶31) (Examiner notes raw text obtained from a text corpus via a web-crawl as unstructured natural language text prepared by a user concerning a project task); the analysis comprising
dividing the sentence into a plurality of morphemes and assigning a part of speech to each of the morphemes using a morphological analysis model (The decision modelling from text program 134 may obtain a text corpus; identify terms and syntax within the text corpus; identify sentence similarities and co-references; perform discourse and sentence level semantic parsing; and transform the decision model template into a decision model, Lopez Garcia ¶31); and
constructing a dependency graph of grammatical relationships between the morphemes using a syntactic analysis model, the dependency graph enabling the identification of a verb for extraction as the task verb (the text corpus can be overlaid with a knowledge graph (KG), also known as a semantic network to identify terms. The KG represents a network of terms-e.g., objects, events, situations, concepts, etc.-and illustrates the relationships between them. The KG may be stored in the decision modelling from text data repository 132 and visualized as a graph structure by the user. In an embodiment, the KG may facilitate identification of terms in a text corpus during a cold-start process by accessing a domain of existent identified terms from non-identical decision models and their characteristics (e.g., evidentiary attributes, related topics, synonyms, syntax, correlation with annotations, role in decision models, decisions, rules, etc.). Even if the existent KG does not include elements from new policies, such as The Fish Harvester Benefit Program, it can still be used to support the NER from policy text - e.g., by matching to existent datatypes, (e.g., participant residency status) and codes (e.g., nationality), etc. For example, the definition of a dependent child changes across policies, but it is likely based on common attributes such as age ranges, and whether a child is financially dependent or has an income, Lopez Garcia ¶38; A bag of noun phrases and verb phrases may be extracted from the fragments of text using an abstract meaning representation (AMR) parser. Similarity between any pair of text fragments may be calculated as the aggregated similarity of the text embedding vectors and the bags of noun-phrases of verb-phares (e.g., using S-Bert and cosine similarity between sentence embeddings to identify similar decisions/rules), ¶41);
a target extraction process for extracting, from the sentence, at least one word indicative of a target of a business action, wherein the target comprises at least one of a client name or an event related to a client, (The decision modelling from text program 134 may generate or select a decision model template. In an embodiment, the user may be empowered to alter the decision model template. An existent decision model template may be selected by searching the decision modelling from text data repository 132 for an identified policy, an identified policy type (e.g., a benefit, penalty, compliance certification, etc.), term search, and/or a broad policy topic (e.g., healthcare eligibility, civil fine imposition, standards of vehicle operation compliance, etc.). If an exact identified policy match is found, the decision model template and/or the corresponding decision model may be selected. In an embodiment, the decision modelling from text program 134 may first compare the stored text corpus the decision model template is based on with the presently available text corpus from a reference source to determine whether any substantial textual updates have occurred since the decision model template was produced. The determination of substantial changes may be based upon predetermined parameters for semantic differences, contradictions, and degree of change (e.g., characters, sentences, paragraphs, etc.). If multiple non-specific decision model templates (e.g., policy type, broad policy topic, term search, etc.) are retrieved, the decision modelling from text program 134 may rank the decision model templates based on search term relevance, popularity, matching policy type, broad policy topic, text corpus similarities, etc., Lopez Garcia ¶44); the target extraction process comprising:
inputting the sentence to a first trained model generated by machine learning using training data comprising sentences with ground truth tags assigned to words indicative of client names or client-related events (the decision model template components may be adjusted by the user (e.g., via the decision modelling from text client 122) and used to develop training sets for machine learning. For example, the machine learning may include identified terms, annotations, and/or semantic meanings of sentences (topics of sentence groupings, policy implementer, policy subject, decisions, rules, evidence, etc) from the text corpus. The decision modelling from text program 134 may use a cluster algorithm to group text spans based on embeddings. Using the cluster algorithm, text spans may be grouped within template fields according to identified categories and/or mutual annotations. Text spans may include sentences that are consecutive or non-consecutive, similar or dissimilar groups, from the same or different text corpus, singular or plural, and/or partial or complete. The filled-in decision model template text may be annotated manually by the user or automatically by the decision modelling from text program 134. Relations between text spans relevant to a same decision and/or rule can be determined based on comprising sentence co-references and similarities, conditional dependencies, and/or conjunctive/disjunctive language, and arranged accordingly. Thus, the decision modelling from text program 134 may generate an outline of decision logic for the policy from arranged text spans. In an embodiment, rhetoric structure theory (RST) may be used in parsing sentences and/or text spans (e.g., as annotations), Lopez Garcia ¶47); and
receiving, as an output from the first trained model, the at least one word indicative of the target (Relations between text spans relevant to a same decision and/or rule can be determined based on comprising sentence co-references and similarities, conditional dependencies, and/or conjunctive/disjunctive language, and arranged accordingly. Thus, the decision modelling from text program 134 may generate an outline of decision logic for the policy from arranged text spans. In an embodiment, rhetoric structure theory (RST) may be used in parsing sentences and/or text spans (e.g., as annotations), Lopez Garcia ¶47; The identified categories of sentences may be based on a mutual repeated word or inferred from included words (e.g., using the KG). Inferred categories may include an overarching general category, such as occupation, income, and required information. Cross-sentence and/or text span dependencies, decisions, rules, and/or conditions (e.g., and /or, if/when, exclusions, etc.) may also be identified, ¶43)
a task verb extraction process for, based on a result of the analysis process, extracting, as a word indicative of the target, at least one verb having a dependency relation with the at least one word extracted in the target extraction process (generate decision model, Lopez Garcia ¶44-¶49; The text corpus may include a policy document (e.g., bill text, statutes, business compliance policies, rules, user agreement terms and conditions, etc.). The text corpus may be plural (text corpora), such as co-referencing and/or related policy documents within a broader policy framework (e.g., car safety regulations, occupational safety regulations, import/export tax schemes, etc.). The text corpus may include criteria of determination (e.g., decisions, rules, evidence, etc.) for a policy decision (e.g., applicability of a benefit, penalty, compliance, etc.). The decision modelling from text program 134 may obtain the text corpus from an automated or user-initiated web-crawl (e.g., an internet search for specific policy goals, new policies, policy names/frameworks, specific terms, etc.); a user provided reference source (e.g., a hyperlink); and/or a text corpus manually uploaded by the user (e.g., a scan, typed characters, speech, PDF, etc.). The user may narrow an obtained text corpus’ text by selecting specific portions or delineating included/excluded portions with an annotation and/or an identifier (e.g., page, section, paragraph, sentence, header, sub-header, numeric/roman numeral, unique bullet-point type, letter, and/or date, etc.), ¶33-¶35; see also rule hierarchy, ¶45);
Lopez Garcia does not expressly disclose generating a recommendation for a future business action by inputting the extracted target and task verb to a second trained model that is generated using historical training data that includes labels indicating whether past projects have succeeded; and displaying the recommendation via a display.
However, Agarwal teaches generating a recommendation for a future business action by inputting the extracted target and task verb to a second trained model that is generated using historical training data that includes labels indicating whether past projects have succeeded; and displaying the recommendation via a display (recommending potential user actions that a user can take in a given context while analyzing data, Agarwal ¶6; widget corresponding to the recommendation, ¶9; The recommendation scoring module 350 may give higher weights to user actions performed by a user if recommendations based on the user have higher success rate (as measured by the rate at which other users accept the recommendations corresponding to the users actions.) The recommendation determination module 320 determines the recommendations of actions based on past actions taken by subsets of users in a similar context. The recommendation determination module 320 invokes the subset determination module 340 to filter past user actions based on given criteria to determine subsets of user actions. In an embodiment, the subset determination module 340 determines user actions of subsets of users, for example, actions by users belonging to a particular customer or to a set of customers, actions by users belonging to a particular industry, user actions by users having a particular role within their organization, actions by users having a particular education/qualification, actions by users having a particular level of experience that a user has analyzing data using the multi-tenant data analysis system, and so on. The subsets of users used for determining recommendations for a user may be specified by the user as a configuration parameter. Alternatively, the subset of users may be configured for a specific customer such that the subset is determined for all users associated with that customer, ¶69-¶70).
Both the Agarwal and Lopez Garcia references are analogous in that both are directed to the analysis of business rules and business decisions. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Agarwal’s ability to provide recommendations based upon previous success in Lopez Garcia’s system to improve the system and method, with a reasonable expectation that this would result in an analysis system able to provide recommendations.
The motivation being that there is often loss of information as a result of communication gaps between the data analysts and the business experts leading to incorrect reports being generated as a result of data analysts misinterpreting the requirements provided by the business experts or the business experts misinterpreting the reports provided by the data analysts. These errors lead to incorrect data analysis being performed by the business experts that may lead to the business expert making incorrect strategic decisions. Furthermore, conventional systems require the business expert to rely on the data analysts to perform the data analysis for them. Data analysts perform the data analysis without having the context or the expertise that the business experts have. As a result there are long delays in the time that the business expert requests certain information and the time that the data analysts provide the requested information. As a result, the data analyst often becomes a bottleneck in the data analysis process. Often business experts that need to make time sensitive decisions forego the information provided by the data analysts since it arrives too late for use. The business expert may use the information provided by the data analyst for validation or confirmation of their decisions ex post facto. However, the information provided by the data analysts loses its main purpose of guiding the decision making process of the business expert. As a result, conventional data analysis and visualization tools are often inadequate for bridging the gap between data analysts and the business experts or for providing timely information to business experts and do not provide the functionality needed by the users performing the analysis of the business data (Agarwal ¶5).
Furthermore, the limitations “concerning a project” merely recite the intended use or result of a method step positively claimed and are not considered positive method steps or apparatus/system elements.
As per claim 3, Lopez Garcia and Agarwal disclose as shown above with respect to claim 1. Lopez Garcia further discloses wherein in the target extraction process, the at least one processor extracts, as an additional word indicative of the target, a word having a dependency relation with the at least one task verb extracted in the task verb extraction process (However, non-specific decision model templates may require alteration. Furthermore, during a cold-start process, the decision modelling from text program 134 must generate the decision model template from scratch. In this case, the decision model template fields may be determined based on an identified policy outline (e.g., organized categories and decision logic) in the text corpus. In an embodiment, the parsed sentence semantics may be used in discourse level semantic parsing to organize the sentences into a policy decision outline based on semantic relationships between sentences (e.g., sentences organized by decision logic and inclusion in respective decisions, decision hierarchy, rules, rule hierarchy, topics, semantic roles within rules, etc.). The decision logic (e.g., decisions, rules, evidence, requirements, dependencies/conditions, etc.) may be obtained by analysis of annotations, semantic meanings, similar sentence groupings, conjunctive/disjunctive language (e.g., and, or, neither, both, etc.) within and between sentences/text spans, conditional dependencies (e.g., must, if, when, after, etc.) between sentences/text spans, and implied dependencies of sentences/text spans (e.g., tabbed/bullet-pointed text segments beneath a sentence/paragraph), Lopez Garcia ¶45; based on syntactic relationships between words in a sentence, ¶37).
As per claim 4, Lopez Garcia and Agarwal disclose as shown above with respect to claim 1. Lopez Garcia further discloses wherein in the target extraction process, the at least one processor further extracts an additional word indicative of the target from one of a plurality of documents having a time-series relation with the electronic document, wherein the additional word is identical or similar to a word previously extracted from the sentence as the target (Returning to the flowchart of FIG. 2A, the decision modelling from text program 134 may identify terms and syntax within the text corpus (step 204). The terms (individual words and/or compound words) within the text corpus may be identified using natural language processing (NLP) techniques (e.g., named-entity recognition (NER)). The text corpus may include at least one decision (e.g., an entitlement, user agreement, benefit, penalty, compliance certification, etc.). Broadly, the identified terms may include terms related to criteria for the decision (e.g., evidence, decisions, rules, etc.). In an embodiment, more specific identified terms may include the overall product of a decision (e.g., name of an entitlement, user agreement, benefit, penalty, compliance certification, etc.), applicable time periods (e.g., year/month/days/hour/minute/second, quarters, trimesters, intervals of time, time period type (e.g., tax year, fiscal year, etc.), effected parties (e.g., specific persons, classes of persons, organizations, etc.), reference to measurements (e.g., income, revenue, currency symbols/names, quantities, etc.), words/phrases of comparison (e.g., greater than, less than, minimum, maximum, threshold, best, worst, etc.), and/or personal data (e.g., address, phone number, email, full name, bank details, tax identification number, social security number, etc.). The identified terms may be indicated (e.g., highlighted, underlined, bolded, etc.) and/or annotated by category. 
Omitted (but implied) terms may be written-in and annotated (e.g., bracketed) as well, Lopez Garcia ¶36).
As per claim 5, Lopez Garcia and Agarwal disclose as shown above with respect to claim 4. Lopez Garcia further discloses wherein in the target extraction process, the at least one processor extracts the additional word from a subsequent document corresponding to a project task that is subsequent to the project task of the electronic document (The decision modelling from text program 134 may generate or select a decision model template. In an embodiment, the user may be empowered to alter the decision model template. An existent decision model template may be selected by searching the decision modelling from text data repository 132 for an identified policy, an identified policy type (e.g., a benefit, penalty, compliance certification, etc.), term search, and/or a broad policy topic (e.g., healthcare eligibility, civil fine imposition, standards of vehicle operation compliance, etc.). If an exact identified policy match is found, the decision model template and/or the corresponding decision model may be selected. In an embodiment, the decision modelling from text program 134 may first compare the stored text corpus the decision model template is based on with the presently available text corpus from a reference source to determine whether any substantial textual updates have occurred since the decision model template was produced. The determination of substantial changes may be based upon predetermined parameters for semantic differences, contradictions, and degree of change (e.g., characters, sentences, paragraphs, etc.). If multiple non-specific decision model templates (e.g., policy type, broad policy topic, term search, etc.) are retrieved, the decision modelling from text program 134 may rank the decision model templates based on search term relevance, popularity, matching policy type, broad policy topic, text corpus similarities, etc., Lopez Garcia ¶44).
As per claim 6, Lopez Garcia and Agarwal disclose as shown above with respect to claim 1. Lopez Garcia further discloses wherein in the target extraction process, the at least one processor extracts, as the word indicative of the target, a first word having a dependency relation with the at least one word extracted in the target extraction process and a second word having a dependency relation with the first word (The text corpus may include a policy document (e.g., bill text, statutes, business compliance policies, rules, user agreement terms and conditions, etc.). The text corpus may be plural (text corpora), such as co-referencing and/or related policy documents within a broader policy framework (e.g., car safety regulations, occupational safety regulations, import/export tax schemes, etc.). The text corpus may include criteria of determination (e.g., decisions, rules, evidence, etc.) for a policy decision (e.g., applicability of a benefit, penalty, compliance, etc.). The decision modelling from text program 134 may obtain the text corpus from an automated or user-initiated web-crawl (e.g., an internet search for specific policy goals, new policies, policy names/frameworks, specific terms, etc.); a user provided reference source (e.g., a hyperlink); and/or a text corpus manually uploaded by the user (e.g., a scan, typed characters, speech, PDF, etc.). The user may narrow an obtained text corpus’ text by selecting specific portions or delineating included/excluded portions with an annotation and/or an identifier (e.g., page, section, paragraph, sentence, header, sub-header, numeric/roman numeral, unique bullet-point type, letter, and/or date, etc.), Lopez Garcia ¶33-¶35; see also rule hierarchy, ¶45).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW B WHITAKER whose telephone number is (571)270-7563. The examiner can normally be reached on M-F, 8am-5pm, EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin can be reached on (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW B WHITAKER/Primary Examiner, Art Unit 3629