Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/07/2026 has been considered by the examiner.
Drawings
The drawings submitted on 07/01/2024 have been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 12, and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite an artificial intelligence (AI) summarization workflow software tool configured to identify and extract annotations associated with a document, generate a prompt including the annotations and content associated with the document, and output the prompt to generative AI logic configured to generate at least a summary of the document based on the annotations.
The limitations as drafted recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting, for claim 1, “processor,” “non-transitory storage medium,” and “artificial intelligence”; for claim 12, “generative artificial intelligence (AI) logic,” “first software module,” and “a second software module”; and for claim 19, “processors,” “non-transitory storage medium,” “artificial intelligence (AI),” and “large language model,” nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the recitation of “processor,” “non-transitory storage medium,” “artificial intelligence,” “large language model,” “first software module,” and “a second software module” in claims 1, 12, and 19, a user could manually mark the contextual keywords in a document while reading it, write one or more summaries of the document on paper using those contextual keywords, and present the written summary to another person. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “mental processes” grouping of abstract ideas. Accordingly, claims 1, 12, and 19 recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, in claim 1, the non-transitory storage medium, processors, and artificial intelligence (AI) are recited at a high level of generality (i.e., a generic computer with a processor executing an AI software tool to perform the annotation and summarization functions) such that they amount to no more than mere instructions to apply the exception using a generic computer component. The “generative AI logic” merely applies the abstract idea in a general manner, without any limitation on how the trained generative AI logic functions. The AI logic is described at such a high level that it amounts to (as noted above) using a computer with generic AI to apply the abstract idea. These limitations recite only the outcomes of “identify and extract annotations” and “generate a prompt including the annotations,” without any details (see specification [0054]-[0061]) about how the outcomes are accomplished.
Similarly, in claim 12, the non-transitory storage medium, software, computing device, and artificial intelligence (AI) are recited at a high level of generality (i.e., a generic computer executing an AI software tool to perform the annotation and summarization functions) such that they amount to no more than mere instructions to apply the exception using a generic computer component. The steps of “a first software module configured, upon execution, to identify and extract annotations associated with the document” and “generate a prompt including the annotations and content associated with the document” are similarly mere data gathering and output recited at a high level of generality, and thus insignificant extra-solution activity. The “generative AI logic” merely applies the abstract idea in a general manner, without any limitation on how the trained generative AI logic functions. The AI logic is described at such a high level that it amounts to (as noted above) using a computer with generic AI to apply the abstract idea. These limitations recite only the outcomes of “identify and extract annotations” and “generate a prompt including the annotations,” without any details (see specification [0054]-[0061]) about how the outcomes are accomplished. The outputting of the summary through an interactive display is well-understood, routine, and conventional activity.
Similarly, in claim 19, the non-transitory storage medium, network resource including large language models, computing device, and artificial intelligence (AI) are recited at a high level of generality (i.e., a generic computer executing an AI software tool with an associated large language model to perform the annotation and summarization functions) such that they amount to no more than mere instructions to apply the exception using a generic computer component. The steps of “an artificial intelligence (AI) summarization workflow software tool configured to identify and extract annotations associated with a document” and “generate a prompt including the annotations and content associated with the document” are similarly mere data gathering and output recited at a high level of generality, and thus insignificant extra-solution activity. The “generative AI logic” merely applies the abstract idea in a general manner, without any limitation on how the trained generative AI logic functions. The AI logic is described at such a high level that it amounts to (as noted above) using a computer with generic AI to apply the abstract idea. These limitations recite only the outcomes of “identify and extract annotations” and “generate a prompt including the annotations,” without any details (see specification [0054]-[0061]) about how the outcomes are accomplished.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claims 1, 12, and 19 are thus directed to an abstract idea.
Claims 1, 12, and 19 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the computing device, non-transitory storage medium, interactive display, and AI logic are at best the equivalent of merely adding the words “apply it” to the judicial exception (see MPEP 2106.05(f)). The limitations of “identify and extract annotations,” “generate a prompt,” and outputting a summary of the document are mere data gathering and output recited at a high level of generality and are therefore considered insignificant extra-solution activity (see MPEP 2106.05(g)). The outputting of the summary through an interactive display is well-understood, routine, and conventional activity (see MPEP 2106.05(d)). Even when considered in combination, the additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. Claims 1, 12, and 19 are thus patent ineligible.
Claim 2 depends on claim 1 and includes all the limitations of claim 1. The limitation of claim 2, “wherein the annotations include highlighted text or image within the document,” similarly falls within the mental processes grouping of abstract ideas, such as a user manually highlighting the text or images in the document, and is considered mere data gathering and insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 3 depends on claim 1 and includes all the limitations of claim 1. The limitation of claim 3, “wherein the annotations include a comment inserted into and adjacent to selected text or images within the document,” similarly falls within the mental processes grouping of abstract ideas, such as a user manually writing comments adjacent to the selected text or images within the document, and is considered mere data gathering and insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 4 depends on claim 1 and includes all the limitations of claim 1. The limitation of claim 4, “wherein the annotations include graphical images representing notes placed adjacent to text within the document or notes placed within margins of the document,” similarly falls within the mental processes grouping of abstract ideas, such as a user manually writing comments or notes adjacent to the selected text or images within the margins of a paper document. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 5 depends on claim 1 and includes all the limitations of claim 1. The limitation of claim 5 recites “wherein the generative AI logic includes one or more large language models.”
This limitation merely applies the abstract idea in a general manner, without any limitation on how the generative AI logic including one or more large language models functions. The AI logic including one or more large language models is described at such a high level that it amounts to (as noted above) using a computer with generic AI, including large language models, to apply the abstract idea. These limitations recite only the outcomes of “identify and extract annotations” and “generate a prompt including the annotations,” without any details (see specification [0054]-[0061]) about how the outcomes are accomplished. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 6 depends on claim 5 and includes all the limitations of claim 5. The limitation of claim 6 similarly falls within the mental processes grouping of abstract ideas, such as a user manually displaying a paper summary to another person. The “large language models” merely apply the abstract idea in a general manner, without any limitation on how the large language models function. The large language models are described at such a high level that it amounts to using a computer with large language models to apply the abstract idea. These limitations recite only the outcomes of “identify and extract annotations” and “generate a prompt including the annotations,” without any details (see specification [0054]-[0061]) about how the outcomes are accomplished. The outputting of the summary through an interactive display is well-understood, routine, and conventional activity (see MPEP 2106.05(d)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 7 depends on claim 6 and includes all the limitations of claim 6. The limitation of claim 7, “wherein each of the one or more summaries is generated by the one or more large language models with a different writing style,” similarly falls within the mental processes grouping of abstract ideas, such as a user manually writing various types of summaries using the contextual keywords. The “large language models” merely apply the abstract idea in a general manner, without any limitation on how the large language models function. The large language models are described at such a high level that it amounts to using a computer with large language models to apply the abstract idea. These limitations recite only the outcomes of “identify and extract annotations” and “generate a prompt including the annotations,” without any details (see specification [0054]-[0061]) about how the outcomes are accomplished. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 8 depends on claim 1 and includes all the limitations of claim 1. The limitation of claim 8, “wherein the AI summarization workflow software tool is further configured to compute and assign a ranking for each annotation of the annotations based on a prescribed level of importance or usefulness of each annotation in creation of the summary,” similarly falls within the mental processes grouping of abstract ideas, such as a user manually computing and assigning a ranking for each contextual keyword (annotation) among all the contextual keywords collected, based on a level of importance or usefulness of each contextual keyword in writing the summary. The recitation that the AI summarization workflow software tool is further configured to compute and assign a ranking is at best the equivalent of merely adding the words “apply it” to the judicial exception (see MPEP 2106.05(f)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 9 depends on claim 8 and includes all the limitations of claim 8. The limitation of claim 9, “wherein a first annotation of the annotations constitutes a first type of annotation that is assigned a higher ranking than a second type of annotation or the first type of annotation of a first subtype is assigned a higher ranking than the first type of annotation of a second subtype,” similarly falls within the mental processes grouping of abstract ideas, such as a user manually collecting each of the different types and associated subtypes of contextual keywords, with each type ranked based on the higher or lower importance or usefulness of each contextual keyword in writing the summary. The limitation is considered mere data gathering and insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 10 depends on claim 9 and includes all the limitations of claim 9. The limitation of claim 10, “wherein the first type of annotation corresponds to a comment and a second type of annotation corresponds to a highlight,” similarly falls within the mental processes grouping of abstract ideas, such as a user manually writing the first type of contextual keywords as comments and highlighting the second type, based on the higher or lower importance or usefulness of each contextual keyword in writing the summary. The limitation is considered mere data gathering and insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 11 depends on claim 9 and includes all the limitations of claim 9. The limitation of claim 11, “wherein the first type of annotation of the first subtype corresponds to a first highlight of text within the document with a first color highlight and the second type of annotation of the second subtype corresponds to a second highlight of text within the document with a second color highlight different than the first color highlight,” similarly falls within the mental processes grouping of abstract ideas, such as a user manually highlighting the first type of contextual keywords of the first subtype with a first color and the second type of contextual keywords of the second subtype with a second color different than the first color. The limitation is considered mere data gathering and insignificant extra-solution activity (see MPEP 2106.05(g)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 13 depends on claim 12 and includes all the limitations of claim 12. The limitation of claim 13, “wherein the generative AI logic includes one or more large language models deployed as a cloud resource or as part of an on-premises hosted service,” is, similar to claim 12, within the mental processes grouping of abstract ideas. The limitation merely applies the abstract idea in a general manner, without any limitation on how the trained generative AI logic, including one or more large language models deployed as a cloud resource or as part of an on-premises hosted service, functions to generate the summary. The limitation is at best the equivalent of merely adding the words “apply it” to the judicial exception (see MPEP 2106.05(f)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 14 depends on claim 12 and includes all the limitations of claim 12. The limitation of claim 14, “wherein the annotations identified and extracted by the first software module include one or more of (i) highlighted text within the document, (iii) a comment inserted into and adjacent to selected text or images within the document, or (iii) graphical images representing notes placed within margins of the document,” repeats limitations of claims 2-4 and similarly does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 15 depends on claim 12 and includes all the limitations of claim 12. The limitation of claim 15, “wherein each of the one or more summaries is generated with a different writing style,” repeats the limitation of claim 7 and similarly does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 16 depends on claim 12 and includes all the limitations of claim 12. The limitation of claim 16, “wherein the first software module is further configured to compute and assign a ranking for each annotation of the annotations based on a prescribed level of importance or usefulness of each annotation in creation of at least the summary,” repeats the limitation of claim 8 and similarly does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 17 depends on claim 16 and includes all the limitations of claim 16. The limitation of claim 17, “wherein a first annotation of the annotations constitutes a first type of annotation that is assigned a higher ranking than a second type of annotation and the first type of annotation of a first subtype is assigned a higher ranking than the first type of annotation of a second subtype,” repeats the limitation of claim 9 and similarly does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 18 depends on claim 17 and includes all the limitations of claim 17. The limitation of claim 18, “wherein (1) the first type of annotation corresponds to a comment and a second type of annotation corresponds to a highlight or (2) the first type of annotation of the first subtype corresponds to a first highlight of text within the document with a first color highlight and the second type of annotation of the second subtype corresponds to a second highlight of text with a second color highlight different than the first color highlight,” repeats limitations of claims 10-11 and similarly does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claim 20 depends on claim 19 and includes all the limitations of claim 19. The limitation of claim 20, “wherein the annotations include any of: (i) highlighted text or a highlighted image within the document, (ii) a comment inserted into and adjacent to selected text or images within the document, or a graphical image representing a note placed adjacent to text within the document,” repeats limitations of claims 2-4 and similarly does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Claims 2-11, 13-18, and 20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the computing device, non-transitory storage medium, interactive display, and AI logic with large language models are at best the equivalent of merely adding the words “apply it” to the judicial exception (see MPEP 2106.05(f)). The limitations of “identify and extract annotations,” “generate a prompt,” and outputting a summary of the document are mere data gathering and output recited at a high level of generality and are therefore considered insignificant extra-solution activity (see MPEP 2106.05(g)). The outputting of the summary through an interactive display is well-understood, routine, and conventional activity (see MPEP 2106.05(d)). Even when considered in combination, the additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. Claims 2-11, 13-18, and 20 are thus similarly patent ineligible.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-2, 5, 12, 14, and 19-20 of pending Application No. 18/760981 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 5, and 9-10 of co-pending Application No. 18/761025 (reference application).
Claim 1 of the pending application corresponds to claim 1 of the co-pending application; claim 2 of the pending application corresponds to claim 2 of the co-pending application; claim 5 of the pending application corresponds to claim 5 of the co-pending application; claim 12 of the pending application corresponds to claim 9 of the co-pending application; claim 14 of the pending application corresponds to claim 10 of the co-pending application; claim 19 of the pending application corresponds to claim 5 of the co-pending application (note: the broader recitation of the “network resource including one or more large language models” limitation of the pending application is anticipated by the co-pending application’s recitation of “generative AI logic includes one or more large language models,” since AI logic is fundamentally a network resource); and claim 20 of the pending application corresponds to claim 2 of the co-pending application.
Although the claims at issue are not identical, they are not patentably distinct from each other because the pending claims are broader than the co-pending claims and are thus anticipated by the co-pending claims. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Pending Application 18/760981
Co-Pending Application 18/761025
1. A computing device, comprising: one or more processors; and a non-transitory storage medium communicatively coupled to the one or more processors, the non-transitory storage medium comprises an artificial intelligence (AI) summarization workflow software tool configured to identify and extract annotations associated with a document, generate a prompt including the annotations and content associated with the document, and output the prompt to generative AI logic configured to generate at least a summary of the document based on the annotations.
2. The computing device of claim 1, wherein the annotations include highlighted text or image within the document.
5. The computing device of claim 1, wherein the generative AI logic includes one or more large language models.
12. A non-transitory storage medium including software, operating as part of a computing device, conducting analytics on a document to assist generative artificial intelligence (AI) logic configured to generate and return at least a summary of the document in response to submission of the document, comprising: a first software module configured, upon execution, to identify and extract annotations associated with the document, generate a prompt including the annotations and content associated with the document, and output the prompt for submission to the generative AI logic to cause generation and return of at least the summary of the document, wherein the summary is based at least in part on the annotations; and a second software module configured to generate, upon execution, an interactive screen display for concurrently rendering one or more summaries, including at least the summary, produced by the generative AI logic based on analysis of the annotations included in the prompt.
14. The non-transitory storage medium of claim 12, wherein the annotations identified and extracted by the first software module include one or more of (i) highlighted text within the document, (iii) a comment inserted into and adjacent to selected text or images within the document, or (iii) graphical images representing notes placed within margins of the document.
19. A generative artificial intelligence (AI) summarization platform comprising: a network resource including one or more large language models; and a computing device communicatively coupled to the network resource via a network, the computing device comprises one or more processors, and a non-transitory storage medium communicatively coupled to the one or more processors, the non-transitory storage medium includes an artificial intelligence (AI) summarization workflow software tool configured to identify and extract annotations associated with a document, generate a prompt including the annotations and content associated with the document, and output the prompt to the one or more large language models configured to generate at least a summary of the document based on the annotations.
20. The generative AI summarization platform of claim 19, wherein the annotations include any of: (i) highlighted text or a highlighted image within the document, (ii) a comment inserted into and adjacent to selected text or images within the document, or a graphical image representing a note placed adjacent to text within the document.
1. A computing device, comprising: one or more processors; and a non-transitory storage medium communicatively coupled to the one or more processors, the non-transitory storage medium comprises an artificial intelligence (AI) summarization workflow software tool configured to identify and extract annotations associated with a document, generate one or more prompts including the annotations and content associated with the document, and output the one or more prompts to generative AI logic, wherein the computing device is further configured to receive from generative AI logic a plurality of summaries in response to the one or more prompts, where a first summary of the plurality of summaries is formed with at least a different writing style than a second summary of the plurality of summaries.
2. The computing device of claim 1, wherein the annotations include at least one of (i) highlighted text within the document, (ii) a comment inserted into and adjacent to selected text or images within the document, or (iii) graphical images representing notes placed adjacent to text within the document.
5. The computing device of claim 1, wherein the generative AI logic includes one or more large language models.
9. A non-transitory storage medium including software that, when executed by one or more processors, perform operations to generate a plurality of summaries of a document under analysis, the non-transitory storage medium comprising: an artificial intelligence (AI) summarization workflow software tool configured to (i) identify and extract annotations associated with a document, (ii) generate one or more prompts including the annotations and content associated with the document, and (iii) output the one or more prompts to generative AI logic, wherein the AI summarization workflow software tool is further configured to output a plurality of summaries from the generative AI logic in response to the one or more prompts, where a first summary of the plurality of summaries is formed with at least a different writing style than a second summary of the plurality of summaries.
10. The non-transitory storage medium of claim 9, wherein the annotations extracted by the AI summarization workflow software tool includes at least one of (i) highlighted text within the document, (ii) a comment inserted into and adjacent to selected text or images within the document, or (iii) graphical images representing notes placed adjacent to text within the document.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5-6, 8-9, 12-14, 16, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Singh et al. (US 2022/0237373 A1).
Regarding Claim 1, Singh et al. teach: A computing device, comprising: one or more processors (one or more processors; Paragraph [0009]); and a non-transitory storage medium communicatively coupled to the one or more processors, the non-transitory storage medium comprises an artificial intelligence (AI) summarization workflow software tool configured to (a non-transitory computer readable storage medium coupled to processors, the non-transitory computer readable storage medium comprises an artificial intelligence for generating correct document summaries (artificial intelligence summarization workflow software tool); Paragraphs [0009, 0021]) identify and extract annotations associated with a document, generate a prompt (summary templates to generate category-specific summaries based on input annotation data; Paragraph [0058]) including the annotations and content (features and grouping of entities) associated with the document (identify and extract particular features (annotations) associated with the input document, generate annotation data including the features and grouping of entities (content) associated with the document; category-specific document summary templates to enable training of ML models to generate category-specific summaries based on input annotation data; Paragraphs [0027-0028, 0030, 0041, 0058, 0104-0105]), and output the prompt to generative AI logic (machine learning model) configured to generate at least a summary of the document based on the annotations (generate summaries based on annotation data from documents associated with a particular document category; Paragraph [0034]).
Regarding Claim 2, Singh et al. teach: The computing device of claim 1, wherein the annotations include highlighted text or image (annotation data that indicates word location or pixel locations of an optimal bounded four-sided polygon and a centroid of each word) within the document ([0030] …In such an example, the annotator 132 may receive features from an input document, and, based on an output of the categorizer 128 indicating that the input document is a lease, the annotator 132 may generate annotation data that indicates a name associated with the first party, a name associated with the second party, a length of time associated with the duration, a date associated with the starting date, and a dollar amount associated with the payment value. [0031] The feature data processed by the annotator 132 to generate annotation data may include qualitative word features, quantitative word features, pixel features (as described above), or a combination thereof. The qualitative word features may include length of words, percentage of capital letters in words, fuzzy representations of words, other qualitative word features, or a combination thereof, as non-limiting examples. The quantitative word features may include word counts, word locations (e.g., pixel locations of an optimal bounded four-sided polygon and a centroid of each word), other quantitative word features, or a combination thereof, as non-limiting examples. Based on the input feature data, the annotator 132 may be configured to tag each word or phrase indicated by the input features either to be classified as an entity or not to be classified. The words or phrases classified as entities may be compared to the category-specific group of entities to determine which label to apply to each of the tagged words or phrases, as further described below. [0038] …The word layout features may include pixel locations of words within a pixel array associated with the input document...).
Regarding Claim 5, Singh et al. teach: The computing device of claim 1, wherein the generative AI logic includes one or more large language models (Fig.1, categorizer 128) (See rejection of claim 1 and [0027] The categorizer 128 is configured to categorize input documents into corresponding categories of a group predefined document categories based on similarities between the input documents and documents of the predefined document categories. [0028] To illustrate, the categorizer 128 may be configured to perform vectorization and natural language processing (NLP) on an input document, such as tokenization, lemmatization, sentencization, part-of-speech tagging, bag of words vectorization, term frequency-inverse document frequency (TF-IDF) vectorization, stop-character parsing, named entity recognition, semantic relation extraction, and the like, to generate word features, such as word identification (e.g., for generating a word corpus), word frequencies, word ratios, and the like, to generate or extract word features from the input document. The categorizer 128 may be configured to generate word layout features based on the input document, such as pixel locations of words in the input document, distances between words, and the like. The categorizer 128 may also be configured to generate pixel layout features based on the input document, such as pixel locations of other elements in the document, distances between elements, element types, and the like… [0029] In some implementations, the categorizer 128 may include or access (e.g., at the memory 106, a storage device of the document processing device 102, or a device that is coupled to or accessible to the document processing device 102 via the networks 160) the first ML models 130 that are configured to categorize input documents. 
For example, the first ML models 130 may include a single ML model or multiple ML models that are configured to categorize documents into the predefined document categories based on input feature data. In some implementations, the first ML models 130 may be implemented as one or more neural networks (NNs).).
Regarding Claim 6, Singh et al. teach: The computing device of claim 5, wherein the non-transitory storage medium further comprises graphic user interface (GUI) generation logic configured to generate an interactive screen display (Fig.1, GUI of the user device 140) for concurrently rendering one or more summaries (GUI may visually display one or more document summaries, including the document summary 122), including the summary, produced by the one or more large language models (categorizer 128) based on analysis of the annotations (based on input annotation data) included in the prompt (template) (See rejection of claim 5 and [0027] For example, the categorizer 128 may be configured to analyze the input documents to identify and extract particular features (also referred to as key performance indicators (KPIs)) that are used to assign the input documents to predefined document categories that include documents having the most similar features. [0039] The categorizer 128 may assign the input document (e.g., corresponding to the input data 150) to one of multiple predefined document categories based on the first feature data 112. To illustrate, the categorizer 128 may provide first feature data 112 as input data to first ML models 130 to assign the input document to the document category 120 (e.g., one of the multiple predefined document categories). [0047] After generation of the document summary 122, the document processing device 102 may generate an output 152 that includes the document summary 122. In some implementations, the output 152 may be provided to the user device 140 to enable display of a graphical user interface (GUI) at the user device 140 (or the output 152 may be used to cause display of a GUI at the document processing device 102). The GUI may visually display one or more document summaries, including the document summary 122. 
[0053] The third training data may be generated based on one or more document summary templates (or reference document summaries) for the different predetermined document categories. [0058] The training data generated by the training engine 212 may also be based on category-specific document summary templates to enable training of ML models to generate category-specific summaries based on input annotation data.).
Regarding Claim 8, Singh et al. teach: The computing device of claim 1, wherein the AI summarization workflow software tool is further configured to compute and assign a ranking for each annotation of the annotations based on a prescribed level of importance or usefulness (selected extractions correspond to each of category-specific entities) of each annotation in creation of the summary (See rejection of claim 1 and [0040] The categorizer 128 may assign the input document to the document category associated with the highest probability score (or a highest probability score that satisfies a threshold, if additional categorizations are possible). In the previous example, if the first probability score is 0.7, the second probability score is 0.2, the third probability score is 0.55, and the fourth probability score is 0.4, the document category 120 assigned to the input document is legal documents. [0043] The annotator 132 may provide extractions that are associated with probability scores that satisfy a threshold to a second subset of the second ML models 134 for determination of probability scores that indicate the likelihoods that the selected extractions correspond to each of category-specific entities (or alternatively the annotator 132 may discard extractions that are associated with probability scores that satisfy a threshold if the probability scores indicate the likelihood that the respective extractions are not entity values, and the remaining extractions may be provided to the second subset of the second ML models 134).).
Regarding Claim 9, Singh et al. teach: The computing device of claim 8, wherein a first annotation of the annotations constitutes a first type of annotation (based at least in part on the first feature data 112 including a word ratio for the word “leaser” may assign the input document to a “lease” category) that is assigned a higher ranking than a second type of annotation (based at least in part on the first feature data 112 including a pixel location of a vertex of a table being located at a first particular pixel location and a centroid of a logo being located at a second particular pixel position may assign the input document to a “sales brochure” category) or the first type of annotation of a first subtype is assigned a higher ranking (assign the input document to the document category associated with the highest probability score) than the first type of annotation of a second subtype (See rejection of claim 1 and [0039] The first ML models 130 may be trained to categorize documents into the multiple predefined document categories based on input feature data based on the documents. For example, based at least in part on the first feature data 112 including a word ratio for the word “leaser” that satisfies a threshold (among other relevant features), the first ML models 130 may assign the input document to a “lease” category. As another example, based at least in part on the first feature data 112 including a pixel location of a vertex of a table being located at a first particular pixel location and a centroid of a logo being located at a second particular pixel position (among other relevant features), the first ML models 130 may assign the input document to a “sales brochure” category. 
These illustrative examples are simplified for ease of understanding, and the first ML models 130 may be configured to categorize documents based on any types of underlying similarities between documents, including similarities that are highly complex, through supervised learning, as further described herein. [0040] As an illustrative example, if the multiple predefined document categories include legal documents, accounting documents, human resources documents, and marketing documents, the probability scores 114 may include a first probability score indicating a likelihood that the input document is a legal document, a second probability score indicating a likelihood that the input document is an accounting document, a third probability score indicating a likelihood that the input document is a human resources document, and a fourth probability score indicating a likelihood that the input document is a marketing document. The categorizer 128 may assign the input document to the document category associated with the highest probability score (or a highest probability score that satisfies a threshold, if additional categorizations are possible). In the previous example, if the first probability score is 0.7, the second probability score is 0.2, the third probability score is 0.55, and the fourth probability score is 0.4, the document category 120 assigned to the input document is legal documents.).
Regarding Claim 12, Singh et al. teach: A non-transitory storage medium including software, operating as part of a computing device ([0009] In another particular aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by one or more processors, cause the one or more processors to perform operations for category-specific document summarization.), conducting analytics on a document to assist generative artificial intelligence (AI) logic (artificial intelligence) configured to generate and return at least a summary of the document in response to submission of the document, comprising ([0021] To illustrate, a system of the present disclosure may use trained artificial intelligence and machine learning to categorize, annotate, and summarize a document. The system may categorize an input document based on similarities between features of the input document and features of documents from multiple predefined document categories by utilizing a first set of one or more machine learning (ML) models. After the input document is categorized, the system may annotate the input document by identifying and tagging specific words or phrases as entity values of one or more category-specific entities (e.g., words or phrases that are highly informative and relevant to summarizing documents of the respective document category). To annotate the input document, the system may utilize a particular set of one or more second ML models that is selected from multiple sets of category-specific second ML models based on the particular set of second ML models corresponding to the determined document category. Each of the multiple sets of category-specific second ML models may be configured to output annotation data for a respective category of documents based on input feature data.
To generate a summary of the input document, the system may generate a summarized document based on annotation data that indicates the tagged entity values from the input document. To generate the summarized document, the system may utilize a particular set of one or more third ML models that is selected from multiple sets of category-specific third ML models based on the particular set of third ML models corresponding to the determined document category. Each of the multiple sets of category-specific third ML models may be configured to output summaries for a respective category of documents based on input annotation data. The various ML models may be trained using training data that is based on labeled and annotated documents of each of the predefined document categories and one or more summary templates corresponding to each of the predefined document templates.): a first software module (machine learning (ML) model) configured, upon execution, to identify and extract annotations associated with the document, generate a prompt (summary templates to generate category-specific summaries based on input annotation data; Paragraph [0058]) including the annotations and content (features and grouping of entities) associated with the document (identify and extract particular features (annotations) associated with the input document, generate annotation data including the features and grouping of entities (content) associated with the document; category-specific document summary templates to enable training of ML models to generate category-specific summaries based on input annotation data.
Paragraphs [0027-0028, 0030, 0041, 0058, 0104-0105]), and output the prompt for submission to the generative AI logic to cause generation and return of at least the summary of the document, wherein the summary is based at least in part on the annotations (generate summaries based on annotation data from documents associated with a particular document category; Paragraph [0034]); and a second software module (Fig.1, graphical user interface (GUI) of user device 140) configured to generate, upon execution, an interactive screen display for concurrently rendering one or more summaries, including at least the summary, produced by the generative AI logic based on analysis of the annotations included in the prompt (See rejection of claim 1 and [0047] After generation of the document summary 122, the document processing device 102 may generate an output 152 that includes the document summary 122. In some implementations, the output 152 may be provided to the user device 140 to enable display of a graphical user interface (GUI) at the user device 140 (or the output 152 may be used to cause display of a GUI at the document processing device 102). The GUI may visually display one or more document summaries, including the document summary 122.).
Regarding Claim 13, Singh et al. teach: The non-transitory storage medium of claim 12, wherein the generative AI logic includes one or more large language models (Fig.1, Categorizer 128) deployed as a cloud resource or as part of an on-premises hosted service (See rejection of claim 5 and [0023] The document processing device 102 includes one or more processors 104, a memory 106, one or more communication interfaces 124, a training engine 126, a categorizer 128, a first set of one or more machine learning (ML) models 130 (referred to herein as “the first ML models 130”), an annotator 132, a second set of one or more ML models 134 (referred to herein as “the second ML models 134”), a summarizer 136, and a third set of one or more ML models 138 (referred to herein as “the third ML models 138”). For example, in some implementations, computing resources and functionality described in connection with the document processing device 102 may be provided in a distributed system using multiple servers or other computing devices, or in a cloud-based system using computing resources and functionality provided by a cloud-based environment that is accessible over a network, such as the one of the one or more networks 160. To illustrate, one or more operations described herein with reference to the document processing device 102 may be performed by one or more servers or a cloud-based system that communicates with one or more client or user devices, a workbench platform (e.g., executed by a server or distributed among multiple devices), or the like. [0027] The categorizer 128 is configured to categorize input documents into corresponding categories of a group predefined document categories based on similarities between the input documents and documents of the predefined document categories.).
Regarding Claim 14, Singh et al. teach: The non-transitory storage medium of claim 12, wherein the annotations identified and extracted by the first software module include one or more of (i) highlighted text within the document (annotation data that indicates word location or pixel locations of an optimal bounded four-sided polygon and a centroid of each word), (ii) a comment inserted into and adjacent to selected text or images within the document, or (iii) graphical images representing notes placed within margins of the document (See rejection of claim 2, [0030] and [0038]).
Regarding Claim 16, Singh et al. teach: The non-transitory storage medium of claim 12, wherein the first software module is further configured to compute and assign a ranking for each annotation of the annotations based on a prescribed level of importance or usefulness (selected extractions correspond to each of category-specific entities) of each annotation in creation of at least the summary (the same rejection as applied to Claim 8 applies).
Regarding Claim 19, Singh et al. teach: A generative artificial intelligence (AI) summarization platform comprising: a network resource including one or more large language models (Fig.1, categorizer 128) ([0027] The categorizer 128 is configured to categorize input documents into corresponding categories of a group predefined document categories based on similarities between the input documents and documents of the predefined document categories. [0028] To illustrate, the categorizer 128 may be configured to perform vectorization and natural language processing (NLP) on an input document, such as tokenization, lemmatization, sentencization, part-of-speech tagging, bag of words vectorization, term frequency-inverse document frequency (TF-IDF) vectorization, stop-character parsing, named entity recognition, semantic relation extraction, and the like, to generate word features, such as word identification (e.g., for generating a word corpus), word frequencies, word ratios, and the like, to generate or extract word features from the input document.); and a computing device communicatively coupled to the network resource via a network, the computing device comprises one or more processors, and a non-transitory storage medium communicatively coupled to the one or more processors ([0009] In another particular aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by one or more processors, cause the one or more processors to perform operations for category-specific document summarization.
[0023] The document processing device 102 includes one or more processors 104, a memory 106, one or more communication interfaces 124, a training engine 126, a categorizer 128, a first set of one or more machine learning (ML) models 130 (referred to herein as “the first ML models 130”), an annotator 132, a second set of one or more ML models 134 (referred to herein as “the second ML models 134”), a summarizer 136, and a third set of one or more ML models 138 (referred to herein as “the third ML models 138”). For example, in some implementations, computing resources and functionality described in connection with the document processing device 102 may be provided in a distributed system using multiple servers or other computing devices, or in a cloud-based system using computing resources and functionality provided by a cloud-based environment that is accessible over a network, such as the one of the one or more networks 160. To illustrate, one or more operations described herein with reference to the document processing device 102 may be performed by one or more servers or a cloud-based system that communicates with one or more client or user devices, a workbench platform (e.g., executed by a server or distributed among multiple devices), or the like.), the non-transitory storage medium includes an artificial intelligence (AI) summarization workflow software tool (artificial intelligence) configured to identify and extract annotations associated with a document, generate a prompt including the annotations and content associated with the document, and output the prompt to the one or more large language models configured to generate at least a summary of the document based on the annotations (See rejection of claim 1).
Regarding Claim 20, Singh et al. teach: The generative AI summarization platform of claim 19, wherein the annotations include any of: (i) highlighted text or a highlighted image (annotation data that indicates word location or pixel locations of an optimal bounded four-sided polygon and a centroid of each word) within the document (the same rejection as applied to Claim 2 applies), (ii) a comment inserted into and adjacent to selected text or images within the document, or (iii) a graphical image representing a note placed adjacent to text within the document.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
12. Claims 3-4, 10-11, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Singh et al. in view of Mullins et al. (US 11030259 B2).
Regarding claim 3, Singh et al. do not teach: The computing device of claim 1, wherein the annotations include a comment inserted into and adjacent to selected text or images within the document.
Mullins et al. teach: annotations include a comment (Fig.6-7, indication includes at least one of a color or a pattern; and wherein the indication relates to a relevance of the document portion on a scale) inserted into and adjacent to selected text or images (annotated visual representation including an indication adjacent to text of the document portion) within the document (Col 2, line 64 to col 3, line 35 - In some embodiments, the facility also or instead annotates the document with text describing aggregation results, including copies of some or all of the matching queries; a numerical count of the number of matching queries; a textual indication of where in the search result the query most frequently occurred; the name of a user or group of users whose queries most frequently matched; the time period during which queries most frequently matched; etc. (12) In various embodiments, the facility visually augments the document representation with a variety of charts and graphs, such as a chart or graph displayed beside each document portion, a chart or graph that the facility displays for a document portion if the user hovers over or touches a document portion or a different kind of search information annotation applied to it, etc.….In some embodiments, the facility uses an alternate display scheme in which, rather including document searching information in the context of a representation of the document, the facility displays a chart or graph in which portions of the document identified; for example, in some embodiments, the facility does so by displaying a stack chart in which each stack identifies a different portion of the document to which it corresponds, and whose height indicates the aggregation result for that document portion. 
Claim 1: … annotate the document portion's visual representation in the document's visual representation in accordance with the obtained aggregation result; and display the annotated visual representation of the document to be displayed, the annotated visual representation including an indication adjacent to text of the document portion, wherein, for each of the two or more document portions whose visual representations are contained by the document's visual representation, the aggregated search transactions include performed search transactions in whose results the document portion was actually included; wherein the indication includes at least one of a color or a pattern; and wherein the indication relates to a relevance of the document portion on a scale…).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Singh et al. to include the teaching of Mullins et al. above in order to provide an annotation conveying information associating the performed aggregated search transactions in whose results the document portion was actually included, to provide a visual representation including an indication adjacent to text of the document portion, and to cause the annotated visual representation of the document to be displayed.
Regarding claim 4, Singh et al. do not teach: The computing device of claim 1, wherein the annotations include graphical images representing notes placed adjacent to text within the document or notes placed within margins of the document.
Mullins et al. teach: The computing device of claim 1, wherein the annotations include graphical images representing notes placed adjacent to text within the document (visually augments the document representation with a variety of charts and graphs, such as a chart or graph displayed beside each document portion) or notes placed (annotated visual representation including an indication adjacent to text of the document portion, or a colored rectangle indicates via its color the total number of document search transactions whose queries match it, relative to the other paragraphs) within margins of the document (See Mullins teaching in rejection of Claim 3 and Col 2, lines 3-14: In some embodiments, for each portion of a document, the facility (1) performs one or more aggregations across the document search transactions whose queries match the portion, then (2) displays a visual representation of the portion that reflects the result of this aggregation. For example, in some embodiments, the facility (1) counts the number of document search transactions whose queries match each paragraph of a document, then (2) displays a visual representation of the document in which, in the margin beside each paragraph, a colored rectangle indicates via its color the total number of document search transactions whose queries match it, relative to the other paragraphs.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Singh et al. to include the teaching of Mullins et al. above in order to provide an annotation conveying information associating performed aggregated search transactions, in whose results the document portion was actually included, with a visual representation including an indication adjacent to the text of the document portion, and further to cause the annotated visual representation of the document to be displayed.
Regarding claim 10, Singh et al. do not teach: The computing device of claim 9, wherein the first type of annotation corresponds to a comment and a second type of annotation corresponds to a highlight.
Mullins teaches: The computing device of claim 9, wherein the first type of annotation corresponds to a comment and a second type of annotation corresponds to a highlight (see Mullins's teaching in the rejection of claim 3 and Figs. 6-7, where the first type of annotation corresponds to a comment, i.e., "modern enterprise" in block 711 and "pay as you go," "cloud," and "scalable" in block 721, and a second type of annotation corresponds to a highlight, i.e., different shading.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Singh et al. to include the teaching of Mullins et al. above in order to provide an annotation conveying information associating performed aggregated search transactions, in whose results the document portion was actually included, with a visual representation including an indication adjacent to the text of the document portion, and further to cause the annotated visual representation of the document to be displayed.
Regarding claim 11, Singh et al. do not teach: The computing device of claim 9, wherein the first type of annotation of the first subtype corresponds to a first highlight of text within the document with a first color highlight and the second type of annotation of the second subtype corresponds to a second highlight of text within the document with a second color highlight different than the first color highlight.
Mullins teaches: The computing device of claim 9, wherein the first type of annotation of the first subtype (high) corresponds to a first highlight of text (Fig. 7, block 721) within the document (Fig. 7, blocks 710-740 combined) with a first color (Fig. 7, shading 754) highlight and the second type of annotation of the second subtype (low) corresponds to a second highlight (Fig. 7, shading 751) of text within the document with a second color highlight different than the first color highlight (see Mullins's teaching in the rejections of claims 3-4 and Figs. 6-7, where the first type of annotation corresponds to a comment, i.e., "modern enterprise" in block 711, "what should be the focus when looking at cloud" in block 731, and "pay as you go," "cloud," and "scalable" in block 721; and a second type of annotation corresponds to a highlight, i.e., each comment's frequency with different shading, with block 721 representing high-frequency shading and block 731 representing low-frequency shading.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Singh et al. to include the teaching of Mullins et al. above in order to provide an annotation conveying information associating performed aggregated search transactions, in whose results the document portion was actually included, with a visual representation including an indication adjacent to the text of the document portion, and further to cause the annotated visual representation of the document to be displayed.
Regarding claim 17, Singh et al. do not teach: The non-transitory storage medium of claim 16, wherein a first annotation of the annotations constitutes a first type of annotation that is assigned a higher ranking than a second type of annotation and the first type of annotation of a first subtype is assigned a higher ranking than the first type of annotation of a second subtype.
Mullins teaches: The non-transitory storage medium of claim 16, wherein a first annotation of the annotations constitutes a first type of annotation that is assigned a higher ranking (Fig. 6, block 621) than a second type of annotation (Fig. 6, block 631) and the first type of annotation of a first subtype (Fig. 7, block 721) is assigned a higher ranking than the first type of annotation of a second subtype (Fig. 7, block 731) (see the rejection of claim 11 and Figs. 6-7, where the first type of annotation is block 621 of Fig. 6, which is assigned a higher ranking with respect to the second-type annotation of block 631, and further the first subtype is block 721 of Fig. 7, which is assigned a higher ranking with respect to the second-subtype annotation of block 731.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Singh et al. to include the teaching of Mullins et al. above in order to provide an annotation conveying information associating performed aggregated search transactions, in whose results the document portion was actually included, with a visual representation including an indication adjacent to the text of the document portion, and further to cause the annotated visual representation of the document to be displayed.
Regarding claim 18, Mullins teaches: The non-transitory storage medium of claim 17, wherein (1) the first type of annotation corresponds to a comment and a second type of annotation corresponds to a highlight (see the rejection of claim 10) or (2) the first type of annotation of the first subtype corresponds to a first highlight of text within the document with a first color highlight and the second type of annotation of the second subtype corresponds to a second highlight of text with a second color highlight different than the first color highlight (see the rejection of claim 11).
Claim(s) 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Singh et al. in view of Ramamurthy et al. (US 2020/0311122 A1).
Regarding claims 7 and 15, Singh et al. do not teach: wherein each of the one or more summaries is generated with a different writing style.
Ramamurthy et al. teach: one or more summaries is generated with a different writing style ([0066] In some examples, summary personalization unit 110 may generate personalized summaries to indications of relevance of meeting items in different ways according to different styles. Users may configure their preferred style in policies 232. Based on the configured style, summary personalization unit 110 may determine how the relevant information is displayed to each participant once the relevant information determination is made using the techniques described above. In one example output illustrated in FIG. 4, summary personalization unit 110 generates personalized summaries 402, 412, 424 to include only the relevant information to each participant. Thus, as illustrated in FIG. 4, summary personalization unit 110 may generate personalized summary 402 to include all of the meeting items from non-personalized summary 430 for the Project Manager, while summary personalization unit 110 may generate personalized summary 412 to include only the meeting item “New action item 1” for Project Resources 1. Similarly, if a Reviewer were present in the meeting, the only relevant meeting items for this individual may be the “Agenda” and the “Summary”; thus, for this individual, the summary personalization unit 110 may generate personalized summary 424 to only include these two meeting items. [0073] Personalization styles manager 470 may control the style of personalized summaries generated by summary personalization unit 442.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Singh et al. to include the teaching of Ramamurthy et al. above in order to generate personalized summaries, with indications of relevance of items from a non-personalized summary, in different ways according to different styles.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The prior art of record Li et al. (US 2020/0081909 A1), "Multi-Document Summary Generation Method And Apparatus, And Terminal," teaches a summary generation method that includes training a corpus using a deep neural network model to obtain a word vector representation of an eigenword, obtaining a candidate sentence set in the corpus based on a preset query word, obtaining semantic similarity between different candidate sentences in the candidate sentence set based on the word vector representation of the eigenword in order to obtain similarity between two candidate sentences and construct a sentence graph model, calculating a weight of a candidate sentence after the sentence graph model is constructed, and finally generating a document summary using a maximal marginal relevance algorithm.
The pertinent art of record Chu et al. (US 2025/0013827 A1), "GENERATING EXPLANATIONS OF CONTENT RECOMMENDATIONS USING LANGUAGE MODEL NEURAL NETWORKS," teaches: The system can obtain metadata for the particular content item and then generate a natural language text description of the metadata. For example, the system can process the metadata and an appropriate prompt using the language model neural network or another language model neural network to generate a natural language text description of the metadata. As a particular example, the natural language text description can be a natural language summary of the metadata. That is, the appropriate prompt can instruct the language model neural network to generate a summary of the metadata.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD K ISLAM whose telephone number is (571) 270-5878. The examiner can normally be reached Monday-Friday, EST (IFP).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras Shah can be reached at 571-270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMAD K ISLAM/Primary Examiner, Art Unit 2653