DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to because of these informalities in Step 120 of Figure 1:
“presetpredetermined” should be “preset predetermined”
“categorycategories to” should be “category to”
“determininge” should be “determining”
“a a minutes sentences” should be “minutes sentences”
“belonging correspondingbelonging” should be “correspondingly belonging”
“to each of each preset the” should be “to each of the preset”
“categorycategories” should be “categories”.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office Action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, Applicants will be notified and informed of any required corrective action in the next Office Action. The objection to the drawings will not be held in abeyance.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: Sentence-Based Minutes Processing to Determine Minutes Categories of Associated Minutes Sentences.
The disclosure is objected to because of the following informalities:
In ¶[0035], “another sentences” should be “another sentence”.
In ¶[0035], “to each predetermined minutes categories” should be “to each predetermined minutes category”.
In ¶[0041], “to each predetermined minutes categories” should be “to each predetermined minutes category”. (two occurrences)
In ¶[0042], “to a certain predetermined minutes categories” should be “to a certain predetermined minutes category”.
In ¶[0044], “each predetermined minutes categories” should be “each predetermined minutes category”. (two occurrences)
In ¶[0068], “each predetermined minutes categories” should be “each predetermined minutes category”. (two occurrences)
In ¶[0069], “and a minutes can be generated” should be “and minutes can be generated”.
In ¶[0079], “and display a minutes” should be “and display minutes”.
Appropriate correction is required.
Claim Objections
Claims 1 to 6, 13, 15 to 19, and 24 to 31 are objected to because of the following informalities:
Independent claims 1, 13, and 31 set forth a limitation of “the other sentence” which should be “the another sentence” to provide proper antecedent basis for the prior limitation of “another sentence”.
Claims 3 and 16 set forth a limitation of “a output” which should be “an output”.
Claims 4 and 17 set forth a limitation of “a predetermined minutes categories” which should be “a predetermined minutes category”.
Appropriate correction is required.
Election/Restrictions
Applicants’ election without traverse of Invention I, Claims 1 to 6, 13, and 15 to 19, in the reply filed on 21 July 2025 is acknowledged.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 to 3, 5 to 6, 13, 15 to 16, 18 to 19, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (U.S. Patent Publication 2021/0407499) in view of Choi et al. (U.S. Patent No. 12,205,024).
Concerning independent claims 1, 13, and 31, Sun et al. discloses a method, system and computer program product for automatically generating conference minutes, comprising:
“obtaining a to-be-processed text” – conference minutes generation comprises acquiring a text conference record (Abstract); minutes generation includes acquiring a text conference record (¶[0022] - ¶[0023]: Figure 1: Step 101);
“performing minutes extraction on the to-be-processed text based on predetermined minutes categories to determine a minutes sentence belonging to each of the predetermined minutes categories” – a text conference record is divided into a plurality of paragraphs (Abstract); generating a conference paragraph summary comprises evaluating each sentence in a conference paragraph to obtain an evaluation of the sentence (¶[0035]: Figure 4: Step 10221); different topic entities may be extracted from a text conference record based on topic entities that are identified; based on a same topic entity, opinions and attitudes expressed by several participant entities on the topic entity are obtained (¶[0062] - ¶[0063]); keywords and high-frequency words from a text conference record are extracted by performing semantic calculation on the text conference record, and conference topics are determined based on the keywords and high-frequency words; words may comprise categories generated; keywords and high-frequency words representing conference topics may be determined by screening sentences containing the keywords and high-frequency words; semantic calculation on the text record may be performed so that content that users are interested in can be extracted (¶[0069]); here, topics of a conference minutes record are “predetermined minutes categories”, and sentences are analyzed to determine a topic for a sentence based on keywords and high-frequency words in a sentence (“determine a minutes sentence belonging to each of the predetermined minutes categories”);
“determining another sentence associated with the minutes sentence from the to-be-processed text, and storing an association relationship between the minutes sentence and the other sentence” – candidate sentences are determined according to an evaluation value of each sentence in the conference paragraph to form a candidate sentence set; a conference paragraph summary is generated based on the candidate sentence set (¶[0035]: Figure 4: Steps 10222 to 10223); a similarity is calculated between a sentence and other sentences in a paragraph; a coherence value is obtained by comparing the correlation between the sentence and other sentences in the paragraph, and a higher coherence value indicates that the relationship between the sentence and the other sentences in the paragraph is closer; a coherence value is calculated by comparing a coherence degree of a correlation between the sentence and the other sentences in the paragraph (¶[0040] - ¶[0044]); here, determining a coherence of a sentence and other sentences in the same paragraph is “determining another sentence associated with the minutes sentence from the to-be-processed text”; a conference paragraph summary may be generated based on the candidate sentence set; a conference paragraph summary may be generated by directly arranging the candidate sentences in order (¶[0049]); candidate summary sentences are determined according to an evaluation value of each summary sentence in the conference paragraph summary to form a candidate summary sentence set, and a conference record summary is generated based on the candidate summary sentence set (¶[0051]); broadly, “storing an association relationship between the minutes sentence and the other sentence” is performed by associating sentences in a generated conference summary; that is, an association is ‘stored’ for all coherent sentences in a sentence set of a summary by associating the sentences that belong together in the summary.
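For illustration only, the coherence comparison mapped above can be sketched as follows. All function names and sample sentences are hypothetical and do not appear in Sun et al.; a simple word-overlap score stands in for the semantic similarity computed in the reference.

```python
# Hypothetical sketch: associate a minutes sentence with the most coherent
# other sentence in the same paragraph, in the spirit of Sun et al.
# paragraphs [0040]-[0044]. Names and data are illustrative only.

def word_overlap_similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets (a stand-in for the
    semantic similarity calculation described in the reference)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def associate(minutes_sentence: str, paragraph: list[str]) -> dict:
    """Store an association relationship between the minutes sentence and
    the most similar other sentence in the paragraph."""
    others = [s for s in paragraph if s != minutes_sentence]
    best = max(others, key=lambda s: word_overlap_similarity(minutes_sentence, s))
    return {"minutes_sentence": minutes_sentence, "associated_sentence": best}

paragraph = [
    "The budget was approved by the board.",
    "The board discussed the budget at length.",
    "Lunch was served at noon.",
]
link = associate(paragraph[0], paragraph)
```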
Concerning independent claims 1, 13, and 31, Sun et al. arguably anticipates these independent claims by disclosing all of the limitations of minutes generation, but does not clearly disclose “predetermined categories” to determine a “sentence belonging to each of the predetermined categories”. Still, Sun et al. appears to disclose that topics are equivalent to ‘categories’ and that the topics of individual sentences are determined based on keywords and high-frequency words. Even if this limitation of determining sentences as belonging to each of a plurality of predetermined categories is not disclosed by Sun et al., it is taught by Choi et al. Generally, Choi et al. teaches classifying categories of data using a neural network. (Abstract) At least one category is determined with respect to at least one sentence of unstructured data based on classification levels of a category of the data. An expected category of a plurality of sentences of unstructured data is repeatedly determined and at least one category with respect to the at least one sentence of the unstructured data is determined. (Column 2, Lines 9 to 25) A computing device may predict a category of a newly received sentence, and may predict the category of the new sentence based on a similarity with a centroid for each category. (Column 9, Lines 3 to 10: Figure 1) A computing device may segment each sentence of data using sentence boundary recognition, and may segment each sentence by receiving the data as input and dividing text included in the data in units of sentences. (Column 16, Lines 21 to 26: Figure 4) A computing device may determine at least one category with respect to at least one sentence of data 510, and may determine a category as a TV or Lamp. (Column 18, Lines 1 to 16: Figure 5) An objective is to comprehensively analyze and accurately understand data according to a classification system and a user’s intention. (Column 1, Lines 50 to 60) It would have been obvious to one having ordinary skill in the art to classify sentences into predetermined categories as taught by Choi et al. in automatically generating conference minutes from a plurality of coherent sentences in Sun et al. for a purpose of comprehensively analyzing and accurately understanding data according to a classification system and a user’s intention.
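For illustration only, the centroid-based category prediction attributed to Choi et al. can be sketched as follows. A toy bag-of-words vector stands in for the trained neural sentence embedding of the reference; all names and sample sentences are hypothetical.

```python
# Hypothetical sketch: nearest-centroid category prediction in the spirit
# of Choi et al. A bag-of-words "embedding" substitutes for the neural
# sentence encoder of the reference. Data and names are illustrative only.
from collections import Counter
import math

def embed(sentence: str) -> Counter:
    """Toy sentence embedding: lowercase word counts."""
    return Counter(sentence.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def centroid(sentences: list[str]) -> Counter:
    """Average embedding of the labeled sentences for one category."""
    total = Counter()
    for s in sentences:
        total.update(embed(s))
    return Counter({w: c / len(sentences) for w, c in total.items()})

labeled = {
    "TV": ["the tv screen is bright", "turn on the tv"],
    "Lamp": ["the lamp light is dim", "switch off the lamp"],
}
centroids = {cat: centroid(ss) for cat, ss in labeled.items()}

def predict(sentence: str) -> str:
    """Predict the category of a new sentence by its most similar centroid."""
    return max(centroids, key=lambda c: cosine(embed(sentence), centroids[c]))
```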
Concerning claims 2 and 15, Sun et al. discloses that keywords and high-frequency words from a text conference record are extracted by performing semantic calculation on the text conference record, and conference topics are determined based on the keywords and high-frequency words; words may comprise categories generated; keywords and high-frequency words representing conference topics may be determined by screening sentences containing the keywords and high-frequency words; semantic calculation on the text record may be performed so that content that users are interested in can be extracted (¶[0069]). Here, determining conference topics is “at least one of . . . a text topic category.”
Concerning claims 3 and 16, Choi et al. teaches performing sentence embedding for each category and determining a sentence vector of each sentence (“performing vectorization processing on each sentence in the to-be-processed text to determine a vectorization result for the sentence”) (column 8, lines 60 to 66: Figure 1: Step 150); an artificial intelligence model may be created through training by a learning algorithm, and may include a plurality of neural network layers (“a pretrained text recognition model”) (column 6, line 65 to column 7, line 11); a computing device may determine at least one category with respect to at least one sentence of the data according to a classification system of category of the data using a neural network (column 9, line 53 to column 10, line 23: Figure 2); a computing device may perform sentence embedding for each category, and predict a category of an input sentence (“inputting the vectorization result into a pre-trained text recognition model, and determining the [minutes] sentence corresponding to each of the predetermined [minutes] categories according to a output result of the text recognition model, the text recognition model being configured to recognize whether the sentence belongs to one of the predetermined [minutes] categories”) (column 13, line 66 to column 15, line 57: Figure 3).
Concerning claims 5 and 18, Sun et al. discloses comparing a sentence and other sentences according to a semantic similarity (¶[0040]); keywords and high-frequency words from a text conference record are extracted by performing semantic calculation on the text conference record, and conference topics are determined based on the keywords and high-frequency words; words may comprise categories generated; keywords and high-frequency words representing conference topics may be determined by screening sentences containing the keywords and high-frequency words; semantic calculation on the text record may be performed so that content that users are interested in can be extracted (¶[0069]). Here, determining that sentences of conference minutes are semantically similar to topics is “text matching on the to-be-processed text based on category indications” and “determining the minutes sentence belonging to the predetermined minutes categories according to the text matching.” That is, keywords and high-frequency words of sentences are ‘matched’ to topics, or “categories”. Similarly, Choi et al. teaches that a computing device may predict a category of an input sentence based on a category of a most similar centroid. (Column 15, Lines 41 to 57: Figure 4: Step 470) Here, comparing an embedding vector of a sentence to a centroid of a most similar category is “text matching”.
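For illustration only, the keyword screening mapped to “text matching” above can be sketched as follows. All names, the stopword list, and the sample record are hypothetical and do not appear in Sun et al.

```python
# Hypothetical sketch: high-frequency words stand in for conference topics
# (per the mapping of Sun et al. paragraph [0069]), and sentences containing
# a topic keyword are "matched" to that topic. Data is illustrative only.
from collections import Counter

STOPWORDS = {"the", "a", "was", "is", "and", "to", "of", "in"}

def high_frequency_words(sentences: list[str], top_n: int = 2) -> list[str]:
    """Extract the most frequent non-stopword words as topic keywords."""
    counts = Counter(
        w for s in sentences for w in s.lower().split() if w not in STOPWORDS
    )
    return [w for w, _ in counts.most_common(top_n)]

def match_sentences(sentences: list[str], keywords: list[str]) -> dict:
    """Screen the sentences containing each topic keyword."""
    return {k: [s for s in sentences if k in s.lower().split()] for k in keywords}

record = [
    "the budget was approved",
    "the budget covers hiring",
    "hiring starts in march",
]
topics = high_frequency_words(record)
matched = match_sentences(record, topics)
```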
Concerning claims 6 and 19, Sun et al. discloses that two sentences in a paragraph that are to be compared may or may not be adjacent. (¶[0042]) Broadly, if two sentences are adjacent, then they are in adjacent positions (“determining the other sentence corresponding to the minutes sentence based on a position of each sentence in the to-be-processed text and a position of the minutes sentence in the to-be-processed text”). Similarly, Choi et al. teaches determining a sentence vector of each sentence and performing sentence embedding for each category to predict a category of a newly received sentence (column 8, line 60 to column 9, line 6: Figure 1); a computing device may predict a category of an input sentence from a sentence vector and a sentence embedding of each category to determine a most similar centroid as a predicted category (“determining the other sentence . . . based on a vectorization result of each sentence in the to-be-processed text”) (column 13, line 65 to column 15, line 58: Figure 3).
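For illustration only, the position-based determination discussed for claims 6 and 19 can be sketched as follows, taking the “other sentence” to be a sentence adjacent to the minutes sentence. All names and sample text are hypothetical.

```python
# Hypothetical sketch: determine the other sentence based on the position
# of each sentence relative to the minutes sentence in the text.
# Names and data are illustrative only.

def adjacent_sentences(text_sentences: list[str], minutes_index: int) -> list[str]:
    """Return the sentences at positions adjacent to the minutes sentence."""
    neighbors = []
    if minutes_index > 0:
        neighbors.append(text_sentences[minutes_index - 1])
    if minutes_index < len(text_sentences) - 1:
        neighbors.append(text_sentences[minutes_index + 1])
    return neighbors

doc = ["Intro.", "Decision reached.", "Follow-up assigned.", "Closing."]
```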
Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (U.S. Patent Publication 2021/0407499) in view of Choi et al. (U.S. Patent No. 12,205,024) as applied to claims 1, 3, 13, and 16 above, and further in view of Singh Bawa et al. (U.S. Patent Publication 2022/0237373).
Choi et al. briefly describes generating training data as a result of performing sentence embedding on a plurality of sentences including a sentence about TV, a sentence about drama, a sentence about image quality, a sentence about a mobile phone, a sentence about performance of a tablet PC, and a sentence about a voice assistant. (Column 17, Lines 16 to 32: Figure 4: Step 480) Implicitly, Choi et al. appears to be training a neural network as an artificial intelligence model to classify sentences into categories according to training data of these sentences labeled by category. Choi et al., then, may arguably be understood to teach the limitations of “wherein the text recognition model is a machine learning model, a training sample of the machine learning model being a plurality of sentences having category indications, and each of the category indications indicating a predetermined [minutes] categories to which a corresponding sentence belongs.” Here, labeling training sentences as being about TV, drama, image quality, a mobile phone, performance of a tablet PC, and a voice assistant are “category indications, and each of the category indications indicating a predetermined [minutes] category to which a corresponding sentence belongs.” Even if training using training data that is labeled by category is not expressly taught by Choi et al., Singh Bawa et al. teaches automated categorization and summarization of documents using machine learning to select a document category of a plurality of document categories. (Abstract) A first set of ML models may be trained to determine underlying similarities and differences between different categories of documents based on word features from labeled documents of different categories. (¶[0005]) The various models may be trained using training data that is based on labeled and annotated documents of each of the predefined categories. (¶[0021]) Training engine 126 may generate training data 110 based on labeled, e.g., categorized, documents from databases 142. (¶[0026]) An objective is to leverage machine learning and artificial intelligence to categorize and summarize various categories of documents. (¶[0001]) It would have been obvious to one having ordinary skill in the art to provide training samples having category indications to train a machine learning model as taught by Singh Bawa et al. to categorize sentences with a machine learning model of Choi et al. for a purpose of leveraging machine learning and artificial intelligence to categorize and summarize categories of documents.
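For illustration only, training on sentences paired with category indications can be sketched as follows. A toy per-category word-frequency table substitutes for the neural network of the cited references; all names and sample sentences are hypothetical.

```python
# Hypothetical sketch: training samples are sentences paired with category
# indications, as discussed for claims 4 and 17. The "model" here is a toy
# word-frequency table, not the neural network of the cited references.
from collections import Counter, defaultdict

training_samples = [
    ("the tv picture is sharp", "TV"),
    ("a drama airs tonight", "drama"),
    ("the tv remote is missing", "TV"),
]

def train(samples: list[tuple[str, str]]) -> dict:
    """Accumulate word counts per category; each category indication
    labels the category to which its sentence belongs."""
    model = defaultdict(Counter)
    for sentence, category in samples:
        model[category].update(sentence.lower().split())
    return model

def predict(model: dict, sentence: str) -> str:
    """Score each category by the summed counts of the sentence's words."""
    words = sentence.lower().split()
    return max(model, key=lambda c: sum(model[c][w] for w in words))

model = train(training_samples)
```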
Claims 24 to 30 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (U.S. Patent Publication 2021/0407499) in view of Choi et al. (U.S. Patent No. 12,205,024) as applied to claims 1, 3, 13, and 16 above, and further in view of Meretab (U.S. Patent Publication 2011/0016416).
Concerning claims 24 and 28, Sun et al. discloses generating a conference summary of a plurality of sentences, but does not provide for display and selection of alternative minutes sentences for the summary in the limitations of “displaying the predetermined minutes category and the minutes sentence corresponding to the predetermined minutes category” and “displaying, in response to detecting an association display instruction corresponding to a target minutes sentence among the minutes sentences, another minutes sentence in the to-be-processed text associated with the target minutes sentence, based on the target minutes sentence and the stored association relationship between the target minutes sentence and the other sentence.”
Concerning claims 24 and 28, Meretab teaches storing and retrieving individual sentences on an interactive display so that relevant sentences from a single source or different sources can be retrieved, aggregated, and displayed along with context specifically tailored for each user-relevant sentence. (Abstract) Based on source material, subunits of sentences can be indexed. (¶[0050]) Sentences can be distinguished by sentence boundary disambiguation (SBD) techniques. (¶[0054]) An end user could first request from an article all identifier-associated sentences containing the word ‘gain’, and that request would designate sentences 225 and 238, or a user may start with all identifier-associated sentences containing the word ‘future’, and request associated sentences 225 and 232. (¶[0057]: Figure 2A) Indexed sentences can be stored in a record. (¶[0081]) Specifically, given a displayed sentence, a sentence navigation bar is provided that displays one or more additional sentences based on a defined relation between the displayed sentence and the additional sentences. Each relation can be associated with an icon that gets triggered by an event, e.g., an action performed by a user such as a mouse click. A computer designates sentences based on relations so that designated sentences can be retrieved, displayed, and/or stored. Given an initial sentence and a specific relation of interest, a computer system may designate the sentences that meet the criteria of the relation. (¶[0083] - ¶[0085]) An event on a navigation bar, e.g., a mouse click, mouse over, or touch on a touch screen, causes additional sentences to be displayed. (¶[0091]) Figures 3 to 9 of Meretab illustrate displaying a predetermined category of ‘dividend’ and a sentence in a predetermined category of ‘dividend’ (‘We paid out approximately . . .’) (“displaying the predetermined [minutes] category and the [minutes] sentence corresponding to the predetermined [minutes] category”), and then in response to a mouse click by a user (“an association display instruction”), displaying another sentence that meets criteria of a relation (“the stored association relationship between the target [minutes] sentence and the other sentence”) so that another sentence relating to ‘dividend’ is displayed (‘As Dave stated . . .’) (“displaying, in response to detecting an association display instruction corresponding to a target [minutes] sentence among the [minutes] sentences, another sentence in the to-be-processed text associated with the target [minutes] sentence, based on the target [minutes] sentence and the stored association relationship between the target [minutes] sentence and the other sentence”). An objective is to collect and display information that enables a user to access information on a topic and read it coherently from across documents or from different locations within a document while enabling immediate access to surrounding content that may be required for further understanding. (¶[0003]) It would have been obvious to one having ordinary skill in the art to display another sentence that has a relationship to a sentence in a category pursuant to an association display instruction from a user as taught by Meretab to generate a conference record summary in Sun et al. for a purpose of enabling user access to information on a topic from different documents or within a same document to further understanding from surrounding content.
Concerning claims 25 to 26 and 29 to 30, Meretab teaches that, given a displayed sentence, a sentence navigation bar is provided that displays one or more additional sentences based on a defined relation between the displayed sentence and the additional sentences. Each relation can be associated with an icon that gets triggered by an event, e.g., an action performed by a user such as a mouse click. A computer designates sentences based on relations so that designated sentences can be retrieved, displayed, and/or stored. Given an initial sentence and a specific relation of interest, a computer system may designate the sentences that meet the criteria of the relation. (¶[0083] - ¶[0085]) An event on a navigation bar, e.g., a mouse click, mouse over, or touch on a touch screen, causes additional sentences to be displayed. (¶[0091]) Here, a mouse click to trigger display of another sentence which meets specific criteria of a relationship is “a sentence triggering operation” on a target sentence and “a control triggering operation on an association display control” at the target sentence to determine “an instruction corresponding to the sentence triggering operation as the association display instruction” and “an instruction corresponding to the control triggering operation as the association display instruction.” That is, a mouse click causing additional related sentences to be displayed is “a sentence triggering operation” and “a control triggering operation” because a mouse click triggers control of additional associated sentences to be displayed.
Concerning claim 27, Meretab teaches at least “wherein the displaying the other sentence associated with the target [minutes] sentence comprises: displaying the other sentence associated with the target [minutes] sentence at a predetermined position of the target [minutes] sentence” as illustrated in Figures 3 to 9 with an additional sentence being positioned above or below a given sentence on the graphical user interface, “and/or prominently displaying the other sentence associated with the target [minutes] sentence in the to-be-processed text” because color can be used to illustrate a relationship between originally displayed sentences in red and incrementally displayed sentences in green. (¶[0100]) That is, displaying a given sentence and another sentence in different colors provides for “prominently displaying the other sentence associated with the target [minutes] sentence”.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicants’ disclosure.
Saleh et al., Biswas et al., Glavaš et al., Fontes et al., and Asi et al. disclose related prior art.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARTIN LERNER whose telephone number is (571) 272-7608. The examiner can normally be reached Monday-Thursday 8:30 AM-6:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARTIN LERNER/
Primary Examiner, Art Unit 2658
August 25, 2025