DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The two information disclosure statements (IDS) submitted on June 18, 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Agarwal et al. (US 2022/0207483 A1).
As to claim 1, Agarwal discloses a computer implemented method [Paragraph 0001], comprising:
identifying a set of sentences [“Identified documents that are privileged.” Paragraph 0023];
determining a first contextual meaning among words of a sentence of the set of sentences using a first word embedding technique [“The identity of an entity is interpretated (contextual meaning) by comparing a joint difference of every effective candidate entity.” Paragraph 0024];
determining a second contextual meaning shared by a subset of sentences from the set of sentences using a second word embedding technique, wherein the determining the first contextual meaning or the determining the second contextual meaning for the set of sentences further comprises using a term frequency-inverse document frequency (TF-IDF) process [“To improve the accuracy of the improved privilege analysis systems, a role predictor frame utilizes a privilege list receive that includes a plurality of known entities.” Paragraph 0025. The examiner notes that this limitation is recited in the alternative (“or”), so disclosure of either alternative reads on the claim.];
based on the first contextual meaning and the second contextual meaning for the set of sentences, determining a category and a subcategory corresponding to the set of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025]; and
assigning a classification code to the set of sentences, wherein the classification code identifies the category [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 2, Agarwal discloses the computer implemented method of claim 1, further comprising: receiving the set of sentences as search engine results [“The searching techniques of convention solutions yield the results of attorneys of party.” Paragraph 0019].
As to claim 3, Agarwal discloses the computer implemented method of claim 2, wherein the subcategory corresponding to the set of sentences is determined based on applying the second contextual meaning to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025].
As to claim 4, Agarwal discloses the computer implemented method of claim 3, wherein the set of sentences is a first set of sentences, the classification code is a first classification code, the category is a first category, and the subcategory is a first subcategory, the method further comprising:
identifying a second set of sentences, wherein the first set of sentences and the second set of sentences share at least one common word [“Identified documents that are privileged.” Paragraph 0023];
determining a first contextual meaning among words of a sentence of the second set of sentences using the first word embedding technique [“The identity of an entity is interpretated (contextual meaning) by comparing a joint difference of every effective candidate entity.” Paragraph 0024];
determining a second contextual meaning shared by a subset of sentences from the second set of sentences using the second word embedding technique [“To improve the accuracy of the improved privilege analysis systems, a role predictor frame utilizes a privilege list receive that includes a plurality of known entities.” Paragraph 0025];
based on the first contextual meaning for the second set of sentences and the second contextual meaning for the second set of sentences, determining a second category and a second subcategory corresponding to the second set of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025];
based on applying the second contextual meaning to the subcategorization neural network, determining a second subcategory corresponding to the second set of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025]; and
assigning a second classification code to the second set of sentences, wherein the second classification code identifies the second category [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 5, Agarwal discloses the computer implemented method of claim 4, wherein the second classification code differs from the first classification code [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 6, Agarwal discloses the computer implemented method of claim 1, wherein the determining the first contextual meaning among words of a sentence of the set of sentences further comprises: applying the set of sentences to a categorization neural network that has been trained to identify a category type for a specified number of words [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 7, Agarwal discloses the computer implemented method of claim 1, wherein the determining the second contextual meaning shared by the subset of sentences of the set of sentences further comprises: applying the set of sentences to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence and a specified number of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025].
As to claim 8, Agarwal discloses a system [FIG. 3], comprising:
a memory [310 on FIG. 3]; and
at least one processor [315 on FIG. 3] coupled to the memory and configured to perform operations comprising:
identifying a set of sentences [“Identified documents that are privileged.” Paragraph 0023];
determining a first contextual meaning among words of a sentence of the set of sentences using a first word embedding technique [“The identity of an entity is interpretated (contextual meaning) by comparing a joint difference of every effective candidate entity.” Paragraph 0024];
determining a second contextual meaning shared by a subset of sentences from the set of sentences using a second word embedding technique, wherein the determining the first contextual meaning or the determining the second contextual meaning for the set of sentences further comprises using a term frequency-inverse document frequency (TF-IDF) process [“To improve the accuracy of the improved privilege analysis systems, a role predictor frame utilizes a privilege list receive that includes a plurality of known entities.” Paragraph 0025. The examiner notes that this limitation is recited in the alternative (“or”), so disclosure of either alternative reads on the claim.];
based on the first contextual meaning and the second contextual meaning for the set of sentences, determining a category and a subcategory corresponding to the set of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025]; and
assigning a classification code to the set of sentences, wherein the classification code identifies the category [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 9, Agarwal discloses the system of claim 8, wherein the operations further comprise: receiving the set of sentences as search engine results [“The searching techniques of convention solutions yield the results of attorneys of party.” Paragraph 0019].
As to claim 10, Agarwal discloses the system of claim 9, wherein the subcategory corresponding to the set of sentences is determined based on applying the second contextual meaning to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025].
As to claim 11, Agarwal discloses the system of claim 10, wherein the set of sentences is a first set of sentences, the classification code is a first classification code, the category is a first category, the subcategory is a first subcategory, and the operations further comprise:
identifying a second set of sentences, wherein the first set of sentences and the second set of sentences share at least one common word [“Identified documents that are privileged.” Paragraph 0023];
determining a first contextual meaning among words of a sentence of the second set of sentences using the first word embedding technique [“The identity of an entity is interpretated (contextual meaning) by comparing a joint difference of every effective candidate entity.” Paragraph 0024];
determining a second contextual meaning shared by a subset of sentences from the second set of sentences using the second word embedding technique [“To improve the accuracy of the improved privilege analysis systems, a role predictor frame utilizes a privilege list receive that includes a plurality of known entities.” Paragraph 0025];
based on the first contextual meaning for the second set of sentences and the second contextual meaning for the second set of sentences, determining a second category and a second subcategory corresponding to the second set of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025];
based on applying the second contextual meaning to the subcategorization neural network, determining a second subcategory corresponding to the second set of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025]; and
assigning a second classification code to the second set of sentences, wherein the second classification code identifies the second category [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 12, Agarwal discloses the system of claim 11, wherein the second classification code differs from the first classification code [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 13, Agarwal discloses the system of claim 8, wherein the determining the first contextual meaning among words of a sentence of the set of sentences further comprises:
applying the set of sentences to a categorization neural network that has been trained to identify a category type for a specified number of words [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 14, Agarwal discloses the system of claim 8, wherein the determining the second contextual meaning shared by the subset of sentences of the set of sentences further comprises:
applying the set of sentences to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence and a specified number of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025].
As to claim 15, Agarwal discloses a non-transitory computer-readable device having instructions stored thereon that, when executed by at least one computing device [Paragraph 0167], cause the at least one computing device to perform operations comprising:
identifying a set of sentences [“Identified documents that are privileged.” Paragraph 0023];
determining a first contextual meaning among words of a sentence of the set of sentences using a first word embedding technique [“The identity of an entity is interpretated (contextual meaning) by comparing a joint difference of every effective candidate entity.” Paragraph 0024];
determining a second contextual meaning shared by a subset of sentences from the set of sentences using a second word embedding technique, wherein the determining the first contextual meaning or the determining the second contextual meaning for the set of sentences further comprises using a term frequency-inverse document frequency (TF-IDF) process [“To improve the accuracy of the improved privilege analysis systems, a role predictor frame utilizes a privilege list receive that includes a plurality of known entities.” Paragraph 0025. The examiner notes that this limitation is recited in the alternative (“or”), so disclosure of either alternative reads on the claim.];
based on the first contextual meaning and the second contextual meaning for the set of sentences, determining a category and a subcategory corresponding to the set of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025]; and
assigning a classification code to the set of sentences, wherein the classification code identifies the category [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 16, Agarwal discloses the non-transitory computer-readable device of claim 15, wherein the operations further comprise: receiving the set of sentences as search engine results [“The searching techniques of convention solutions yield the results of attorneys of party.” Paragraph 0019].
As to claim 17, Agarwal discloses the non-transitory computer-readable device of claim 16, wherein the subcategory corresponding to the set of sentences is determined based on applying the second contextual meaning to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025].
As to claim 18, Agarwal discloses the non-transitory computer-readable device of claim 17, wherein the set of sentences is a first set of sentences, the classification code is a first classification code, the category is a first category, the subcategory is a first subcategory, and the operations further comprise:
identifying a second set of sentences, wherein the first set of sentences and the second set of sentences share at least one common word [“Identified documents that are privileged.” Paragraph 0023];
determining a first contextual meaning among words of a sentence of the second set of sentences using the first word embedding technique [“The identity of an entity is interpretated (contextual meaning) by comparing a joint difference of every effective candidate entity.” Paragraph 0024];
determining a second contextual meaning shared by a subset of sentences from the second set of sentences using the second word embedding technique [“To improve the accuracy of the improved privilege analysis systems, a role predictor frame utilizes a privilege list receive that includes a plurality of known entities.” Paragraph 0025];
based on the first contextual meaning for the second set of sentences and the second contextual meaning for the second set of sentences, determining a second category and a second subcategory corresponding to the second set of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025];
based on applying the second contextual meaning to the subcategorization neural network, determining a second subcategory corresponding to the second set of sentences [“Based on the number of entities tokens, the sentence is identified as privileges and also based on embryo entities using the name variant generation determined a subcategory.” Paragraph 0025]; and
assigning a second classification code to the second set of sentences, wherein the second classification code identifies the second category [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 19, Agarwal discloses the non-transitory computer-readable device of claim 18, wherein the second classification code differs from the first classification code [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
As to claim 20, Agarwal discloses the non-transitory computer-readable device of claim 15, wherein the determining the first contextual meaning among words of a sentence of the set of sentences further comprises:
applying the set of sentences to a categorization neural network that has been trained to identify a category type for a specified number of words [“The domains are classified using a machine learning as law firm or non-law firm (classification codes).” Paragraph 0065].
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-17 of U.S. Patent No. 12,045,571 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are anticipated by the claims of the U.S. Patent.
Note that previously granted patents in the parent application family, such as U.S. Patent No. 11,625,535, have also been considered.
Patented claim 1 recites a computer implemented method which performs the feature of assigning a second classification code to the second set of sentences, wherein the second classification code identifies the second category.
Pending claim 1 recites a computer implemented method which performs the similar feature of assigning a classification code to the set of sentences, wherein the classification code identifies the category.
Therefore, patented claim 1 anticipates pending claim 1.
Pending claims 2-20 recite limitations similar to those of patented claims 2-17, as shown in the table below.
Pending claims
1. A computer implemented method, comprising: identifying a set of sentences; determining a first contextual meaning among words of a sentence of the set of sentences using a first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the set of sentences using a second word embedding technique, wherein the determining the first contextual meaning or the determining the second contextual meaning for the set of sentences further comprises using a term frequency-inverse document frequency (TF-IDF) process; based on the first contextual meaning and the second contextual meaning for the set of sentences, determining a category and a subcategory corresponding to the set of sentences; and assigning a classification code to the set of sentences, wherein the classification code identifies the category.
2. The computer implemented method of claim 1, further comprising: receiving the set of sentences as search engine results.
3. The computer implemented method of claim 2, wherein the subcategory corresponding to the set of sentences is determined based on applying the second contextual meaning to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence.
4. The computer implemented method of claim 3, wherein the set of sentences is a first set of sentences, the classification code is a first classification code, the category is a first category, and the subcategory is a first subcategory, the method further comprising: identifying a second set of sentences, wherein the first set of sentences and the second set of sentence share at least one common word; determining a first contextual meaning among words of a sentence of the second set of sentences using the first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the second set of sentences using the second word embedding technique; based on the first contextual meaning for the second set of sentences and the second contextual meaning for the second set of sentences, determining a second category and a second subcategory corresponding to the second set of sentences; based on applying the second contextual meaning to the subcategorization neural network, determining a second subcategory corresponding to the second set of sentences; and assigning a second classification code to the second set of sentences, wherein the second classification code identifies the second category.
5. The computer implemented method of claim 4, wherein the second classification code differs from the first classification code.
6. The computer implemented method of claim 1, wherein the determining the first contextual meaning among words of a sentence of the set of sentences further comprises: applying the set of sentences to a categorization neural network that has been trained to identify a category type for a specified number of words.
7. The computer implemented method of claim 1, wherein the determining the second contextual meaning shared by the subset of sentences of the set of sentences further comprises: applying the set of sentences to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence and a specified number of sentences.
8. A system, comprising: a memory; and at least one processor coupled to the memory and configured to perform operations comprising: identifying a set of sentences; determining a first contextual meaning among words of a sentence of the set of sentences using a first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the set of sentences using a second word embedding technique, wherein the determining the first contextual meaning or the determining the second contextual meaning for the set of sentences further comprises using a term frequency-inverse document frequency (TF-IDF) process; based on the first contextual meaning and the second contextual meaning for the set of sentences, determining a category and a subcategory corresponding to the set of sentences; and assigning a classification code to the set of sentences, wherein the classification code identifies the category.
9. The system of claim 8, wherein the operations further comprise: receiving the set of sentences as search engine results.
10. The system of claim 9, wherein the subcategory corresponding to the set of sentences is determined based on applying the second contextual meaning to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence.
11. The system of claim 10, wherein the set of sentences is a first set of sentences, the classification code is a first classification code, the category is a first category, the subcategory is a first subcategory, and the operations further comprise: identifying a second set of sentences, wherein the first set of sentences and the second set of sentence share at least one common word; determining a first contextual meaning among words of a sentence of the second set of sentences using the first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the second set of sentences using the second word embedding technique; based on the first contextual meaning for the second set of sentences and the second contextual meaning for the second set of sentences, determining a second category and a second subcategory corresponding to the second set of sentences; based on applying the second contextual meaning to the subcategorization neural network, determining a second subcategory corresponding to the second set of sentences; and assigning a second classification code to the second set of sentences, wherein the second classification code identifies the second category.
12. The system of claim 11, wherein the second classification code differs from the first classification code.
13. The system of claim 8, wherein the determining the first contextual meaning among words of a sentence of the set of sentences further comprises: applying the set of sentences to a categorization neural network that has been trained to identify a category type for a specified number of words.
14. The system of claim 8, wherein the determining the second contextual meaning shared by the subset of sentences of the set of sentences further comprises: applying the set of sentences to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence and a specified number of sentences.
15. A non-transitory computer-readable device having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations comprising: identifying a set of sentences; determining a first contextual meaning among words of a sentence of the set of sentences using a first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the set of sentences using a second word embedding technique, wherein the determining the first contextual meaning or the determining the second contextual meaning for the set of sentences further comprises using a term frequency-inverse document frequency (TF-IDF) process; based on the first contextual meaning and the second contextual meaning for the set of sentences, determining a category and a subcategory corresponding to the set of sentences; and assigning a classification code to the set of sentences, wherein the classification code identifies the category.
16. The non-transitory computer-readable device of claim 15, wherein the operations further comprise: receiving the set of sentences as search engine results.
17. The non-transitory computer-readable device of claim 16, wherein the subcategory corresponding to the set of sentences is determined based on applying the second contextual meaning to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence.
18. The non-transitory computer-readable device of claim 17, wherein the set of sentences is a first set of sentences, the classification code is a first classification code, the category is a first category, the subcategory is a first subcategory, and the operations further comprise: identifying a second set of sentences, wherein the first set of sentences and the second set of sentence share at least one common word; determining a first contextual meaning among words of a sentence of the second set of sentences using the first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the second set of sentences using the second word embedding technique; based on the first contextual meaning for the second set of sentences and the second contextual meaning for the second set of sentences, determining a second category and a second subcategory corresponding to the second set of sentences; based on applying the second contextual meaning to the subcategorization neural network, determining a second subcategory corresponding to the second set of sentences; and assigning a second classification code to the second set of sentences, wherein the second classification code identifies the second category.
19. The non-transitory computer-readable device of claim 18, wherein the second classification code differs from the first classification code.
20. The non-transitory computer-readable device of claim 15, wherein the determining the first contextual meaning among words of a sentence of the set of sentences further comprises: applying the set of sentences to a categorization neural network that has been trained to identify a category type for a specified number of words.
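For reference, the term frequency-inverse document frequency (TF-IDF) process recited in claims 15 and 20 above can be illustrated with a minimal sketch. The formula shown is the standard textbook TF-IDF weighting, offered only as an illustration; it is not a representation of the applicant's or Agarwal's actual implementation, and the sample sentences are hypothetical:

```python
import math
from collections import Counter

def tf_idf(sentences):
    """Standard TF-IDF weighting: tf(w, s) * log(N / df(w)),
    where df(w) is the number of sentences containing word w."""
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    # Document frequency: in how many sentences does each word appear?
    df = Counter(w for doc in docs for w in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf})
    return weights

weights = tf_idf([
    "the model classifies text",
    "the model embeds text",
    "stop words are removed",
])
# "classifies" (unique to one sentence) outweighs "model" (shared by two),
# which is why TF-IDF can be used to remove low-information words.
```

Words common to many sentences receive low weights, so thresholding on these weights implements the word-removal step recited in claims 5, 11, and 16 below.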
1. A computer implemented method, comprising: identifying a first set of sentences and a second set of sentences, wherein the first set of sentences and the second set of sentences share at least one common word; for the first set of sentences: determining a first contextual meaning among words of a sentence of the first set of sentences using a first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the first set of sentences using a second word embedding technique; based on the first contextual meaning for the first set of sentences, determining a first category corresponding to the first set of sentences; applying the second contextual meaning to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence; based on applying the second contextual meaning to the subcategorization neural network, determining a first subcategory corresponding to the first set of sentences; assigning a first classification code to the first set of sentences, wherein the first classification code identifies the first category; for the second set of sentences: determining a first contextual meaning among words of a sentence of the second set of sentences using the first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the second set of sentences using the second word embedding technique; based on the first contextual meaning for the second set of sentences, determining a second category corresponding to the second set of sentences; applying the second contextual meaning to the subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence; based on applying the second contextual meaning to the subcategorization neural network, determining a second subcategory corresponding to the second set of sentences; and assigning a second classification code to the second set of sentences, wherein the second classification code identifies the second category and wherein the second classification code differs from the first classification code.
2. The computer implemented method of claim 1, further comprising: receiving the first set of sentences and the second set of sentences as search engine results.
3. The computer implemented method of claim 1, wherein determining the first contextual meaning or determining the second contextual meaning for the first set of sentences further comprises: removing one or more words from the first set of sentences, wherein the one or more words appear on a list of stop-words.
4. The computer implemented method of claim 1, wherein determining the first contextual meaning or determining the second contextual meaning for the first set of sentences further comprises: removing one or more words from the first set of sentences, wherein the one or more words have been categorized as generic.
5. The computer implemented method of claim 1, wherein determining the first contextual meaning or determining the second contextual meaning for the first set of sentences further comprises: removing one or more words from the first set of sentences via a Term Frequency-Inverse Document Frequency (TFIDF) process.
6. The computer implemented method of claim 1, wherein determining the first contextual meaning among words of a sentence of the first set of sentences further comprises: applying the first set of sentences to a categorization neural network that has been trained to identify a category type for a specified number of words.
7. A system, comprising: a memory; and at least one processor coupled to the memory and configured to: identify a first set of sentences and a second set of sentences, wherein the first set of sentences and the second set of sentences share at least one common word; for the first set of sentences: determine a first contextual meaning among words of a sentence of the first set of sentences using a first word embedding technique; determine a second contextual meaning shared by a subset of sentences from the first set of sentences using a second word embedding technique; based on the first contextual meaning and the second contextual meaning for the first set of sentences, determine a first category and a first subcategory corresponding to the first set of sentences; apply the second contextual meaning to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence; based on applying the second contextual meaning to the subcategorization neural network, determine a first subcategory corresponding to the first set of sentences; assign a first classification code to the first set of sentences, wherein the first classification code identifies the first category; for the second set of sentences: determine a first contextual meaning among words of a sentence of the second set of sentences using the first word embedding technique; determine a second contextual meaning shared by a subset of sentences from the second set of sentences using the second word embedding technique; based on the first contextual meaning and the second contextual meaning for the second set of sentences, determine a second category and a second subcategory corresponding to the second set of sentences; apply the second contextual meaning to the subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence; based on applying the second contextual meaning to the subcategorization neural network, determine a second subcategory corresponding to the second set of sentences; and assign a second classification code to the second set of sentences, wherein the second classification code identifies the second category and wherein the second classification code differs from the first classification code.
8. The system of claim 7, wherein the at least one processor is further configured to: receive the first set of sentences and the second set of sentences as search engine results.
9. The system of claim 7, wherein to determine the first contextual meaning or the second contextual meaning for the first set of sentences, the at least one processor is further configured to: remove one or more words from the first set of sentences, wherein the one or more words appear on a list of stop-words.
10. The system of claim 7, wherein to determine the first contextual meaning or the second contextual meaning for the first set of sentences, the at least one processor is further configured to: remove one or more words from the first set of sentences, wherein the one or more words have been categorized as generic.
11. The system of claim 7, wherein to determine the first contextual meaning or the second contextual meaning for the first set of sentences, the at least one processor is further configured to: remove one or more words from the first set of sentences via a Term Frequency-Inverse Document Frequency (TFIDF) process.
12. The system of claim 7, wherein to determine the first contextual meaning among words of a sentence of the first set of sentences, the at least one processor is further configured to: apply the first set of sentences to a categorization neural network that has been trained to identify a category type for a specified number of words.
13. A non-transitory computer-readable device having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations comprising: identifying a first set of sentences and a second set of sentences, wherein the first set of sentences and the second set of sentences share at least one common word; for the first set of sentences: determining a first contextual meaning among words of a sentence of the first set of sentences using a first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the first set of sentences using a second word embedding technique; based on the first contextual meaning and the second contextual meaning for the first set of sentences, determining a first category and a first subcategory corresponding to the first set of sentences; applying the second contextual meaning to a subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence; based on applying the second contextual meaning to the subcategorization neural network, determining a first subcategory corresponding to the first set of sentences; assigning a first classification code to the first set of sentences, wherein the first classification code identifies the first category; for the second set of sentences: determining a first contextual meaning among words of a sentence of the second set of sentences using the first word embedding technique; determining a second contextual meaning shared by a subset of sentences from the second set of sentences using the second word embedding technique; based on the first contextual meaning and the second contextual meaning for the second set of sentences, determining a second category and a second subcategory corresponding to the second set of sentences; applying the second contextual meaning to the subcategorization neural network that has been trained to identify a subcategory type for a specified number of words in a sentence; based on applying the second contextual meaning to the subcategorization neural network, determining a second subcategory corresponding to the second set of sentences; and assigning a second classification code to the second set of sentences, wherein the second classification code identifies the second category and wherein the second classification code differs from the first classification code.
14. The non-transitory computer-readable device of claim 13, wherein determining the first contextual meaning or determining the second contextual meaning for the first set of sentences further comprises: removing one or more words from the first set of sentences, wherein the one or more words appear on a list of stop-words.
15. The non-transitory computer-readable device of claim 13, wherein determining the first contextual meaning or determining the second contextual meaning for the first set of sentences further comprises: removing one or more words from the first set of sentences, wherein the one or more words have been categorized as generic.
16. The non-transitory computer-readable device of claim 13, wherein determining the first contextual meaning or determining the second contextual meaning for the first set of sentences further comprises: removing one or more words from the first set of sentences via a Term Frequency-Inverse Document Frequency (TFIDF) process.
17. The non-transitory computer-readable device of claim 13, wherein determining the first contextual meaning among words of a sentence of the first set of sentences further comprises: applying the first set of sentences to a categorization neural network that has been trained to identify a category type for a specified number of words.
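The stop-word removal step recited in claims 3, 9, and 14 above can likewise be sketched in a few lines. The stop-word list here is a hypothetical example for illustration only; the claims do not specify a particular list, and this sketch does not represent the applicant's or any reference's implementation:

```python
# Hypothetical stop-word list; real systems typically use a published corpus.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "is"}

def remove_stop_words(sentences):
    """Drop every word that appears on the stop-word list,
    preserving the order of the remaining words."""
    return [" ".join(w for w in s.split() if w.lower() not in STOP_WORDS)
            for s in sentences]

cleaned = remove_stop_words(["The meaning of a sentence", "Words and context"])
# cleaned == ["meaning sentence", "Words context"]
```

The comparison is case-insensitive (`w.lower()`), so "The" and "the" are both removed, which matches the usual behavior of such preprocessing before word embedding.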
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892 form.
Biswas et al. (US 2020/0387570 A1) discloses a method in which unstructured computer text is analyzed to identify and classify complaint-specific user interactions.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERALD GAUTHIER, whose telephone number is (571) 272-7539. The examiner can normally be reached from 8:00 AM to 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, CAROLYN R EDWARDS, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GERALD GAUTHIER/Primary Examiner, Art Unit 2692
February 23, 2026
/CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692