Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
2. In response to the Office action mailed on 09/03/2025, applicant filed an amendment on 10/22/2025 amending claims 1, 10, and 17. Claims 1-20 are pending.
Response to Arguments
3. Applicant's arguments filed 10/22/2025 have been fully considered but they are not persuasive.
Applicant argues that the prior art of record does not teach “wherein the processing comprises iteratively processing each word of the plurality of words in sequence by storing a current word that is a proper noun or a noun in the set of n-grams and storing a subsequent word that is a proper noun, a noun, or an adposition following the current word in the set of n-grams until all the plurality of words are processed”.
The examiner notes that the prior art Finkelshtein traverses and analyzes text documents word after word and sentence after sentence ([0055]) and tags words in sentences with their grammatical part-of-speech, such as adjective, adposition, adverb, conjunction, article, noun, particle, pronoun, verb, etc. The results of the POS tagging and the dependency parsing are utilized in determining paths of dependencies connecting the traversed words and sentences ([0062]) to estimate whether each relation between a person-type entity and another person-type or non-person-type entity is indicative of some personal information. Therefore, storing the traversed words and their corresponding tags is necessarily disclosed: in order to determine the syntactic dependencies and estimate the relationships between entities, the data must be processed, and such processing involves accessing, transforming, and analyzing the data over time.
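For illustration only, the claimed iterative step may be sketched as follows. This sketch reflects one plain reading of the claim language quoted above; it is not code from the application, the prior art, or the record, and the function name and the Universal-style POS labels (NOUN, PROPN, ADP) are assumptions made solely for the example:

```python
def build_ngrams(tagged_words):
    """Walk (word, pos) pairs in sequence; when the current word is a noun
    or proper noun, store it in the n-gram set, together with a subsequent
    word that is a noun, proper noun, or adposition, until all words are
    processed (mirroring the claimed iterative limitation)."""
    ngrams = []
    for i, (word, pos) in enumerate(tagged_words):
        if pos in ("NOUN", "PROPN"):
            gram = [word]
            if i + 1 < len(tagged_words) and tagged_words[i + 1][1] in ("NOUN", "PROPN", "ADP"):
                gram.append(tagged_words[i + 1][0])
            ngrams.append(tuple(gram))
    return ngrams
```

Under this reading, tagged input such as "University of Oxford is old" would yield the n-grams ("University", "of") and ("Oxford",), with non-noun, non-adposition words skipped.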
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-10, 12-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Finkelshtein (US 20210089620) in view of Bender (US 20210294829).
As per claim 1, Finkelshtein teaches determining, by a language model, a plurality of speech tags for a plurality of words associated with a body of text ([0017], applying a parts-of-speech (POS) tagging algorithm and a dependency parsing algorithm to sentences of the multiple other digital text documents);
processing the plurality of words by determining whether each word is a noun, proper noun, or adposition to generate a set of n-grams corresponding to a domain-agnostic context of the body of text ([0058], wherein the POS tagging algorithm may tag words in a sentence with their grammatical part-of-speech, such as adjective, adposition, adverb, conjunction, article, noun, particle, pronoun, verb, etc. The POS tagging algorithm may also tag multi-word expressions which jointly form a part-of-speech, such as a compound noun, compound adverb, etc.). As to wherein the processing comprises iteratively processing each word of the plurality of words in sequence by storing a current word that is a proper noun or a noun in the set of n-grams and storing a subsequent word that is a proper noun, a noun, or an adposition following the current word in the set of n-grams until all the plurality of words are processed, Finkelshtein traverses and analyzes text documents word after word and sentence after sentence ([0055]) and tags words in sentences with their grammatical part-of-speech, such as adjective, adposition, adverb, conjunction, article, noun, particle, pronoun, verb, etc. The results of the POS tagging and the dependency parsing are utilized in determining paths of dependencies connecting the traversed words and sentences ([0062]) to estimate whether each relation between a person-type entity and another person-type or non-person-type entity is indicative of some personal information. Therefore, storing the traversed words and their corresponding tags is necessarily disclosed in order to determine the syntactic dependencies and estimate the relationships between entities. Processing data involves accessing, transforming, and analyzing the data over time, which necessarily requires storing the data.
Therefore, it would have been obvious at the time the application was filed for the system of Finkelshtein to iteratively process each word of the plurality of words in sequence by storing a current word that is a proper noun or a noun in the set of n-grams and storing a subsequent word that is a proper noun, a noun, or an adposition following the current word in the set of n-grams until all the plurality of words are processed.
Finkelshtein may not explicitly disclose using a domain-agnostic context extraction (DCE) model for processing the plurality of words, and generating, based on the set of n-grams, a contextual summary of the body of text. However, Bender in the same field of endeavor teaches using a domain-agnostic context extraction (DCE) model for processing the plurality of words by determining whether each word is a noun, proper noun, or adposition to generate a set of n-grams corresponding to a domain-agnostic context of the body of text, and generating, based on the set of n-grams, a contextual summary of the body of text ([0233]). Therefore, it would have been obvious at the time the application was filed to use Bender’s above features with the system of Finkelshtein, in order to improve natural language processing and provide the target information in response to queries.
As per claim 3, Finkelshtein teaches wherein processing the plurality of words to generate the set of n-grams is not based on a domain associated with the body of text ([0049], [0057]-[0058], wherein the prior art applies and uses grammatical tagging to determine grammatical part-of-speech, such as adjective, adposition, adverb, conjunction, article, noun, particle, pronoun, verb, etc.).
As per claim 4, Finkelshtein may not explicitly disclose wherein the DCE model is not trained on domain-specific data. However, a model not specifically trained on a particular domain is often referred to as a "general-purpose" or "pre-trained" model. Bender in the same field of endeavor teaches wherein the DCE model is not trained on domain-specific data ([0105], [0063], [0067], wherein the used neural network models are pre-trained models). Therefore, it would have been obvious at the time the application was filed to use Bender’s above feature of a DCE model that is not trained on domain-specific data with the system of Finkelshtein, in order to achieve better performance by compensating for the lack of specific examples from specific domains.
As per claim 5, Finkelshtein teaches determining a first word in the set of n-grams comprises all uppercase letters or a length of the first word is not greater than one; adding the first word to an abbreviation-and-acronym list; and deleting the first word from the set of n-grams ([0055], determining n-grams with all uppercase letters and replacing mentions with full names such as J. K Rowling and Joane Rowling).
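For illustration only, the filtering step recited in claim 5 may be sketched as follows. This is one plain reading of the claim language, not code from the record or the prior art; the function name and list representation are assumptions made solely for the example:

```python
def filter_abbreviations(ngrams):
    """Split a set of single-word n-grams into kept words and an
    abbreviation-and-acronym list: a word consisting of all uppercase
    letters, or whose length is not greater than one, is moved to the
    abbreviation-and-acronym list and deleted from the n-gram set."""
    kept, abbreviation_and_acronym_list = [], []
    for word in ngrams:
        if word.isupper() or len(word) <= 1:
            abbreviation_and_acronym_list.append(word)
        else:
            kept.append(word)
    return kept, abbreviation_and_acronym_list
```

Under this reading, an input of ["NASA", "rocket", "a", "booster"] would keep "rocket" and "booster" in the n-gram set while moving "NASA" and "a" to the abbreviation-and-acronym list.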
As per claim 6, Finkelshtein teaches wherein the body of text is associated with a user, and wherein the method further comprises: determining one or more user intents associated with the body of text; and generating an understanding of the body of text based on the set of n-grams and the one or more user intents ([0040]-[0045], [0050], [0073], wherein a text received from a user is analyzed and refined, the user’s intent is determined, and a result is generated).
As per claim 7, Finkelshtein teaches wherein the one or more user intents comprise one or more of: an informational intent indicating the user wants to learn information; a transactional intent indicating the user seeks for a particular product or service; or a navigational intent indicating the user seeks for a particular site ([0007], estimating whether the at least one relation between the named entities is indicative of personal information; and automatically issue a notification of a result of the estimation).
As per claim 8, Finkelshtein teaches updating one or more machine-learning models based on the one or more user intents and the set of n-grams, wherein the one or more machine-learning models comprise one or more of the language model, the DCE model, or a ranking model ([0053], machine learning model).
As per claim 9, Finkelshtein teaches wherein the DCE model is configured to generate sets of n-grams corresponding to domain-agnostic contexts for bodies of text in a plurality of languages ([0041]).
As per claims 10 and 12-16, system claims 10 and 12-16 and method claims 1 and 3-9 are related as an apparatus and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claims 10 and 12-16 are similarly rejected under the same rationale as applied above with respect to method claims 1 and 3-9. Furthermore, Finkelshtein teaches one or more non-transitory computer-readable storage media including instructions; and one or more processors coupled to the storage media, as claimed ([0102]-[0103]).
As per claims 17 and 19-20, Finkelshtein teaches a computer-readable medium ([0102]). The remaining steps are rejected under the same rationale as applied to the method steps of rejected claims 1 and 3-4.
Allowable Subject Matter
5. Claims 2, 11, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the attached PTO-892.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELALI SERROU whose telephone number is (571)272-7638. The examiner can normally be reached M-F 9 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDELALI SERROU/Primary Examiner, Art Unit 2659