Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/10/2025 has been entered.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 4, and 7 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Panuganty et al. (20210248136).
As per claim 1, Panuganty et al. (20210248136) teaches a method of generating, by a recommendation apparatus, content to a user and recommending an asset to be displayed in the content (as displaying content – Figures 3, 4, 6; by use of the personalized analytics system – Figures 5B, 5C), the method comprising:
receiving a sentence for generating the content from a user; transforming the sentence into a story type text through a language model (as, taking in a Natural Language query – para 0105; eventually generating a storyline – para 0121 – output of story narrator);
transforming the story type text into a storyline including 1) a background, 2) a main character, and 3) a main element through the language model (as, the storyline/story narrator utilizes background information and displays it – para 0147, generates a main character/object – para 0215 – product/event/concept; main summary – para 0242 – extracting summary points from a chart);
transforming the storyline into sentence data for generating the content or recommending the asset through the language model; and generating the content or recommending the asset based on the sentence data (wherein the output of the story narrator used is a descriptive output, as well as using charts, graphs, etc. – para 0121);
wherein the sentence data includes, for each individual source word in the sentence, an English word corresponding to the source word, parts of speech, and an importance value assigned by the language model (as, using part of speech as attributes – para 0474); wherein the sentence data is provided as a machine-readable structured data set that is linked or associated with metadata of an asset so as to enable automated retrieval of the asset based on the linguistic attributes (as, using the metadata to find/prioritize what is needed – para 0117; including search for assets, such as images/sound/text/documents, etc. – para 0102; using a sentence structure – para 0518; wherein the matching is toward the attributes stored in the storyline database – see para 0540, and the matching is toward the query and the stored attributes of the item of interest – see para 0302 – "matching the keywords to content included in the metadata, and generating tag information"; examiner further notes, as to the "linked or associated with metadata" limitation, that after matching the keywords to content, the current "keywords"/"attributes" for the stored asset can be updated via longer-term data curation – see para 0117 – with a scoring based on higher priority – para 0117; the curation engine uses natural language processing models as well as semantic models to perform the matching, and updates the models themselves – para 0219).
As per claim 4, Panuganty et al. (20210248136) teaches the method of claim 1, wherein the story type text is transformed through the language model based on a command for writing a synopsis (as, generating a summary based upon the narrated analytics results – para 0396, see 'represents a summary').
Claim 7 is an apparatus claim whose steps are performed by the various features recited in method claims 1 and 4 above; as such, claim 7 is similar in scope and content to claims 1 and 4, and is therefore rejected under a similar rationale as presented against claims 1 and 4 above.
Response to Arguments
Applicant's arguments filed 8/19/2025 have been fully considered but they are not persuasive. On p. 5 of the response, applicant states that the cited paragraphs of the Panuganty reference relate to "not token level linguistic annotations". Examiner argues that these features, in detail, are nowhere to be found in the claim scope. Examiner further notes, in the rejection above, the additional recitation of Panuganty teaching a data curation technique that updates the database of associated attributes. On p. 6 of the response, applicant argues that "Panuganty does not embed linguistic information within asset metadata". Examiner argues that the recited sections of Panuganty show this and, additionally, that the use of latent semantic analysis shows "an embedding". As mentioned above, examiner suggests further claim limitations directed toward the token structures and the derived linguistic structure to possibly overcome the Panuganty reference.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see related art listed on the PTO-892 form.
Amer et al. (20190304157) teaches tokenization of an input sentence structure based on linguistics – para 0062, reflecting back on attributes of the storyline – para 0038.
Lewis (20240046074) teaches the use of tokenized descriptors, on a language level, to represent the translation – see para 0035.
Kapoor et al. (20180089156) teaches query understanding, and generating reports/slides based on the input (Fig. 1).
Panuganty (20200401593) teaches taking a user query, generating a storyline, and displaying it – see Fig. 1-3, para 0045-0053.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Opsasnick, telephone number (571)272-7623, who is available Monday-Friday, 9am-5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Richemond Dorvil, can be reached at (571)272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/Michael N Opsasnick/Primary Examiner, Art Unit 2658
01/08/2026