Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This final rejection is in response to the applicant’s arguments/claims filed on 12/17/2025.
Claim(s) 1-4, 7-11, 14-17 and 20 remain rejected under 35 U.S.C. 103 as being unpatentable over Elassaad (US Application: US 2017/0228239, published: Aug. 10, 2017, filed: Apr. 25, 2017) in view of Salama (US Application: US 2013/0145241, published: Jun. 6, 2013, filed: Dec. 4, 2012).
Claim(s) 5, 12 and 18 remain rejected under 35 U.S.C. 103 as being unpatentable over Elassaad (US Application: US 2017/0228239, published: Aug. 10, 2017, filed: Apr. 25, 2017) in view of Salama (US Application: US 2013/0145241, published: Jun. 6, 2013, filed: Dec. 4, 2012) in view of Severn et al. (US Application: US 2017/0230589, published: Aug. 10, 2017, filed: Sep. 29, 2014).
Claim(s) 6, 13 and 19 remain rejected under 35 U.S.C. 103 as being unpatentable over Elassaad (US Application: US 2017/0228239, published: Aug. 10, 2017, filed: Apr. 25, 2017) in view of Salama (US Application: US 2013/0145241, published: Jun. 6, 2013, filed: Dec. 4, 2012) in view of Korn (US Patent: 9558733, issued: Jan. 31, 2017, filed: Sep. 29, 2014).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4, 7-11, 14-17 and 20 remain rejected under 35 U.S.C. 103 as being unpatentable over Elassaad (US Application: US 2017/0228239, published: Aug. 10, 2017, filed: Apr. 25, 2017) in view of Salama (US Application: US 2013/0145241, published: Jun. 6, 2013, filed: Dec. 4, 2012).
With regards to claim 1, Elassaad teaches a computer-implemented method (paragraph 0041: a computer with memory is implemented), comprising:
analyzing, by one or more processors, a … resource comprising textual content (Fig 14, paragraphs 0011, 0026 and 0139: reference content includes text which is analyzed);
extracting, by one or more processors, a plurality of textual content segments from the … resource (paragraphs 0011, 0026: a portion of textual content is parsed (such as a portion selected 920));
obtaining, by the one or more processors for each textual content segment of the plurality of textual content segments, visual content [or] audio content related to each respective textual content segment (paragraphs 0011, 0024, 0026: augmented content is retrieved/obtained based upon features of the reference content (such as in response to user selection/request) and the augmented content can include visual or audio or text data deemed related to the text of reference content);
generating, by the one or more processors, target content for an audio [or] visual display of the web-based resource, wherein generating the target content comprises combining at least a portion of each respective textual content segment from the plurality of textual content segments with the visual content [or] the audio content related to the respective textual content segment (Fig. 14, paragraphs 0011, 0024, 0026, 0139: target content is generated by combining not only the text content segment(s) of the reference content and the retrieved visual or audio content, but also additional preference layout augmentation data (such as visibility)); and
providing, by the one or more processors, data descriptive of the generated target content to a computing device for presentation of the audio [or] visual display of the web-based resource (Fig 14: the generated content is visually displayed in the form of a combined display with the obtained and reference content, and optionally based on the additional preference rendering/layout data. It is noted that the claimed ‘computing device’ does not specify whether this ‘computing device’ also includes the claimed ‘processors’, and the examiner will make the interpretation that the ‘providing’ step can generate and provide/access the ‘data’ to a computing device also having the claimed ‘processors’).
However, although Elassaad teaches analyze … a resource; and extract … from the resource, Elassaad does not expressly teach analyze … a web-based resource …; and extract … from the web-based resource…. Additionally, Elassaad does not expressly teach obtaining … visual and audio content, … generating … target content for audio and visual content display … wherein generating the target content comprises … segments with the visual content and the audio content.
Yet Salama teaches analyze … a web-based resource …; and extract … from the web-based resource… (Abstract, paragraphs 0002 and 0029: text content from web sites can be included for analysis and extraction); obtaining … visual and audio content, … generating … target content for audio and visual content display … wherein generating the target content comprises … segments with the visual content and the audio content (Abstract, paragraphs 0029 and 0033: multimedia content (audio and video) is combined/generated with analyzed text and displayed to a computing device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Elassaad’s ability to process text from a resource and obtain audio or video to generate target content including the text and obtained content (of audio or video), such that the resource could be a web-based type resource and the obtained content includes both visual and audio (multimedia content) when generating the target content with the text, as taught by Salama. The combination would have allowed Elassaad to have promoted a better understanding of contents without having to manually search the web (Salama, paragraph 0002).
With regards to claim 2. (Original) The computer-implemented method of claim 1, Elassaad and Salama teach further comprising: obtaining, by the one or more processors, the web-based resource based at least in part on information associated with a request, as explained in the rejection of claim 1 (Elassaad was explained to teach that the web-based resource text is obtained/referenced based upon user selection information to search and further obtain audio/video content to augment with the selected resource text in a provided visual display (visual descriptive augmentation). Additionally, as explained in the rejection of claim 1, Elassaad’s resource was modified to be a web-based resource via the teachings of Salama), and is rejected under similar rationale.
With regards to claim 3. (Original) The computer-implemented method of claim 2, Elassaad and Salama teach wherein the request is a search query received from a computing device, as explained in the rejection of claim 1 (Elassaad was explained to teach that the web-based resource text is obtained/referenced based upon user selection information to search and further obtain audio/video content to augment with the selected resource text in a provided visual display (visual descriptive augmentation). Additionally, as explained in the rejection of claim 1, Elassaad’s resource was modified to be a web-based resource via the teachings of Salama), and is rejected under similar rationale.
With regards to claim 4. (Original) The computer-implemented method of claim 3, Elassaad and Salama teach wherein the data descriptive of the generated target content is provided in response to the search query, as explained in the rejection of claim 1 (Elassaad was explained to teach that the web-based resource text is obtained/referenced based upon user selection information to search and further obtain audio/video content to augment with the selected resource text in a provided visual display (visual descriptive augmentation). Additionally, as explained in the rejection of claim 1, Elassaad’s resource was modified to be a web-based resource via the teachings of Salama), and is rejected under similar rationale.
With regards to claim 7. (Original) The computer-implemented method of claim 1, the combination of Elassaad and Salama teaches further comprising: analyzing, by the one or more processors, each of the textual content segments from the web-based resource; and determining, by the one or more processors for each textual content segment of the plurality of textual content segments, a weighting for one or more items identified respectively in each textual content segment (as similarly explained in the rejection of claim 1, Elassaad and Salama were explained to analyze textual content segments from a web resource and further allow a user to emphasize/select a portion of the textual content segments to indicate focus (weight) for obtaining the augmented audio/video content. Additionally, Elassaad in paragraph 0078 further explains the augmented content is generated based upon content inputted/selected by the user (interpreted as weighted/focused upon user selection)), and is rejected under similar rationale.
With regards to claim 8, Elassaad and Salama teach a computing system, comprising: a non-transitory computer-readable medium; and one or more processors communicatively coupled to the non-transitory computer-readable medium, wherein the one or more processors execute instructions from the non-transitory computer-readable medium that cause the computing system to: analyze a web-based resource comprising textual content; extract a plurality of textual content segments from the web-based resource; obtain for each textual content segment of the plurality of textual content segments, visual content and audio content related to each respective textual content segment; generate target content for an audio-visual display of the web-based resource, wherein generating the target content comprises combining at least a portion of each respective textual content segment from the plurality of textual content segments with the visual content and the audio content related to the respective textual content segment; and provide data descriptive of the generated target content to a computing device for presentation of the audio-visual display of the web-based resource, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 9. (Original) The computing system of claim 8, Elassaad and Salama teach wherein the computing system further is to: obtain, by the one or more processors, the web-based resource based at least in part on information associated with a request, as similarly explained in the rejection of claim 2, and is rejected under similar rationale.
With regards to claim 10. (Original) The computing system of claim 9, Elassaad and Salama teach wherein the request is a search query received from a computing device, as similarly explained in the rejection of claim 3, and is rejected under similar rationale.
With regards to claim 11. (Original) The computer-implemented method of claim 10, Elassaad and Salama teach wherein the data descriptive of the generated target content is provided in response to the search query, as similarly explained in the rejection of claim 4, and is rejected under similar rationale.
With regards to claim 14. (Original) The computing system of claim 8, Elassaad and Salama teach wherein the computing system further is to: analyze each of the textual content segments from the web-based resource; and determine for each textual content segment of the plurality of textual content segments, a weighting for one or more items identified respectively in each textual content segment, as similarly explained in the rejection of claim 7, and is rejected under similar rationale.
With regards to claim 15, Elassaad and Salama teach a non-transitory computer-readable medium having instructions that, when executed by one or more processors associated with a computing device, cause the computing device to: analyze a web-based resource comprising textual content; extract a plurality of textual content segments from the web-based resource; obtain for each textual content segment of the plurality of textual content segments, visual content and audio content related to each respective textual content segment; generate target content for an audio-visual display of the web-based resource, wherein generating the target content comprises combining at least a portion of each respective textual content segment from the plurality of textual content segments with the visual content and the audio content related to the respective textual content segment; and provide data descriptive of the generated target content to a computing device for presentation of the audio-visual display of the web-based resource, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 16. (Original) The non-transitory computer-readable medium of claim 15, Elassaad and Salama teach wherein the computing device further is to: obtain the web-based resource based at least in part on information associated with a request, as similarly explained in the rejection of claim 2, and is rejected under similar rationale.
With regards to claim 17. (Original) The non-transitory computer-readable medium of claim 16, Elassaad and Salama teach wherein the request is a search query received from a computing device (as similarly explained in the rejection of claim 3, and rejected under similar rationale) and the data descriptive of the generated target content is provided in response to the search query (as similarly explained in the rejection of claim 4, and is rejected under similar rationale).
With regards to claim 20. (Original) The non-transitory computer-readable medium of claim 15, Elassaad and Salama teach wherein the computing device further is to: analyze each of the textual content segments from the web-based resource; and determine for each textual content segment of the plurality of textual content segments, a weighting for one or more items identified respectively in each textual content segment, as similarly explained in the rejection of claim 7, and is rejected under similar rationale.
Claim(s) 5, 12 and 18 remain rejected under 35 U.S.C. 103 as being unpatentable over Elassaad (US Application: US 2017/0228239, published: Aug. 10, 2017, filed: Apr. 25, 2017) in view of Salama (US Application: US 2013/0145241, published: Jun. 6, 2013, filed: Dec. 4, 2012) in view of Severn et al. (US Application: US 2017/0230589, published: Aug. 10, 2017, filed: Sep. 29, 2014).
With regards to claim 5. (Original) The computer-implemented method of claim 1, the combination of Elassaad and Salama teaches further comprising: determining, by the one or more processors, … generating the target content for the audio-visual display of the web-based resource. …, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However, the combination does not expressly teach determining … a template for generating the target content for the audio-visual display of the web-based resource, and wherein the generating of the target content for the audio-visual display of the web-based resource is based at least in part on the determined template.
Yet Severn et al. teaches determining … a template for generating the target content for the audio-visual display of the web-based resource, and wherein the generating of the target content for the audio-visual display of the web-based resource is based at least in part on the determined template (paragraph 0029: a template can be used as part of the generation to augment content as an overlay for content that can include audio, visual, graphics, animation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Elassaad and Salama’s ability to generate target content for the web-based resource such that the content could have been generated using a template, as taught by Severn et al. The combination would have allowed Elassaad and Salama to have provided an enhanced and targeted augmented experience.
With regards to claim 12. (Original) The computing system of claim 8, the combination of Elassaad, Salama and Severn et al. teaches wherein the computing system further is to: determine a template for generating the target content for the audio-visual display of the web-based resource, and wherein the target content for the audio-visual display of the web-based resource is generated based at least in part on the determined template, as similarly explained in the rejection of claim 5, and is rejected under similar rationale.
With regards to claim 18. (Original) The non-transitory computer-readable medium of claim 15, the combination of Elassaad, Salama and Severn et al. teaches wherein the computing device further is to: determine a template for generating the target content for the audio-visual display of the web-based resource, and wherein the target content for the audio-visual display of the web-based resource is generated based at least in part on the determined template, as similarly explained in the rejection of claim 5, and is rejected under similar rationale.
Claim(s) 6, 13 and 19 remain rejected under 35 U.S.C. 103 as being unpatentable over Elassaad (US Application: US 2017/0228239, published: Aug. 10, 2017, filed: Apr. 25, 2017) in view of Salama (US Application: US 2013/0145241, published: Jun. 6, 2013, filed: Dec. 4, 2012) in view of Korn (US Patent: 9558733, issued: Jan. 31, 2017, filed: Sep. 29, 2014).
With regards to claim 6. (Original) The computer-implemented method of claim 1, the combination of Elassaad and Salama teaches further comprising: generating, by the one or more processors, the audio content, based on … the textual content segments …, as similarly explained in the rejection of claim 1 (Elassaad was explained to teach that the audio content is augmented to correspond to the textual reference content), and is rejected under similar rationale.
However, the combination does not expressly teach … audio content … based on converting one or more of the textual content segments to speech.
Yet Korn teaches … audio content … based on converting one or more of the textual content segments to speech (column 27, lines 1-17: audio content is generated and inserted to accompany the textual segments, such that the textual content segments can be converted to be augmented with additional speech data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Elassaad and Salama’s ability to generate the audio content, such that the audio content in the form of speech can be generated based upon conversion of text content data into a supplemental invocable footnote to access the speech, as taught by Korn. The combination would have allowed Elassaad and Salama to have enhanced a reader’s experience when consuming electronic media (Korn, column 1, lines 60-65).
With regards to claim 13. (Original) The computing system of claim 8, Elassaad, Salama and Korn teach wherein the computing system further is to: generate the audio content based on converting one or more of the textual content segments to speech, as similarly explained in the rejection of claim 6, and is rejected under similar rationale.
With regards to claim 19. (Original) The non-transitory computer-readable medium of claim 15, Elassaad, Salama and Korn teach wherein the computing device further is to: generate the audio content based on converting one or more of the textual content segments to speech, as similarly explained in the rejection of claim 6, and is rejected under similar rationale.
Response to Arguments
Applicant's arguments filed 12/17/2025 have been fully considered but they are not persuasive.
With regards to claim 1, the applicant argues that Elassaad does not teach extract “a plurality of textual content segments” [and] “instead Elassaad obtains content from the overall reference content”. However, this argument is not persuasive since Elassaad specifically explains in paragraph 0026 that a plurality of features are extracted from the reference document text (each extracted feature can be a keyword, title or name, topic or event), and there are a plurality of features (the examiner notes each feature is interpreted as one of the plurality of claimed content segments) that can be extracted from the reference document. Thus a plurality of textual content segments (as features) are extracted. As also explained in paragraph 0079, extracting a plurality of features (‘a set of features’) is also well known and implemented by this invention. Paragraph 0104 further explains how a plurality of features/segments, such as names, concepts, dates and/or name phrases, can be extracted from the document.
The applicant further argues that Elassaad does not generate target content by combining audio content and video content [and instead] Elassaad teaches the original content is augmented with ‘see-through layers’ which sit on top of the original content. However, first, in response to applicant’s arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). More specifically, the examiner has explained the combination of Elassaad and Salama teaches that the content of the textual content being displayed is augmented with the claimed target content (see Fig. 12 of Elassaad, which shows the reference document/content is visually modified with augmented content, such as 760, that was deemed relevant to the textual content segment(s)) to collectively form target content. Additionally, this augmented content was further modified to be multimedia content that would include both video and audio, as taught by Salama in the Abstract and paragraphs 0029 and 0033. Thus the combination teaches the required claim limitations of claim 1 and the applicant’s argument is not persuasive.
With regards to claims 8 and 15, the applicant argues they are allowable for reasons presented by the applicant for claim 1. However this argument is not persuasive since those reasons have been explained/shown as not persuasive in the paragraphs above.
With regards to the dependent claims, the applicant argues that they are allowable for reasons presented by the applicant for why their corresponding independent claim is allowable. However, this argument is not persuasive since each of the pending independent claims has been shown/explained to be rejected above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI whose telephone number is (571) 272-7596. The examiner can normally be reached Monday - Friday, 9 am - 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILSON W TSUI/Primary Examiner, Art Unit 2172