Prosecution Insights
Last updated: April 19, 2026
Application No. 18/208,683

System and Method for Automated Integration of Contextual Information with Content Displayed in a Display Space

Status: Non-Final OA (§102, §103, Double Patenting)
Filed: Jun 12, 2023
Examiner: ABOUD, ABDULLAH KHALED
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Pangee Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 7 across all art units (7 currently pending)

Statute-Specific Performance

§101: 24.0% (-16.0% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 0 resolved cases

Office Action

Rejections: §102, §103, Nonstatutory Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 8-15, 22-23, and 27 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4-11, and 14-16 of U.S. Patent No. 12,455,919 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims from US Patent No.
12,455,919 B2 anticipate the claims in the instant application (see comparison below).

Claim comparison: US Patent 12,455,919 B2 vs. Application No. 18/208,683

Patent claim 1:
1. A computing system, comprising:
a display space;
one or more processors; and
a memory to store computer-executable instructions, comprising a user interface application and a chatbot application that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
displaying, by the user interface application, a series of digital images in the display space;
receiving, via the user interface application, user input to select one of the series of digital images, or a portion thereof;
displaying the user-selected digital image, or the portion thereof, in a location within a field of view of the displayed series of digital images or the display space; and
while the user interface application continues to display the user-selected digital image, or portion thereof:
searching in one or more digital data sources for, and retrieving, by the chatbot application, contextual information based on the displayed user-selected digital image, or portion thereof, without receiving user input to request searching in the one or more digital data sources for contextual information based on the displayed user-selected digital image, or portion thereof;
detecting, by the chatbot application, one or more user interactions with one or more of the user interface application, the displayed series of digital images, the display space, the location within the field of view of the displayed series of digital images or the display space, or the user-selected digital image or the portion thereof;
displaying, by the chatbot application, a portion of the retrieved contextual information as related digital data content in the location within the field of view of the displayed series of digital images or the display space, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed series of digital images, or the display space, without receiving user input to perform the displaying the portion of the retrieved contextual information as related digital data content; and
receiving, by the chatbot application, user input, responsive to the displayed portion of the retrieved contextual information as related digital data content.

Application claim 1 (Currently Amended):
1. A computing system, comprising:
a display space;
one or more processors; and
a memory to store computer-executable instructions, comprising a user interface application and a chatbot application that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
displaying, by the user interface application, digital data content in the display space; and
while the user interface application continues to display the digital data content in the display space:
searching in one or more digital data sources for, and retrieving, by the chatbot application, contextual information based on the displayed digital data content, without receiving user input to perform the searching;
detecting, by the chatbot application, one or more user interactions with one or more of the user interface application, the displayed digital data content, and the display space;
displaying, by the chatbot application, a portion of the retrieved contextual information as related digital data content in a location within a field of view of the displayed digital data content or the display space, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed digital data content, and the display space, without receiving user input to perform the displaying of the portion of the retrieved contextual information as related digital data content; and
receiving, by the chatbot application, user input, responsive to the displayed portion of the retrieved contextual information as related digital data content.

Patent claim 4:
4. The computing system of claim 1, wherein the related digital data content is a digital image in which one or more objects appear; wherein displaying, by the chatbot application, the portion of the retrieved contextual information as related digital data content in the location within the field of view of the displayed series of digital images or the display space, comprises displaying, by the chatbot application, the digital image in the location within the field of view of the displayed series of digital images or the display space; and wherein the computer executable instructions cause the one or more processors to perform further operations, comprising receiving, by the chatbot application, user input, responsive to the displayed digital image, to search for information about the one or more objects that appear in the displayed digital image.

Application claim 8 (Currently Amended):
8. The computing system of claim 1, wherein the related digital data content is a digital image in which one or more objects appear; wherein displaying, by the chatbot application, a portion of the retrieved contextual information as related digital data content in a location within a field of view of the displayed digital data content or the display space, comprises displaying, by the chatbot application, the digital image in the location within the field of view of the displayed digital data content or the display space; and wherein the computer executable instructions cause the one or more processors to perform further operations, comprising receiving, by the chatbot application, user input, responsive to the displayed digital image, to search for information about the one or more objects that appear in the displayed digital image.

Patent claim 5:
5. The computing system of claim 1, wherein the computer executable instructions cause the one or more processors to perform further operations, comprising adding the related digital data content to, or associating the related digital content with, a file, a repository, or a storage location in or at which the displayed series of digital images and/or the displayed user-selected digital image, or portion thereof, is maintained, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed series of digital images, the display space, the location within the field of view of the displayed series of digital images or the display space, or the user-selected digital image, or the portion thereof, without receiving user input to perform the adding or associating.

Application claim 9 (Currently Amended):
9. The computing system of claim 1, wherein the computer executable instructions cause the one or more processors to perform further operations, comprising adding the related digital data content to, or associating the related digital content with, a file, a repository, or a location in or at which the displayed digital data content is maintained, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed digital data content, and the display space, without receiving user input to perform the adding or associating.

Patent claim 6:
6. The computing system of claim 5, wherein the related digital data content is a digital image in which one or more objects appear; and wherein adding the related digital data content to, or associating the related digital content with, the file in which the displayed series of digital images is maintained, comprises adding the digital image to, or associating the digital image with, the file in which the displayed series of digital images is maintained.

Application claim 10 (Original):
10. The computing system of claim 9, wherein the displayed digital data content is a digital image comprising a plurality of pixels; and wherein adding the related digital data content to, or associating the related digital content with, the file in which the displayed digital data content is maintained, comprises adding the related digital data content to, or associating the related digital content with, one or more of the plurality of pixels in the file in which the digital image is maintained.

Patent claim 7:
7. The computing system of claim 6, wherein the computer executable instructions cause the one or more processors to perform further operations, comprising adding, by a Non-Fungible Token (NFT) engine, an NFT layer to the digital image, thereby creating an NFT file comprising the digital image.

Application claim 11 (Original):
11. The computing system of claim 10, wherein the computer executable instructions cause the one or more processors to perform further operations, comprising adding, by a Non-Fungible Token (NFT) engine, an NFT layer to the digital image, thereby creating an NFT file comprising the digital image, based on the related digital data content added to or associated with the one or more of the plurality of pixels in the file in which the digital image is maintained.

Patent claim 8:
8. The computing system of claim 5, wherein adding the related digital data content to, or associating the related digital content with, a file, a repository, or a location in or at which the displayed series of digital images is maintained, comprises adding the related digital data content to, or associating the related digital content with, a location in a distributed digital ledger at which the displayed series of digital images is maintained, or to a location chained to the location in the distributed digital ledger at which the displayed series of digital images is maintained.

Application claim 12 (Currently Amended):
12. The computing system of claim 9, wherein adding the related digital data content to, or associating the related digital content with, a file, a repository, or a location in or at which the displayed digital data content is maintained, comprises adding the related digital data content to, or associating the related digital content with, a location in a distributed digital ledger at which the displayed digital data content is maintained, or to a location chained to the location in the distributed digital ledger at which the displayed digital data content is maintained.

Patent claim 9:
9. The computing system of claim 8, wherein the related digital data content is a digital image in which one or more objects appear; wherein adding the related digital data content to, or associating the related digital content with, the location in the distributed digital ledger at which the displayed series of digital images is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed series of digital images is maintained, comprises adding the digital image to, or associating the digital image with, the location in the distributed digital ledger at which the displayed series of digital images is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed series of digital images is maintained; and wherein the computer executable instructions cause the one or more processors to perform further operations, comprising: receiving, by the chatbot application, user input, responsive to the displayed digital image, to search for information about the one or more objects that appear in the displayed digital image; and searching the location in the distributed digital ledger at which the displayed series of digital images is maintained, or the location chained to the location in the distributed digital ledger at which the displayed series of digital images is maintained, for information about the one or more objects added to or associated with the displayed series of digital images.

Application claim 13 (Original):
13. The computing system of claim 12, wherein the related digital data content is a digital image in which one or more objects appear; wherein adding the related digital data content to, or associating the related digital content with, the location in the distributed digital ledger at which the displayed digital data content is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed digital data content is maintained, comprises adding the digital image to, or associating the digital image with, the location in the distributed digital ledger at which the displayed digital data content is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed digital data content is maintained; and wherein the computer executable instructions cause the one or more processors to perform further operations, comprising: receiving, by the chatbot application, user input, responsive to the displayed digital image, to search for information about the one or more objects that appear in the displayed digital image; and searching the location in the distributed digital ledger at which the displayed digital data content is maintained, or the location chained to the location in the distributed digital ledger at which the displayed digital data content is maintained, for information about the one or more objects added to or associated with the displayed digital content.

Patent claim 10:
10. The computing system of claim 8, wherein the related digital data content is a digital image in which one or more objects appear; wherein adding the related digital data content to, or associating the related digital content with, the location in the distributed digital ledger at which the displayed series of digital images is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed series of digital images is maintained, comprises adding the digital image to, or associating the digital image with, the location in the distributed digital ledger at which the displayed series of digital images is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed series of digital images is maintained; and wherein the computer executable instructions cause the one or more processors to perform further operations, comprising: accessing, by a machine learning application, information about the one or more objects that appear in the displayed digital image added to or associated with the location in the distributed digital ledger at which the displayed series of digital images is maintained, or the location chained to the location in the distributed digital ledger at which the displayed series of digital images is maintained; and training, by the machine learning application, on the information about the one or more objects.

Application claim 14 (Currently Amended):
14. The computing system of claim 12, wherein the related digital data content is a digital image in which one or more objects appear; wherein adding the related digital data content to, or associating the related digital content with, the location in the distributed digital ledger at which the displayed digital data content is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed digital data content is maintained, comprises adding the digital image to, or associating the digital image with, the location in the distributed digital ledger at which the displayed digital data content is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed digital data content is maintained; and wherein the computer executable instructions cause the one or more processors to perform further operations, comprising: accessing, by a machine learning application, information about the one or more objects that appear in the digital image added to or associated with the location in the distributed digital ledger at which the displayed digital data content is maintained, or the location chained to the location in the distributed digital ledger at which the displayed digital data content is maintained; and training, by the machine learning application, on the information about the one or more objects.

Patent claim 11:
11. A computer-implemented method, comprising:
displaying a series of digital images in a display space;
receiving user input to select one of the series of digital images, or a portion thereof;
displaying the user-selected digital image, or the portion thereof, in a location within a field of view of the displayed series of digital images or the display space; and
while continuing to display the user-selected digital image, or the portion thereof:
searching in one or more digital data sources for, and retrieving therefrom, contextual information based on the displayed user-selected digital image, or portion thereof, without receiving user input to request searching in the one or more digital data sources for contextual information based on the displayed user-selected digital image, or portion thereof;
detecting, by the chatbot application, one or more user interactions with one or more of the user interface application, the displayed series of digital images, the display space, the location within the field of view of the displayed series of digital images or the display space, or the user-selected digital image or the portion thereof;
displaying, by the chatbot application, a portion of the retrieved contextual information as related digital data content in the location within the field of view of the displayed series of digital images or the display space, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed series of digital images, or the display space, without receiving user input to perform the displaying the portion of the retrieved contextual information as related digital data content; and
receiving, by the chatbot application, user input, responsive to the displayed portion of the retrieved contextual information as related digital data content.

Application claim 15 (Currently Amended):
15. A computer-implemented method, comprising:
displaying digital data content in a display space; and
while continuing to display the digital data content in the display space:
searching in one or more digital data sources for, and retrieving therefrom, contextual information based on the displayed digital data content, without receiving user input to perform the searching;
detecting one or more user interactions with one or more of a user interface application, the displayed digital data content, and the display space;
displaying a portion of the retrieved contextual information as related digital data content in a location within a field of view of the displayed digital data content or the display space, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed digital data content, and the display space, without receiving user input to perform the displaying of the portion of the retrieved contextual information as related digital data content; and
receiving user input, responsive to the displayed portion of the retrieved contextual information as related digital data content.

Patent claim 14:
14. The computer-implemented method of claim 11, wherein the related digital data content is a digital image in which one or more objects appear; wherein displaying the portion of the retrieved contextual information as related digital data content in the location within the field of view of the displayed series of digital images or the display space, comprises displaying the digital image in the location within the field of view of the displayed series of digital images or the display space; and further comprising receiving user input, responsive to the displayed digital image, to search for information about the one or more objects that appear in the displayed digital image.

Application claim 22 (Currently Amended):
22. The computer-implemented method of claim 15, wherein the related digital data content is a digital image in which one or more objects appear; wherein displaying a portion of the retrieved contextual information as related digital data content in a location within a field of view of the displayed digital data content or the display space, comprises displaying the digital image in the location within the field of view of the displayed digital data content or the display space; and further comprising receiving user input, responsive to the displayed digital image, to search for information about the one or more objects that appear in the displayed digital image.

Patent claim 15:
15. The computer-implemented method of claim 11, further comprising adding the related digital data content to, or associating the related digital content with, a file, a repository, or a storage location in or at which the displayed series of digital images and/or the displayed user-selected digital image, or portion thereof, is maintained, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed series of digital images, the display space, the location within the field of view of the displayed series of digital images or the display space, or the user-selected digital image, or the portion thereof, without receiving user input to perform the displaying.

Application claim 23 (Currently Amended):
23. The computer-implemented method of claim 15, further comprising adding the related digital data content to, or associating the related digital content with, a file, a repository, or a location in or at which the displayed digital data content is maintained, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed digital data content, or the display space, without receiving user input to perform the displaying.

Patent claim 16:
16. The computer-implemented method of claim 15, wherein the related digital data content is a digital image in which one or more objects appear; and wherein adding the related digital data content to, or associating the related digital content with, the file in which the displayed series of digital images is maintained, comprises adding the digital image to, or associating the digital image with, the file in which the displayed series of digital images is maintained.

Application claim 27 (Original):
27. The computer-implemented method of claim 26, wherein the related digital data content is a digital image in which one or more objects appear; wherein adding the related digital data content to, or associating the related digital content with, the location in the distributed digital ledger at which the displayed digital data content is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed digital data content is maintained, comprises adding the digital image to, or associating the digital image with, the location in the distributed digital ledger at which the displayed digital data content is maintained, or to the location chained to the location in the distributed digital ledger at which the displayed digital data content is maintained; and

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 4, 7, 15, 18, and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ahuja et al. (US10878471B1).
As to claim 1 Ahuja teaches a computing system, comprising: a display space; (see Ahuja [Col 4 L 20] “The contextual notification 124 may be presented as a content strip to reduce the display space required to inform the user”) one or more processors; and (see Ahuja [Col 8 L 32] “The modules included within and including contextual service module 500 may be software modules, … and processed by a processor in any of the computer systems described herein.”) a memory to store computer-executable instructions, (see Ahuja [Col 13 L 30] “may include at least one memory 912 and one or more processing units or processor(s) 914”) comprising a user interface application and (see Ahuja [Col 3 L 27] “Step 1 depicts a user 102 interacting with a user device 104 (such as a mobile phone) to view a web browser 106 that is presenting information”, and see Ahuja [Col 4 L64] “may be presented via a user device 202 (such as a mobile phone) and present information from a native application 204 of the user device 202.”) a chatbot application that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: ( see Ahuja [Col 2 L9] “contextual browsing assistant service”, and see Ahuja [Col 6 L 52] “and a notification module 414. 
In embodiments, the contextual service computers”) displaying, by the user interface application, digital data content in the display space; and (see Ahuja [Col 3 L 27] “Step 1 depicts a user 102 interacting with a user device 104 (such as a mobile phone) to view a web browser 106 that is presenting information about shoes 108-114 such as images of shoes 108 and 110”) while the user interface application continues to display the digital data content in the display space, : (see Ahuja [Col 3 L 27] “Step 1 depicts a user 102 interacting with a user device 104 (such as a mobile phone) to view a web browser 106 that is presenting information about shoes 108-114 such as images of shoes 108 and 110 or ordering information about the shoes 112 and 114. The viewing session of the user 102 viewing the information about the shoes 108-114 via the web browser on user device 104 may be obtained by components of the contextual browsing assistant service on the user device”) searching in one or more digital data sources for, and retrieving, by the chatbot application, contextual information based on the displayed digital data content, (see Ahuja [Col 3 L 50] “extracting content of interest that may be utilized to find matching items offered by a different vendor for comparison shopping purposes.”, and see Ahuja [Col 7 L35] “the content extractor module 416 determines if any item matches exist between the items identified in the extracted content and a plurality of items offered by an online marketplace associated with the contextual browsing assistant service. 
The determination of any item matches may be based at least in part on a comparison of first information about the items identified in the extracted content and second information maintained by the contextual service computers 404 that identifies one or more items offered by the online marketplace associated with the contextual browsing assistant service.”) without receiving user input to perform the searching ; (see Ahuja [Col 2 L 26] “provide the contextual notification without the user having to request a specific action or provide any specific input.”) detecting, by the chatbot application, one or more user interactions with one or more of the user interface application, the displayed digital data content, and the display space; (see Ahuja [Col 2 L 42] “the contextual notifications may be dynamically generated as the user views or browses to new items, performs searches, or interacts with native applications for shopping and/or browsing of items and services.”, and see Ahuja [Col 3 L 39] “event data can include web browsing events including uniform resource locator (URL) information, HyperText Markup Language (HTML) information, Extensible Markup Language (XML) information, plain text information, image information, video information, accessibility event information, operating system level event information, or other suitable information that can be obtained from a user's browsing/shopping session.”) displaying, by the chatbot application, a portion of the retrieved contextual information as related digital data content (see Ahuja [Col 2 L 36] “the contextual notification saves display space on the user's device and reduces intrusion by being configured to display as an overlay or content strip that can be easily dismissed by the user (such as by utilizing a swipe motion with a touch interface of the user device).”, and see Ahuja [Col 4 L 18] “the contextual notification 124 is presented as an overlay or content strip over the web browser 106 shopping 
session.”, and see Ahuja [Col 5 L 35] “the contextual notification 210 is presented within the viewing space of user interface 200 to allow a user viewing the user interface 200 to view the current item 206 and information about the item 208 and compare it”) in a location within a field of view of the displayed digital data content or the display space, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed digital data content, and the display space, (see Ahuja [Col 2 L 42] “the contextual notifications may be dynamically generated as the user views or browses to new items, performs searches, or interacts with native applications for shopping and/or browsing of items and services”, and see Ahuja [Col 4 L 29] “a content strip refers to a user interface element that is configured to be presented in a non-intrusive or disruptive way in a web browser of a user device or a native application of the user device. The non-intrusive and non-disruptive nature of the content strip may be based at least in part on information obtained from the event data including viewing space available”) without receiving user input to perform the displaying of the portion of the retrieved contextual information as related digital data content; and (see Ahuja [Col 2 L25] “generate and provide the contextual notification without the user having to request a specific action or provide any specific input.”, and see Ahuja [Col 6 L 5] “The obtaining of the event data and identification of items and eventual generation of the contextual notification 316 occur absent the user requesting the contextual notification 316 for comparative shopping purposes.”) receiving, by the chatbot application, user input, responsive to the displayed portion of the retrieved contextual information as related digital data content. 
(see Ahuja [Col 3 L 19] “The user may interact with the notification to be presented with more information about the particular shoe offered by the different vendor or the user can interact with the notification to dismiss or close the notification and continue shopping via the native application.”)

As to claim 4, Ahuja teaches the computing system of claim 1, wherein displaying, by the user interface application, digital data content in the display space, comprises displaying, by the user interface application, digital data content authored by a first entity in the display space; (see Ahuja [Col 3 L 4] “a user may be viewing, on their mobile phone, details about a particular shoe on a native application provided by a particular vendor”) wherein searching in one or more digital data sources for, and retrieving, by the chatbot application, contextual information based on the displayed digital data content, without receiving user input to perform the searching, comprises searching in one or more digital data sources for, and retrieving, by the chatbot application, contextual information authored by one or more entities other than the first entity based on the displayed digital data content authored by the first entity, without receiving user input to perform the searching; (see Ahuja [Col 3 L 49] “request appropriate content extractors for extracting content of interest that may be utilized to find matching items offered by a different vendor for comparison shopping purposes.”, and see Ahuja [Col 5 L 4] “The user interface 200 depicts a contextual notification 210 being presented via the user interface 200.
The contextual notification 210 includes information about an item match offered by another vendor (Online Marketplace as opposed to Native Browsing Application)”) wherein displaying, by the chatbot application, the portion of the retrieved contextual information as related digital data content in the location within the field of view of the displayed digital data content or the display space, comprises displaying, by the chatbot application, the portion of the retrieved contextual information authored by the one or more entities other than the first entity as related digital data content in the location within the field of view of the displayed digital data content authored by the first entity or the display space; and (see Ahuja [Col 4 L 10] “a contextual notification 124 that includes information about one of the items (shoe 108) being offered by a different vendor and for a better price”) wherein receiving, by the chatbot application, user input, responsive to the displayed portion of the retrieved contextual information as related digital data content, comprises receiving, by the chatbot application, user input, responsive to the displayed portion of the retrieved contextual information authored by one or more entities other than the first entity as related digital data content. 
(see Ahuja [Col 2 L 42] “the contextual notifications may be dynamically generated as the user views or browses to new items, performs searches, or interacts with native applications for shopping and/or browsing of items and services.”, and see Ahuja [Col 3 L 39] “event data can include web browsing events including uniform resource locator (URL) information, HyperText Markup Language (HTML) information, Extensible Markup Language (XML) information, plain text information, image information, video information, accessibility event information, operating system level event information, or other suitable information that can be obtained from a user's browsing/shopping session.”)

As to claim 7, Ahuja teaches the computing system of claim 1, wherein the displayed digital data content identifies a first object purchasable from a first entity, (see Ahuja [Col 3 L 4] “a user may be viewing, on their mobile phone, details about a particular shoe on a native application provided by a particular vendor”) wherein the displayed related digital data content identifies a second object purchasable from a second entity different than the first entity, and (see Ahuja [Col 3 L 16] “an overlay over the native application informing the user that the particular shoe is available for ordering from a different vendor and at a better price.”) wherein displaying, by the chatbot application, a portion of the retrieved contextual information as related digital data content in a location within a field of view of the displayed digital data content or the display space, comprises displaying, by the chatbot application, the digital data content that identifies the first object and the related digital data content that identifies the second object in an online shopping cart. (see Ahuja [Col 3 L 16] “an overlay over the native application informing the user that the particular shoe is available for ordering from a different vendor and at a better price.
The user may interact with the notification to be presented with more information about the particular shoe offered by the different vendor or the user can interact with the notification to dismiss or close the notification and continue shopping via the native application.”, and see Ahuja [Col 10 L 36] “The notification module 604 may be configured to utilize the user input to generate and maintain a wish list for items included in a contextual notification, and compile and maintain a comparative list of one or more items included in multiple contextual notifications for subsequent comparison shopping.”)

As to claim 15, this is directed to a method that corresponds to the system of claim 1. See the rejection of claim 1 above, which also applies to claim 15.

As to claim 18, this is directed to a method that corresponds to the system of claim 4. See the rejection of claim 4 above, which also applies to claim 18.

As to claim 21, this is directed to a method that corresponds to the system of claim 7. See the rejection of claim 7 above, which also applies to claim 21.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 8-10 and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Ahuja et al.
(US10878471B1) in view of Luk et al. (US11263385B1).

As to claim 8, Ahuja teaches the computing system of claim 1. Ahuja does not teach "wherein the related digital data content is a digital image in which one or more objects appear;", "wherein displaying, by the chatbot application, a portion of the retrieved contextual information as related digital data content in a location within a field of view of the displayed digital data content or the display space, comprises displaying, by the chatbot application, the digital image in the location within the field of view of the displayed digital data content or the display space; and", "wherein the computer executable instructions cause the one or more processors to perform further operations, comprising receiving, by the chatbot application, user input, responsive to the displayed digital image, to search for information about the one or more objects that appear in the displayed digital image." However, Luk teaches wherein the related digital data content is a digital image in which one or more objects appear; (see Luk [Col 2 L 11] “FIG.
2 is a computing device having a display that is displaying a web browser providing an image comprising graphic objects”, and see Luk [Col 3 L 62] “Here, the web browser extension can monitor images or video, including graphic objects that are within the images or video,”) wherein displaying, by the chatbot application, a portion of the retrieved contextual information as related digital data content in a location within a field of view of the displayed digital data content or the display space, comprises displaying, by the chatbot application, the digital image in the location within the field of view of the displayed digital data content or the display space; and ( see Luk [Col 3 L 62] “ the web browser extension can monitor images or video, including graphic objects that are within the images or video, being provided by the web browser.”, and see Luk [Col 4 L 9] “the graphic object can be the object of a reverse image search. This might be a general reverse image search to identify webpages across multiple websites or may be a focused reverse image search that identifies a webpage of a particular website. In doing this, the identified webpage is related to the graphic object… The web link can be embedded within the graphic object boundary so that a user can interact with the web link within the graphic object boundary.”, and see Luk [Col 11 L 6] “web link embedder 114 progressively embeds the web link within the locations of the area corresponding to the graphic object in the image. 
In this way, the user watching the video can interact with a graphic object during different times of the video and at any location of the graphic object on the screen, even as the graphic object moves across the screen.”) wherein the computer executable instructions cause the one or more processors to perform further operations, comprising receiving, by the chatbot application, user input, responsive to the displayed digital image, to search for information about the one or more objects that appear in the displayed digital image. (see Luk [Col 4 L 9] “the graphic object can be the object of a reverse image search. This might be a general reverse image search to identify webpages across multiple websites or may be a focused reverse image search that identifies a webpage of a particular website. In doing this, the identified webpage is related to the graphic object… The web link can be embedded within the graphic object boundary so that a user can interact with the web link within the graphic object boundary.”, and see Luk [Col 4 L 20] “the user may provide a selection input anywhere within the graphic object boundary to interact with the web link.”) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the invention of Ahuja to include image recognition for identifying objects and their boundaries to provide a more effective method for navigating the web by allowing users to interact with specific items within an image or video.
See Luk [Col 3 L 40].

As to claim 9, Ahuja teaches the computing system of claim 1. Ahuja does not teach "wherein the computer executable instructions cause the one or more processors to perform further operations, comprising adding the related digital data content to, or associating the related digital content with, a file, a repository, or a location in or at which the displayed digital data content is maintained, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed digital data content, and the display space, without receiving user input to perform the adding or associating." However, Luk teaches wherein the computer executable instructions cause the one or more processors to perform further operations, comprising adding the related digital data content to, or associating the related digital content with, a file, a repository, or a location in or at which the displayed digital data content is maintained, based in part on the detected one or more user interactions with the one or more of the user interface application, the displayed digital data content, and the display space, without receiving user input to perform the adding or associating. (see Luk [Col 11 L 2] “web link embedder 114 can progressively embed the web link for each frame of the video or embed the web link a determined number of times over a timeframe. Thus, as a graphic object moves locations across the graphical user interface while the video is playing, web link embedder 114 progressively embeds the web link within the locations of the area corresponding to the graphic object in the image.
In this way, the user watching the video can interact with a graphic object during different times of the video and at any location of the graphic object on the screen, even as the graphic object moves across the screen.”, and see Luk [Col 11 L 17] “The image of the graphic object can be presented at a stop point of the video as a selectable option, as will be illustrated in more detail in FIG. 5. Thus, when a user provides a selection input to the image at the stop point of the video, the webpage redirect command is provided to the web browser to navigate the web browser to the related webpage. Web link embedder 114 may also use the area of the image corresponding to the identified graphic object to provide as the image of the graphic object at the stop point of the video as the selectable option.”)

As to claim 10, Ahuja as modified by Luk teaches the computing system of claim 9, wherein the displayed digital data content is a digital image comprising a plurality of pixels; and (see Luk [Col 2 L 11] “FIG. 2 is a computing device having a display that is displaying a web browser providing an image comprising graphic objects”, and see Luk [Col 3 L 62] “Here, the web browser extension can monitor images or video, including graphic objects that are within the images or video,”) Note: all images comprise a plurality of pixels. wherein adding the related digital data content to, or associating the related digital content with, the file in which the displayed digital data content is maintained, comprises adding the related digital data content to, or associating the related digital content with, one or more of the plurality of pixels in the file in which the digital image is maintained. (See Luk [Col 5 L 12] “Operating environment 100 comprises data store 106. Data store 106 generally stores information including data … Although depicted as a single database component, data store 106 may be embodied as one or more data stores or may be in the cloud.
… Web browser extension 108 is also illustrated as part of operating environment 100.”)

As to claim 22, this is directed to a method that corresponds to the system of claim 8. See the rejection of claim 8 above, which also applies to claim 22.

As to claim 23, this is directed to a method that corresponds to the system of claim 9. See the rejection of claim 9 above, which also applies to claim 23.

As to claim 24, this is directed to a method that corresponds to the system of claim 10. See the rejection of claim 10 above, which also applies to claim 24.

Claims 11-14 and 25-28 are rejected under 35 U.S.C. 103 as being unpatentable over Ahuja et al. (US10878471B1) in view of Luk et al. (US11263385B1) and Andon et al. (US20220300966A1).
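For readers mapping the cited disclosure to the claims, the Ahuja passages quoted above reduce to a simple pipeline: browsing-event data is mined for items of interest, the extracted items are compared against a marketplace catalog maintained by the contextual service, and any match is surfaced as a dismissible notification without the user requesting it. The minimal Python sketch below is purely illustrative of that flow; every name in it (Item, CATALOG, extract_items, contextual_notification) is hypothetical and is not drawn from Ahuja, Luk, or the application under examination.

```python
from dataclasses import dataclass

# Hypothetical item record; names are illustrative only.
@dataclass
class Item:
    title: str
    price: float

# Toy catalog standing in for the "second information maintained by the
# contextual service computers" that Ahuja describes.
CATALOG = {"runner shoe": Item("runner shoe", 49.99)}

def extract_items(event_data: dict) -> list:
    """Stand-in for Ahuja's content extractor: pull item names out of
    browsing-event data (URL/HTML/text information from the session)."""
    return event_data.get("viewed_items", [])

def match_items(names: list) -> list:
    """Compare extracted items against the marketplace catalog."""
    return [CATALOG[n] for n in names if n in CATALOG]

def contextual_notification(event_data: dict):
    """Generate a dismissible overlay/content-strip message with no user
    request, mirroring the behavior the Office Action cites in Ahuja."""
    matches = match_items(extract_items(event_data))
    if not matches:
        return None  # nothing to show; stay non-intrusive
    best = min(matches, key=lambda i: i.price)
    return f"{best.title} available from another vendor for ${best.price:.2f}"

print(contextual_notification({"viewed_items": ["runner shoe"]}))
```

Running this on a toy event record prints a notification string for a matched item and returns None when nothing matches, which parallels the "without receiving user input" limitation the examiner maps to Ahuja.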
Prosecution Timeline

Jun 12, 2023: Application Filed
Mar 16, 2026: Non-Final Rejection (§102, §103, §DP) (current)

Prosecution Projections

1-2
Expected OA Rounds
Grant Probability
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
