DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Non-Final Office Action is in response to the submission filed on 02/10/2026 in Application No. 18/597,946.
Status of Claims:
Claims 1, 3, 5, and 7 are amended in this Office Action.
Claims 21 and 22 are new in this Office Action.
Claims 12 and 17 are canceled in this Office Action.
Claims 1-11, 13-16, and 18-22 are pending in this Office Action.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/10/2026 has been entered.
Response to Arguments
CLAIM REJECTIONS UNDER 35 U.S.C. § 101
Applicant’s arguments filed on 02/10/2026 (pages 10-14) regarding claim rejections under 35 U.S.C. § 101, together with the amendments submitted, have been fully considered. The rejections made under 35 U.S.C. § 101 in the previous Office Action are withdrawn in view of the Applicant’s remarks and amendments.
CLAIM REJECTIONS UNDER 35 U.S.C. § 112(a) and 112(b)
Applicant’s arguments filed on 02/10/2026 (pages 9-10) regarding claim rejections under 35 U.S.C. § 112(a) and 112(b), together with the amendments submitted, have been fully considered. The rejections made under 35 U.S.C. § 112(a) and 112(b) in the previous Office Action are withdrawn in view of the Applicant’s remarks and amendments.
CLAIM REJECTIONS UNDER 35 U.S.C. § 103
After reviewing the Applicant’s arguments in the remarks filed 02/10/2026 (pages 14-15) regarding claims 1, 3, 5, and 7, the Examiner respectfully submits that the arguments are not fully persuasive.
The applicant argues that none of the cited references mentions event-driven pipeline orchestration or provides any indication of architectures for handling scalable processing of incoming data. The Examiner respectfully disagrees with the Applicant; the Examiner respectfully submits that Lagle Ruiz discloses “Fig. 4 & Col 4 line 4-18: Merchants and/or content providers will benefit from a scalable and automated system to ensure the quality and/or appropriateness of published images, before they associate their content with the published image via implementation of the various use-cases. FIGS. 4-6, and the additional embodiments presented, outline examples of scalable and automated mechanisms for filtering out inappropriate images, and/or otherwise ensuring the quality and/or appropriateness of published images prior to using the published images to host additional content”. The system of Lagle Ruiz is directed to scalable and automated processing of incoming contents such as images. The system also generates particular output based on the accepted incoming contents, which can correspond to event-driven pipeline orchestration. Therefore, Lagle Ruiz at least teaches “implement event-driven pipeline orchestration for scalable processing”.
The Applicant’s remaining arguments filed in the remarks on 02/10/2026 (pg. 15) regarding claims 21 and 22 are fully considered but are moot in view of new grounds of rejection necessitated by the Applicant’s amendments. Please refer to the rejections under 35 U.S.C. § 103 below for further details.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 10-11, 13, 15-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lagle Ruiz et al. (US Patent 8234168) “Lagle Ruiz” in view of Liu et al. (US PGPUB 20160259888) “Liu”.
Regarding claim 1, Lagle Ruiz teaches a computing system for analysis of images and video that is capable of recognizing, classifying, and processing the context and meaning contained therein in a manner similar to human intuitive understanding of such context and meaning employing a collaborative platform (Fig. 2 & Col 3 line 24-46), the computing system comprising: one or more hardware processors configured for: receiving image and video data from a first plurality of sources, wherein the received image and video data includes or is associated with textual data (Col 3 line 16-23: Disclosed herein are computer-implement systems and methods for identifying and analyzing content (e.g., text, images, videos, etc.) published on digital content platforms... Col 3 line 24-27: In one embodiment, for example, the computer-implement systems and methods disclosed herein are used for identifying and analyzing the context/content within images published on digital content platforms. The context/content of an image is then used to determine whether the image is appropriate for association with (or "hosting of") of a third-party's content…Col 3 line 5-55: The embodiment includes using a computer-implemented image collection system to (1) identify an image published on a digital content platform, and (2) identify a publisher of the image…Col 6 line 60-64: As will be described with reference to FIG. 4, an image 412, which is received from a publisher's platform 410, is first processed through a quality assurance filter 490, before being processed through an image-content matching engine 422… Examiner’s note: The system receives text, images, and videos for processing); gathering contextual data from a second plurality of sources on the internet using deep web extraction including crawling, wherein the gathered contextual data is related to the received image and video data (Fig. 
2 & Col 7 line 28-41: Within image database 430, the images 412 (or image identifiers) may be cataloged, categorized, sub-categorized, and/or scored based on image metadata and/or existing image tags. In one embodiment, the scoring may be based on data obtained from the digital content platform 410 that published the image 412. The data may be selected from the group consisting of: image hash, digital publisher identification, publisher priority, image category, image metadata, quality of digital image, size of digital image, date of publication of the digital image, time of publication of digital image, image traffic statistics, and any combination or equivalents thereof. The images 412 may also be tagged with the location of origin of the image. The image 412 may also be thumb-nailed, resized, or otherwise modified to optimize processing…Col 8 line 50-58: Content (e.g., ads, apps, etc.) stored in a content database 432 can then be matched to appropriate images and sent to the digital content platform 410 for publication proximate to the image 412… Examiner’s note: contextual data such as image hash, digital publisher identification, publisher priority, image category, image metadata, quality of digital image, size of digital image, date of publication of the digital image, time of publication of digital image, image traffic statistics, and contents can be collected from a source such as a content platform and from a service provider within a web browsing environment) and wherein the gathered contextual data is weighted based on temporal factors to account for information relevance over time (Col 7 line 28-38: Within image database 430, the images 412 (or image identifiers) may be cataloged, categorized, sub-categorized, and/or scored based on image metadata and/or existing image tags. In one embodiment, the scoring may be based on data obtained from the digital content platform 410 that published the image 412. 
The data may be selected from the group consisting of: image hash, digital publisher identification, publisher priority, image category, image metadata, quality of digital image, size of digital image, date of publication of the digital image, time of publication of digital image, image traffic statistics, and any combination or equivalents thereof… Examiner’s note: An image can be scored based on metadata and tags. Thus, the score can be similar to a weight that describes the image, and the metadata can contain temporal factors such as date of publication of the digital image and time of publication of digital image); retrieving real-world data based at least in part on weighted gathered contextual data from the second plurality of sources (Col 7 line 28-41: Within image database 430, the images 412 (or image identifiers) may be cataloged, categorized, sub-categorized, and/or scored based on image metadata and/or existing image tags. In one embodiment, the scoring may be based on data obtained from the digital content platform 410 that published the image 412. The data may be selected from the group consisting of: image hash, digital publisher identification, publisher priority, image category, image metadata, quality of digital image, size of digital image, date of publication of the digital image, time of publication of digital image, image traffic statistics, and any combination or equivalents thereof…Col 8 line 50-58: Content (e.g., ads, apps, etc.) stored in a content database 432 can then be matched to appropriate images and sent to the digital content platform 410 for publication proximate to the image 412…Examiner’s note: The gathered data such as image hash, digital publisher identification, publisher priority, image category, image metadata, quality of digital image, size of digital image, date of publication of the digital image, time of publication of digital image, image traffic statistics, and content (e.g., ads, apps, etc.) 
can be considered as real-world data and they are subsequently applied to the retrieved data); retrieving one or more machine learning algorithms from an algorithm database (Col 7 line 64-col 8 line 1: A content-based filter can then be applied to images that pass the hash-based filter. Within the content-based filter, image recognition algorithms and/or crowdsourcing protocols can be applied to review and analyze the context/content of the processed images…Col 10 line 9-16: The embodiment includes filtering (or processing) the plurality of images through a first algorithmic filter to identify any ineligible images. The first algorithmic filter may include "machine learning" of publisher tendencies; a crawler function that adjusts depending on learning of publisher tendencies; a crawler function that identifies the image as inappropriate by matching the image to a pre-identified ineligible image; and/or an image hash function analysis… Examiner’s note: Images data can be processed using a machine learning algorithm that is used to analyze the images to determine context and content. Thus, a machine learning algorithm is retrieved by the system to process the data); and implementing event-driven pipeline orchestration for scalable processing (Fig. 4 & Col 4 line 4-18: Merchants and/or content providers will benefit from a scalable and automated system to ensure the quality and/or appropriateness of published images, before they associate their content with the published image via implementation of the various use-cases…Examiner’s note: The system can be scalable and automated to process contents and generate particular output which can correspond to event-driven pipeline orchestration for scalable processing) to analyze the received image and video data using the retrieved machine learning algorithms and the retrieved real-world data to classify and add metadata to the received image and video data (Fig. 4-5 & Col 8 line 50-58: Content (e.g., ads, apps, etc.) 
stored in a content database 432 can then be matched to appropriate images and sent to the digital content platform 410 for publication proximate to the image 412. For example, if quality assurance filter 490 deems the image 412 to be appropriate for hosting content, then when an end-user activates hotspot 414, the service provider 420 can provide contextually relevant content 462, 463, and 464, in a scrollable image frame 470 on the digital content platform 410… Col 10 line 9-16: The embodiment includes filtering (or processing) the plurality of images through a first algorithmic filter to identify any ineligible images. The first algorithmic filter may include "machine learning" of publisher tendencies; a crawler function that adjusts depending on learning of publisher tendencies; a crawler function that identifies the image as inappropriate by matching the image to a pre-identified ineligible image; and/or an image hash function analysis… Examiner’s note: The system uses machine learning and the retrieved contextual data to further add relevant content data to the multimodal data.).
Lagle Ruiz does not explicitly teach allowing individuals and groups to review and annotate classifications and metadata associated with the received image and video data.
Liu teaches allowing individuals and groups to review and annotate classifications and metadata associated with the received image and video data ([0086] The result interface may be further configured to enable the user to perform tagging of the one or more video images, while the user views the one or more video images through the result interface. For example, the result interface may enable the user to tag a non-tissue region in a video image being displayed to the user with a correct content identifier, if the user observes that a wrong content identifier is currently associated with the non-tissue region. Further, the result interface may enable the user to identify a region in the video image as a non-tissue region that could not be identified by the content management server. The user may tag such non-tissue regions with an appropriate content identifier. The user may also identify regions in the video image that may have been wrongly identified as non-tissue regions, though these may correspond to other artifacts or tissue regions in the video image. In addition, the result interface may enable the user to add annotations and notes at one or more portions of the video images... Examiner’s note: Thus, the system allows a user to review and annotate information relating to the received image and video data). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Liu teachings in the Lagle Ruiz system. A skilled artisan would have been motivated to incorporate user reviews and corrections of data associated with image and video data, as taught by Liu, into the Lagle Ruiz system to enhance data accuracy, thereby increasing data reliability and operational efficiency. The close relation between the references suggests a reasonable expectation of success.
Regarding claim 2, Lagle Ruiz in view of Liu teaches all of the limitations of claim 1. Lagle Ruiz further teaches wherein the first plurality of sources comprises crowdsourced or collaborative data sources (Col 7 line 64-67: A content-based filter can then be applied to images that pass the hash-based filter. Within the content-based filter, image recognition algorithms and/or crowdsourcing protocols can be applied to review and analyze the context/content of the processed images. The content-based filter may further include image pattern matching algorithms to automatically scan and detect image content based on metrics such as patter).
Regarding claim 3, note the rejections of claim 1. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Regarding claim 4, note the rejections of claim 2. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Regarding claim 5, note the rejections of claim 1. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Regarding claim 6, note the rejections of claim 2. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Regarding claim 7, note the rejections of claim 1. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Regarding claim 8, note the rejections of claim 2. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Regarding claim 10, Lagle Ruiz in view of Liu teaches all of the limitations of claim 1. Lagle Ruiz does not explicitly teach wherein the machine learning algorithms comprise reinforcement learning algorithms that update model parameters based on the human review and annotation feedback.
Liu teaches the machine learning algorithms comprise reinforcement learning algorithms that update model parameters based on the human review and annotation feedback ([0036] In accordance with an embodiment, the content management server 104 may be further configured to perform machine learning based on the identified one or more non-tissue regions, the determined one or more content identifiers, and the association of each of the determined one or more content identifiers with the corresponding non-tissue region… [0086] In accordance with an embodiment, the result interface may be further configured to enable the user to perform tagging of the one or more video images, while the user views the one or more video images through the result interface. For example, the result interface may enable the user to tag a non-tissue region in a video image being displayed to the user with a correct content identifier, if the user observes that a wrong content identifier is currently associated with the non-tissue region… Examiner’s note: The system implements a machine learning model that is based on the identified one or more non-tissue regions and the determined one or more content identifiers, wherein the determined content identifiers can be reviewed and corrected by a user. Thus, human review can have an effect on the machine learning algorithm that is similar to reinforcement learning algorithms). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Liu teachings in the Lagle Ruiz system. A skilled artisan would have been motivated to incorporate user reviews and corrections that support a machine learning model, as taught by Liu, into the Lagle Ruiz system to increase the accuracy of training data and to train the model efficiently. The close relation between the references suggests a reasonable expectation of success.
Regarding claim 11, Lagle Ruiz in view of Liu teaches all of the limitations of claim 1. Lagle Ruiz further teaches wherein analyzing comprises performing cyclical processing of the image and video data with feedback loops between processing stages (Fig. 6… Examiner’s note: The method described in Fig. 6 processes image data, and the processing steps can form a cyclical structure).
Regarding claim 13, Lagle Ruiz in view of Liu teaches all of the limitations of claim 1. Lagle Ruiz further teaches wherein analyzing the received image and video data comprises identifying relationships between entities appearing in different data modalities (Col 4 line 25-30: When an end-user 105 accesses the webpage 110 and activates a hotspot 114, a call is made upon a service provider 120 to provide contextually relevant content (e.g., one or more ad creatives 162, 163, and 164) within an image frame 170 provided proximate to the image 112...Col 8 line 50-58: Content (e.g., ads, apps, etc.) stored in a content database 432 can then be matched to appropriate images and sent to the digital content platform 410 for publication proximate to the image 412. For example, if quality assurance filter 490 deems the image 412 to be appropriate for hosting content, then when an end-user activates hotspot 414, the service provider 420 can provide contextually relevant content 462, 463, and 464, in a scrollable image frame 470 on the digital content platform 410… Examiner’s note: The system identifies relationships, such as the relationships between stored contents and the appropriate images to which they are matched).
Regarding claim 15, note the rejections of claim 10. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Regarding claim 16, note the rejections of claim 11. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Regarding claim 18, note the rejections of claim 13. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Regarding claim 19, Lagle Ruiz in view of Liu teaches all of the limitations of claim 1. Lagle Ruiz further teaches recursively expanding the scope of contextual data gathering based on intermediate analysis results (Fig. 5 & Col 9 line 15-19: If the image 512 and/or the UGC publisher clears the quality assurance filter 590, the image-content matching engine 522 can be used to match the image 512 to contextually relevant ads 562, content 563, and/or apps 555a, 555b, and 555c... Examiner’s note: Contents such as a plurality of apps, contents, and ads can be matched with an image when a condition is met. Thus, the matched contents can expand in both type and number).
Regarding claim 20, note the rejections of claim 19. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Claims 9 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lagle Ruiz et al. (US Patent 8234168) “Lagle Ruiz” in view of Liu et al. (US PGPUB 20160259888) “Liu” and Nachman et al. (US PGPUB 20170091628) “Nachman”.
Regarding claim 9, Lagle Ruiz in view of Liu teaches all of the limitations of claim 1. Lagle Ruiz in view of Liu does not explicitly teach wherein gathering contextual data from the second plurality of sources comprises accessing both public internet sources and private internal sources.
Nachman teaches gathering contextual data from the second plurality of sources comprises accessing both public internet sources and private internal sources ([0031] The method 300 begins with block 302, in which the computing device 102 acquires one or more media objects. The media objects may include image files, video files, audio files, or any other media data… In some embodiments, in block 304 the computing device 102 may receive one or more images from a mobile computing device 104 that were captured by the mobile computing device 104. For example, the user may upload or otherwise submit images captured with the camera 152 of the mobile computing device 104 to the computing device 102. In some embodiments, in block 306 the computing device 102 may retrieve one or more images from a private media repository, such as the media data 180 maintained by the private media server 108. For example, the computing device 102 may retrieve images from a private cloud image service associated with a user. In some embodiments, in block 308 the computing device 102 may retrieve one or more images from a public media repository, such as the media data 160 maintained by the public media server 106. For example, the computing device 102 may retrieve images from a public web server or a public social media network... Examiner’s note: The system receives an image as an input, and data is further retrieved from private storage or public storage for subsequent processing). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Nachman teachings in the Lagle Ruiz and Liu system. A skilled artisan would have been motivated to incorporate accessing public and private storage to obtain data, as taught by Nachman, into the Lagle Ruiz and Liu system to allow the system to obtain data from a variety of storage types. 
This allows for a strategic balance of cost, security, flexibility, and performance. The close relation between the references suggests a reasonable expectation of success.
Regarding claim 14, note the rejections of claim 9. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Lagle Ruiz et al. (US Patent 8234168) “Lagle Ruiz” in view of Liu et al. (US PGPUB 20160259888) “Liu” and Terrazas et al. (US PGPUB 20130226667) “Terrazas”.
Regarding claim 21, Lagle Ruiz in view of Liu teaches all of the limitations of claim 1. Lagle Ruiz in view of Liu does not explicitly teach allocating compensation among the first plurality of sources or the reviewing and annotating individuals and groups, wherein the allocation is based on a predictive calculation of contribution by each of the first plurality of sources or each of the reviewing and annotating individuals and groups to developing understanding of context and meaning of the received image and video data.
Terrazas teaches allocating compensation among the first plurality of sources or the reviewing and annotating individuals and groups, wherein the allocation is based on a predictive calculation of contribution by each of the first plurality of sources or each of the reviewing and annotating individuals and groups to developing understanding of context and meaning of the received image and video data ([0018]: Example methods and apparatus disclosed herein analyze aerial images to generate sampling paths for sampling a geographic area, estimate future development including market channels, and/or initiate crowdsourcing of market channel information by establishing a crowdsourcing platform…[0019]: As used herein, the term "crowdsourcing" refers to obtaining information from a collective of individuals (e.g., the general public, a knowledgeable group of persons) for a designated purpose (e.g., a project). Persons contributing information for the designated purpose may be compensated or not compensated for time and effort spent providing the information… Examiner’s note: The system adopts a crowdsourcing platform as part of analyzing aerial images, wherein the platform obtains information from a collective of individuals. In return, the system determines whether and how to compensate the individuals for their contributions). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Terrazas teachings in the Lagle Ruiz and Liu system. A skilled artisan would have been motivated to incorporate individual compensation within a content analyzing system, as taught by Terrazas, into the Lagle Ruiz and Liu system to increase user engagement in a crowdsourcing environment, which can improve the individuals' production and increase their participation. 
The close relation between the references suggests a reasonable expectation of success.
Regarding claim 22, note the rejections of claim 21. The instant claim recites substantially the same limitations as the above-rejected claim and is therefore rejected under the same prior-art teachings.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Shankaranarayanan et al. (US PGPUB 20110143776) is directed to a crowdsourcing environment that distributes tasks by soliciting the participation of loosely defined groups of individual contributors, rather than by establishing formal employment or contractual relationships to secure the labor. A group of contributors may include, for example, individuals responding to a solicitation posted on a certain website, or individuals who are targeted for market research by some other means. Each contributor may perform one or more tasks that generate data that contribute to a defined result, such as proofreading part of a digital version of an ancient text or analyzing a small quantum of a large volume of data. The contributors may also gather and submit data that can be compiled to establish the existence of trends or conditions, such as traffic density. Each contributor may be compensated for the contribution, or participation may be rewarded with intangibles such as personal satisfaction or gaining valuable experience.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAO DANG VUONG whose telephone number is (571)272-1812. The examiner can normally be reached M-F 7:30-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley can be reached at (571) 272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.D.V./ Examiner, Art Unit 2153 03/20/2026
/KAVITA STANLEY/Supervisory Patent Examiner, Art Unit 2153