DETAILED ACTION
Response to Amendment
This action is in response to the amendment filed on 10/28/2025. Claims 1, 8, and 15 have been amended. Claims 1-20 are pending and currently under consideration for patentability.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Inventorship
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 11,461,813. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-15 of U.S. Patent No. 11,461,813 recite the entirety of the limitations of claims 1-20 of the instant application. For example, independent claims 1, 8, and 15 of the instant application are anticipated by claims 1, 7, and 12 of U.S. Patent No. 11,461,813. Claims 1, 7, and 12 of U.S. Patent No. 11,461,813 recite additional features, such as "wherein generating augmented media content based on the source information further comprises converting the text-based validation information to audio content," that claims 1, 8, and 15 of the instant application do not recite; the instant claims are therefore broader than claims 1, 7, and 12 of U.S. Patent No. 11,461,813. Claim 1 of U.S. Patent No. 11,461,813 is thus, in essence, a "species" of the generic invention of instant claim 1, and it has been held that a generic invention is "anticipated" by a "species" within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Claims 2-7 (dependent on claim 1), claims 9-14 (dependent on claim 8), and claims 16-20 (dependent on claim 15) do not cure the deficiencies of the independent claims. Appropriate correction is required.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-19 of U.S. Patent No. 12,086,839. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-19 of U.S. Patent No. 12,086,839 recite the entirety of the limitations of claims 1-20 of the instant application. For example, independent claims 1, 8, and 15 of the instant application are anticipated by claims 1, 8, and 15 of U.S. Patent No. 12,086,839. Claims 1, 8, and 15 of U.S. Patent No. 12,086,839 recite additional features, such as "wherein the indication of sourcing information is embedded with the media item via blockchain," that claims 1, 8, and 15 of the instant application do not recite; the instant claims are therefore broader than claims 1, 8, and 15 of U.S. Patent No. 12,086,839. Claim 1 of U.S. Patent No. 12,086,839 is thus, in essence, a "species" of the generic invention of instant claim 1, and it has been held that a generic invention is "anticipated" by a "species" within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Claims 2-7 (dependent on claim 1), claims 9-14 (dependent on claim 8), and claims 16-20 (dependent on claim 15) do not cure the deficiencies of the independent claims. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without significantly more.
Step 1: Under the test for patent subject matter eligibility, claims 1-20 satisfy Step 1 (see the 2019 Revised Patent Subject Matter Eligibility Guidance), as they are directed to a process, machine, manufacture, or composition of matter. Claims 1-7 recite a system, claims 8-14 recite a computer-readable medium, and claims 15-20 recite a method. When assessed under Step 2A, Prong I, however, the claims are found to be directed to an abstract idea. The rationale for this finding is explained below:
Step 2A, Prong I: Under Step 2A, Prong I, independent claims 1, 8, and 15 are directed to an abstract idea without significantly more, as they all recite a judicial exception. Claims 1, 8, and 15 recite limitations directed to the abstract idea including “receiving a media item; receiving source indication information associated with a publisher of the media item; validating the received source indication information by: identifying a set of associated validation rules, the set of associated validation rules including an identity validation rule; identifying a set of associated validation resources, the set of associated validation resources including an identity validation resource; applying the set of associated validation rules to validate the source indication information by: sending a request to at least one validation resource from the set of associated validation resources, wherein the at least one validation resource comprises the identity validation resource; receiving a validation message from the at least one validation resource; and applying at least one validation rule from the set of associated validation rules, the at least one validation rule comprising the identity validation rule; generating augmented media content that includes the source indication information; and wherein the set of validation rules comprises a rule that requires indication of sourcing information, the augmented media content comprises graphic content that is appended to the media item, and the graphic content comprises the required indication of sourcing information.” These further limitations are not seen as any more than the judicial exception. 
Claims 1, 8, and 15 recite additional limitations including "extracting media content from the media item; extracting validation information based on the received source indication information; and generating and storing an augmented media item that includes the augmented media content and the extracted media content." Generating augmented media content based on extracted source indication information that has been validated is considered to be an abstract idea, specifically, certain methods of organizing human activity, such as commercial interactions, advertising, marketing, and sales. Generating media (i.e., augmented media) based on information (i.e., extracted source indication information) that has first been validated addresses a well-known merchant problem, and the claims merely apply this well-known merchant problem to the environment of the Internet; the problem is not one necessarily rooted in computer technology. For example, it is well known for merchants/sellers to generate media/offers based on information (e.g., sourced from the merchant's POS or customer information) where the data or source information must first be validated or confirmed (e.g., Is this the correct customer? Is the purchase information accurate?). Furthermore, the analysis in which the extracted source indication information is validated falls under another abstract idea, specifically, mental processes, such as concepts performed in the human mind (including an observation, evaluation, judgment, or opinion): identifying a set of validation rules, identifying a set of validation resources, applying validation rules to validate the source indication information, sending a request to a validation resource, and receiving a validation message from the validation resource are all concepts that can be performed by a user mentally, or with pen and paper, given the necessary information.
Therefore, under Step 2A, Prong I, claims 1, 8, and 15 are directed towards an abstract idea.
Step 2A, Prong II: Step 2A, Prong II determines whether any claim recites additional elements that integrate the judicial exception (abstract idea) into a practical application. Claims 1, 8, and 15 recite additional limitations including "extracting media content from the media item; extracting validation information based on the received source indication information; and generating and storing an augmented media item that includes the augmented media content and the extracted media content." These additional limitations are seen as adding the words "apply it" (or an equivalent) to the judicial exception, as mere instructions to implement the abstract idea on a computer, or as merely using a computer as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, alone and in combination, these additional elements amount to using a computer as a tool to perform the abstract idea and add only insignificant extra-solution activity to the judicial exception. They do no more than link the judicial exception to a particular technological environment or field of use (i.e., the Internet or media content) and therefore do not integrate the abstract idea into a practical application. The courts have explained that although such additional elements limit the use of the abstract idea, this type of limitation merely confines the use of the abstract idea to a particular technological environment and fails to add an inventive concept to the claims. See Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253 (Fed. Cir. 2016). Under Step 2A, Prong II, these claims remain directed to an abstract idea.
Step 2B: Claims 1, 8, and 15 recite additional limitations including "extracting media content from the media item; extracting validation information based on the received source indication information; and generating and storing an augmented media item that includes the augmented media content and the extracted media content." These additional limitations do not integrate the judicial exception (abstract idea) into a practical application for the reasons provided in the Step 2A, Prong II analysis. Furthermore, merely extracting data from another source of information does not integrate the claims into a practical application because this is a well-understood, routine, and conventional computer function; electronically scanning or extracting data from a document has been recognized as such. See Content Extraction and Transmission, LLC v. Wells Fargo Bank, 776 F.3d 1343, 1348, 113 USPQ2d 1354, 1358 (Fed. Cir. 2014) (optical character recognition). Claims 1, 8, and 15 do not include additional elements, or a combination of elements, that result in the claims amounting to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements listed amount to no more than mere instructions to apply an exception using a generic computer component. In addition, the applicant's specification describes "any device" elements (¶ [0053]) for implementing the computer system, which do not amount to significantly more than the abstract idea itself and are not enough to transform an abstract idea into eligible subject matter. Furthermore, there is no improvement to the functioning of the computer or to a technological field, and there is no transformation of subject matter into a different state. Under Step 2B of the test for patent subject matter eligibility, these claims are not patent eligible.
Dependent claims 2-7, 9-14, and 16-20 further recite the system, computer-readable medium, and method of claims 1, 8, and 15, respectively. When analyzed as a whole, dependent claims 2-7, 9-14, and 16-20 are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea:
Under Step 2A, Prong I, these dependent claims only further narrow the abstract idea set forth in claims 1, 8, and 15. For example, claims 2-7, 9-14, and 16-20 describe limitations for generating augmented media content based on extracted source indication information that has been validated, which only further narrows the scope of the abstract idea recited in the independent claims.
Under Step 2A, Prong II, dependent claims 2-7, 9-14, and 16-20 introduce no additional elements. Thus, they do not integrate the abstract idea into a practical application, nor do they amount to significantly more.
Under Step 2B, the dependent claims do not include any additional elements sufficient to amount to significantly more than the judicial exception. There is no improvement to the functioning of the computer or to a technological field, and there is no transformation of subject matter into a different state. As discussed above with respect to integration of the abstract idea into a practical application, the dependent claims do not provide any additional elements that would amount to significantly more than the judicial exception. Under Step 2B, these claims are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 8-14, and 15-20 are system, computer-readable medium, and method claims, respectively, with substantially indistinguishable features between each group. For purposes of compact prosecution, the Office has grouped the common method, system, and non-transitory computer-readable storage medium claims in applying the applicable prior art.
Claim(s) 1-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication 2018/0060921 to Mengle in view of U.S. Publication 2021/0281928 to Boskovich.
With respect to Claim 1:
Mengle teaches:
A device, comprising: one or more processors configured to (Mengle: ¶ [0085]):
receive a media item (i.e. identify document) (Mengle: ¶ [0069] “At block 402, the system may identify one or more documents associated with a destination linked to by an ad creative under consideration for augmentation.”);
extract media content from the media item (i.e. extracting content from document) (Mengle: ¶ [0083] “If matching content is found in the document(s) and is not present in the corresponding ad creative, the system may utilize the relationship in the template to identify and/or extract from the document(s) content to be considered as a content candidate for augmentation of visible content of the ad creative. For example, if the relationship in the template is simple equivalency, then the system may extract the content from the document(s).”);
receive source indication information associated with a publisher of the media item (i.e. identify source of document) (Mengle: ¶ [0070] “However the one or more documents are identified at block 402, at block 404 (which it should be emphasized is optional), the system may determine a type of the one or more documents. For instance, in some implementations, a document directly linked to by an ad creative may be deemed a "landing page," whereas other documents in the same domain may be deemed "associated" pages. In some implementations, documents may be assigned other types commensurate with various document attributes, such as media type (e.g., web page, photo, video, spreadsheet, presentation), source (e.g., domain), and so forth.”); and
generate augmented media content that includes the source indication information (Mengle: ¶ [0049] “In some implementations, one or more templates may be configured to identify content candidates based on sources other than documents associated with a destination linked to by an ad creative. For example, in some implementations, a semantic index of entities (not depicted in the Figures) may exist that tracks entities such as people, places, things, and relationships between those entities. Some templates may detect such an entity in an ad creative and, based on information contained in this semantic index of entities and relationships, may augment the ad creative.” Furthermore, as cited in ¶ [0076] “At block 418, the system may format the one or more content candidates selected at block 416 in various ways for augmenting the ad creative. For example, the system may select one or more formatting attributes for the content candidate based on one or more formatting attributes of the ad creative, so that the content candidate may seamlessly fit in.”),
the augmented media content comprises graphic content that is appended to the media item, and the graphic content comprises the required indication of sourcing information (i.e. augmented media includes visible or graphic content that is appended or added to media content and comprises indication of sourcing or destination information) (Mengle: ¶¶ [0082]-[0083] “At block 506, the system may identify occurrence of the same relationship as was identified at block 502 between third content of a second ad creative and fourth content of a second document associated with a second destination linked to by the second ad creative. Identifying another occurrence of such a relationship may support a conclusion that users desire or expect the particular relationship to be present between ad creatives and linked to landing pages. This conclusion may be solidified and/or corroborated at block 508, when the system determines a pattern that matches the first, second, third and fourth patterns…At block 510, the system may generate a template that incorporates the relationship(s) and/or pattern(s) identified at blocks 502-508. As alluded to above with regard to blocks 506 and 508, such a template may be applied to one or more documents associated with one or more destinations linked to by one or more ad creatives. If matching content is found in the document(s) and is not present in the corresponding ad creative, the system may utilize the relationship in the template to identify and/or extract from the document(s) content to be considered as a content candidate for augmentation of visible content of the ad creative. For example, if the relationship in the template is simple equivalency, then the system may extract the content from the document(s). As another example, suppose a template incorporates a relationship in which a rewrite variant of content of a landing page matched visible content of a corresponding ad creative.
Such a template (and its one or more rewrite rules) may be applied to a subsequent landing page to generate a similar variant of content of the landing page. This variant may then be considered as a content candidate for augmenting visible content of the ad creative.”).
Mengle does not explicitly disclose validate the received source indication information by: identifying a set of associated validation rules, the set of associated validation rules including an identity validation rule; identifying a set of associated validation resources, the set of associated validation resources including an identity validation resource; extracting validation information based on the received source indication information; applying the set of associated validation rules to validate the source indication information by: sending a request to at least one validation resource from the set of associated validation resources, wherein the at least one validation resource comprises the identity validation resource; receiving a validation message from the at least one validation resource; and applying at least one validation rule from the set of associated validation rules, the at least one validation rule comprising the identity validation rule; generate and store an augmented media item that includes the augmented media content and the extracted media content, wherein the set of validation rules comprises a rule that requires indication of sourcing information.
However, Boskovich further discloses:
validate the received source indication information by: identifying a set of associated validation rules, the set of associated validation rules including an identity validation rule (i.e. identifying rules associated with negative perceptions of ad content) (Boskovich: ¶ [0311] “In one exemplary implementation, this process comprises a calculated rule set with a hierarchy. For example, a given media item may be characterized as to source and/or quality; e.g., a video is low (rendering/resolution) quality and shot from a smartphone, and hence an associated rule set such as "don't turn on any items/serve any advertiser" is applied, so as to e.g., avoid negative perception from an advertiser who may be upset by association of the advertising content with such a low quality media asset.”);
identifying a set of associated validation resources, the set of associated validation resources including an identity validation resource (i.e. identifying valid/appropriate content according to context or resources) (Boskovich: ¶ [0311] “Moreover, analysis of the content of the media asset may be conducted so as to divine the context and/or subject matter of the asset (e.g., an audio asset discussing socially unacceptable or controversial topics, a video asset depicting violence or surreptitious activities). Notably, the aforementioned object detection and analysis algorithms of the system may be effective at object detection; e.g., the engine may be able to find a prescribed object within the asset, but such detection does not in and of itself understand the context of that object and its placement. For instance, a video regarding a new military handgun for the U.S. Army may be acceptable, whereas that same (or similar) gun shown held to someone's head in a video asset may be wholly unacceptable. As such, the exemplary system configuration uses the obtained textual, audio, and other "ancillary" data relating to the asset to determine whether to activate an asset for use by an advertiser (which notably can be on a per-advertiser basis, based on e.g., data provided by that advertiser to the system operator).”);
extracting validation information based on the received source indication information (i.e. extracting appropriate/valid information from a content source, such as determining that graphic content should be shown for a news site) (Boskovich: ¶ [0311] “For example, a news corporation client of the system operator may deem it acceptable to show the aforementioned hypothetical video of a gun to someone's head in that it comprises a newsworthy event (e.g., someone of note taken hostage), whereas a gun manufacturer client may not, as it presents their product in a negative light. Hence, client-specific masks or whitelists/blacklists may be stored in the system database and applied to each media asset when inventory is sold or otherwise assigned to a given system operator client.”);
applying the set of associated validation rules to validate the source indication information by: sending a request to at least one validation resource from the set of associated validation resources, wherein the at least one validation resource comprises the identity validation resource (i.e. rules are applied to content in order to determine if content is valid/appropriate by requesting ancillary data or data related to the asset/content) (Boskovich: ¶ [0311] “Notably, the aforementioned object detection and analysis algorithms of the system may be effective at object detection; e.g., the engine may be able to find a prescribed object within the asset, but such detection does not in and of itself understand the context of that object and its placement. For instance, a video regarding a new military handgun for the U.S. Army may be acceptable, whereas that same (or similar) gun shown held to someone's head in a video asset may be wholly unacceptable. As such, the exemplary system configuration uses the obtained textual, audio, and other "ancillary" data relating to the asset to determine whether to activate an asset for use by an advertiser (which notably can be on a per-advertiser basis, based on e.g., data provided by that advertiser to the system operator).”);
receiving a validation message from the at least one validation resource; and applying at least one validation rule from the set of associated validation rules, the at least one validation rule comprising the identity validation rule (i.e. validation message says “don't turn on any items/serve any advertiser”) (Boskovich: ¶ [0311] “In one exemplary implementation, this process comprises a calculated rule set with a hierarchy. For example, a given media item may be characterized as to source and/or quality; e.g., a video is low (rendering/resolution) quality and shot from a smartphone, and hence an associated rule set such as "don't turn on any items/serve any advertiser" is applied, so as to e.g., avoid negative perception from an advertiser who may be upset by association of the advertising content with such a low quality media asset. Moreover, analysis of the content of the media asset may be conducted so as to divine the context and/or subject matter of the asset (e.g., an audio asset discussing socially unacceptable or controversial topics, a video asset depicting violence or surreptitious activities).”); and
generate and store an augmented media item that includes the augmented media content and the extracted media content, wherein the set of validation rules comprises a rule that requires indication of sourcing information (i.e. augmented layer is overlaid with media content and augmented layer is stored in database, wherein the rules require that the media content or asset indicates sourcing information) (Boskovich: ¶ [0182] “In one exemplary implementation, the back-end system 101 is configured to enable placement of an augmentation layer which is rendered over or on top of the rendered media asset content on a user's computerized device. This layer is notably not native to either the media asset (e.g., digital video data stream or files), or the media player application program used to decode and render the digital media data, but rather is ancillary thereto. This approach provides several benefits, including obviating having to re-encode the (source) media data to include the data included in the non-native overlay or layer, or require installation of "customized" or enhanced media player applications within the rendering user device.” Furthermore, as cited in ¶ [0184] “As one example of the foregoing overlay approach, consider a previously ingested video asset; i.e., one that has been pre-processed using the system 101 and associated analytical engines (e.g., shape/polygon recognition, comparison, color/texture analysis, human identity determination, etc.). 
The various attributes of the asset are categorized and cataloged within the database 153, including say identification of a white men's polo shirt being worn by a person in certain I-frames of the asset.” Furthermore, as cited in ¶ [0311] “For example, a given media item may be characterized as to source and/or quality; e.g., a video is low (rendering/resolution) quality and shot from a smartphone, and hence an associated rule set such as "don't turn on any items/serve any advertiser" is applied, so as to e.g., avoid negative perception from an advertiser who may be upset by association of the advertising content with such a low quality media asset. Moreover, analysis of the content of the media asset may be conducted so as to divine the context and/or subject matter of the asset (e.g., an audio asset discussing socially unacceptable or controversial topics, a video asset depicting violence or surreptitious activities). Notably, the aforementioned object detection and analysis algorithms of the system may be effective at object detection; e.g., the engine may be able to find a prescribed object within the asset, but such detection does not in and of itself understand the context of that object and its placement. For instance, a video regarding a new military handgun for the U.S. Army may be acceptable, whereas that same (or similar) gun shown held to someone's head in a video asset may be wholly unacceptable. As such, the exemplary system configuration uses the obtained textual, audio, and other "ancillary" data relating to the asset to determine whether to activate an asset for use by an advertiser (which notably can be on a per-advertiser basis, based on e.g., data provided by that advertiser to the system operator).
For example, a news corporation client of the system operator may deem it acceptable to show the aforementioned hypothetical video of a gun to someone's head in that it comprises a newsworthy event (e.g., someone of note taken hostage), whereas a gun manufacturer client may not, as it presents their product in a negative light.” Furthermore, as cited in ¶ [0313] “Accordingly, in one implementation of the system, data is maintained within the database as to each asset's absolute desirability or ranking. Note that the system operator generally will have no control over the placement of an inventory asset; rather, the system operator merely (i) identifies the existence and location of the asset(s); (ii) characterizes them via inter alia pre-processing; and (iii) stores that data relating to the characterization in the database under the unique ID generated for the asset.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Boskovich’s validating the received source indication information by: identifying a set of associated validation rules, the set of associated validation rules including an identity validation rule; identifying a set of associated validation resources, the set of associated validation resources including an identity validation resource; extracting validation information based on the received source indication information; applying the set of associated validation rules to validate the source indication information by: sending a request to at least one validation resource from the set of associated validation resources, wherein the at least one validation resource comprises the identity validation resource; receiving a validation message from the at least one validation resource; and applying at least one validation rule from the set of associated validation rules, the at least one validation rule comprising the identity validation rule; and generating and storing an augmented media item that includes the augmented media content and the extracted media content, wherein the set of validation rules comprises a rule that requires indication of sourcing information to Mengle’s generating augmented media content based on the source information. One of ordinary skill in the art would have been motivated to do so in order to allow “dynamic provision of relevant secondary content (e.g., advertising, product or service information, or even general background information) relating to the identified attribute(s).” (Boskovich: ¶ [0091]).
With respect to Claims 8 and 15:
All limitations as recited have been analyzed and rejected with respect to claim 1. Claim 8 recites “A non-transitory computer-readable medium, storing a plurality of processor executable instructions to:” (Mengle: ¶ [0089]) to perform the steps of system claim 1. Claim 15 recites “A method comprising:” the steps of system claim 1. Claims 8 and 15 do not recite any new limitations beyond claim 1. Therefore, they are rejected under the same rationale.
With respect to Claim 2:
Mengle teaches:
The device of claim 1, wherein the media item comprises audio content, audiovisual content, or graphic content (Mengle: ¶ [0030] “A document is any data that is associated with a document address. Documents include web pages, word processing documents, portable document format (PDF) documents, images, emails, calendar entries, videos, and web feeds, to name just a few. Each document may include content such as, for example: text, images, videos, sounds, embedded information (e.g., meta information and/or hyperlinks); and/or embedded instructions (e.g., ECMAScript implementations such as JavaScript).”).
With respect to Claims 9 and 16:
All limitations as recited have been analyzed and rejected with respect to claim 2. Claims 9 and 16 do not recite any new limitations beyond claim 2. Therefore, they are rejected under the same rationale.
With respect to Claim 3:
Mengle does not explicitly disclose the device of claim 1, wherein the augmented media content comprises audio content, audiovisual content, or graphic content.
However, Boskovich further discloses wherein the augmented media content comprises audio content, audiovisual content, or graphic content (Boskovich: ¶ [0332] “Similarly, with regard to (ii) above, a service can be represented by a tangible (visually or audibly perceptible) element. Such element may be a sound or series of sounds such as the aforementioned Intel logo, or a tangible one, such as an icon or graphic mechanism which connotes the service. For instance, an H&R Block logo or icon, if recognizable as such by a user viewing a video, can act as a "video proxy" for the services that H&R Block provides (e.g., tax-related), since there may be no good way to succinctly and intuitively visually show someone actually providing tax-related services (do I mouse over the person giving tax advice? Or the piece of paper he/she is holding? Or the office they are in?). Hence, a user mousing-over, touching, or pausing a frame having the aforementioned icon would be counted as (at least putatively) expressing interest in the associated service. In one implementation, the graphic rendering boundaries of the icon are used, in the same way the outer boundaries of the aforementioned white polo shirt are used, to define an "interest" for the service represented by the icon.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Boskovich’s augmented media content comprises audio content, audiovisual content, or graphic content to Mengle’s generating augmented media content based on the source information. One of ordinary skill in the art would have been motivated to do so in order to allow “dynamic provision of relevant secondary content (e.g., advertising, product or service information, or even general background information) relating to the identified attribute(s).” (Boskovich: ¶ [0091]).
With respect to Claims 10 and 17:
All limitations as recited have been analyzed and rejected with respect to claim 3. Claims 10 and 17 do not recite any new limitations beyond claim 3. Therefore, they are rejected under the same rationale.
With respect to Claim 4:
Mengle does not explicitly disclose the device of claim 1, wherein the identity validation resource is a payment processing resource, a social media site, a search engine, or an email account provider.
However, Boskovich further discloses wherein the identity validation resource is a payment processing resource, a social media site, a search engine, or an email account provider (i.e. products that are purchased via retailers’ website in order to validate popular product, social media log-in in order to validate user, search engine validating trending topics/content, and email confirming identity to designated account) (Boskovich: ¶ [0335] “For example, an online department store retailer might have thousands of pages listing or displaying hundreds of thousands of individual items available for purchase (or otherwise displayed, such as via promotional photos on the site showing people actually wearing or using goods/services, whether available for sale or not). The service provider/system operator may, using the system disclosed herein, rapidly (and algorithmically) canvass the retailer's website/portal to characterize its goods and services, and then use that data to correlate already processed inventory having similar goods/services.” Furthermore, as cited in ¶ [0370] “Most graph API requests require the use of access tokens, which can be generated for example by implementing login on the social media site.” Furthermore, as cited in ¶ [0267] “In one implementation, the contextual relevancy engine (CRE) is configured to utilize one or more of the already-generated descriptors in the individual video object detection and processing process. For example, in one variant, when the CRE routine is called by the system, the CRE returns the previously identified descriptors associated with the processing of the current media asset.
In some instances, this may also include ancillary source information, such as web page information pulled by the internal/external search engines (e.g., a web "scraping" routine).” Furthermore, as cited in ¶ [0363] “Alternatively, since pausing play of the game or hovering/clicking on the car may disrupt game flow, another variant of the system is configured to cause caching or storage of the relevant information for the user, or transmission of the relevant data via another modality (e.g., text message with link to the user's designated smartphone, email to their designated account, etc.) such that the user can view that information off-line after completion of the game.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Boskovich’s identity validation resource is a payment processing resource, a social media site, a search engine, or an email account provider to Mengle’s generating augmented media content based on the source information. One of ordinary skill in the art would have been motivated to do so in order to allow “dynamic provision of relevant secondary content (e.g., advertising, product or service information, or even general background information) relating to the identified attribute(s).” (Boskovich: ¶ [0091]).
With respect to Claim 5:
Mengle does not explicitly disclose the device of claim 4, wherein generating augmented media content based on the source information comprises generating text-based validation information based on the received validation message.
However, Boskovich further discloses wherein generating augmented media content based on the source information comprises generating text-based validation information based on the received validation message (i.e. assets are validated according to similarity, wherein the text-based response is generated) (Boskovich: ¶¶ [0228] [0229] “For instance, once two assets are determined to be "identical" or sufficiently similar, the extant database entry for the previously processed asset may be updated with the additional website URL or other reference of the newly identified asset…Conversely, if the two assets are not deemed identical per step 706, the current asset may be evaluated for similarity according to a second prescribed threshold; e.g., as "correlated but not identical" per step 710.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Boskovich’s generating text-based validation information based on the received validation message to Mengle’s generating augmented media content based on the source information. One of ordinary skill in the art would have been motivated to do so in order to allow “dynamic provision of relevant secondary content (e.g., advertising, product or service information, or even general background information) relating to the identified attribute(s).” (Boskovich: ¶ [0091]).
With respect to Claims 12 and 19:
All limitations as recited have been analyzed and rejected with respect to claim 5. Claims 12 and 19 do not recite any new limitations beyond claim 5. Therefore, they are rejected under the same rationale.
With respect to Claim 6:
Mengle does not explicitly disclose the device of claim 5, wherein generating augmented media content based on the source information further comprises converting the text-based validation information to audio content.
However, Boskovich further discloses wherein generating augmented media content based on the source information further comprises converting the text-based validation information to audio content (i.e. validation message may be given via text message, wherein texts are converted to audio content via speech recognition) (Boskovich: ¶¶ [0205] [0273] “(vii) audio track analysis (e.g., use of speech recognition or other technology to identify words with audio portions of the video asset)… In another variant, the audio portion of the video may be analyzed, including for context determination. For instance, in a typical MPEG4 or H.264 rendered video, audio data will be associated with the video frames in a temporally synchronized fashion. In one variant, this audio data is processed via a speech recognition algorithms such that one or more contexts associated with the video can be discerned, including via a third party search engine such as Google.” Furthermore, as cited in ¶ [0118] “In one embodiment of the present disclosure, an encoder process 164 encodes a source file 162 from a content source 160 into at least one encoding format (e.g., transcodes a source file from one format to at least one other format). In another variant, the source file 162 is encoded into a plurality of encodings that correspond to a respective plurality of one or more device types, codecs, resolutions, file formats, audio encodings, bit rates, etc.” Furthermore, as cited in ¶ [0243] “In one exemplary implementation, an intra-space object recognition engine is first utilized by the back-end server(s) 140 to analyze content on a given web page and convert any media assets located thereon (e.g., videos hosted on a new provider website) into inventory for storage in the system database.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Boskovich’s converting the text-based validation information to audio content to Mengle’s generating augmented media content based on the source information. One of ordinary skill in the art would have been motivated to do so in order to allow “dynamic provision of relevant secondary content (e.g., advertising, product or service information, or even general background information) relating to the identified attribute(s).” (Boskovich: ¶ [0091]).
With respect to Claim 13:
All limitations as recited have been analyzed and rejected with respect to claim 6. Claim 13 does not recite any new limitations beyond claim 6. Therefore, it is rejected under the same rationale.
With respect to Claim 7:
Mengle does not explicitly disclose the device of claim 6, wherein the source information comprises at least one of a name, username, email address, or publishing platform.
However, Boskovich further discloses wherein the source information comprises at least one of a name, username, email address, or publishing platform (i.e. identifying name, brand and name of the page from source information) (Boskovich: ¶ [0264] “More specific data may also be obtained from such sources (e.g., a text object analysis indicates a particular putative identity of the individual (e.g., a name banner under the facial image indicating "Dwayne Johnson"), thereby enabling the algorithm to converge immediately on an identity hypothesis and confirm/refute this hypothesis through comparison of the processed media asset image to entries within the facial database/pool for Dwayne Johnson.” Furthermore, as cited in ¶ [0355] “Accordingly, in a hypothetical system where a particular individual is identified based on e.g., their identity or celebrity (e.g., Dwayne Johnson), it could be inferred that since the service provider affirmatively determines and knows the identity of the individual in an asset (video with Dwayne Johnson wearing a white polo shirt) and can affirmatively identify the product (white polo) such as via shape-/color-/texture-based analysis and/or contextual analysis, the service provider is misleading the asset user's or its customers; i.e., implying that Dwayne Johnson sponsors white polos sold by the retailer (e.g., Macy's).” Furthermore, as cited in ¶ [0370] “The graph API includes: (i) nodes - basically "things" such as a user, image, a page, or other; (ii) edges - the connections between "things", such as a page's images, or an image's comments; and (iii) fields - information about the "things", such as a person's birthday, or the name of a page.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to add Boskovich’s wherein the source information comprises at least one of a name, username, email address, or publishing platform to Mengle’s generating augmented media content based on the source information. One of ordinary skill in the art would have been motivated to do so in order to allow “dynamic provision of relevant secondary content (e.g., advertising, product or service information, or even general background information) relating to the identified attribute(s).” (Boskovich: ¶ [0091]).
With respect to Claim 14:
All limitations as recited have been analyzed and rejected with respect to claim 7. Claim 14 does not recite any new limitations beyond claim 7. Therefore, it is rejected under the same rationale.
With respect to Claim 11:
Mengle does not explicitly disclose the non-transitory computer-readable medium of claim 8, wherein the validation rules comprise a first rule that requires user verification and a second rule that requires indication of sourcing information.
However, Boskovich further discloses wherein the validation rules comprise a first rule that requires user verification and a second rule that requires indication of sourcing information (i.e. in order to share content via social media site, user must log-in for user verification and then the content is validated according to sourcing information) (Boskovich: ¶ [0370] “Most graph API requests require the use of access tokens, which can be generated for example by implementing login on the social media site.” Furthermore, as cited in ¶ [0311] “In one exemplary implementation, this process comprises a calculated rule