DETAILED ACTION
Status of the Application
In the response filed on December 23, 2025, the Applicant amended claims 2, 8-10, and 16-18. Claims 2-18 are pending and currently under consideration for patentability.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments and Arguments
Applicant’s arguments with respect to the rejection of claims 2-6, 8-14, and 16-18 under 35 U.S.C. 101 have been fully considered but are not persuasive. The rejections of claims 2-6, 8-14, and 16-18 under 35 U.S.C. 101 have accordingly been maintained.
Applicant specifically argues that
1) “Applicants respectfully submit that the claims do not recite an abstract idea…applicants respectfully disagree that the claims are directed to subject matter falling within the categories of "certain methods of organizing human activity" or a mental process. For example, applicants submit that the claims, particularly as amended, do not merely recite using a computer as a tool to implement an abstract idea (whether related to a mental process, organizing human activity or otherwise), but rather are limited to a specific practical application of this (allegedly abstract idea) or any other abstract idea potentially related to the claims.”
Examiner respectfully disagrees with Applicant’s first argument.
Each of the assertions made in Applicant’s first argument is entirely conclusory, and therefore Applicant’s first argument is not persuasive.
Applicant specifically argues that
2) “applicants have further amended Claim 2, for example, to recite…Applicants submit that the above-recited claim language limits the claims to a practical application of any alleged abstract idea recited in the claims, particularly when read in context of other elements of Claim 2, such as "based on output of the one or more machine learning models, identifying, from among the plurality of media items, at least a portion of a first media item that depicts a first object associated with a first sponsor." These recitations, among other details as claimed, relate to a specific computer implementation that is not analogous to manual or mental processes in the advertising field, steps performed in the human mind or related to organizing human activity. The above claim recitations, particularly in combination with the other elements recited in the claim, "apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception," meeting the requirements in the revised guidance for integrating an abstract idea into a practical application. More specifically, the above claim language in combination with the other recited claim elements "applies or uses the judicial exception in [a] meaningful way beyond generally linking the use of the judicial exception to a particular technological environment," as discussed in the revised guidance.”
Examiner respectfully disagrees with Applicant’s second argument.
Integration into a practical application involves an analysis of additional elements recited in the claim beyond the judicial exception(s). The relevant question is whether the claim includes additional elements beyond the judicial exception that integrate the judicial exception into a practical application. In this case, the argued limitation(s) of "determining two or more factors regarding visibility or prominence of the first object depicted in the at least a portion of the first media item based on an…analysis of the image data or video data of the first media item, wherein the two or more factors comprise (a) at least one of size, clarity, or duration of the first object within the first media item, and (b) position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item” is not an “additional element”. These are descriptions of “factors” that are used to adjust (in some way) an initial media value estimate to determine a sponsorship value to the first sponsor. These limitations are part of the abstract idea, and do not integrate the judicial exception into a practical application.
That the analysis of the image data is required to be “an automated analysis” at most is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. This is not a “specific computer implementation”.
That the claim requires that the portions of the media item in which the first object is depicted are identified “based on output of the one or more machine learning models” does nothing to further integrate the abstract idea into a practical application. As discussed in the rejection, the high-level requirement to perform the identification “based on output of the one or more machine learning models…” (claims 2, 10, and 18) and/or “provided as input to the one or more machine learning models” (claims 3 and 11) and/or “using a first machine learning model” (claims 5 and 13) provides nothing more than mere instructions to implement an abstract idea on a generic computer and/or merely indicates a field of use or technological environment in which the judicial exception is performed. Again, this does not amount to a “specific computer implementation”.
Applicant specifically argues that
3) “As an example with respect to Claim 2, applicants submit that various claim limitations in at least Claim 2 amount to significantly more than the alleged abstract idea in the Office Action. For example, applicants submit that multiple claim elements, especially in combination, are individually or collectively a "specific limitation or combination of limitations that are not well- understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present," as examiners are instructed to consider at Step 2B in the revised guidance. Applicants submit that the claim elements amounting to significantly more than the alleged abstract idea include the elements quoted above with respect to applicants' practical application remarks, among others. For reasons similar to those discussed above with respect to the reasons that Claim 2 integrates the abstract idea into a practical application, the specific claim recitations above ensure that Claim 2 is more than a drafting effort designed to monopolize the judicial exception.”
Examiner respectfully disagrees with Applicant’s third argument.
Applicant’s argument is entirely conclusory, and therefore cannot be persuasive. For example, Applicant’s argument that “multiple claim elements, especially in combination, are individually or collectively a "specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present," as examiners are instructed to consider at Step 2B in the revised guidance” fails to identify which additional claim elements, alone and/or in combination, are other than what is well understood, routine, and conventional. Examiner stresses that Step 2B involves an analysis of “additional” elements. Per MPEP 2106.05: “Although the courts often evaluate considerations such as the conventionality of an additional element in the eligibility analysis, the search for an inventive concept should not be confused with a novelty or non-obviousness determination. See Mayo, 566 U.S. at 91, 101 USPQ2d at 1973…As made clear by the courts, the "‘novelty’ of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter."
Applicant’s argument that “the claim elements amounting to significantly more than the alleged abstract idea include the elements quoted above with respect to applicants' practical application remarks, among others” is not persuasive. As discussed above, the argued limitation(s) of "determining two or more factors regarding visibility or prominence of the first object depicted in the at least a portion of the first media item based on an…analysis of the image data or video data of the first media item, wherein the two or more factors comprise (a) at least one of size, clarity, or duration of the first object within the first media item, and (b) position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item” is not an “additional element”. These are descriptions of “factors” that are used to adjust (in some way) an initial media value estimate to determine a sponsorship value to the first sponsor. These limitations are part of the abstract idea, and do not integrate the judicial exception into a practical application.
Further, as discussed above, that the analysis of the image data is required to be “an automated analysis” at most is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer.
Further, as discussed above, that the claim requires that the portions of the media item in which the first object is depicted are identified “based on output of the one or more machine learning models” does nothing to further integrate the abstract idea into a practical application. As discussed in the rejection, the high-level requirement to perform the identification “based on output of the one or more machine learning models…” (claims 2, 10, and 18) and/or “provided as input to the one or more machine learning models” (claims 3 and 11) and/or “using a first machine learning model” (claims 5 and 13) provides nothing more than mere instructions to implement an abstract idea on a generic computer and/or merely indicates a field of use or technological environment in which the judicial exception is performed. Again, this does not amount to a “specific computer implementation”.
Applicant’s arguments with respect to the rejection of claims 2-18 for Double Patenting have been fully considered but are not persuasive. Applicant has not argued any of the rejections other than generally traversing the Double Patenting rejection over US Patent 10,929,752. However, although the claims in this patent involve additional features (e.g., related to creating a digital video fingerprint and causing a display device at a physical location to present visual sponsorship data), these claims still recite a great majority of the features of the instant claims, such as those involving ingestion of media data into ML model(s), object detection, initial media valuations, and adjustment of the valuation to determine a value to a sponsor associated with the appearance of their object in the media item. The instant claims do include a few additional features detailing factors that may be used to determine the sponsorship value, but it would have been obvious to modify the invention of US Patent 10,929,752 to include these features, as discussed in the Double Patenting rejection below. The rejections of claims 2-18 for Double Patenting have been maintained.
Applicant’s arguments with respect to the rejection of amended claims 2, 10, and 18 under 35 U.S.C. §103 have been considered but are not persuasive. Applicant argues that the Office Action acknowledged that Cohen-Solal and Hay “do not appear to disclose ‘wherein the one or more factors comprise a determination of the position of the first object within the first media item relative to a reference object appearing within the first media item’”. Although this is true, the newly added feature of claims 2, 10, and 18 differs from this limitation. The newly added feature is the requirement of “position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item”, not a “position of the first object within the first media item relative to a reference object appearing within the first media item”. As such, although Cohen-Solal in view of Hay may not have disclosed the previously-recited limitation of claim 9, this combination discloses the newly added limitation.
For example, Hay discloses “wherein the two or more factors comprise…(b) position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item” ([0010]-[0011] “each searching step includes the step of determining a respective scale value in dependence upon the scale of said part of the captured frame relative to the mask…the brand exposure value is calculated in dependence upon both the determined correlation values and the respective determined scale values. In this way, a large scale display of a trade mark in a frame can provide a greater contribution to the brand exposure value than a smaller scale display of the mark…Preferably, each searching step includes the step of determining a respective position value in dependence upon the position of said part of the captured frame relative to the complete frame, and, in the calculating step, the brand exposure value is calculated in dependence upon both the determined correlation values and the respective determined position values. In this way, the display of a trade mark at the centre of a frame can provide a greater contribution to the brand exposure value than at the edge of the frame.” – as such, one of the factors of visibility or prominence of the first object (the logo) comprises position of the first object within at least one image or frame of the first media item relative to the center of the frame (i.e., a focal point determined for the at least one image or frame of the first media item, consistent with Applicant’s own specification at [0089] “In some embodiments…the middle of the frame may be used as the default focal point”)), [0086] “more complex relationships may be used.
The position weighting value P may simply be one of two values, for example 1.00 if the portion of the frame which produced the hit is within a predetermined central region of the frame, and a predetermined lesser value if it is not.”). As such, Hay discloses the newly added factor.
Per the previous Office Action, Cohen-Solal discloses the two or more factors comprise (a) at least one of size, clarity, or duration of the first object within the first media item ([0012] “non-subjective analysis of the frequency, duration and degree of prominence of display of target logos…the invention can assign a value to each appearance of the logo, with the value varying in accordance with the clarity and/or size of the display of the logo, thereby informing the advertiser of the logo view-ability” – therefore the system determines one or more factors regarding visibility or prominence of the first object (logo) depicted in the at least a portion of the first media item based on an automated analysis of the image data or video data of the first media item, wherein the one or more factors comprise at least one of size, clarity, duration or position of the first object within the first media item, [0034] “the processor 124 also uses the tracking data to perform output processing 124c. This includes, for example, compiling the amount of time the logo appears in the datastream, among other analysis” – duration of the first object within the first media item, [0049]-[0050] “determining…partially obscured logo…total time that the logo is visible in the datastream for the event, the percentage of time the logo is visible…ongoing indication…tracking relating to the size, perspective, illumination, etc., of the logo in the image… also keep track of the quality of the logo’s visibility during the event”).
Applicant’s arguments with respect to the rejection of amended claims 9 and 17 under 35 U.S.C. §103 have been considered, but are moot in view of new grounds of rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 2-6, 8-14, and 16-18 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1:
Claim(s) 2-6, 8, and 9 is/are drawn to methods (i.e., a process), claim(s) 10-14, 16, and 17 is/are drawn to systems (i.e., a machine/manufacture), and claim(s) 18 is/are drawn to non-transitory computer-readable storage media (i.e., a machine/manufacture). As such, claims 2-6, 8-14, and 16-18 is/are drawn to one of the statutory categories of invention (Step 1: YES).
Step 2A - Prong One:
In prong one of step 2A, the claim(s) is/are analyzed to evaluate whether it/they recite(s) a judicial exception.
Claim 2 (representative of independent claim(s) 10 and 18) recites/describes the following steps:
retrieving a plurality of media items, wherein the plurality of media items include at least one of images or videos, and wherein each of the plurality of media items has been distributed by at least one of: being posted to one or more social media network services, being streamed by one or more streaming media networks, or being broadcast by one or more broadcast networks;
identifying, from among the plurality of media items, at least a portion of a first media item that depicts a first object associated with a first sponsor;
determining an initial media value associated with the first media item, wherein the initial media value is based at least in part on data associated with a social media network, a streaming media network, or a broadcast network by which the first media item has been distributed;
determining two or more factors regarding visibility or prominence of the first object depicted in the at least a portion of the first media item based on an…analysis of the image data or video data of the first media item, wherein the two or more factors comprise (a) at least one of size, clarity, or duration of the first object within the first media item, and (b) position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item; and
determining a sponsorship value to the first sponsor attributable to appearance of the first object in at least the first media item, wherein the sponsorship value is determined based at least in part on the initial media value as adjusted based on the two or more factors regarding visibility or prominence of the first object depicted in the at least a portion of the first media item.
These steps, under their broadest reasonable interpretation, describe or set forth a business process for determining a sponsorship value to a sponsor attributable to an appearance of a first object (e.g., the sponsor’s logo) in at least a first media item (e.g., a picture, a TV broadcast). More specifically, the steps describe a business process for determining a sponsorship value to a sponsor attributable to an appearance of a first object (e.g., the sponsor’s logo) in at least a first media item based on detecting/identifying objects (logos) associated with sponsors from a plurality of distributed media items (e.g., images or videos), identifying portions of the media items depicting a first object/logo associated with a first sponsor, determining an initial media value associated with the first media item (based at least in part on data associated with a social media network, a streaming media network, or a broadcast network by which the first media item has been distributed), determining two or more factors regarding visibility or prominence of the first object depicted in the at least a portion of the first media item based on an analysis of the image data or video data of the first media item (wherein the two or more factors comprise (a) at least one of size, clarity, or duration of the first object within the first media item, and (b) position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item), and determining the sponsorship value based at least in part on the initial media value as adjusted based on the two or more factors regarding visibility or prominence of the first object depicted in the at least a portion of the first media item. This amounts to a commercial or legal interaction (specifically, an advertising, marketing or sales activity or behavior).
These limitations therefore fall within the “certain methods of organizing human activity” subject matter grouping of abstract ideas.
Additionally and/or alternatively, the above-recited steps of the business process for determining a sponsorship value to a sponsor attributable to an appearance of a first object (e.g., the sponsor’s logo) in at least a first media item, under their broadest reasonable interpretation, encompass a human manually (e.g., in their mind, or using paper and pen) performing one or more observations, evaluations, judgments, or opinions, but for the recitation of generic computer components. For example, each of the “identifying…determining…determining…determining…” steps amounts to one or more observations, evaluations, or judgments a human being is capable of performing in the mind and/or with pen and paper. If one or more claim limitations, under their broadest reasonable interpretation, cover performance of the limitation(s) in the mind but for the recitation of generic computer components, then they fall within the “mental processes” subject matter grouping of abstract ideas.
As such, the Examiner concludes that claim 2 recites an abstract idea (Step 2A – Prong One: YES).
Independent claim(s) 10 and 18 recite/describe nearly identical steps (and therefore also recite limitations that fall within this subject matter grouping of abstract ideas), and this/these claim(s) is/are therefore determined to recite an abstract idea under the same analysis.
Each of the dependent claims likewise recites/describes these steps (by incorporation - and therefore also recites limitations that fall within this subject matter grouping of abstract ideas), and this/these claim(s) is/are therefore determined to recite an abstract idea under the same analysis. Any element(s) recited in a dependent claim that are not specifically identified/addressed by the Examiner under step 2A (prong two) or step 2B of this analysis shall be understood to be an additional part of the abstract idea recited by that particular claim. The same reasoning is similarly applicable to the limitations in the remaining dependent claims, and their respective limitations are not reproduced here for the sake of brevity.
Step 2A - Prong Two:
In prong two of step 2A, an evaluation is made whether a claim recites any additional element, or combination of additional elements, that integrates the exception into a practical application of that exception. An “additional element” is an element that is recited in the claim in addition to (beyond) the judicial exception (i.e., an element/limitation that sets forth an abstract idea is not an additional element). The phrase “integration into a practical application” is defined as requiring an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception.
The claim(s) recite the additional elements/limitations of:
“computer-implemented…based on an automated analysis” (claim 2)
“a computing system comprising: a memory; and a processor in communication with the memory and configured with processor-executable instructions to perform operations…based on an automated analysis” (claim 10)
“a non-transitory computer-readable medium storing computer executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform operations…based on an automated analysis” (claim 18)
“providing at least portions of the plurality of media items as input into one or more machine learning models trained to detect objects associated with one or more sponsors of sporting events or sports teams; based on output of the one or more machine learning models…” (claims 2, 10, and 18)
“provided as input to the one or more machine learning models” (claims 3 and 11)
“using a first machine learning model” (claims 5 and 13)
The requirement to execute the claimed steps/functions using “computer-implemented…based on an automated analysis” means (claim 2) and/or “a computing system comprising: a memory; and a processor in communication with the memory and configured with processor-executable instructions to perform operations…based on an automated analysis” (claim 10) and/or “a non-transitory computer-readable medium storing computer executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform operations…based on an automated analysis” (claim 18) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. Applicant’s own as-filed disclosure explains that these elements may be embodied as a general-purpose computer (e.g., [0139] “All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more general purpose computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device…. In addition, the components referred to herein may be implemented in hardware, software, firmware or a combination thereof.”, see also paragraphs [0037]-[0042] and [0140]-[0142]). This/these limitation(s) do/does not impose any meaningful limits on practicing the abstract idea, and therefore do/does not integrate the abstract idea into a practical application (see MPEP 2106.05(f)).
The recitation of “providing at least portions of the plurality of media items as input into one or more machine learning models trained to detect objects associated with one or more sponsors of sporting events or sports teams; based on output of the one or more machine learning models…” (claims 2, 10, and 18) and/or “provided as input to the one or more machine learning models” (claims 3 and 11) and/or “using a first machine learning model” (claims 5 and 13) provides nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f) and the July 2024 Subject Matter Eligibility Examples and corresponding analysis. MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. The trained machine learning model(s) is/are used to generally apply the abstract idea without placing any limits on how the trained machine learning model(s) function. Rather, these limitations only recite the outcome of “detect objects associated with one or more sponsors of sporting events or sports teams” and “identifying, from among the plurality of media items, at least a portion of a first media item that depicts a first object associated with a first sponsor” and “determining…a real-world location or time on which the first object associated with the first sponsor was depicted in a portion of video content” and do not include any details about how these functions is/are accomplished.
The recitation of “providing at least portions of the plurality of media items as input into one or more machine learning models trained to detect objects associated with one or more sponsors of sporting events or sports teams; based on output of the one or more machine learning models…” (claims 2, 10, and 18) and/or “provided as input to the one or more machine learning models” (claims 3 and 11) and/or “using a first machine learning model” (claims 5 and 13) also merely indicates a field of use or technological environment in which the judicial exception is performed. Although these additional elements limit the identified judicial exceptions to use of one or more trained machine learning model(s), this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learned models) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h) and the July 2024 Subject Matter Eligibility Examples and corresponding analysis. This/these limitation(s) do/does not impose any meaningful limits on practicing the abstract idea, and therefore do/does not integrate the abstract idea into a practical application (see MPEP 2106.05(g)).
The recited element(s) of “retrieving a plurality of media items, wherein the plurality of media items include at least one of images or videos, and wherein each of the plurality of media items has been distributed by at least one of: being posted to one or more social media network services, being streamed by one or more streaming media networks, or being broadcast by one or more broadcast networks” (claims 2, 10, and 18), even if treated as an “additional” element for the purpose of this eligibility analysis, would simply append insignificant extra-solution activity to the judicial exception (e.g., mere pre-solution activity, such as data gathering, in conjunction with an abstract idea). The term “extra-solution activity” is understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. The recited additional element(s) are deemed “extra-solution” because all uses of the recited judicial exceptions require such data gathering, and because such data gathering steps have long been held to be insignificant pre/post-solution activity. This/these limitation(s) do/does not impose any meaningful limits on practicing the abstract idea, and therefore do/does not integrate the abstract idea into a practical application (see MPEP 2106.05(h) and (g)).
Furthermore, although the claims recite a specific sequence of computer-implemented functions, and although the specification suggests certain functions may be advantageous for various reasons (e.g., business reasons), the Examiner has determined that the ordered combination of claim elements (i.e., the claims as a whole) is not directed to an improvement to computer functionality/capabilities or an improvement to a computer-related technology or technological environment, and does not amount to a technology-based solution to a technology-based problem. For example, Applicant’s as-filed specification suggests that it is advantageous to implement the claimed business process for determining a sponsorship value to a sponsor attributable to an appearance of a first object (e.g., the sponsor’s logo) in at least a first media item, because doing so can help to provide an accurate/reliable/comprehensive measurement of a sponsorship value for an advertiser/sponsor (and/or broadcaster) (see, for example, paragraphs [0003], [0017], [0020], [0035]-[0036], [0043], and [0047] of Applicant’s as-filed disclosure). These are non-technical business advantages/improvements. At most, the ordered combination of claim elements is directed to a non-technical improvement to an abstract idea itself (e.g., an improved process for determining a sponsorship value to a sponsor attributable to an appearance of a first object (e.g., the sponsor’s logo) in at least a first media item).
Dependent claims 4, 6, 8, 9, 12, 14, 16, and 17 fail to include any additional elements. In other words, each of the limitations/elements recited in respective dependent claims 4, 6, 8, 9, 12, 14, 16, and 17 is further part of the abstract idea as identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea recited in each respective claim). For example, claim 4 recites “wherein the plurality of videos depict two or more sports with respect to which the first sponsor places advertisement content”. This is an abstract limitation which further sets forth the abstract idea encompassed by claim 4. This limitation is not an “additional element”, and therefore it is not subject to further analysis under Step 2A – Prong Two or Step 2B. The same logic applies to each of the other dependent claims, whose limitations are not being repeated here for the sake of brevity and clarity.
The Examiner has therefore determined that the additional elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea (Step 2A – Prong Two: NO).
Step 2B:
In step 2B, the claims are analyzed to determine whether any additional element, or combination of additional elements, is/are sufficient to ensure that the claims amount to significantly more than the judicial exception. This analysis is also termed a search for an "inventive concept." An "inventive concept" is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself. Alice Corp., 134 S. Ct. at 2355, 110 USPQ2d at 1981 (citing Mayo, 566 U.S. at 72-73, 101 USPQ2d at 1966).
As discussed above in “Step 2A – Prong 2”, the requirement to execute the claimed steps/functions using “computer-implemented…based on an automated analysis” means (claim 2) and/or “a computing system comprising; a memory; and a processor in communication with the memory and configured with processor- executable instructions to perform operations…based on an automated analysis” (claim 10) and/or “a non-transitory computer-readable medium storing computer executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform operations…based on an automated analysis” (claim 18) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. These limitations therefore do not qualify as “significantly more” (see MPEP 2106.05(f)).
As discussed above in “Step 2A – Prong 2”, the recitation of “providing at least portions of the plurality of media items as input into one or more machine learning models trained to detect objects associated with one or more sponsors of sporting events or sports teams; based on output of the one or more machine learning models…” (claims 2, 10, and 18) and/or “provided as input to the one or more machine learning models” (claims 3 and 11) and/or “using a first machine learning model” (claims 5 and 13) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. These limitations therefore do not qualify as “significantly more” (see MPEP 2106.05(f)).
As discussed above in “Step 2A – Prong 2”, the recited additional element(s) of “providing at least portions of the plurality of media items as input into one or more machine learning models trained to detect objects associated with one or more sponsors of sporting events or sports teams; based on output of the one or more machine learning models…” (claims 2, 10, and 18) and/or “provided as input to the one or more machine learning models” (claims 3 and 11) and/or “using a first machine learning model” (claims 5 and 13) serves merely to generally link the use of the judicial exception to a particular technological environment or field of use. These limitations therefore do not qualify as “significantly more” (see MPEP 2106.05(h)).
As discussed above in “Step 2A – Prong 2”, the recited element(s) of “retrieving a plurality of media items, wherein the plurality of media items include at least one of images or videos, and wherein each of the plurality of media items has been distributed by at least one of: being posted to one or more social media network services, being streamed by one or more streaming media networks, or being broadcast by one or more broadcast networks” (claims 2, 10, and 18), even if treated as an “additional” element for the purpose of this eligibility analysis, would simply append insignificant extra-solution activity to the judicial exception (e.g., mere pre-solution activity, such as data gathering, in conjunction with an abstract idea). These additional element(s), taken individually or in combination, additionally amount to well-understood, routine and conventional activities previously known to those in the field of sponsorship valuation, specified at a high level of generality and appended to the judicial exception. These limitations therefore do not qualify as “significantly more” (see MPEP 2106.05(d)). This conclusion is based on a factual determination. The determination that receiving data/messages over a network is well-understood, routine, and conventional is supported by Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362; TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014); and MPEP 2106.05(d)(II), which note the well-understood, routine, conventional nature of receiving data/messages over a network.
Furthermore, Examiner takes Official Notice that these steps were well-understood, routine, and conventional at the effective filing date of the claimed invention. In addition, the lack of technical detail/description in Applicant’s own specification provides implicit evidence that these steps were well-understood, routine, and conventional.
Viewing the additional limitations in combination also shows that they fail to ensure the claims amount to significantly more than the abstract idea. When considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately. They simply append the abstract idea with words equivalent to “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer, generally link the abstract idea to a particular technological environment or field of use, append the abstract idea with insignificant extra-solution activity associated with the implementation of the judicial exception (e.g., mere data gathering, post-solution activity), and append the abstract idea with well-understood, routine and conventional activities previously known to the industry.
Dependent claims 4, 6, 8, 9, 12, 14, 16, and 17 fail to include any additional elements. In other words, each of the limitations/elements recited in respective dependent claims 4, 6, 8, 9, 12, 14, 16, and 17 is further part of the abstract idea as identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea identified by the Examiner to which each respective claim is directed).
The Examiner has therefore determined that no additional element, or combination of additional claim elements, is sufficient to ensure that the claims amount to significantly more than the abstract idea identified above (Step 2B: NO).
Examiner notes that claims 7 and 15 recite one or more additional elements such that the claims as a whole integrate the recited abstract idea into a practical application in a manner that imposes a meaningful limit on the abstract idea. Specifically, these claims require a combination of “based on output of the one or more machine learning models, visually marking the first object within video data, wherein visually marking the sponsor logo comprises at least one of (a) modifying at least one frame of the video data to visually indicate an in-frame location of a sponsor logo or (b) generating overlay content presented over the at least one frame of the video data during playback to indicate the in-frame location of the sponsor logo” in combination with the remaining steps/functions of the independent claims.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 2-18 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-19 of US Patent No. 12,124,509 (corresponding to US Application 18/065,325), claims 1-19 of US Patent No. 11,556,963 (corresponding to US Application 17/180,527), claims 1-15 of US Patent No. 10,929,752 (corresponding to US Application 15/709,225), and claims 1-17 of US Patent No. 10,255,505 (corresponding to US Application 15/709,151). Although the conflicting claims are not identical, they are not patentably distinct from each other.
With respect to claims 2-18, although the conflicting claims are not identical, they are not patentably distinct from each other. Any differences between claims 2-18 and claims 1-17 of Application 15/709,151 would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention in view of Cohen-Solal et al. (U.S. PG Pub No. 2003/0091237, May 15, 2003 - hereinafter "Cohen-Solal”) and/or Hay (U.S. PG Pub No. 2002/0056124, May 9, 2002 - hereinafter "Hay”) and/or James et al. (U.S. PG Pub No. 2015/0082203, March 19, 2015 - hereinafter "James”) and/or Overton et al. (U.S. PG Pub No. 2003/0012409, January 16, 2003 - hereinafter "Overton”) for reasons analogous to those discussed below with regard to the rejections of the claims under 35 U.S.C. 103 (e.g., it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify each of the claims of the above-identified Patents with the respective teachings of the prior art cited above to arrive at each of claims 2-18, for reasons analogous to those discussed below for why it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Cohen-Solal to arrive at each of claims 2-18). Each of the claims (and their respective modifications/rationales for combination) is not being reproduced here for the sake of brevity.
It is noted that Applicant has filed Terminal Disclaimers in each of the parent applications.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2-4, 7, 8, 10-12, 15, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen-Solal et al. (U.S. PG Pub No. 2003/0091237, May 15, 2003 - hereinafter "Cohen-Solal”) in view of Hay (U.S. PG Pub No. 2002/0056124, May 9, 2002 - hereinafter "Hay”).
With respect to claims 2, 10, and 18, Cohen-Solal teaches a computer-implemented method, a computing system, and a non-transitory computer-readable medium storing computer executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform operations comprising:
a memory; and ([0018] “system…comprises…a processor…processing software”, [0030] “digital processor…associated memory”)
a processor in communication with the memory and configured with processor- executable instructions to perform operations comprising: ([0018] “system…comprises…a processor…processing software”, [0030])
retrieving a plurality of media items, wherein the plurality of media items include at least one of images or videos, and wherein each of the plurality of media items has been distributed by at least one of: being posted to one or more social media network services, being streamed by one or more streaming media networks, or being broadcast by one or more broadcast networks; ([0011]-[0015] “in real-time…track…analysis of exposure time of the logo during an event…can assign value to each appearance of the logo…detecting and analyzing the presence of a logo in one or more data streams… at least one video datastream is received…the time the logo is detected during the event is compiled…two or more separate video datastreams…” – therefore the system retrieves a plurality of video datastreams (i.e., media items including videos) having been broadcast by one or more broadcast networks, [0004]-[0005] “television viewers…practically every sporting event…baseball stadiums…football stadiums…basketball court floors” – the event associated with the media items may be sporting events broadcast by one or more broadcast networks, [0019] “the software receives as input digital representations of images that comprise at least one video datastreams of an event”, [0027] “broadcast the event, such as a sporting event”, [0030] “video broadcast is, for example, a digital video datastream. If the broadcast is an analog signal, then processor 124 or another component of the system 120 may include an A/D converter”, [0008] “broadcast event…television stations and broadcast events” broadcast by one or more broadcast networks – Examiner notes that Hay also discloses this limitation)
providing at least portions of the plurality of media items as input to one or more machine learning models trained to detect objects associated with one or more sponsors of sporting events or sports teams; ([0014] “one or more regions of interest…for the logo in the one or more images are identified…analyzed to detect if the logo is present…”, [0017] “the step of …detect if the logo is present….comprises…radial basis function (RBF) classification modeling…training using images of the logo…” – therefore one or more classification models trained to detect depiction of one or more sponsor logos in image/video data (i.e., one or more machine learning models) and at least portions of the plurality of media items are provided as input to the ML model(s) to detect objects (e.g., a sponsor logo) within the video data, [0003] & [0052] “broadcast events, such as sporting events, is one of the most effective ways to expose products and brand logos…money spent on advertising during broadcast events”… “sponsor’s logo” – therefore the object (e.g., logo) is associated with one or more sponsors of sporting events or sports teams [0019] “the software receives as input digital representations of images that comprise at least one video datastreams of an event…monitors the presence of the detected logo”, [0039]-[0044] “programmed with …classification models…classifier…for identification of objects in an image, which may be a logo…after proper training…training may include…” – various object recognition models may be used including different ML classifiers or a neural network, see also [0032]-[0034])
based on output of the one or more machine learning models, identifying, from among the plurality of media items, at least a portion of a first media item that depicts a first object associated with a first sponsor; ([0014] “one or more regions of interest…for the logo in the one or more images are identified…analyzed to detect if the logo is present…”, [0017] “the step of …detect if the logo is present….comprises…radial basis function (RBF) classification modeling…training using images of the logo…”, [0019] “provides an output regarding detection of the presence of the logo”, [0042]-[0044] “outputs a value which indicates…probability that it is the logo…logo is tracked”, see also [0032]-[0034])
determining an initial media value associated with the first media item ([0012] “the invention can assign a value to each appearance of the logo”)
determining two or more factors regarding visibility or prominence of the first object depicted in the at least a portion of the first media item based on an automated analysis of the image data or video data of the first media item, wherein the two or more factors comprise (a) at least one of size, clarity, or duration of the first object within the first media item ([0012] “non-subjective analysis of the frequency, duration and degree of prominence of display of target logos…the invention can assign a value to each appearance of the logo, with the value varying in accordance with the clarity and/or size of the display of the logo, thereby informing the advertiser of the logo view-ability” – therefore the system determines two or more factors regarding visibility or prominence of the first object (logo) depicted in the at least a portion of the first media item based on an automated analysis of the image data or video data of the first media item, wherein the two or more factors comprise at least one of size, clarity, or duration of the first object within the first media item, [0034] “the processor 124 also uses the tracking data to perform output processing 124c. This includes, for example, compiling the amount of time the logo appears in the datastream, among other analysis” – duration of the first object within the first media item, [0049]-[0050] “determining…partially obscured logo…total time that the logo is visible in the datastream for the event, the percentage of time the logo is visible…ongoing indication…tracking relating to the size, perspective, illumination, etc., of the logo in the image… also keep track of the quality of the logo’s visibility during the event”)
determining a sponsorship value to the first sponsor attributable to appearance of the first object in at least the first media item, wherein the sponsorship value is determined based at least in part on the initial media value as adjusted based on the one or more factors regarding visibility or prominence of the first object depicted in the at least a portion of the first media item ([0012] “…the invention can assign a value to each appearance of the logo, with the value varying in accordance with the clarity and/or size of the display of the logo, thereby informing the advertiser of the logo view-ability”, [0049]-[0051] “determining…partially obscured logo…total time that the logo is visible in the datastream for the event, the percentage of time the logo is visible…ongoing indication…tracking relating to the size, perspective, illumination, etc., of the logo in the image… also keep track of the quality of the logo’s visibility during the event…the amount charged to a company may be based on the amount of time its logo is visible during the event” – therefore the system determines a sponsorship value to the first sponsor attributable to appearance of the first object in at least the first media item, wherein the sponsorship value is determined based at least in part on the initial media value as adjusted based on the one or more factors regarding visibility or prominence of the first object depicted in the at least a portion of the first media item (e.g., based on at least one of size, clarity, duration or position of the first object within the first media item))
Cohen-Solal does not appear to disclose,
wherein the initial media value is based at least in part on data associated with a social media network, a streaming media network, or a broadcast network by which the first media item has been distributed
wherein the two or more factors comprise (b) position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item
However, Hay discloses
wherein the initial media value is based at least in part on data associated with a social media network, a streaming media network, or a broadcast network by which the first media item has been distributed ([0012] “further includes the step of providing an audience rating value…brand exposure value is calculated in dependence upon…the audience rating value…display of a trademark during prime time when the audience rating is high can provide greater contribution to the brand exposure value…audience rating value…may be varied for different frames…” – therefore the system determines an initial media value based in part on data associated with a broadcast network by which the first media item has been distributed (e.g., the audience ratings), [0054] “Audience ratings for the broadcast supplied at 2 are stored in storage device 12 and are also used by processor 10 to compute the brand exposure in accordance with a stored algorithm”; further, the equations in [0064]-[0070] and/or [0090] “brand exposure value…contributions for each entry in the mask hit table…correlation value C, scale value S and position value P…mask weighting value W…relevant audience rating A…” show that the audience rating value is equivalent to an initial media value that is adjusted (weighted) based on other factors regarding visibility or prominence, such as size (scale), clarity, duration, and position, as discussed throughout the disclosure)
wherein the two or more factors comprise (b) position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item ([0010]-[0011] “each searching step includes the step of determining a respective scale value in dependence upon the scale of said part of the captured frame relative to the mask…the brand exposure value is calculated in dependence upon both the determined correlation values and the respective determined scale values. In this way, a large scale display of a trade mark in a frame can provide a greater contribution to the brand exposure value than a smaller scale display of the mark…Preferably, each searching step includes the step of determining a respective position value in dependence upon the position of said part of the captured frame relative to the complete frame, and, in the calculating step, the brand exposure value is calculated in dependence upon both the determined correlation values and the respective determined position values. In this way, the display of a trade mark at the centre of a frame can provide a greater contribution to the brand exposure value than at the edge of the frame.” – as such, one of the factors of visibility or prominence of the first object (the logo) comprises position of the first object within at least one image or frame of the first media item relative to the center of the frame (i.e., a focal point determined for the at least one image or frame of the first media item, consistent with Applicant’s own specification at [0089] “In some embodiments…the middle of the frame may be used as the default focal point”), [0086] “more complex relationships may be used. The position weighting value P may simply be one of two values, for example 1.00 if the portion of the frame which produced the hit is within a predetermined central region of the frame, and a predetermined lesser value if it is not.”).
Hay suggests it is advantageous to include wherein the initial media value is based at least in part on data associated with a social media network, a streaming media network, or a broadcast network by which the first media item has been distributed and wherein the two or more factors comprise (b) position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item because doing so can create a fair and equitable valuation for the sponsor reflective of a more accurate value/cost for the logo impressions that is based on an amount of viewership of the media items ([0010]-[0012], [0064]-[0070], [0090], [0017]-[0018]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and medium of Cohen-Solal to include wherein the initial media value is based at least in part on data associated with a social media network, a streaming media network, or a broadcast network by which the first media item has been distributed and wherein the two or more factors comprise (b) position of the first object within at least one image or frame of the first media item relative to a focal point determined for the at least one image or frame of the first media item, as taught by Hay, because doing so can create a fair and equitable valuation for the sponsor reflective of a more accurate value/cost for the logo impressions that is based on an amount of viewership of the media items.
With respect to claims 3 and 11, Cohen-Solal teaches the method of claim 2 and the system of claim 10;
further comprising determining an overall value to the first sponsor based on appearance of one or more logos of the sponsor across a plurality of videos provided as input to the one or more machine learning models (Fig. 4: plurality of data streams from a plurality of videos and an aggregate logo detection value based on analysis of the plurality of videos, [0022] “track…logos…in one or more data streams”, [0018] “the processor receives two or more separate video datastreams…for broadcasting the event”, [0027])
Examiner notes Hay also discloses this limitation ([0003], [0007]-[0009], [0021], and [0070]).
With respect to claims 4 and 12, Cohen-Solal and Hay teach the method of claim 3 and the system of claim 11. Cohen-Solal suggests that the logos may be detected in videos of different sports ([0005] “practically every sporting event…baseball stadiums…football stadiums…basketball court floors”). However, Cohen-Solal does not appear to disclose,
wherein the plurality of videos depict two or more sports with respect to which the sponsor places advertisement content
However, Hay discloses that the process may be repeated for a plurality of different broadcasts, which may be of different sports ([0021], [0070], and [0091]).
As such, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the plurality of videos depict two or more sports with respect to which the sponsor places advertisement content, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. One of ordinary skill in the art would have recognized that doing so would enable aggregate value determination for a sponsor for a plurality of events (including multiple different sporting events) rather than only one event. Examiner further notes that this limitation amounts merely to repeating the process a plurality of times for different sporting events, which is merely a duplication of the process already taught by the combination of references.
With respect to claims 7 and 15, Cohen-Solal and Hay teach the method of claim 2 and the system of claim 10. Cohen-Solal does not appear to disclose,
further comprising, based on output of the one or more machine learning models, visually marking the first object within video data, wherein visually marking the sponsor logo comprises at least one of (a) modifying at least one frame of the video data to visually indicate an in-frame location of a sponsor logo or (b) generating overlay content presented over the at least one frame of the video data during playback to indicate the in-frame location of the sponsor logo
However, Hay discloses
based on output of the one or more machine learning models, visually marking the first object within video data, wherein visually marking the sponsor logo comprises at least one of (a) modifying at least one frame of the video data to visually indicate an in-frame location of a sponsor logo or (b) generating overlay content presented over the at least one frame of the video data during playback to indicate the in-frame location of the sponsor logo ([0091] “the results are presented…frame is then displayed with the hits for the respective mask highlighted in the frame…any hits for other masks for the same brand also highlighted” – therefore the system visually marks the sponsor logo within the video data, wherein visually marking the sponsor logo comprises at least one of (a) modifying at least one frame of the video data to visually indicate an in-frame location of the sponsor logo)
Hay suggests it is advantageous to include based on output of the one or more machine learning models, visually marking the first object within video data, wherein visually marking the sponsor logo comprises at least one of (a) modifying at least one frame of the video data to visually indicate an in-frame location of a sponsor logo or (b) generating overlay content presented over the at least one frame of the video data during playback to indicate the in-frame location of the sponsor logo, because doing so can provide a sponsor with visual confirmation within the video frames themselves of where the system automatically detected the sponsor logo, which can permit visual auditing of the system’s analysis ([0091]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and medium of Cohen-Solal to include based on output of the one or more machine learning models, visually marking the first object within video data, wherein visually marking the sponsor logo comprises at least one of (a) modifying at least one frame of the video data to visually indicate an in-frame location of a sponsor logo or (b) generating overlay content presented over the at least one frame of the video data during playback to indicate the in-frame location of the sponsor logo, as taught by Hay, because doing so can provide a sponsor with visual confirmation within the video frames themselves of where the system automatically detected the sponsor logo, which can permit visual auditing of the system’s analysis.
With respect to claims 8 and 16, Cohen-Solal teaches the method of claim 2 and the system of claim 10;
wherein the one or more factors comprise an elapsed time that the first object appears in the first media item relative to a total duration of the first media item ([0049]-[0050] “determining…partially obscured logo…total time that the logo is visible in the datastream for the event, the percentage of time the logo is visible…the detected time may be used with the total event time to give the percentage of time the logo is visible over the course of the event” – percentage of time the logo appears in the first media item is an elapsed time that the first object appears in the first media item relative to a total duration of the first media item)
Examiner notes Hay also discloses this limitation at [0019].
Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen-Solal in view of Hay, as applied to claims 2 and 10 above, and further in view of James et al. (U.S. PG Pub No. 2015/0082203 March 19, 2015 - hereinafter “James”).
With respect to claims 9 and 17, Cohen-Solal teaches the method of claim 2 and the system of claim 10. Cohen-Solal does not appear to disclose,
wherein the focal point is determined to be a reference object appearing within the first media item, wherein the reference object comprises a ball or a puck
However, James discloses
wherein the focal point is determined to be a reference object appearing within the first media item, wherein the reference object comprises a ball or a puck ([0039] “Nike has recognized these issues and has worked with ESPN to install the system 100 during their televised basketball events. With the analytic capability occurring in real time during the basketball event, the system 100 is able to perform object recognition of the basketball and generate a heat map (another analytics computation) of the areas where the basketball is spending the most time on the court. Nike then pays ESPN for an advertising spot to place the Nike logo on the court where the ball is most likely to occur, and during the next camera close-up of that area the real-time manipulation hardware 106 inserts an advertisement to appear as if it were part of the basketball court, without obscuring the basketball players or the basketball” – as such, James discloses that a ball is a reference object that is a focal point within a media item, and that position relative to this focal point/object is a factor that contributes to sponsorship value)
James suggests it is advantageous to include wherein the focal point is determined to be a reference object appearing within the first media item, wherein the reference object comprises a ball or a puck, because a ball is a reference object that is a focal point within a media item, and because position relative to this focal point/object is a factor that contributes to sponsorship value (i.e., a position closer to the ball/puck is more valuable because a viewer’s attention is more focused on this area) ([0039]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Cohen-Solal in view of Hay to include wherein the focal point is determined to be a reference object appearing within the first media item, wherein the reference object comprises a ball or a puck, as taught by James, because a ball is a reference object that is a focal point within a media item, and because position relative to this focal point/object is a factor that contributes to sponsorship value (i.e., a position closer to the ball/puck is more valuable because a viewer’s attention is more focused on this area).
Claims 5, 6, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen-Solal in view of Hay, as applied to claims 2 and 10 above, and further in view of Overton et al. (U.S. PG Pub No. 2003/0012409 January 16, 2003 - hereinafter “Overton”).
With respect to claims 5 and 13, Cohen-Solal and Hay teach the method of claim 2 and the system of claim 10. Although Cohen-Solal and Hay both disclose that the logos being tracked are on real-world objects/locations/items (e.g., arena/court floors), and disclose automated/learned determination of the depiction of the logos in the video content, Cohen-Solal and Hay do not appear to disclose determining the real-world objects/locations/items on which the logo was depicted. Cohen-Solal does not appear to disclose,
determining, using a first machine learning model, a real-world location or item on which the first object associated with the first sponsor was depicted in a portion of video content
However, Overton discloses
determining, using a first machine learning model, a real-world location or item on which the first object associated with the first sponsor was depicted in a portion of video content ([0052] “The target area may be, for example, an area within a sporting arena in which advertising displays will be synthetically inserted within the processed image stream. Alternatively, the target area may be, for example, an area within the sporting arena in which physical signage is present at the site and may be captured by the image stream.” & [0057]-[0058] “A target area mask 502 is generated by a mask builder 134 using the reference image generated by model renderer 128. To generate the mask image 900, target area mask 502 within mask image 900 has all pixels therein set to a predefined value and all pixels outside target area mask 502 are set to another predefined value….target area mask 502 is used by a background/target area reference image separator 138 to separate or mask each target area reference image 506 within baseline image 700 generated by model renderer 128. In the example illustrated in FIG. 4, the mask will be used to separate target area reference image 506 within baseline image 700 from the rest of the image, resulting in a masked reference image 700a shown in FIG. 6. Target area mask 502 is used to locate and separate within the original image the target area/s from the rest of the non-target areas of the image, which will be referred to as the background. This function is performed by background/target area separator 136” & [0074] “within each final image, a target area 504 having a target image 604 therein is included in the final image 400e. The image 400e depicted in FIG. 18 may be analyzed according to the teachings described hereinabove to determine a duration calculation of target areas included within the image stream that includes final image 400e” – therefore the system uses a first machine learning model to determine a real-world location or item on which the first object associated with the first sponsor was depicted in a portion of video content (i.e., a target area in which a synthetic advertisement/logo was inserted in the broadcast), [0070] “receives a final image 400e from the image combiner 146. In a model having a plurality of target areas, each target area may be assigned a unique index i for identification thereof. The model analyzes the frame for inclusion of target area.sub.i, at step 1702”, [0090] “during a sporting event telecast, a synthetic image assigned for insertion in a target area may have a pre-defined desired exposure duration defined prior to transmission of the image stream. Real-time calculation of the exposure time of the duration the target area having the synthetic image assigned thereto may indicate, at some point during the image stream transmission, that the target area may not be included within the image stream for the desired duration of the advertiser. The synthetic image may be reassigned to another target area that is determined to provide a greater likelihood to be included within the image stream for the original, pre-defined duration associated with the image data…During transmission of the baseball game, analysis of duration measurements, metrics or other statistical data collected during capture of the image stream and calculated for the target area having the image data assigned thereto may indicate that the target area will likely be included for a duration that is less than the desired duration indicated and agreed upon by the advertiser prior to transmission of the image stream from the venue.
Accordingly, the image data assigned to the target area may be reassigned to another target area. For example, if during transmission of the image stream it is determined that the actual duration the target area having image data assigned thereto is likely to have a duration that is less than a predefined desired duration, the image data assigned to the target area may be reassigned to another target area, for example a target area in a more visible area of the venue, such as a center field wall, having duration calculations indicating a greater exposure duration relative to the target area having the image data originally assigned thereto.”, See also [0060] & [0064]-[0066])
Overton suggests it is advantageous to include determining, using a first machine learning model, a real-world location or item on which the first object associated with the first sponsor was depicted in a portion of video content, because advertisers may pay to have an object (e.g., logo, advertisement) be depicted on a real-world location or item in a portion of video content for a predetermined duration, and determination of the real-world location or item on which an object associated with a sponsor (e.g., logo, advertisement) was depicted in a portion of video content can be used to determine that the object was depicted in the media content/video content on the real-world object (and for how long), that different real-world objects or items may be associated with different probabilities of being depicted in the media content, and because doing so can enable the system to depict the object on another real-world location in the media/video content (e.g., one having a higher probability of being depicted) if it appears likely the advertiser will not have their object depicted on the specified real-world object for the desired predefined duration ([0002]-[0004] & [0010]-[0011] & [0015] & [0037] & [0066] & [0090]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Cohen-Solal in view of Hay to include determining, using a first machine learning model, a real-world location or item on which the first object associated with the first sponsor was depicted in a portion of video content, as taught by Overton, because advertisers may pay to have an object (e.g., logo, advertisement) be depicted on a real-world location or item in a portion of video content for a predetermined duration, and determination of the real-world location or item on which an object associated with a sponsor (e.g., logo, advertisement) was depicted in a portion of video content can be used to determine that the object was depicted in the media content/video content on the real-world object (and for how long), that different real-world objects or items may be associated with different probabilities of being depicted in the media content, and because doing so can enable the system to depict the object on another real-world location in the media/video content (e.g., one having a higher probability of being depicted) if it appears likely the advertiser will not have their object depicted on the specified real-world object for the desired predefined duration.
With respect to claims 6 and 14, Cohen-Solal teaches the method of claim 5 and the system of claim 13;
wherein the real-world location or item comprises one of: an arena floor, a basketball stanchion, football goalposts, a soccer goal, a hockey goal, a tennis net, arena rafters, arena tunnel, interview backdrop, or digital signage within a sports stadium ([0005] “logo placement…walls of baseball stadium…basketball court floors”)
Examiner notes Hay also discloses this limitation ([0004] “boardings along side pitch”).
Examiner notes Overton also discloses this limitation ([0052] “The target area may be, for example, an area within a sporting arena in which advertising displays will be synthetically inserted within the processed image stream. Alternatively, the target area may be, for example, an area within the sporting arena in which physical signage is present at the site and may be captured by the image stream.”, [0003] “target areas for insertion of graphic or video images. The target areas may be real areas of the site, for example a dasher board of a hockey rink, or may be imaginary surfaces, for example synthetic billboards”, [0002] “Ice hockey also includes advertising banners typically displayed on the dasher boards of the hockey rink as well as beneath the ice itself…football fields often include sponsor logos painted on the football field in addition to logos on the stadium walls”, [0055] “A target area 504 in this example is a predefined area of the surface of the dasher board 402….an imaginary banner hung from the ceiling of the hockey arena”, [0090] “target area…such as a right field wall of a baseball field”)
Prior Art of Record
The prior art made of record and not relied upon is considered pertinent to the applicant’s disclosure.
Pereira et al. (U.S. Patent No. 10,007,863 June 26, 2018 - hereinafter “Pereira”) discloses retrieving video data and detecting, using trained classifiers, logos present in the video data of a sporting event, and wherein the classifier comprises a convolutional neural network trained to identify, within input image data, a plurality of logos or advertisement content associated with at least the first company.
Deng et al. (U.S. PG Pub No. 2009/0123025 May 14, 2009 - hereinafter “Deng”) discloses determining the clarity of the first object in the first media item by using computer vision techniques to compare a reference image of sponsor content to image data of the first object as depicted in the first media item, and wherein the table or list includes the clarity of the first object in the first media item ([0074] “brand recognizer…collects appearance data corresponding to the detected/recognized brand identifiers…logos…such reported data may include…match quality, visual quality” & [0063] “report measurements and other information about…logos recognized and/or detected in the example media stream…report generator…collects the brand identifiers along with…associated appearance parameters…and produces a report…output using…a display” & [0100] “characteristics…are obtained from the information…quality…” & [0072] “matching quality”).
Cline, Jr. et al. (U.S. PG Pub No. 2006/0111968, May 25, 2006 - hereinafter “Cline”) teaches automated classification software used to detect logos in video/images of broadcasts and factors associated with these exposures to determine a value to the sponsor for the exposure.
Adams et al. (U.S. PG Pub No. 2008/0219504, September 11, 2008) teaches automated classification software used to detect logos in video/images of broadcasts and factors associated with these exposures to determine a value to the sponsor for the exposure.
Wexler et al. (U.S. PG Pub No. 2016/0140146 May 19, 2016 - hereinafter “Wexler”) discloses using trained neural network classifiers to detect objects in video/image data and wherein company/brand-specific classifiers and/or topic/sport-specific classifiers are trained and used to detect objects in video/image data ([0019]).
Patton et al. (U.S. PG Pub No. 2016/0034712 February 4, 2016 - hereinafter "Patton”) discloses using trained neural network classifiers to detect objects in video/image data and wherein event-specific classifiers are used to detect objects associated with those types of events.
Sharifi (U.S. Patent No. 10,515,133 December 24, 2019 - hereinafter “Sharifi”) discloses using classifiers trained to detect objects in video/image data and wherein different classifiers are associated with different classifications and wherein classifications may include sports contexts.
Zazza et al. (U.S. PG Pub No. 2010/0318406 December 16, 2010 - hereinafter "Zazza”) discloses wherein the aggregated media value is displayed and continuously updated during presentation of the video data.
Schiffman et al. (U.S. Patent No. 10,600,060 March 24, 2020 - hereinafter "Schiffman”) discloses wherein visually marking the sponsor logo comprises displaying a visual bounding shape around the sponsor logo.
Nielsen Twitter TV Rating (Published in 2014 and accessible online at http://en-us.nielsen.com/sitelets/cls/documents/nntv/NNTV-NielsenTwitterTVRatings-FAQ.pdf) discloses determining popularity of broadcast content based on matching content posted on social media and determination of valuation for the broadcast content based on this analysis.
Conclusion
No claim is allowed.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M DETWEILER whose telephone number is (571)272-4704. The examiner can normally be reached on Monday-Friday from 8 AM to 5 PM ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Waseem Ashraf, can be reached at telephone number (571) 270-3948. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications is available in Patent Center; status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/JAMES M DETWEILER/Primary Examiner, Art Unit 3621